* Re: [Linux-ia64] kernel update (relative to v2.4.0-test1)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
@ 2000-06-03 17:32 ` Manfred Spraul
2000-06-10 1:07 ` David Mosberger
` (214 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Manfred Spraul @ 2000-06-03 17:32 UTC (permalink / raw)
To: linux-ia64
David Mosberger wrote:
>
> - ptrace interface should work again (at least strace works...)
>
There is a race in sys_ptrace that I fixed on i386:
arch/ia64/kernel/ptrace.c ~ line 830:
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
read_unlock(&tasklist_lock);
************ bad! the task could die!
if(!child)
goto out;
You must call get_task_struct() before read_unlock(&tasklist_lock).
Then you dereference child->mm. Since child->mm can change, you must put
task_lock(child);
task_unlock(child);
around these lines [check fs/proc/*.c for examples].
Btw, child->mm can also be NULL.
--
Manfred
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to v2.4.0-test1)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
2000-06-03 17:32 ` Manfred Spraul
@ 2000-06-10 1:07 ` David Mosberger
2000-06-10 1:11 ` David Mosberger
` (213 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-06-10 1:07 UTC (permalink / raw)
To: linux-ia64
An updated kernel diff is available in the usual place:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/linux-2.4.0-test1-ia64-000609*
Summary of changes:
- Stephen Zeisset's module-related fixes to ia64_ksyms.c and pci.c.
- Bill Nottingham's initrd additions.
- Takayoshi Kouchi's pointer-lock-related SMP fixes.
- Jes Sorensen's mmap bug fix.
- New unwind support now almost works. Warning: don't enable
CONFIG_IA64_NEW_UNWIND unless you have a bleeding-edge toolchain
with _all_ the unwind fixes. Even then you may not want to turn
it on, as core-dump support hasn't been finished yet.
- Without the new unwind support, the kernel is now conservative
again and generates a switch_stack frame whenever there is a remote
possibility that it might be needed. This slows down several
signal-related operations, but they will be fast again once
the new unwind support is complete.
I also added a printk whenever we get a timer tick before it is due.
Since that happens quite frequently, this quickly becomes annoying.
Look at it as an invitation to investigate the problem... ;-)
--david
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test1-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/entry.S Fri Jun 9 17:09:56 2000
@@ -117,14 +117,11 @@
mov psr.l=r17
;;
srlz.d
-
- movl r28=1f
- br.cond.sptk.many load_switch_stack
-1: UNW(.restore sp)
- adds sp=IA64_SWITCH_STACK_SIZE,sp // pop switch_stack
+ DO_LOAD_SWITCH_STACK( )
br.ret.sptk.few rp
END(ia64_switch_to)
+#ifndef CONFIG_IA64_NEW_UNWIND
/*
* Like save_switch_stack, but also save the stack frame that is active
* at the time this function is called.
@@ -135,6 +132,8 @@
DO_SAVE_SWITCH_STACK
br.ret.sptk.few rp
END(save_switch_stack_with_current_frame)
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
/*
* Note that interrupts are enabled during save_switch_stack and
* load_switch_stack. This means that we may get an interrupt with
@@ -343,7 +342,6 @@
;;
ld8.fill r4=[r2],16
ld8.fill r5=[r3],16
- mov b7=r28
;;
ld8.fill r6=[r2],16
ld8.fill r7=[r3],16
@@ -371,6 +369,19 @@
// also use it to preserve b6, which contains the syscall entry point.
//
GLOBAL_ENTRY(invoke_syscall_trace)
+#ifdef CONFIG_IA64_NEW_UNWIND
+ UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ alloc loc1=ar.pfs,8,3,0,0
+ mov loc0=rp
+ UNW(.body)
+ mov loc2=b6
+ ;;
+ br.call.sptk.few rp=syscall_trace
+.ret3: mov rp=loc0
+ mov ar.pfs=loc1
+ mov b6=loc2
+ br.ret.sptk.few rp
+#else /* !CONFIG_IA64_NEW_SYSCALL */
UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
alloc loc1=ar.pfs,8,3,0,0
;; // WAW on CFM at the br.call
@@ -384,6 +395,7 @@
mov b6=loc2
;;
br.ret.sptk.few rp
+#endif /* !CONFIG_IA64_NEW_SYSCALL */
END(invoke_syscall_trace)
//
@@ -802,112 +814,140 @@
// args get preserved, in case we need to restart a system call.
//
ENTRY(handle_signal_delivery)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+#ifdef CONFIG_IA64_NEW_UNWIND
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
mov r9=ar.unat
-
- // If the process is being ptraced, the signal may not actually be delivered to
- // the process. Instead, SIGCHLD will be sent to the parent. We need to
- // setup a switch_stack so ptrace can inspect the processes state if necessary.
- adds r2=IA64_TASK_FLAGS_OFFSET,r13
- ;;
- ld8 r2=[r2]
+ mov loc0=rp // save return address
+ .body
mov out0=0 // there is no "oldset"
- adds out1=16,sp // out1=&pt_regs
- ;;
+ adds out1=0,sp // out1=&sigscratch
(pSys) mov out2=1 // out2=1 => we're in a syscall
- tbit.nz p16,p17=r2,PF_PTRACED_BIT
-(p16) br.cond.spnt.many setup_switch_stack
;;
-back_from_setup_switch_stack:
(pNonSys) mov out2=0 // out2=0 => not a syscall
- adds r3=-IA64_SWITCH_STACK_SIZE+IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
-(p17) adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for (dummy) switch_stack
- ;;
-(p17) st8 [r3]=r9 // save ar.unat in sw->caller_unat
- mov loc0=rp // save return address
- UNW(.body)
+ .fframe 16
+ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
+ st8 [sp]=r9,-16 // allocate space for ar.unat and save it
br.call.sptk.few rp=ia64_do_signal
.ret11:
- adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ .restore sp
+ adds sp=16,sp // pop scratch stack space
;;
- ld8 r9=[r3] // load new unat from sw->caller_unat
+ ld8 r9=[sp] // load new unat from sw->caller_unat
mov rp=loc0
;;
-(p17) adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch_stack
-(p17) mov ar.unat=r9
-(p17) mov ar.pfs=loc1
-(p17) br.ret.sptk.many rp
-
- DO_LOAD_SWITCH_STACK( ) // restore the switch stack (ptrace may have modified it)
+ mov ar.unat=r9
+ mov ar.pfs=loc1
br.ret.sptk.many rp
- // NOT REACHED
-
-setup_switch_stack:
- UNW(.prologue)
- mov r16=loc1
+#else /* !CONFIG_IA64_NEW_UNWIND */
+ .prologue
+ alloc r16=ar.pfs,8,0,3,0 // preserve all eight input regs in case of syscall restart!
DO_SAVE_SWITCH_STACK
UNW(.body)
- br.cond.sptk.many back_from_setup_switch_stack
+ mov out0=0 // there is no "oldset"
+ adds out1=16,sp // out1=&sigscratch
+ .pred.rel.mutex pSys, pNonSys
+(pSys) mov out2=1 // out2=1 => we're in a syscall
+(pNonSys) mov out2=0 // out2=0 => not a syscall
+ br.call.sptk.few rp=ia64_do_signal
+.ret11:
+ // restore the switch stack (ptrace may have modified it)
+ DO_LOAD_SWITCH_STACK( )
+ br.ret.sptk.many rp
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(handle_signal_delivery)
GLOBAL_ENTRY(sys_rt_sigsuspend)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
- alloc loc1=ar.pfs,2,2,3,0
-
- // If the process is being ptraced, the signal may not actually be delivered to
- // the process. Instead, SIGCHLD will be sent to the parent. We need to
- // setup a switch_stack so ptrace can inspect the processes state if necessary.
- // Also, the process might not ptraced until stopped in sigsuspend, so this
- // isn't something that we can do conditionally based upon the value of
- // PF_PTRACED_BIT.
+#ifdef CONFIG_IA64_NEW_UNWIND
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
+ mov r9=ar.unat
+ mov loc0=rp // save return address
mov out0=in0 // mask
mov out1=in1 // sigsetsize
+ adds out2=0,sp // out2=&sigscratch
;;
- adds out2=16,sp // out1=&pt_regs
- mov r16=loc1
- DO_SAVE_SWITCH_STACK
- mov loc0=rp // save return address
- UNW(.body)
- br.call.sptk.many rp=ia64_rt_sigsuspend
+ .fframe 16
+ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
+ st8 [sp]=r9,-16 // allocate space for ar.unat and save it
+ .body
+ br.call.sptk.few rp=ia64_rt_sigsuspend
.ret12:
- adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ .restore sp
+ adds sp=16,sp // pop scratch stack space
;;
- ld8 r9=[r3] // load new unat from sw->caller_unat
+ ld8 r9=[sp] // load new unat from sw->caller_unat
mov rp=loc0
;;
+ mov ar.unat=r9
+ mov ar.pfs=loc1
+ br.ret.sptk.many rp
+#else /* !CONFIG_IA64_NEW_UNWIND */
+ UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+ alloc r16=ar.pfs,2,0,3,0
+ DO_SAVE_SWITCH_STACK
+ UNW(.body)
+
+ mov out0=in0 // mask
+ mov out1=in1 // sigsetsize
+ adds out2=16,sp // out1=&sigscratch
+ br.call.sptk.many rp=ia64_rt_sigsuspend
+.ret12:
// restore the switch stack (ptrace may have modified it)
- DO_LOAD_SWITCH_STACK(PT_REGS_UNWIND_INFO)
+ DO_LOAD_SWITCH_STACK( )
br.ret.sptk.many rp
- // NOT REACHED
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigsuspend)
ENTRY(sys_rt_sigreturn)
+#ifdef CONFIG_IA64_NEW_UNWIND
+ .regstk 0,0,3,0 // inherited from gate.s:invoke_sighandler()
+ PT_REGS_UNWIND_INFO
+ .prologue
+ PT_REGS_SAVES(16)
+ adds sp=-16,sp
+ .body
+ cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
+ ;;
+ adds out0=16,sp // out0 = &sigscratch
+ br.call.sptk.few rp=ia64_rt_sigreturn
+.ret13:
+ adds sp=16,sp // doesn't drop pt_regs, so don't mark it as restoring sp!
+ PT_REGS_UNWIND_INFO // instead, create a new body section with the smaller frame
+ ;;
+ ld8 r9=[sp] // load new ar.unat
+ mov b7=r8
+ ;;
+ mov ar.unat=r9
+ br b7
+#else /* !CONFIG_IA64_NEW_UNWIND */
.regstk 0,0,3,0 // inherited from gate.s:invoke_sighandler()
PT_REGS_UNWIND_INFO
- adds out0=16,sp // out0 = &pt_regs
UNW(.prologue)
UNW(.fframe IA64_PT_REGS_SIZE+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp rp, PT(CR_IIP)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp ar.pfs, PT(CR_IFS)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp ar.unat, PT(AR_UNAT)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp pr, PT(PR)+IA64_SWITCH_STACK_SIZE)
- adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for unat and padding
+ adds sp=-IA64_SWITCH_STACK_SIZE,sp
+ cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
;;
UNW(.body)
- cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
+
+ adds out0=16,sp // out0 = &sigscratch
br.call.sptk.few rp=ia64_rt_sigreturn
.ret13:
adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
;;
ld8 r9=[r3] // load new ar.unat
- mov rp=r8
+ mov b7=r8
;;
PT_REGS_UNWIND_INFO
adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch-stack frame
mov ar.unat=r9
- br rp
+ br b7
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.0-test1-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/ia64_ksyms.c Fri Jun 9 17:10:26 2000
@@ -5,20 +5,15 @@
#include <linux/config.h>
#include <linux/module.h>
-#include <asm/processor.h>
-EXPORT_SYMBOL(cpu_data);
-EXPORT_SYMBOL(kernel_thread);
-
-#include <asm/uaccess.h>
-EXPORT_SYMBOL(__copy_user);
-
#include <linux/string.h>
-EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL_NOVERS(memset);
EXPORT_SYMBOL(memcmp);
-EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL_NOVERS(memcpy);
+EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strlen);
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
@@ -29,9 +24,41 @@
EXPORT_SYMBOL(pci_alloc_consistent);
EXPORT_SYMBOL(pci_free_consistent);
+#include <linux/in6.h>
+#include <asm/checksum.h>
+EXPORT_SYMBOL(csum_partial_copy_nocheck);
+
#include <asm/irq.h>
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
+
+#include <asm/current.h>
+#include <asm/hardirq.h>
+EXPORT_SYMBOL(irq_stat);
+
+#include <asm/processor.h>
+EXPORT_SYMBOL(cpu_data);
+EXPORT_SYMBOL(kernel_thread);
+
+#ifdef CONFIG_SMP
+EXPORT_SYMBOL(synchronize_irq);
+
+#include <asm/smplock.h>
+EXPORT_SYMBOL(kernel_flag);
+
+#include <asm/system.h>
+EXPORT_SYMBOL(__global_sti);
+EXPORT_SYMBOL(__global_cli);
+EXPORT_SYMBOL(__global_save_flags);
+EXPORT_SYMBOL(__global_restore_flags);
+
+#endif
+
+#include <asm/uaccess.h>
+EXPORT_SYMBOL(__copy_user);
+
+#include <asm/unistd.h>
+EXPORT_SYMBOL(__ia64_syscall);
/* from arch/ia64/lib */
extern void __divdi3(void);
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.0-test1-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/mca_asm.S Fri Jun 9 17:23:02 2000
@@ -6,7 +6,6 @@
// 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp kstack,
// switch modes, jump to C INIT handler
//
-#include <asm/offsets.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
diff -urN linux-davidm/arch/ia64/kernel/pci.c linux-2.4.0-test1-lia/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/pci.c Fri Jun 9 17:23:13 2000
@@ -197,7 +197,7 @@
ranges->mem_end -= bus->resource[1]->start;
}
-int __init
+int
pcibios_enable_device (struct pci_dev *dev)
{
/* Not needed, since we enable all devices at startup. */
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.0-test1-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/process.c Fri Jun 9 17:23:40 2000
@@ -311,7 +311,12 @@
dst[12] = pt->r12; dst[13] = pt->r13; dst[14] = pt->r14; dst[15] = pt->r15;
memcpy(dst + 16, &pt->r16, 16*8); /* r16-r31 are contiguous */
+#ifdef CONFIG_IA64_NEW_UNWIND
+ printk("ia64_elf_core_copy_regs: fix me, please?");
+ dst[32] = 0;
+#else
dst[32] = ia64_get_nat_bits(pt, sw);
+#endif
dst[33] = pt->pr;
/* branch regs: */
@@ -332,6 +337,10 @@
struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
struct task_struct *fpu_owner = ia64_get_fpu_owner();
+#ifdef CONFIG_IA64_NEW_UNWIND
+ printk("dump_fpu: fix me, please?");
+#endif
+
memset(dst, 0, sizeof (dst)); /* don't leak any "random" bits */
/* f0 is 0.0 */ /* f1 is 1.0 */ dst[2] = sw->f2; dst[3] = sw->f3;
@@ -440,7 +449,7 @@
do {
if (unw_unwind(&info) < 0)
return 0;
- ip = unw_get_ip(&info);
+ unw_get_ip(&info, &ip);
if (ip < first_sched || ip >= last_sched)
return ip;
} while (count++ < 16);
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.0-test1-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/ptrace.c Fri Jun 9 17:15:55 2000
@@ -33,6 +33,89 @@
#define IPSR_WRITE_MASK 0x000006a00100003eUL
#define IPSR_READ_MASK IPSR_WRITE_MASK
+#ifdef CONFIG_IA64_NEW_UNWIND
+
+#define PTRACE_DEBUG 1
+
+#if PTRACE_DEBUG
+# define dprintk(format...) printk(format)
+# define inline
+#else
+# define dprintk(format...)
+#endif
+
+static int
+unwind_to_user (struct unw_frame_info *info, struct task_struct *child)
+{
+ unsigned long ip;
+
+ unw_init_from_blocked_task(info, child);
+ while (unw_unwind(info) >= 0) {
+ if (unw_get_rp(info, &ip) < 0) {
+ unw_get_ip(info, &ip);
+ dprintk("ptrace: failed to read return pointer (ip=0x%lx)\n", ip);
+ return -1;
+ }
+ if (ip < TASK_SIZE)
+ return 0;
+ }
+ unw_get_ip(info, &ip);
+ dprintk("ptrace: failed to unwind to user-level (ip=0x%lx)\n", ip);
+ return -1;
+}
+
+/*
+ * Collect the NaT bits for r1-r31 from scratch_unat and return a NaT
+ * bitset where bit i is set iff the NaT bit of register i is set.
+ */
+unsigned long
+ia64_get_scratch_nat_bits (struct pt_regs *pt, unsigned long scratch_unat)
+{
+# define GET_BITS(first, last, unat) \
+ ({ \
+ unsigned long bit = ia64_unat_pos(&pt->r##first); \
+ unsigned long mask = ((1UL << (last - first + 1)) - 1) << first; \
+ (ia64_rotl(unat, first) >> bit) & mask; \
+ })
+ unsigned long val;
+
+ val = GET_BITS( 1, 3, scratch_unat);
+ val |= GET_BITS(12, 15, scratch_unat);
+ val |= GET_BITS( 8, 11, scratch_unat);
+ val |= GET_BITS(16, 31, scratch_unat);
+ return val;
+
+# undef GET_BITS
+}
+
+/*
+ * Set the NaT bits for the scratch registers according to NAT and
+ * return the resulting unat (assuming the scratch registers are
+ * stored in PT).
+ */
+unsigned long
+ia64_put_scratch_nat_bits (struct pt_regs *pt, unsigned long nat)
+{
+ unsigned long scratch_unat;
+
+# define PUT_BITS(first, last, nat) \
+ ({ \
+ unsigned long bit = ia64_unat_pos(&pt->r##first); \
+ unsigned long mask = ((1UL << (last - first + 1)) - 1) << bit; \
+ (ia64_rotr(nat, first) << bit) & mask; \
+ })
+ scratch_unat = PUT_BITS( 1, 3, nat);
+ scratch_unat |= PUT_BITS(12, 15, nat);
+ scratch_unat |= PUT_BITS( 8, 11, nat);
+ scratch_unat |= PUT_BITS(16, 31, nat);
+
+ return scratch_unat;
+
+# undef PUT_BITS
+}
+
+#else /* !CONFIG_IA64_NEW_UNWIND */
+
/*
* Collect the NaT bits for r1-r31 from sw->caller_unat and
* sw->ar_unat and return a NaT bitset where bit i is set iff the NaT
@@ -80,28 +163,26 @@
# undef PUT_BITS
}
-#define IA64_MLI_TEMPLATE 0x2
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
+#define IA64_MLX_TEMPLATE 0x2
#define IA64_MOVL_OPCODE 6
void
ia64_increment_ip (struct pt_regs *regs)
{
- unsigned long w0, w1, ri = ia64_psr(regs)->ri + 1;
+ unsigned long w0, ri = ia64_psr(regs)->ri + 1;
if (ri > 2) {
ri = 0;
regs->cr_iip += 16;
} else if (ri == 2) {
get_user(w0, (char *) regs->cr_iip + 0);
- get_user(w1, (char *) regs->cr_iip + 8);
- if (((w0 >> 1) & 0xf) == IA64_MLI_TEMPLATE && (w1 >> 60) == IA64_MOVL_OPCODE) {
+ if (((w0 >> 1) & 0xf) == IA64_MLX_TEMPLATE) {
/*
- * rfi'ing to slot 2 of an MLI bundle causes
+ * rfi'ing to slot 2 of an MLX bundle causes
* an illegal operation fault. We don't want
- * that to happen... Note that we check the
- * opcode only. "movl" has a vc bit of 0, but
- * since a vc bit of 1 is currently reserved,
- * we might just as well treat it like a movl.
+ * that to happen...
*/
ri = 0;
regs->cr_iip += 16;
@@ -113,21 +194,17 @@
void
ia64_decrement_ip (struct pt_regs *regs)
{
- unsigned long w0, w1, ri = ia64_psr(regs)->ri - 1;
+ unsigned long w0, ri = ia64_psr(regs)->ri - 1;
if (ia64_psr(regs)->ri == 0) {
regs->cr_iip -= 16;
ri = 2;
get_user(w0, (char *) regs->cr_iip + 0);
- get_user(w1, (char *) regs->cr_iip + 8);
- if (((w0 >> 1) & 0xf) == IA64_MLI_TEMPLATE && (w1 >> 60) == IA64_MOVL_OPCODE) {
+ if (((w0 >> 1) & 0xf) == IA64_MLX_TEMPLATE) {
/*
- * rfi'ing to slot 2 of an MLI bundle causes
+ * rfi'ing to slot 2 of an MLX bundle causes
* an illegal operation fault. We don't want
- * that to happen... Note that we check the
- * opcode only. "movl" has a vc bit of 0, but
- * since a vc bit of 1 is currently reserved,
- * we might just as well treat it like a movl.
+ * that to happen...
*/
ri = 1;
}
@@ -292,7 +369,11 @@
laddr = (unsigned long *) addr;
child_regs = ia64_task_regs(child);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ child_stack = (struct switch_stack *) (child->thread.ksp + 16);
+#else
child_stack = (struct switch_stack *) child_regs - 1;
+#endif
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
@@ -336,7 +417,11 @@
laddr = (unsigned long *) addr;
child_regs = ia64_task_regs(child);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ child_stack = (struct switch_stack *) (child->thread.ksp + 16);
+#else
child_stack = (struct switch_stack *) child_regs - 1;
+#endif
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
@@ -395,21 +480,42 @@
long new_bsp,
int force_loadrs_to_zero)
{
- unsigned long *krbs, bspstore, bsp, krbs_num_regs, rbs_end, addr, val;
- long ndirty, ret;
- struct pt_regs *child_regs;
+ unsigned long *krbs, bspstore, *kbspstore, bsp, rbs_end, addr, val;
+ long ndirty, ret = 0;
+ struct pt_regs *child_regs = ia64_task_regs(child);
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+ struct unw_frame_info info;
+ unsigned long cfm, sof;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ unw_get_bsp(&info, (unsigned long *) &kbspstore);
+
+ krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ ndirty = ia64_rse_num_regs(krbs, krbs + (child_regs->loadrs >> 19));
+ bspstore = child_regs->ar_bspstore;
+ bsp = (long) ia64_rse_skip_regs((long *)bspstore, ndirty);
+
+ cfm = child_regs->cr_ifs;
+ if (!(cfm & (1UL << 63)))
+ unw_get_cfm(&info, &cfm);
+ sof = (cfm & 0x7f);
+ rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, sof);
+#else
struct switch_stack *child_stack;
+ unsigned long krbs_num_regs;
- ret = 0;
- child_regs = ia64_task_regs(child);
child_stack = (struct switch_stack *) child_regs - 1;
-
+ kbspstore = (unsigned long *) child_stack->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
ndirty = ia64_rse_num_regs(krbs, krbs + (child_regs->loadrs >> 19));
bspstore = child_regs->ar_bspstore;
bsp = (long) ia64_rse_skip_regs((long *)bspstore, ndirty);
- krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
+ krbs_num_regs = ia64_rse_num_regs(krbs, kbspstore);
rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, krbs_num_regs);
+#endif
/* Return early if nothing to do */
if (bsp == new_bsp)
@@ -438,13 +544,15 @@
}
static void
-sync_thread_rbs (struct task_struct *child, int make_writable)
+sync_thread_rbs (struct task_struct *child, struct mm_struct *mm, int make_writable)
{
struct task_struct *p;
read_lock(&tasklist_lock);
- for_each_task(p) {
- if (p->mm == child->mm && p->state != TASK_RUNNING)
- sync_kernel_register_backing_store(p, 0, make_writable);
+ {
+ for_each_task(p) {
+ if (p->mm == mm && p->state != TASK_RUNNING)
+ sync_kernel_register_backing_store(p, 0, make_writable);
+ }
}
read_unlock(&tasklist_lock);
child->thread.flags |= IA64_THREAD_KRBS_SYNCED;
@@ -466,6 +574,234 @@
}
}
+#ifdef CONFIG_IA64_NEW_UNWIND
+
+#include <asm/unwind.h>
+
+static int
+access_fr (struct unw_frame_info *info, int regnum, int hi, unsigned long *data, int write_access)
+{
+ struct ia64_fpreg fpval;
+ int ret;
+
+ ret = unw_get_fr(info, regnum, &fpval);
+ if (ret < 0)
+ return ret;
+
+ if (write_access) {
+ fpval.u.bits[hi] = *data;
+ ret = unw_set_fr(info, regnum, fpval);
+ } else
+ *data = fpval.u.bits[hi];
+ return ret;
+}
+
+static int
+access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
+{
+ unsigned long *ptr, *rbs, *bspstore, ndirty, regnum;
+ struct switch_stack *sw;
+ struct unw_frame_info info;
+ struct pt_regs *pt;
+
+ pt = ia64_task_regs(child);
+ sw = (struct switch_stack *) (child->thread.ksp + 16);
+
+ if ((addr & 0x7) != 0) {
+ dprintk("ptrace: unaligned register address 0x%lx\n", addr);
+ return -1;
+ }
+
+ if (addr < PT_F127 + 16) {
+ /* accessing fph */
+ sync_fph(child);
+ ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
+ } else if (addr >= PT_F10 && addr < PT_F15 + 16) {
+ /* scratch registers untouched by kernel (saved in switch_stack) */
+ ptr = (unsigned long *) ((long) sw + addr - PT_NAT_BITS);
+ } else if (addr < PT_AR_LC + 8) {
+ /* preserved state: */
+ unsigned long nat_bits, scratch_unat, dummy = 0;
+ struct unw_frame_info info;
+ char nat = 0;
+ int ret;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ switch (addr) {
+ case PT_NAT_BITS:
+ if (write_access) {
+ nat_bits = *data;
+ scratch_unat = ia64_put_scratch_nat_bits(pt, nat_bits);
+ if (unw_set_ar(&info, UNW_AR_UNAT, scratch_unat) < 0) {
+ dprintk("ptrace: failed to set ar.unat\n");
+ return -1;
+ }
+ for (regnum = 4; regnum <= 7; ++regnum) {
+ unw_get_gr(&info, regnum, &dummy, &nat);
+ unw_set_gr(&info, regnum, dummy, (nat_bits >> regnum) & 1);
+ }
+ } else {
+ if (unw_get_ar(&info, UNW_AR_UNAT, &scratch_unat) < 0) {
+ dprintk("ptrace: failed to read ar.unat\n");
+ return -1;
+ }
+ nat_bits = ia64_get_scratch_nat_bits(pt, scratch_unat);
+ for (regnum = 4; regnum <= 7; ++regnum) {
+ unw_get_gr(&info, regnum, &dummy, &nat);
+ nat_bits |= (nat != 0) << regnum;
+ }
+ *data = nat_bits;
+ }
+ return 0;
+
+ case PT_R4: case PT_R5: case PT_R6: case PT_R7:
+ if (write_access) {
+ /* read NaT bit first: */
+ ret = unw_get_gr(&info, (addr - PT_R4)/8 + 4, data, &nat);
+ if (ret < 0)
+ return ret;
+ }
+ return unw_access_gr(&info, (addr - PT_R4)/8 + 4, data, &nat,
+ write_access);
+
+ case PT_B1: case PT_B2: case PT_B3: case PT_B4: case PT_B5:
+ return unw_access_br(&info, (addr - PT_B1)/8 + 1, data, write_access);
+
+ case PT_AR_LC:
+ return unw_access_ar(&info, UNW_AR_LC, data, write_access);
+
+ default:
+ if (addr >= PT_F2 && addr < PT_F5 + 16)
+ return access_fr(&info, (addr - PT_F2)/16 + 2, (addr & 8) != 0,
+ data, write_access);
+ else if (addr >= PT_F16 && addr < PT_F31 + 16)
+ return access_fr(&info, (addr - PT_F16)/16 + 16, (addr & 8) != 0,
+ data, write_access);
+ else {
+ dprintk("ptrace: rejecting access to register address 0x%lx\n",
+ addr);
+ return -1;
+ }
+ }
+ } else if (addr < PT_F9+16) {
+ /* scratch state */
+ switch (addr) {
+ case PT_AR_BSP:
+ if (write_access)
+ /* FIXME? Account for lack of ``cover'' in the syscall case */
+ return sync_kernel_register_backing_store(child, *data, 1);
+ else {
+ rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ bspstore = (unsigned long *) pt->ar_bspstore;
+ ndirty = ia64_rse_num_regs(rbs, rbs + (pt->loadrs >> 19));
+
+ /*
+ * If we're in a system call, no ``cover'' was done. So to
+ * make things uniform, we'll add the appropriate displacement
+ * onto bsp if we're in a system call.
+ */
+ if (!(pt->cr_ifs & (1UL << 63))) {
+ struct unw_frame_info info;
+ unsigned long cfm;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ unw_get_cfm(&info, &cfm);
+ ndirty += cfm & 0x7f;
+ }
+ *data = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
+ return 0;
+ }
+
+ case PT_CFM:
+ if (pt->cr_ifs & (1UL << 63)) {
+ if (write_access)
+ pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
+ | (*data & 0x3fffffffffUL));
+ else
+ *data = pt->cr_ifs & 0x3fffffffffUL;
+ } else {
+ /* kernel was entered through a system call */
+ unsigned long cfm;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ unw_get_cfm(&info, &cfm);
+ if (write_access)
+ unw_set_cfm(&info, ((cfm & ~0x3fffffffffU)
+ | (*data & 0x3fffffffffUL)));
+ else
+ *data = cfm;
+ }
+ return 0;
+
+ case PT_CR_IPSR:
+ if (write_access)
+ pt->cr_ipsr = ((*data & IPSR_WRITE_MASK)
+ | (pt->cr_ipsr & ~IPSR_WRITE_MASK));
+ else
+ *data = (pt->cr_ipsr & IPSR_READ_MASK);
+ return 0;
+
+ case PT_R1: case PT_R2: case PT_R3:
+ case PT_R8: case PT_R9: case PT_R10: case PT_R11:
+ case PT_R12: case PT_R13: case PT_R14: case PT_R15:
+ case PT_R16: case PT_R17: case PT_R18: case PT_R19:
+ case PT_R20: case PT_R21: case PT_R22: case PT_R23:
+ case PT_R24: case PT_R25: case PT_R26: case PT_R27:
+ case PT_R28: case PT_R29: case PT_R30: case PT_R31:
+ case PT_B0: case PT_B6: case PT_B7:
+ case PT_F6: case PT_F6+8: case PT_F7: case PT_F7+8:
+ case PT_F8: case PT_F8+8: case PT_F9: case PT_F9+8:
+ case PT_AR_BSPSTORE:
+ case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
+ case PT_AR_CCV: case PT_AR_FPSR: case PT_CR_IIP: case PT_PR:
+ /* scratch register */
+ ptr = (unsigned long *) ((long) pt + addr - PT_CR_IPSR);
+ break;
+
+ default:
+ /* disallow accessing anything else... */
+ dprintk("ptrace: rejecting access to register address 0x%lx\n",
+ addr);
+ return -1;
+ }
+ } else {
+ /* access debug registers */
+
+ if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
+ child->thread.flags |= IA64_THREAD_DBG_VALID;
+ memset(child->thread.dbr, 0, sizeof(child->thread.dbr));
+ memset(child->thread.ibr, 0, sizeof( child->thread.ibr));
+ }
+ if (addr >= PT_IBR) {
+ regnum = (addr - PT_IBR) >> 3;
+ ptr = &child->thread.ibr[0];
+ } else {
+ regnum = (addr - PT_DBR) >> 3;
+ ptr = &child->thread.dbr[0];
+ }
+
+ if (regnum >= 8) {
+ dprintk("ptrace: rejecting access to register address 0x%lx\n", addr);
+ return -1;
+ }
+
+ ptr += regnum;
+ }
+ if (write_access)
+ *ptr = *data;
+ else
+ *data = *ptr;
+ return 0;
+}
+
+#else /* !CONFIG_IA64_NEW_UNWIND */
+
static int
access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
{
@@ -486,6 +822,13 @@
sw = (struct switch_stack *) pt - 1;
switch (addr) {
+ case PT_NAT_BITS:
+ if (write_access)
+ ia64_put_nat_bits(pt, sw, *data);
+ else
+ *data = ia64_get_nat_bits(pt, sw);
+ return 0;
+
case PT_AR_BSP:
if (write_access)
/* FIXME? Account for lack of ``cover'' in the syscall case */
@@ -508,9 +851,6 @@
case PT_CFM:
if (write_access) {
- pt = ia64_task_regs(child);
- sw = (struct switch_stack *) pt - 1;
-
if (pt->cr_ifs & (1UL << 63))
pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
| (*data & 0x3fffffffffUL));
@@ -545,18 +885,26 @@
case PT_R28: case PT_R29: case PT_R30: case PT_R31:
case PT_B0: case PT_B1: case PT_B2: case PT_B3:
case PT_B4: case PT_B5: case PT_B6: case PT_B7:
- case PT_F2: case PT_F3:
- case PT_F4: case PT_F5: case PT_F6: case PT_F7:
- case PT_F8: case PT_F9: case PT_F10: case PT_F11:
- case PT_F12: case PT_F13: case PT_F14: case PT_F15:
- case PT_F16: case PT_F17: case PT_F18: case PT_F19:
- case PT_F20: case PT_F21: case PT_F22: case PT_F23:
- case PT_F24: case PT_F25: case PT_F26: case PT_F27:
- case PT_F28: case PT_F29: case PT_F30: case PT_F31:
- case PT_AR_LC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
- case PT_AR_CCV: case PT_AR_FPSR:
- case PT_CR_IIP: case PT_PR:
- ptr = (unsigned long *) ((long) sw + addr - PT_PRI_UNAT);
+ case PT_F2: case PT_F2+8: case PT_F3: case PT_F3+8:
+ case PT_F4: case PT_F4+8: case PT_F5: case PT_F5+8:
+ case PT_F6: case PT_F6+8: case PT_F7: case PT_F7+8:
+ case PT_F8: case PT_F8+8: case PT_F9: case PT_F9+8:
+ case PT_F10: case PT_F10+8: case PT_F11: case PT_F11+8:
+ case PT_F12: case PT_F12+8: case PT_F13: case PT_F13+8:
+ case PT_F14: case PT_F14+8: case PT_F15: case PT_F15+8:
+ case PT_F16: case PT_F16+8: case PT_F17: case PT_F17+8:
+ case PT_F18: case PT_F18+8: case PT_F19: case PT_F19+8:
+ case PT_F20: case PT_F20+8: case PT_F21: case PT_F21+8:
+ case PT_F22: case PT_F22+8: case PT_F23: case PT_F23+8:
+ case PT_F24: case PT_F24+8: case PT_F25: case PT_F25+8:
+ case PT_F26: case PT_F26+8: case PT_F27: case PT_F27+8:
+ case PT_F28: case PT_F28+8: case PT_F29: case PT_F29+8:
+ case PT_F30: case PT_F30+8: case PT_F31: case PT_F31+8:
+ case PT_AR_BSPSTORE:
+ case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
+ case PT_AR_CCV: case PT_AR_FPSR: case PT_CR_IIP: case PT_PR:
+ case PT_AR_LC:
+ ptr = (unsigned long *) ((long) sw + addr - PT_NAT_BITS);
break;
default:
@@ -591,6 +939,8 @@
return 0;
}
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
asmlinkage long
sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data,
long arg4, long arg5, long arg6, long arg7, long stack)
@@ -613,17 +963,21 @@
ret = -ESRCH;
read_lock(&tasklist_lock);
- child = find_task_by_pid(pid);
+ {
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ }
read_unlock(&tasklist_lock);
if (!child)
goto out;
ret = -EPERM;
if (pid == 1) /* no messing around with init! */
- goto out;
+ goto out_tsk;
if (request == PTRACE_ATTACH) {
if (child == current)
- goto out;
+ goto out_tsk;
if ((!child->dumpable ||
(current->uid != child->euid) ||
(current->uid != child->suid) ||
@@ -632,10 +986,10 @@
(current->gid != child->sgid) ||
(!cap_issubset(child->cap_permitted, current->cap_permitted)) ||
(current->gid != child->gid)) && !capable(CAP_SYS_PTRACE))
- goto out;
+ goto out_tsk;
/* the same process cannot be attached many times */
if (child->flags & PF_PTRACED)
- goto out;
+ goto out_tsk;
child->flags |= PF_PTRACED;
if (child->p_pptr != current) {
unsigned long flags;
@@ -648,78 +1002,98 @@
}
send_sig(SIGSTOP, child, 1);
ret = 0;
- goto out;
+ goto out_tsk;
}
ret = -ESRCH;
if (!(child->flags & PF_PTRACED))
- goto out;
+ goto out_tsk;
if (child->state != TASK_STOPPED) {
if (request != PTRACE_KILL)
- goto out;
+ goto out_tsk;
}
if (child->p_pptr != current)
- goto out;
+ goto out_tsk;
switch (request) {
case PTRACE_PEEKTEXT:
case PTRACE_PEEKDATA: /* read word at location addr */
- if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)
- && atomic_read(&child->mm->mm_users) > 1)
- sync_thread_rbs(child, 0);
+ if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)) {
+ struct mm_struct *mm;
+ long do_sync;
+
+ task_lock(child);
+ {
+ mm = child->mm;
+ do_sync = mm && (atomic_read(&mm->mm_users) > 1);
+ }
+ task_unlock(child);
+ if (do_sync)
+ sync_thread_rbs(child, mm, 0);
+ }
ret = ia64_peek(regs, child, addr, &data);
if (ret == 0) {
ret = data;
regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
}
- goto out;
+ goto out_tsk;
case PTRACE_POKETEXT:
case PTRACE_POKEDATA: /* write the word at location addr */
- if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)
- && atomic_read(&child->mm->mm_users) > 1)
- sync_thread_rbs(child, 1);
+ if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)) {
+ struct mm_struct *mm;
+ long do_sync;
+
+ task_lock(child);
+ {
+ mm = child->mm;
+ do_sync = mm && (atomic_read(&mm->mm_users) > 1);
+ }
+ task_unlock(child);
+ if (do_sync)
+ sync_thread_rbs(child, mm, 1);
+ }
ret = ia64_poke(regs, child, addr, data);
- goto out;
+ goto out_tsk;
case PTRACE_PEEKUSR: /* read the word at addr in the USER area */
if (access_uarea(child, addr, &data, 0) < 0) {
ret = -EIO;
- goto out;
+ goto out_tsk;
}
ret = data;
regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
- goto out;
+ goto out_tsk;
case PTRACE_POKEUSR: /* write the word at addr in the USER area */
if (access_uarea(child, addr, &data, 1) < 0) {
ret = -EIO;
- goto out;
+ goto out_tsk;
}
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_GETSIGINFO:
ret = -EIO;
if (!access_ok(VERIFY_WRITE, data, sizeof (siginfo_t))
|| child->thread.siginfo == 0)
- goto out;
+ goto out_tsk;
copy_to_user((siginfo_t *) data, child->thread.siginfo, sizeof (siginfo_t));
ret = 0;
- goto out;
+ goto out_tsk;
break;
case PTRACE_SETSIGINFO:
ret = -EIO;
if (!access_ok(VERIFY_READ, data, sizeof (siginfo_t))
|| child->thread.siginfo == 0)
- goto out;
+ goto out_tsk;
copy_from_user(child->thread.siginfo, (siginfo_t *) data, sizeof (siginfo_t));
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
case PTRACE_CONT: /* restart after signal. */
ret = -EIO;
if (data > _NSIG)
- goto out;
+ goto out_tsk;
if (request == PTRACE_SYSCALL)
child->flags |= PF_TRACESYS;
else
@@ -735,7 +1109,7 @@
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_KILL:
/*
@@ -744,7 +1118,7 @@
* that it wants to exit.
*/
if (child->state == TASK_ZOMBIE) /* already dead */
- goto out;
+ goto out_tsk;
child->exit_code = SIGKILL;
/* make sure the single step/take-branch trap bits are not set: */
@@ -756,13 +1130,13 @@
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_SINGLESTEP: /* let child execute for one instruction */
case PTRACE_SINGLEBLOCK:
ret = -EIO;
if (data > _NSIG)
- goto out;
+ goto out_tsk;
child->flags &= ~PF_TRACESYS;
if (request == PTRACE_SINGLESTEP) {
@@ -778,12 +1152,12 @@
/* give it a chance to run. */
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_DETACH: /* detach a process that was attached. */
ret = -EIO;
if (data > _NSIG)
- goto out;
+ goto out_tsk;
child->flags &= ~(PF_PTRACED|PF_TRACESYS);
child->exit_code = data;
@@ -802,12 +1176,14 @@
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
default:
ret = -EIO;
- goto out;
+ goto out_tsk;
}
+ out_tsk:
+ free_task_struct(child);
out:
unlock_kernel();
return ret;
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-test1-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/setup.c Fri Jun 9 17:16:12 2000
@@ -36,6 +36,10 @@
#include <asm/efi.h>
#include <asm/mca.h>
+#ifdef CONFIG_BLK_DEV_RAM
+# include <linux/blk.h>
+#endif
+
extern char _end;
/* cpu_data[bootstrap_processor] is data for the bootstrap processor: */
@@ -127,11 +131,22 @@
* change APIs, they'd do things for the better. Grumble...
*/
bootmap_start = PAGE_ALIGN(__pa(&_end));
+ if (ia64_boot_param.initrd_size)
+ bootmap_start = PAGE_ALIGN(bootmap_start + ia64_boot_param.initrd_size);
bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn);
efi_memmap_walk(free_available_memory, 0);
reserve_bootmem(bootmap_start, bootmap_size);
+#ifdef CONFIG_BLK_DEV_INITRD
+ initrd_start = ia64_boot_param.initrd_start;
+ if (initrd_start) {
+ initrd_end = initrd_start+ia64_boot_param.initrd_size;
+ printk("Initial ramdisk at: 0x%p (%lu bytes)\n",
+ (void *) initrd_start, ia64_boot_param.initrd_size);
+ reserve_bootmem(virt_to_phys(initrd_start), ia64_boot_param.initrd_size);
+ }
+#endif
#if 0
/* XXX fix me */
init_mm.start_code = (unsigned long) &_stext;
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.4.0-test1-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/signal.c Fri Jun 9 17:16:43 2000
@@ -37,16 +37,26 @@
# define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0])
#endif
+struct sigscratch {
+#ifdef CONFIG_IA64_NEW_UNWIND
+ unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
+ unsigned long pad;
+#else
+ struct switch_stack sw;
+#endif
+ struct pt_regs pt;
+};
+
struct sigframe {
struct siginfo info;
struct sigcontext sc;
};
extern long sys_wait4 (int, int *, int, struct rusage *);
-extern long ia64_do_signal (sigset_t *, struct pt_regs *, long); /* forward decl */
+extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* forward decl */
long
-ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct pt_regs *pt)
+ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct sigscratch *scr)
{
sigset_t oldset, set;
@@ -72,18 +82,18 @@
* get saved in sigcontext by ia64_do_signal.
*/
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
- pt->r8 = -EINTR;
+ if (IS_IA32_PROCESS(&scr->pt)) {
+ scr->pt.r8 = -EINTR;
} else
#endif
{
- pt->r8 = EINTR;
- pt->r10 = -1;
+ scr->pt.r8 = EINTR;
+ scr->pt.r10 = -1;
}
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
- if (ia64_do_signal(&oldset, pt, 1))
+ if (ia64_do_signal(&oldset, scr, 1))
return -EINTR;
}
}
@@ -98,9 +108,8 @@
}
static long
-restore_sigcontext (struct sigcontext *sc, struct pt_regs *pt)
+restore_sigcontext (struct sigcontext *sc, struct sigscratch *scr)
{
- struct switch_stack *sw = (struct switch_stack *) pt - 1;
unsigned long ip, flags, nat, um, cfm;
long err;
@@ -111,28 +120,32 @@
err |= __get_user(ip, &sc->sc_ip); /* instruction pointer */
err |= __get_user(cfm, &sc->sc_cfm);
err |= __get_user(um, &sc->sc_um); /* user mask */
- err |= __get_user(pt->ar_rsc, &sc->sc_ar_rsc);
- err |= __get_user(pt->ar_ccv, &sc->sc_ar_ccv);
- err |= __get_user(pt->ar_unat, &sc->sc_ar_unat);
- err |= __get_user(pt->ar_fpsr, &sc->sc_ar_fpsr);
- err |= __get_user(pt->ar_pfs, &sc->sc_ar_pfs);
- err |= __get_user(pt->pr, &sc->sc_pr); /* predicates */
- err |= __get_user(pt->b0, &sc->sc_br[0]); /* b0 (rp) */
- err |= __get_user(pt->b6, &sc->sc_br[6]); /* b6 */
- err |= __get_user(pt->b7, &sc->sc_br[7]); /* b7 */
- err |= __copy_from_user(&pt->r1, &sc->sc_gr[1], 3*8); /* r1-r3 */
- err |= __copy_from_user(&pt->r8, &sc->sc_gr[8], 4*8); /* r8-r11 */
- err |= __copy_from_user(&pt->r12, &sc->sc_gr[12], 4*8); /* r12-r15 */
- err |= __copy_from_user(&pt->r16, &sc->sc_gr[16], 16*8); /* r16-r31 */
+ err |= __get_user(scr->pt.ar_rsc, &sc->sc_ar_rsc);
+ err |= __get_user(scr->pt.ar_ccv, &sc->sc_ar_ccv);
+ err |= __get_user(scr->pt.ar_unat, &sc->sc_ar_unat);
+ err |= __get_user(scr->pt.ar_fpsr, &sc->sc_ar_fpsr);
+ err |= __get_user(scr->pt.ar_pfs, &sc->sc_ar_pfs);
+ err |= __get_user(scr->pt.pr, &sc->sc_pr); /* predicates */
+ err |= __get_user(scr->pt.b0, &sc->sc_br[0]); /* b0 (rp) */
+ err |= __get_user(scr->pt.b6, &sc->sc_br[6]); /* b6 */
+ err |= __get_user(scr->pt.b7, &sc->sc_br[7]); /* b7 */
+ err |= __copy_from_user(&scr->pt.r1, &sc->sc_gr[1], 3*8); /* r1-r3 */
+ err |= __copy_from_user(&scr->pt.r8, &sc->sc_gr[8], 4*8); /* r8-r11 */
+ err |= __copy_from_user(&scr->pt.r12, &sc->sc_gr[12], 4*8); /* r12-r15 */
+ err |= __copy_from_user(&scr->pt.r16, &sc->sc_gr[16], 16*8); /* r16-r31 */
- pt->cr_ifs = cfm | (1UL << 63);
+ scr->pt.cr_ifs = cfm | (1UL << 63);
/* establish new instruction pointer: */
- pt->cr_iip = ip & ~0x3UL;
- ia64_psr(pt)->ri = ip & 0x3;
- pt->cr_ipsr = (pt->cr_ipsr & ~IA64_PSR_UM) | (um & IA64_PSR_UM);
-
- ia64_put_nat_bits (pt, sw, nat); /* restore the original scratch NaT bits */
+ scr->pt.cr_iip = ip & ~0x3UL;
+ ia64_psr(&scr->pt)->ri = ip & 0x3;
+ scr->pt.cr_ipsr = (scr->pt.cr_ipsr & ~IA64_PSR_UM) | (um & IA64_PSR_UM);
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+ scr->scratch_unat = ia64_put_scratch_nat_bits(&scr->pt, nat);
+#else
+ ia64_put_nat_bits(&scr->pt, &scr->sw, nat); /* restore the original scratch NaT bits */
+#endif
if (flags & IA64_SC_FLAG_FPH_VALID) {
struct task_struct *fpu_owner = ia64_get_fpu_owner();
@@ -186,15 +199,8 @@
}
}
-/*
- * When we get here, ((struct switch_stack *) pt - 1) is a
- * switch_stack frame that has no defined value. Upon return, we
- * expect sw->caller_unat to contain the new unat value. The reason
- * we use a full switch_stack frame is so everything is symmetric
- * with ia64_do_signal().
- */
long
-ia64_rt_sigreturn (struct pt_regs *pt)
+ia64_rt_sigreturn (struct sigscratch *scr)
{
extern char ia64_strace_leave_kernel, ia64_leave_kernel;
struct sigcontext *sc;
@@ -202,7 +208,7 @@
sigset_t set;
long retval;
- sc = &((struct sigframe *) (pt->r12 + 16))->sc;
+ sc = &((struct sigframe *) (scr->pt.r12 + 16))->sc;
/*
* When we return to the previously executing context, r8 and
@@ -234,18 +240,18 @@
recalc_sigpending(current);
spin_unlock_irq(&current->sigmask_lock);
- if (restore_sigcontext(sc, pt))
+ if (restore_sigcontext(sc, scr))
goto give_sigsegv;
#if DEBUG_SIG
printk("SIG return (%s:%d): sp=%lx ip=%lx\n",
- current->comm, current->pid, pt->r12, pt->cr_iip);
+ current->comm, current->pid, scr->pt.r12, scr->pt.cr_iip);
#endif
/*
* It is more difficult to avoid calling this function than to
* call it and ignore errors.
*/
- do_sigaltstack(&sc->sc_stack, 0, pt->r12);
+ do_sigaltstack(&sc->sc_stack, 0, scr->pt.r12);
return retval;
give_sigsegv:
@@ -266,14 +272,13 @@
* trampoline starts. Everything else is done at the user-level.
*/
static long
-setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct pt_regs *pt)
+setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct sigscratch *scr)
{
- struct switch_stack *sw = (struct switch_stack *) pt - 1;
struct task_struct *fpu_owner = ia64_get_fpu_owner();
unsigned long flags = 0, ifs, nat;
long err;
- ifs = pt->cr_ifs;
+ ifs = scr->pt.cr_ifs;
if (on_sig_stack((unsigned long) sc))
flags |= IA64_SC_FLAG_ONSTACK;
@@ -293,46 +298,49 @@
* Note: sw->ar_unat is UNDEFINED unless the process is being
* PTRACED. However, this is OK because the NaT bits of the
* preserved registers (r4-r7) are never being looked at by
- * the signal handler (register r4-r7 are used instead).
+ * the signal handler (registers r4-r7 are used instead).
*/
- nat = ia64_get_nat_bits(pt, sw);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ nat = ia64_get_scratch_nat_bits(&scr->pt, scr->scratch_unat);
+#else
+ nat = ia64_get_nat_bits(&scr->pt, &scr->sw);
+#endif
err = __put_user(flags, &sc->sc_flags);
err |= __put_user(nat, &sc->sc_nat);
err |= PUT_SIGSET(mask, &sc->sc_mask);
- err |= __put_user(pt->cr_ipsr & IA64_PSR_UM, &sc->sc_um);
- err |= __put_user(pt->ar_rsc, &sc->sc_ar_rsc);
- err |= __put_user(pt->ar_ccv, &sc->sc_ar_ccv);
- err |= __put_user(pt->ar_unat, &sc->sc_ar_unat); /* ar.unat */
- err |= __put_user(pt->ar_fpsr, &sc->sc_ar_fpsr); /* ar.fpsr */
- err |= __put_user(pt->ar_pfs, &sc->sc_ar_pfs);
- err |= __put_user(pt->pr, &sc->sc_pr); /* predicates */
- err |= __put_user(pt->b0, &sc->sc_br[0]); /* b0 (rp) */
- err |= __put_user(pt->b6, &sc->sc_br[6]); /* b6 */
- err |= __put_user(pt->b7, &sc->sc_br[7]); /* b7 */
-
- err |= __copy_to_user(&sc->sc_gr[1], &pt->r1, 3*8); /* r1-r3 */
- err |= __copy_to_user(&sc->sc_gr[8], &pt->r8, 4*8); /* r8-r11 */
- err |= __copy_to_user(&sc->sc_gr[12], &pt->r12, 4*8); /* r12-r15 */
- err |= __copy_to_user(&sc->sc_gr[16], &pt->r16, 16*8); /* r16-r31 */
+ err |= __put_user(scr->pt.cr_ipsr & IA64_PSR_UM, &sc->sc_um);
+ err |= __put_user(scr->pt.ar_rsc, &sc->sc_ar_rsc);
+ err |= __put_user(scr->pt.ar_ccv, &sc->sc_ar_ccv);
+ err |= __put_user(scr->pt.ar_unat, &sc->sc_ar_unat); /* ar.unat */
+ err |= __put_user(scr->pt.ar_fpsr, &sc->sc_ar_fpsr); /* ar.fpsr */
+ err |= __put_user(scr->pt.ar_pfs, &sc->sc_ar_pfs);
+ err |= __put_user(scr->pt.pr, &sc->sc_pr); /* predicates */
+ err |= __put_user(scr->pt.b0, &sc->sc_br[0]); /* b0 (rp) */
+ err |= __put_user(scr->pt.b6, &sc->sc_br[6]); /* b6 */
+ err |= __put_user(scr->pt.b7, &sc->sc_br[7]); /* b7 */
+
+ err |= __copy_to_user(&sc->sc_gr[1], &scr->pt.r1, 3*8); /* r1-r3 */
+ err |= __copy_to_user(&sc->sc_gr[8], &scr->pt.r8, 4*8); /* r8-r11 */
+ err |= __copy_to_user(&sc->sc_gr[12], &scr->pt.r12, 4*8); /* r12-r15 */
+ err |= __copy_to_user(&sc->sc_gr[16], &scr->pt.r16, 16*8); /* r16-r31 */
- err |= __put_user(pt->cr_iip + ia64_psr(pt)->ri, &sc->sc_ip);
- err |= __put_user(pt->r12, &sc->sc_gr[12]); /* r12 */
+ err |= __put_user(scr->pt.cr_iip + ia64_psr(&scr->pt)->ri, &sc->sc_ip);
return err;
}
static long
-setup_frame (int sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *set, struct pt_regs *pt)
+setup_frame (int sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *set,
+ struct sigscratch *scr)
{
- struct switch_stack *sw = (struct switch_stack *) pt - 1;
extern char ia64_sigtramp[], __start_gate_section[];
unsigned long tramp_addr, new_rbs = 0;
struct sigframe *frame;
struct siginfo si;
long err;
- frame = (void *) pt->r12;
+ frame = (void *) scr->pt.r12;
tramp_addr = GATE_ADDR + (ia64_sigtramp - __start_gate_section);
if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && !on_sig_stack((unsigned long) frame)) {
new_rbs = (current->sas_ss_sp + sizeof(long) - 1) & ~(sizeof(long) - 1);
@@ -348,31 +356,39 @@
err |= __put_user(current->sas_ss_sp, &frame->sc.sc_stack.ss_sp);
err |= __put_user(current->sas_ss_size, &frame->sc.sc_stack.ss_size);
- err |= __put_user(sas_ss_flags(pt->r12), &frame->sc.sc_stack.ss_flags);
- err |= setup_sigcontext(&frame->sc, set, pt);
+ err |= __put_user(sas_ss_flags(scr->pt.r12), &frame->sc.sc_stack.ss_flags);
+ err |= setup_sigcontext(&frame->sc, set, scr);
if (err)
goto give_sigsegv;
- pt->r12 = (unsigned long) frame - 16; /* new stack pointer */
- pt->r2 = sig; /* signal number */
- pt->r3 = (unsigned long) ka->sa.sa_handler; /* addr. of handler's proc. descriptor */
- pt->r15 = new_rbs;
- pt->ar_fpsr = FPSR_DEFAULT; /* reset fpsr for signal handler */
- pt->cr_iip = tramp_addr;
- ia64_psr(pt)->ri = 0; /* start executing in first slot */
+ scr->pt.r12 = (unsigned long) frame - 16; /* new stack pointer */
+ scr->pt.r2 = sig; /* signal number */
+ scr->pt.r3 = (unsigned long) ka->sa.sa_handler; /* addr. of handler's proc desc */
+ scr->pt.r15 = new_rbs;
+ scr->pt.ar_fpsr = FPSR_DEFAULT; /* reset fpsr for signal handler */
+ scr->pt.cr_iip = tramp_addr;
+ ia64_psr(&scr->pt)->ri = 0; /* start executing in first slot */
+#ifdef CONFIG_IA64_NEW_UNWIND
+ /*
+ * Note: this affects only the NaT bits of the scratch regs
+ * (the ones saved in pt_regs), which is exactly what we want.
+ */
+ scr->scratch_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+#else
/*
* Note: this affects only the NaT bits of the scratch regs
- * (the ones saved in pt_regs, which is exactly what we want.
+ * (the ones saved in pt_regs), which is exactly what we want.
* The NaT bits for the preserved regs (r4-r7) are in
* sw->ar_unat iff this process is being PTRACED.
*/
- sw->caller_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+ scr->sw.caller_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+#endif
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sig=%d sp=%lx ip=%lx handler=%lx\n",
- current->comm, current->pid, sig, pt->r12, pt->cr_iip, pt->r3);
+ current->comm, current->pid, sig, scr->pt.r12, scr->pt.cr_iip, scr->pt.r3);
#endif
return 1;
@@ -391,17 +407,17 @@
static long
handle_signal (unsigned long sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *oldset,
- struct pt_regs *pt)
+ struct sigscratch *scr)
{
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
+ if (IS_IA32_PROCESS(&scr->pt)) {
/* send signal to IA-32 process */
- if (!ia32_setup_frame1(sig, ka, info, oldset, pt))
+ if (!ia32_setup_frame1(sig, ka, info, oldset, &scr->pt))
return 0;
} else
#endif
/* send signal to IA-64 process */
- if (!setup_frame(sig, ka, info, oldset, pt))
+ if (!setup_frame(sig, ka, info, oldset, scr))
return 0;
if (ka->sa.sa_flags & SA_ONESHOT)
@@ -418,12 +434,6 @@
}
/*
- * When we get here, `pt' points to struct pt_regs and ((struct
- * switch_stack *) pt - 1) points to a switch stack structure.
- * HOWEVER, in the normal case, the ONLY value valid in the
- * switch_stack is the caller_unat field. The entire switch_stack is
- * valid ONLY if current->flags has PF_PTRACED set.
- *
* Note that `init' is a special process: it doesn't get signals it
* doesn't want to handle. Thus you cannot kill init even with a
* SIGKILL even by mistake.
@@ -433,26 +443,26 @@
* user-level signal handling stack-frames in one go after that.
*/
long
-ia64_do_signal (sigset_t *oldset, struct pt_regs *pt, long in_syscall)
+ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
{
struct k_sigaction *ka;
siginfo_t info;
long restart = in_syscall;
- long errno = pt->r8;
+ long errno = scr->pt.r8;
/*
* In the ia64_leave_kernel code path, we want the common case
* to go fast, which is why we may in certain cases get here
* from kernel mode. Just return without doing anything if so.
*/
- if (!user_mode(pt))
+ if (!user_mode(&scr->pt))
return 0;
if (!oldset)
oldset = &current->blocked;
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
+ if (IS_IA32_PROCESS(&scr->pt)) {
if (in_syscall) {
if (errno >= 0)
restart = 0;
@@ -461,7 +471,7 @@
}
} else
#endif
- if (pt->r10 != -1) {
+ if (scr->pt.r10 != -1) {
/*
* A system calls has to be restarted only if one of
* the error codes ERESTARTNOHAND, ERESTARTSYS, or
@@ -555,7 +565,7 @@
case SIGQUIT: case SIGILL: case SIGTRAP:
case SIGABRT: case SIGFPE: case SIGSEGV:
case SIGBUS: case SIGSYS: case SIGXCPU: case SIGXFSZ:
- if (do_coredump(signr, pt))
+ if (do_coredump(signr, &scr->pt))
exit_code |= 0x80;
/* FALLTHRU */
@@ -575,29 +585,29 @@
if ((ka->sa.sa_flags & SA_RESTART) == 0) {
case ERESTARTNOHAND:
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt))
- pt->r8 = -EINTR;
+ if (IS_IA32_PROCESS(&scr->pt))
+ scr->pt.r8 = -EINTR;
else
#endif
- pt->r8 = EINTR;
- /* note: pt->r10 is already -1 */
+ scr->pt.r8 = EINTR;
+ /* note: scr->pt.r10 is already -1 */
break;
}
case ERESTARTNOINTR:
-#ifdef CONFIG_IA32_SUPPOT
- if (IS_IA32_PROCESS(pt)) {
- pt->r8 = pt->r1;
- pt->cr_iip -= 2;
+#ifdef CONFIG_IA32_SUPPORT
+ if (IS_IA32_PROCESS(&scr->pt)) {
+ scr->pt.r8 = scr->pt.r1;
+ scr->pt.cr_iip -= 2;
} else
#endif
- ia64_decrement_ip(pt);
+ ia64_decrement_ip(&scr->pt);
}
}
/* Whee! Actually deliver the signal. If the
delivery failed, we need to continue to iterate in
this loop so we can deliver the SIGSEGV... */
- if (handle_signal(signr, ka, &info, oldset, pt))
+ if (handle_signal(signr, ka, &info, oldset, scr))
return 1;
}
@@ -606,9 +616,9 @@
/* Restart the system call - no handlers present */
if (errno == ERESTARTNOHAND || errno == ERESTARTSYS || errno == ERESTARTNOINTR) {
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
- pt->r8 = pt->r1;
- pt->cr_iip -= 2;
+ if (IS_IA32_PROCESS(&scr->pt)) {
+ scr->pt.r8 = scr->pt.r1;
+ scr->pt.cr_iip -= 2;
} else
#endif
/*
@@ -617,7 +627,7 @@
* is adjust ip so that the "break"
* instruction gets re-executed.
*/
- ia64_decrement_ip(pt);
+ ia64_decrement_ip(&scr->pt);
}
}
return 0;
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-test1-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/smp.c Fri Jun 9 17:49:36 2000
@@ -81,7 +81,6 @@
#ifndef CONFIG_ITANIUM_PTCG
# define IPI_FLUSH_TLB 3
#endif /*!CONFIG_ITANIUM_PTCG */
-#define IPI_KDB_INTERRUPT 4
/*
* Setup routine for controlling SMP activation
@@ -133,24 +132,17 @@
}
-static inline void
-pointer_unlock(void **lock, void **data)
-{
- *data = *lock;
- *lock = NULL;
-}
-
static inline int
pointer_lock(void *lock, void *data, int retry)
{
again:
- if (cmpxchg_acq(lock, 0, data) == 0)
+ if (cmpxchg_acq((void **) lock, 0, data) == 0)
return 0;
if (!retry)
return -EBUSY;
- while (*(void**) lock)
+ while (*(void **) lock)
;
goto again;
@@ -191,13 +183,13 @@
int wait;
/* release the 'pointer lock' */
- pointer_unlock((void **) &smp_call_function_data, (void **) &data);
+ data = smp_call_function_data;
func = data->func;
info = data->info;
wait = data->wait;
mb();
- atomic_dec (&data->unstarted_count);
+ atomic_dec(&data->unstarted_count);
/* At this point the structure may be gone unless wait is true. */
(*func)(info);
@@ -205,7 +197,7 @@
/* Notify the sending CPU that the task is done. */
mb();
if (wait)
- atomic_dec (&data->unfinished_count);
+ atomic_dec(&data->unfinished_count);
}
break;
@@ -344,41 +336,35 @@
{
struct smp_call_struct data;
long timeout;
- static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+ int cpus = smp_num_cpus - 1;
+
+ if (cpus == 0)
+ return 0;
data.func = func;
data.info = info;
data.wait = wait;
- atomic_set(&data.unstarted_count, smp_num_cpus - 1);
- atomic_set(&data.unfinished_count, smp_num_cpus - 1);
+ atomic_set(&data.unstarted_count, cpus);
+ atomic_set(&data.unfinished_count, cpus);
if (pointer_lock(&smp_call_function_data, &data, retry))
return -EBUSY;
- smp_call_function_data = &data;
- spin_unlock (&lock);
- data.func = func;
- data.info = info;
- atomic_set (&data.unstarted_count, smp_num_cpus - 1);
- data.wait = wait;
- if (wait)
- atomic_set (&data.unfinished_count, smp_num_cpus - 1);
-
/* Send a message to all other CPUs and wait for them to respond */
send_IPI_allbutself(IPI_CALL_FUNC);
/* Wait for response */
timeout = jiffies + HZ;
- while ( (atomic_read (&data.unstarted_count) > 0) &&
- time_before (jiffies, timeout) )
- barrier ();
- if (atomic_read (&data.unstarted_count) > 0) {
+ while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
+ barrier();
+ if (atomic_read(&data.unstarted_count) > 0) {
smp_call_function_data = NULL;
return -ETIMEDOUT;
}
if (wait)
- while (atomic_read (&data.unfinished_count) > 0)
- barrier ();
+ while (atomic_read(&data.unfinished_count) > 0)
+ barrier();
+ /* unlock pointer */
smp_call_function_data = NULL;
return 0;
}
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c linux-2.4.0-test1-lia/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/sys_ia64.c Fri Jun 9 17:18:00 2000
@@ -156,6 +156,9 @@
{
struct pt_regs *regs = (struct pt_regs *) &stack;
+ if ((off & ~PAGE_MASK) != 0)
+ return -EINVAL;
+
addr = do_mmap2(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
if (!IS_ERR(addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.0-test1-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/time.c Fri Jun 9 17:18:07 2000
@@ -162,6 +162,11 @@
*/
write_lock(&xtime_lock);
new_itm = itm.next[cpu].count;
+
+ if (!time_after(ia64_get_itc(), new_itm))
+ printk("Oops: timer tick before it's due (itc=%lx,itm=%lx)\n",
+ ia64_get_itc(), new_itm);
+
while (1) {
/*
* Do kernel PC profiling here. We multiply the
@@ -220,7 +225,7 @@
ia64_set_itm(new_itm);
}
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_IA64_SOFTSDV_HACKS)
/*
* Interrupts must be disabled before calling this routine.
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.0-test1-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/traps.c Fri Jun 9 17:49:52 2000
@@ -36,10 +36,6 @@
#include <linux/init.h>
#include <linux/sched.h>
-#ifdef CONFIG_KDB
-#include <linux/kdb.h>
-#endif
-
#include <asm/ia32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
@@ -92,13 +88,6 @@
}
printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
-
-#ifdef CONFIG_KDB
- while (1) {
- kdb(KDB_REASON_PANIC, 0, regs);
- printk("Cant go anywhere from Panic!\n");
- }
-#endif
show_regs(regs);
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-test1-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Fri Jun 9 17:38:59 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/unwind.c Fri Jun 9 17:18:53 2000
@@ -49,7 +49,7 @@
#define UNW_LOG_HASH_SIZE (UNW_LOG_CACHE_SIZE + 1)
#define UNW_HASH_SIZE (1 << UNW_LOG_HASH_SIZE)
-#define UNW_DEBUG 0
+#define UNW_DEBUG 1
#define UNW_STATS 0 /* WARNING: this disabled interrupts for long time-spans!! */
#if UNW_DEBUG
@@ -191,8 +191,10 @@
struct unw_ireg *ireg;
struct pt_regs *pt;
- if ((unsigned) regnum - 1 >= 127)
+ if ((unsigned) regnum - 1 >= 127) {
+ dprintk("unwind: trying to access non-existent r%u\n", regnum);
return -1;
+ }
if (regnum < 32) {
if (regnum >= 4 && regnum <= 7) {
@@ -238,7 +240,12 @@
nat_addr = ia64_rse_rnat_addr(addr);
if ((unsigned long) addr < info->regstk.limit
|| (unsigned long) addr >= info->regstk.top)
+ {
+ dprintk("unwind: %lx outside of regstk "
+ "[0x%lx-0x%lx)\n", addr,
+ info->regstk.limit, info->regstk.top);
return -1;
+ }
if ((unsigned long) nat_addr >= info->regstk.top)
nat_addr = &info->sw->ar_rnat;
nat_mask = (1UL << ia64_rse_slot_num(addr));
@@ -307,9 +314,12 @@
/* preserved: */
case 1: case 2: case 3: case 4: case 5:
addr = *(&info->b1 + (regnum - 1));
+ if (!addr)
+ addr = &info->sw->b1 + (regnum - 1);
break;
default:
+ dprintk("unwind: trying to access non-existent b%u\n", regnum);
return -1;
}
if (write)
@@ -325,8 +335,10 @@
struct ia64_fpreg *addr = 0;
struct pt_regs *pt;
- if ((unsigned) (regnum - 2) >= 30)
+ if ((unsigned) (regnum - 2) >= 30) {
+ dprintk("unwind: trying to access non-existent f%u\n", regnum);
return -1;
+ }
pt = (struct pt_regs *) info->sp - 1;
@@ -412,6 +424,7 @@
break;
default:
+ dprintk("unwind: trying to access non-existent ar%u\n", regnum);
return -1;
}
@@ -1327,7 +1340,8 @@
}
if (!e) {
/* no info, return default unwinder (leaf proc, no mem stack, no saved regs) */
- dprintk("unwind: no unwind info for ip=0x%lx\n", ip);
+ dprintk("unwind: no unwind info for ip=0x%lx (prev ip=0x%lx)\n", ip,
+ unw.cache[info->prev_script].ip);
sr.curr.reg[UNW_REG_RP].where = UNW_WHERE_BR;
sr.curr.reg[UNW_REG_RP].when = -1;
sr.curr.reg[UNW_REG_RP].val = 0;
@@ -1338,7 +1352,7 @@
return script;
}
- sr.when_target = (3*((ip & ~0xfUL) - (table->segment_base + e->start_offset))
+ sr.when_target = (3*((ip & ~0xfUL) - (table->segment_base + e->start_offset))/16
+ (ip & 0xfUL));
hdr = *(u64 *) (table->segment_base + e->info_offset);
dp = (u8 *) (table->segment_base + e->info_offset + 8);
@@ -1383,7 +1397,7 @@
case UNW_WHERE_FR: printk("f%lu", r->val); break;
case UNW_WHERE_BR: printk("b%lu", r->val); break;
case UNW_WHERE_SPREL: printk("[sp+0x%lx]", r->val); break;
- case UNW_WHERE_PSPREL: printk("[psp+0x%lx]", 0x10 - r->val); break;
+ case UNW_WHERE_PSPREL: printk("[psp+0x%lx]", r->val); break;
case UNW_WHERE_NONE:
printk("%s+0x%lx", unw.preg_name[r - sr.curr.reg], r->val);
break;
@@ -1531,6 +1545,7 @@
if (info->ip & (my_cpu_data.unimpl_va_mask | 0xf)) {
/* don't let obviously bad addresses pollute the cache */
+ dprintk("unwind: rejecting bad ip=0x%lx\n", info->ip);
info->rp = 0;
return -1;
}
@@ -1590,18 +1605,18 @@
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
- info->cfm = *info->pfs;
+ info->cfm = info->pfs;
/* restore the bsp: */
pr = info->pr_val;
num_regs = 0;
if ((info->flags & UNW_FLAG_INTERRUPT_FRAME)) {
if ((pr & (1UL << pNonSys)) != 0)
- num_regs = info->cfm & 0x7f; /* size of frame */
+ num_regs = *info->cfm & 0x7f; /* size of frame */
info->pfs = (unsigned long *) (info->sp + 16 + struct_offset(struct pt_regs, ar_pfs));
} else
- num_regs = (info->cfm >> 7) & 0x7f; /* size of locals */
+ num_regs = (*info->cfm >> 7) & 0x7f; /* size of locals */
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -num_regs);
if (info->bsp < info->regstk.limit || info->bsp > info->regstk.top) {
dprintk("unwind: bsp (0x%lx) out of range [0x%lx-0x%lx]\n",
@@ -1669,8 +1684,8 @@
info->memstk.top = stktop;
info->sw = sw;
info->sp = info->psp = (unsigned long) (sw + 1) - 16;
- info->cfm = sw->ar_pfs;
- sol = (info->cfm >> 7) & 0x7f;
+ info->cfm = &sw->ar_pfs;
+ sol = (*info->cfm >> 7) & 0x7f;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
info->ip = sw->b0;
info->pr_val = sw->pr;
@@ -1704,7 +1719,7 @@
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
- info->cfm = sw->ar_pfs;
+ info->cfm = &sw->ar_pfs;
info->ip = sw->b0;
#endif
}
@@ -1741,7 +1756,7 @@
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs(bsp, -sof);
- info->cfm = regs->cr_ifs;
+ info->cfm = &regs->cr_ifs;
info->ip = regs->cr_iip;
#endif
}
@@ -1777,7 +1792,7 @@
int
unw_unwind (struct unw_frame_info *info)
{
- unsigned long sol, cfm = info->cfm;
+ unsigned long sol, cfm = *info->cfm;
int is_nat;
sol = (cfm >> 7) & 0x7f; /* size of locals */
@@ -1796,16 +1811,16 @@
info->ip = read_reg(info, sol - 2, &is_nat);
if (is_nat || (info->ip & (my_cpu_data.unimpl_va_mask | 0xf)))
- /* don't let obviously bad addresses pollute the cache */
+ /* reject obviously bad addresses */
return -1;
+ info->cfm = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
cfm = read_reg(info, sol - 1, &is_nat);
if (is_nat)
return -1;
sol = (cfm >> 7) & 0x7f;
- info->cfm = cfm;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -sol);
return 0;
}
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-test1-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Fri Jun 9 17:38:59 2000
+++ linux-2.4.0-test1-lia/arch/ia64/mm/init.c Fri Jun 9 17:19:34 2000
@@ -183,6 +183,19 @@
}
void
+free_initrd_mem(unsigned long start, unsigned long end)
+{
+ if (start < end)
+ printk ("Freeing initrd memory: %ldkB freed\n", (end - start) >> 10);
+ for (; start < end; start += PAGE_SIZE) {
+ clear_bit(PG_reserved, &mem_map[MAP_NR(start)].flags);
+ set_page_count(&mem_map[MAP_NR(start)], 1);
+ free_page(start);
+ ++totalram_pages;
+ }
+}
+
+void
si_meminfo (struct sysinfo *val)
{
val->totalram = totalram_pages;
diff -urN linux-davidm/arch/ia64/vmlinux.lds.S linux-2.4.0-test1-lia/arch/ia64/vmlinux.lds.S
--- linux-davidm/arch/ia64/vmlinux.lds.S Fri Jun 9 17:38:59 2000
+++ linux-2.4.0-test1-lia/arch/ia64/vmlinux.lds.S Fri Jun 9 17:20:07 2000
@@ -46,6 +46,11 @@
{ *(__ex_table) }
__stop___ex_table = .;
+ __start___ksymtab = .; /* Kernel symbol table */
+ __ksymtab : AT(ADDR(__ksymtab) - PAGE_OFFSET)
+ { *(__ksymtab) }
+ __stop___ksymtab = .;
+
/* Unwind table */
ia64_unw_start = .;
.IA_64.unwind : AT(ADDR(.IA_64.unwind) - PAGE_OFFSET)
@@ -56,6 +61,8 @@
.rodata : AT(ADDR(.rodata) - PAGE_OFFSET)
{ *(.rodata) }
+ .kstrtab : AT(ADDR(.kstrtab) - PAGE_OFFSET)
+ { *(.kstrtab) }
.opd : AT(ADDR(.opd) - PAGE_OFFSET)
{ *(.opd) }
diff -urN linux-davidm/fs/proc/generic.c linux-2.4.0-test1-lia/fs/proc/generic.c
--- linux-davidm/fs/proc/generic.c Sun May 21 20:34:37 2000
+++ linux-2.4.0-test1-lia/fs/proc/generic.c Fri Jun 9 17:24:08 2000
@@ -42,7 +42,7 @@
#endif
/* 4K page size but our output routines use some slack for overruns */
-#define PROC_BLOCK_SIZE (3*1024)
+#define PROC_BLOCK_SIZE (PAGE_SIZE - 1024)
static ssize_t
proc_file_read(struct file * file, char * buf, size_t nbytes, loff_t *ppos)
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-test1-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/processor.h Fri Jun 9 17:24:33 2000
@@ -658,13 +658,16 @@
thread_saved_pc (struct thread_struct *t)
{
struct unw_frame_info info;
+ unsigned long ip;
+
/* XXX ouch: Linus, please pass the task pointer to thread_saved_pc() instead! */
struct task_struct *p = (void *) ((unsigned long) t - IA64_TASK_THREAD_OFFSET);
unw_init_from_blocked_task(&info, p);
if (unw_unwind(&info) < 0)
return 0;
- return unw_get_ip(&info);
+ unw_get_ip(&info, &ip);
+ return ip;
}
/*
diff -urN linux-davidm/include/asm-ia64/ptrace.h linux-2.4.0-test1-lia/include/asm-ia64/ptrace.h
--- linux-davidm/include/asm-ia64/ptrace.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/ptrace.h Fri Jun 9 17:24:41 2000
@@ -220,10 +220,17 @@
extern long ia64_peek (struct pt_regs *, struct task_struct *, unsigned long addr, long *val);
extern long ia64_poke (struct pt_regs *, struct task_struct *, unsigned long addr, long val);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ /* get nat bits for scratch registers such that bit N=1 iff scratch register rN is a NaT */
+ extern unsigned long ia64_get_scratch_nat_bits (struct pt_regs *pt, unsigned long scratch_unat);
+ /* put nat bits for scratch registers such that scratch register rN is a NaT iff bit N=1 */
+ extern unsigned long ia64_put_scratch_nat_bits (struct pt_regs *pt, unsigned long nat);
+#else
/* get nat bits for r1-r31 such that bit N=1 iff rN is a NaT */
extern long ia64_get_nat_bits (struct pt_regs *pt, struct switch_stack *sw);
/* put nat bits for r1-r31 such that rN is a NaT iff bit N=1 */
extern void ia64_put_nat_bits (struct pt_regs *pt, struct switch_stack *sw, unsigned long nat);
+#endif
extern void ia64_increment_ip (struct pt_regs *pt);
extern void ia64_decrement_ip (struct pt_regs *pt);
diff -urN linux-davidm/include/asm-ia64/ptrace_offsets.h linux-2.4.0-test1-lia/include/asm-ia64/ptrace_offsets.h
--- linux-davidm/include/asm-ia64/ptrace_offsets.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/ptrace_offsets.h Fri Jun 9 17:25:05 2000
@@ -118,7 +118,7 @@
#define PT_F126 0x05e0
#define PT_F127 0x05f0
/* switch stack: */
-#define PT_PRI_UNAT 0x0600
+#define PT_NAT_BITS 0x0600
#define PT_F2 0x0610
#define PT_F3 0x0620
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.4.0-test1-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/system.h Fri Jun 9 17:25:19 2000
@@ -54,6 +54,8 @@
__u16 num_pci_vectors; /* number of ACPI derived PCI IRQ's*/
__u64 pci_vectors; /* physical address of PCI data (pci_vector_struct)*/
__u64 fpswa; /* physical address of the fpswa interface */
+ __u64 initrd_start;
+ __u64 initrd_size;
} ia64_boot_param;
extern inline void
diff -urN linux-davidm/include/asm-ia64/unwind.h linux-2.4.0-test1-lia/include/asm-ia64/unwind.h
--- linux-davidm/include/asm-ia64/unwind.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/unwind.h Fri Jun 9 17:25:28 2000
@@ -54,9 +54,9 @@
unsigned long bsp;
unsigned long sp; /* stack pointer */
unsigned long psp; /* previous sp */
- unsigned long cfm;
unsigned long ip; /* instruction pointer */
unsigned long pr_val; /* current predicates */
+ unsigned long *cfm;
struct switch_stack *sw;
@@ -123,11 +123,21 @@
*/
extern int unw_unwind (struct unw_frame_info *info);
-#define unw_get_ip(info) ((info)->ip)
-#define unw_get_sp(info) ((unsigned long) (info)->sp)
-#define unw_get_psp(info) ((unsigned long) (info)->psp)
-#define unw_get_bsp(info) ((unsigned long) (info)->bsp)
-#define unw_get_cfm(info) ((info)->cfm)
+#define unw_get_ip(info,vp) ({*(vp) = (info)->ip; 0;})
+#define unw_get_sp(info,vp) ({*(vp) = (unsigned long) (info)->sp; 0;})
+#define unw_get_psp(info,vp) ({*(vp) = (unsigned long) (info)->psp; 0;})
+#define unw_get_bsp(info,vp) ({*(vp) = (unsigned long) (info)->bsp; 0;})
+#define unw_get_cfm(info,vp) ({*(vp) = *(info)->cfm; 0;})
+#define unw_set_cfm(info,val) ({*(info)->cfm = (val); 0;})
+
+static inline int
+unw_get_rp (struct unw_frame_info *info, unsigned long *val)
+{
+ if (!info->rp)
+ return -1;
+ *val = *info->rp;
+ return 0;
+}
extern int unw_access_gr (struct unw_frame_info *, int, unsigned long *, char *, int);
extern int unw_access_br (struct unw_frame_info *, int, unsigned long *, int);
@@ -135,11 +145,35 @@
extern int unw_access_ar (struct unw_frame_info *, int, unsigned long *, int);
extern int unw_access_pr (struct unw_frame_info *, unsigned long *, int);
-#define unw_set_gr(i,n,v,nat) unw_access_gr(i,n,v,nat,1)
-#define unw_set_br(i,n,v) unw_access_br(i,n,v,1)
-#define unw_set_fr(i,n,v) unw_access_fr(i,n,v,1)
-#define unw_set_ar(i,n,v) unw_access_ar(i,n,v,1)
-#define unw_set_pr(i,v) unw_access_ar(i,v,1)
+static inline int
+unw_set_gr (struct unw_frame_info *i, int n, unsigned long v, char nat)
+{
+ return unw_access_gr(i, n, &v, &nat, 1);
+}
+
+static inline int
+unw_set_br (struct unw_frame_info *i, int n, unsigned long v)
+{
+ return unw_access_br(i, n, &v, 1);
+}
+
+static inline int
+unw_set_fr (struct unw_frame_info *i, int n, struct ia64_fpreg v)
+{
+ return unw_access_fr(i, n, &v, 1);
+}
+
+static inline int
+unw_set_ar (struct unw_frame_info *i, int n, unsigned long v)
+{
+ return unw_access_ar(i, n, &v, 1);
+}
+
+static inline int
+unw_set_pr (struct unw_frame_info *i, unsigned long v)
+{
+ return unw_access_pr(i, &v, 1);
+}
#define unw_get_gr(i,n,v,nat) unw_access_gr(i,n,v,nat,0)
#define unw_get_br(i,n,v) unw_access_br(i,n,v,0)
* [Linux-ia64] kernel update (relative to v2.4.0-test1)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
2000-06-03 17:32 ` Manfred Spraul
2000-06-10 1:07 ` David Mosberger
@ 2000-06-10 1:11 ` David Mosberger
2000-07-14 21:37 ` [Linux-ia64] kernel update (relative to 2.4.0-test4) David Mosberger
` (212 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-06-10 1:11 UTC (permalink / raw)
To: linux-ia64
An updated kernel diff is available in the usual place:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/linux-2.4.0-test1-ia64-000609*
Summary of changes:
- Stephen Zeisset's module related fixes to ia64_ksyms.c and pci.c.
- Bill Nottingham's initrd additions.
- Takayoshi Kouchi's pointer-lock related SMP fixes.
- Jes Sorensen's mmap bug fix.
- New unwind support now almost works. Warning: don't enable
CONFIG_IA64_NEW_UNWIND unless you have a bleeding edge toolchain
with _all_ the unwind fixes. Even then you may not want to turn
it on as core-dump support hasn't been finished yet.
- Without the new unwind support, the kernel is now conservative
again and generates a switch_stack frame whenever there is a remote
possibility that it might be needed. This slows down several
signal related operations, but it will be fast again once
the new unwind support is complete.
I also added a printk whenever we get a timer tick before it was due.
Since that happens quite frequently, this quickly becomes annoying.
Look at it as an invitation to investigate the problem... ;-)
--david
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test1-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/entry.S Fri Jun 9 17:09:56 2000
@@ -117,14 +117,11 @@
mov psr.l=r17
;;
srlz.d
-
- movl r28=1f
- br.cond.sptk.many load_switch_stack
-1: UNW(.restore sp)
- adds sp=IA64_SWITCH_STACK_SIZE,sp // pop switch_stack
+ DO_LOAD_SWITCH_STACK( )
br.ret.sptk.few rp
END(ia64_switch_to)
+#ifndef CONFIG_IA64_NEW_UNWIND
/*
* Like save_switch_stack, but also save the stack frame that is active
* at the time this function is called.
@@ -135,6 +132,8 @@
DO_SAVE_SWITCH_STACK
br.ret.sptk.few rp
END(save_switch_stack_with_current_frame)
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
/*
* Note that interrupts are enabled during save_switch_stack and
* load_switch_stack. This means that we may get an interrupt with
@@ -343,7 +342,6 @@
;;
ld8.fill r4=[r2],16
ld8.fill r5=[r3],16
- mov b7=r28
;;
ld8.fill r6=[r2],16
ld8.fill r7=[r3],16
@@ -371,6 +369,19 @@
// also use it to preserve b6, which contains the syscall entry point.
//
GLOBAL_ENTRY(invoke_syscall_trace)
+#ifdef CONFIG_IA64_NEW_UNWIND
+ UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ alloc loc1=ar.pfs,8,3,0,0
+ mov loc0=rp
+ UNW(.body)
+ mov loc2=b6
+ ;;
+ br.call.sptk.few rp=syscall_trace
+.ret3: mov rp=loc0
+ mov ar.pfs=loc1
+ mov b6=loc2
+ br.ret.sptk.few rp
+#else /* !CONFIG_IA64_NEW_UNWIND */
UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
alloc loc1=ar.pfs,8,3,0,0
;; // WAW on CFM at the br.call
@@ -384,6 +395,7 @@
mov b6=loc2
;;
br.ret.sptk.few rp
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(invoke_syscall_trace)
//
@@ -802,112 +814,140 @@
// args get preserved, in case we need to restart a system call.
//
ENTRY(handle_signal_delivery)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+#ifdef CONFIG_IA64_NEW_UNWIND
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
mov r9=ar.unat
-
- // If the process is being ptraced, the signal may not actually be delivered to
- // the process. Instead, SIGCHLD will be sent to the parent. We need to
- // setup a switch_stack so ptrace can inspect the processes state if necessary.
- adds r2=IA64_TASK_FLAGS_OFFSET,r13
- ;;
- ld8 r2=[r2]
+ mov loc0=rp // save return address
+ .body
mov out0=0 // there is no "oldset"
- adds out1=16,sp // out1=&pt_regs
- ;;
+ adds out1=0,sp // out1=&sigscratch
(pSys) mov out2=1 // out2=1 => we're in a syscall
- tbit.nz p16,p17=r2,PF_PTRACED_BIT
-(p16) br.cond.spnt.many setup_switch_stack
;;
-back_from_setup_switch_stack:
(pNonSys) mov out2=0 // out2=0 => not a syscall
- adds r3=-IA64_SWITCH_STACK_SIZE+IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
-(p17) adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for (dummy) switch_stack
- ;;
-(p17) st8 [r3]=r9 // save ar.unat in sw->caller_unat
- mov loc0=rp // save return address
- UNW(.body)
+ .fframe 16
+ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
+ st8 [sp]=r9,-16 // allocate space for ar.unat and save it
br.call.sptk.few rp=ia64_do_signal
.ret11:
- adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ .restore sp
+ adds sp=16,sp // pop scratch stack space
;;
- ld8 r9=[r3] // load new unat from sw->caller_unat
+ ld8 r9=[sp] // load new unat from sw->caller_unat
mov rp=loc0
;;
-(p17) adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch_stack
-(p17) mov ar.unat=r9
-(p17) mov ar.pfs=loc1
-(p17) br.ret.sptk.many rp
-
- DO_LOAD_SWITCH_STACK( ) // restore the switch stack (ptrace may have modified it)
+ mov ar.unat=r9
+ mov ar.pfs=loc1
br.ret.sptk.many rp
- // NOT REACHED
-
-setup_switch_stack:
- UNW(.prologue)
- mov r16=loc1
+#else /* !CONFIG_IA64_NEW_UNWIND */
+ .prologue
+ alloc r16=ar.pfs,8,0,3,0 // preserve all eight input regs in case of syscall restart!
DO_SAVE_SWITCH_STACK
UNW(.body)
- br.cond.sptk.many back_from_setup_switch_stack
+ mov out0=0 // there is no "oldset"
+ adds out1=16,sp // out1=&sigscratch
+ .pred.rel.mutex pSys, pNonSys
+(pSys) mov out2=1 // out2=1 => we're in a syscall
+(pNonSys) mov out2=0 // out2=0 => not a syscall
+ br.call.sptk.few rp=ia64_do_signal
+.ret11:
+ // restore the switch stack (ptrace may have modified it)
+ DO_LOAD_SWITCH_STACK( )
+ br.ret.sptk.many rp
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(handle_signal_delivery)
GLOBAL_ENTRY(sys_rt_sigsuspend)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
- alloc loc1=ar.pfs,2,2,3,0
-
- // If the process is being ptraced, the signal may not actually be delivered to
- // the process. Instead, SIGCHLD will be sent to the parent. We need to
- // setup a switch_stack so ptrace can inspect the processes state if necessary.
- // Also, the process might not ptraced until stopped in sigsuspend, so this
- // isn't something that we can do conditionally based upon the value of
- // PF_PTRACED_BIT.
+#ifdef CONFIG_IA64_NEW_UNWIND
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
+ mov r9=ar.unat
+ mov loc0=rp // save return address
mov out0=in0 // mask
mov out1=in1 // sigsetsize
+ adds out2=0,sp // out2=&sigscratch
;;
- adds out2=16,sp // out1=&pt_regs
- mov r16=loc1
- DO_SAVE_SWITCH_STACK
- mov loc0=rp // save return address
- UNW(.body)
- br.call.sptk.many rp=ia64_rt_sigsuspend
+ .fframe 16
+ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
+ st8 [sp]=r9,-16 // allocate space for ar.unat and save it
+ .body
+ br.call.sptk.few rp=ia64_rt_sigsuspend
.ret12:
- adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ .restore sp
+ adds sp=16,sp // pop scratch stack space
;;
- ld8 r9=[r3] // load new unat from sw->caller_unat
+ ld8 r9=[sp] // load new unat from sw->caller_unat
mov rp=loc0
;;
+ mov ar.unat=r9
+ mov ar.pfs=loc1
+ br.ret.sptk.many rp
+#else /* !CONFIG_IA64_NEW_UNWIND */
+ UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+ alloc r16=ar.pfs,2,0,3,0
+ DO_SAVE_SWITCH_STACK
+ UNW(.body)
+
+ mov out0=in0 // mask
+ mov out1=in1 // sigsetsize
+ adds out2=16,sp // out1=&sigscratch
+ br.call.sptk.many rp=ia64_rt_sigsuspend
+.ret12:
// restore the switch stack (ptrace may have modified it)
- DO_LOAD_SWITCH_STACK(PT_REGS_UNWIND_INFO)
+ DO_LOAD_SWITCH_STACK( )
br.ret.sptk.many rp
- // NOT REACHED
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigsuspend)
ENTRY(sys_rt_sigreturn)
+#ifdef CONFIG_IA64_NEW_UNWIND
+ .regstk 0,0,3,0 // inherited from gate.s:invoke_sighandler()
+ PT_REGS_UNWIND_INFO
+ .prologue
+ PT_REGS_SAVES(16)
+ adds sp=-16,sp
+ .body
+ cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
+ ;;
+ adds out0=16,sp // out0 = &sigscratch
+ br.call.sptk.few rp=ia64_rt_sigreturn
+.ret13:
+ adds sp=16,sp // doesn't drop pt_regs, so don't mark it as restoring sp!
+ PT_REGS_UNWIND_INFO // instead, create a new body section with the smaller frame
+ ;;
+ ld8 r9=[sp] // load new ar.unat
+ mov b7=r8
+ ;;
+ mov ar.unat=r9
+ br b7
+#else /* !CONFIG_IA64_NEW_UNWIND */
.regstk 0,0,3,0 // inherited from gate.s:invoke_sighandler()
PT_REGS_UNWIND_INFO
- adds out0=16,sp // out0 = &pt_regs
UNW(.prologue)
UNW(.fframe IA64_PT_REGS_SIZE+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp rp, PT(CR_IIP)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp ar.pfs, PT(CR_IFS)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp ar.unat, PT(AR_UNAT)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp pr, PT(PR)+IA64_SWITCH_STACK_SIZE)
- adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for unat and padding
+ adds sp=-IA64_SWITCH_STACK_SIZE,sp
+ cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
;;
UNW(.body)
- cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
+
+ adds out0=16,sp // out0 = &sigscratch
br.call.sptk.few rp=ia64_rt_sigreturn
.ret13:
adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
;;
ld8 r9=[r3] // load new ar.unat
- mov rp=r8
+ mov b7=r8
;;
PT_REGS_UNWIND_INFO
adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch-stack frame
mov ar.unat=r9
- br rp
+ br b7
+#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.0-test1-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/ia64_ksyms.c Fri Jun 9 17:10:26 2000
@@ -5,20 +5,15 @@
#include <linux/config.h>
#include <linux/module.h>
-#include <asm/processor.h>
-EXPORT_SYMBOL(cpu_data);
-EXPORT_SYMBOL(kernel_thread);
-
-#include <asm/uaccess.h>
-EXPORT_SYMBOL(__copy_user);
-
#include <linux/string.h>
-EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL_NOVERS(memset);
EXPORT_SYMBOL(memcmp);
-EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL_NOVERS(memcpy);
+EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strlen);
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
@@ -29,9 +24,41 @@
EXPORT_SYMBOL(pci_alloc_consistent);
EXPORT_SYMBOL(pci_free_consistent);
+#include <linux/in6.h>
+#include <asm/checksum.h>
+EXPORT_SYMBOL(csum_partial_copy_nocheck);
+
#include <asm/irq.h>
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
+
+#include <asm/current.h>
+#include <asm/hardirq.h>
+EXPORT_SYMBOL(irq_stat);
+
+#include <asm/processor.h>
+EXPORT_SYMBOL(cpu_data);
+EXPORT_SYMBOL(kernel_thread);
+
+#ifdef CONFIG_SMP
+EXPORT_SYMBOL(synchronize_irq);
+
+#include <asm/smplock.h>
+EXPORT_SYMBOL(kernel_flag);
+
+#include <asm/system.h>
+EXPORT_SYMBOL(__global_sti);
+EXPORT_SYMBOL(__global_cli);
+EXPORT_SYMBOL(__global_save_flags);
+EXPORT_SYMBOL(__global_restore_flags);
+
+#endif
+
+#include <asm/uaccess.h>
+EXPORT_SYMBOL(__copy_user);
+
+#include <asm/unistd.h>
+EXPORT_SYMBOL(__ia64_syscall);
/* from arch/ia64/lib */
extern void __divdi3(void);
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.0-test1-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/mca_asm.S Fri Jun 9 17:23:02 2000
@@ -6,7 +6,6 @@
// 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp kstack,
// switch modes, jump to C INIT handler
//
-#include <asm/offsets.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
diff -urN linux-davidm/arch/ia64/kernel/pci.c linux-2.4.0-test1-lia/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/pci.c Fri Jun 9 17:23:13 2000
@@ -197,7 +197,7 @@
ranges->mem_end -= bus->resource[1]->start;
}
-int __init
+int
pcibios_enable_device (struct pci_dev *dev)
{
/* Not needed, since we enable all devices at startup. */
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.0-test1-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/process.c Fri Jun 9 17:23:40 2000
@@ -311,7 +311,12 @@
dst[12] = pt->r12; dst[13] = pt->r13; dst[14] = pt->r14; dst[15] = pt->r15;
memcpy(dst + 16, &pt->r16, 16*8); /* r16-r31 are contiguous */
+#ifdef CONFIG_IA64_NEW_UNWIND
+ printk("ia64_elf_core_copy_regs: fix me, please?");
+ dst[32] = 0;
+#else
dst[32] = ia64_get_nat_bits(pt, sw);
+#endif
dst[33] = pt->pr;
/* branch regs: */
@@ -332,6 +337,10 @@
struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
struct task_struct *fpu_owner = ia64_get_fpu_owner();
+#ifdef CONFIG_IA64_NEW_UNWIND
+ printk("dump_fpu: fix me, please?");
+#endif
+
memset(dst, 0, sizeof (dst)); /* don't leak any "random" bits */
/* f0 is 0.0 */ /* f1 is 1.0 */ dst[2] = sw->f2; dst[3] = sw->f3;
@@ -440,7 +449,7 @@
do {
if (unw_unwind(&info) < 0)
return 0;
- ip = unw_get_ip(&info);
+ unw_get_ip(&info, &ip);
if (ip < first_sched || ip >= last_sched)
return ip;
} while (count++ < 16);
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.0-test1-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/ptrace.c Fri Jun 9 17:15:55 2000
@@ -33,6 +33,89 @@
#define IPSR_WRITE_MASK 0x000006a00100003eUL
#define IPSR_READ_MASK IPSR_WRITE_MASK
+#ifdef CONFIG_IA64_NEW_UNWIND
+
+#define PTRACE_DEBUG 1
+
+#if PTRACE_DEBUG
+# define dprintk(format...) printk(format)
+# define inline
+#else
+# define dprintk(format...)
+#endif
+
+static int
+unwind_to_user (struct unw_frame_info *info, struct task_struct *child)
+{
+ unsigned long ip;
+
+ unw_init_from_blocked_task(info, child);
+ while (unw_unwind(info) >= 0) {
+ if (unw_get_rp(info, &ip) < 0) {
+ unw_get_ip(info, &ip);
+ dprintk("ptrace: failed to read return pointer (ip=0x%lx)\n", ip);
+ return -1;
+ }
+ if (ip < TASK_SIZE)
+ return 0;
+ }
+ unw_get_ip(info, &ip);
+ dprintk("ptrace: failed to unwind to user-level (ip=0x%lx)\n", ip);
+ return -1;
+}
+
+/*
+ * Collect the NaT bits for r1-r31 from scratch_unat and return a NaT
+ * bitset where bit i is set iff the NaT bit of register i is set.
+ */
+unsigned long
+ia64_get_scratch_nat_bits (struct pt_regs *pt, unsigned long scratch_unat)
+{
+# define GET_BITS(first, last, unat) \
+ ({ \
+ unsigned long bit = ia64_unat_pos(&pt->r##first); \
+ unsigned long mask = ((1UL << (last - first + 1)) - 1) << first; \
+ (ia64_rotl(unat, first) >> bit) & mask; \
+ })
+ unsigned long val;
+
+ val = GET_BITS( 1, 3, scratch_unat);
+ val |= GET_BITS(12, 15, scratch_unat);
+ val |= GET_BITS( 8, 11, scratch_unat);
+ val |= GET_BITS(16, 31, scratch_unat);
+ return val;
+
+# undef GET_BITS
+}
+
+/*
+ * Set the NaT bits for the scratch registers according to NAT and
+ * return the resulting unat (assuming the scratch registers are
+ * stored in PT).
+ */
+unsigned long
+ia64_put_scratch_nat_bits (struct pt_regs *pt, unsigned long nat)
+{
+ unsigned long scratch_unat;
+
+# define PUT_BITS(first, last, nat) \
+ ({ \
+ unsigned long bit = ia64_unat_pos(&pt->r##first); \
+ unsigned long mask = ((1UL << (last - first + 1)) - 1) << bit; \
+ (ia64_rotr(nat, first) << bit) & mask; \
+ })
+ scratch_unat = PUT_BITS( 1, 3, nat);
+ scratch_unat |= PUT_BITS(12, 15, nat);
+ scratch_unat |= PUT_BITS( 8, 11, nat);
+ scratch_unat |= PUT_BITS(16, 31, nat);
+
+ return scratch_unat;
+
+# undef PUT_BITS
+}
+
+#else /* !CONFIG_IA64_NEW_UNWIND */
+
/*
* Collect the NaT bits for r1-r31 from sw->caller_unat and
* sw->ar_unat and return a NaT bitset where bit i is set iff the NaT
@@ -80,28 +163,26 @@
# undef PUT_BITS
}
-#define IA64_MLI_TEMPLATE 0x2
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
+#define IA64_MLX_TEMPLATE 0x2
#define IA64_MOVL_OPCODE 6
void
ia64_increment_ip (struct pt_regs *regs)
{
- unsigned long w0, w1, ri = ia64_psr(regs)->ri + 1;
+ unsigned long w0, ri = ia64_psr(regs)->ri + 1;
if (ri > 2) {
ri = 0;
regs->cr_iip += 16;
} else if (ri == 2) {
get_user(w0, (char *) regs->cr_iip + 0);
- get_user(w1, (char *) regs->cr_iip + 8);
- if (((w0 >> 1) & 0xf) == IA64_MLI_TEMPLATE && (w1 >> 60) == IA64_MOVL_OPCODE) {
+ if (((w0 >> 1) & 0xf) == IA64_MLX_TEMPLATE) {
/*
- * rfi'ing to slot 2 of an MLI bundle causes
+ * rfi'ing to slot 2 of an MLX bundle causes
* an illegal operation fault. We don't want
- * that to happen... Note that we check the
- * opcode only. "movl" has a vc bit of 0, but
- * since a vc bit of 1 is currently reserved,
- * we might just as well treat it like a movl.
+ * that to happen...
*/
ri = 0;
regs->cr_iip += 16;
@@ -113,21 +194,17 @@
void
ia64_decrement_ip (struct pt_regs *regs)
{
- unsigned long w0, w1, ri = ia64_psr(regs)->ri - 1;
+ unsigned long w0, ri = ia64_psr(regs)->ri - 1;
if (ia64_psr(regs)->ri == 0) {
regs->cr_iip -= 16;
ri = 2;
get_user(w0, (char *) regs->cr_iip + 0);
- get_user(w1, (char *) regs->cr_iip + 8);
- if (((w0 >> 1) & 0xf) == IA64_MLI_TEMPLATE && (w1 >> 60) == IA64_MOVL_OPCODE) {
+ if (((w0 >> 1) & 0xf) == IA64_MLX_TEMPLATE) {
/*
- * rfi'ing to slot 2 of an MLI bundle causes
+ * rfi'ing to slot 2 of an MLX bundle causes
* an illegal operation fault. We don't want
- * that to happen... Note that we check the
- * opcode only. "movl" has a vc bit of 0, but
- * since a vc bit of 1 is currently reserved,
- * we might just as well treat it like a movl.
+ * that to happen...
*/
ri = 1;
}
@@ -292,7 +369,11 @@
laddr = (unsigned long *) addr;
child_regs = ia64_task_regs(child);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ child_stack = (struct switch_stack *) (child->thread.ksp + 16);
+#else
child_stack = (struct switch_stack *) child_regs - 1;
+#endif
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
@@ -336,7 +417,11 @@
laddr = (unsigned long *) addr;
child_regs = ia64_task_regs(child);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ child_stack = (struct switch_stack *) (child->thread.ksp + 16);
+#else
child_stack = (struct switch_stack *) child_regs - 1;
+#endif
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
@@ -395,21 +480,42 @@
long new_bsp,
int force_loadrs_to_zero)
{
- unsigned long *krbs, bspstore, bsp, krbs_num_regs, rbs_end, addr, val;
- long ndirty, ret;
- struct pt_regs *child_regs;
+ unsigned long *krbs, bspstore, *kbspstore, bsp, rbs_end, addr, val;
+ long ndirty, ret = 0;
+ struct pt_regs *child_regs = ia64_task_regs(child);
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+ struct unw_frame_info info;
+ unsigned long cfm, sof;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ unw_get_bsp(&info, (unsigned long *) &kbspstore);
+
+ krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ ndirty = ia64_rse_num_regs(krbs, krbs + (child_regs->loadrs >> 19));
+ bspstore = child_regs->ar_bspstore;
+ bsp = (long) ia64_rse_skip_regs((long *)bspstore, ndirty);
+
+ cfm = child_regs->cr_ifs;
+ if (!(cfm & (1UL << 63)))
+ unw_get_cfm(&info, &cfm);
+ sof = (cfm & 0x7f);
+ rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, sof);
+#else
struct switch_stack *child_stack;
+ unsigned long krbs_num_regs;
- ret = 0;
- child_regs = ia64_task_regs(child);
child_stack = (struct switch_stack *) child_regs - 1;
-
+ kbspstore = (unsigned long *) child_stack->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
ndirty = ia64_rse_num_regs(krbs, krbs + (child_regs->loadrs >> 19));
bspstore = child_regs->ar_bspstore;
bsp = (long) ia64_rse_skip_regs((long *)bspstore, ndirty);
- krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
+ krbs_num_regs = ia64_rse_num_regs(krbs, kbspstore);
rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, krbs_num_regs);
+#endif
/* Return early if nothing to do */
if (bsp == new_bsp)
@@ -438,13 +544,15 @@
}
static void
-sync_thread_rbs (struct task_struct *child, int make_writable)
+sync_thread_rbs (struct task_struct *child, struct mm_struct *mm, int make_writable)
{
struct task_struct *p;
read_lock(&tasklist_lock);
- for_each_task(p) {
- if (p->mm == child->mm && p->state != TASK_RUNNING)
- sync_kernel_register_backing_store(p, 0, make_writable);
+ {
+ for_each_task(p) {
+ if (p->mm == mm && p->state != TASK_RUNNING)
+ sync_kernel_register_backing_store(p, 0, make_writable);
+ }
}
read_unlock(&tasklist_lock);
child->thread.flags |= IA64_THREAD_KRBS_SYNCED;
@@ -466,6 +574,234 @@
}
}
+#ifdef CONFIG_IA64_NEW_UNWIND
+
+#include <asm/unwind.h>
+
+static int
+access_fr (struct unw_frame_info *info, int regnum, int hi, unsigned long *data, int write_access)
+{
+ struct ia64_fpreg fpval;
+ int ret;
+
+ ret = unw_get_fr(info, regnum, &fpval);
+ if (ret < 0)
+ return ret;
+
+ if (write_access) {
+ fpval.u.bits[hi] = *data;
+ ret = unw_set_fr(info, regnum, fpval);
+ } else
+ *data = fpval.u.bits[hi];
+ return ret;
+}
+
+static int
+access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
+{
+ unsigned long *ptr, *rbs, *bspstore, ndirty, regnum;
+ struct switch_stack *sw;
+ struct unw_frame_info info;
+ struct pt_regs *pt;
+
+ pt = ia64_task_regs(child);
+ sw = (struct switch_stack *) (child->thread.ksp + 16);
+
+ if ((addr & 0x7) != 0) {
+ dprintk("ptrace: unaligned register address 0x%lx\n", addr);
+ return -1;
+ }
+
+ if (addr < PT_F127 + 16) {
+ /* accessing fph */
+ sync_fph(child);
+ ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
+ } else if (addr >= PT_F10 && addr < PT_F15 + 16) {
+ /* scratch registers untouched by kernel (saved in switch_stack) */
+ ptr = (unsigned long *) ((long) sw + addr - PT_NAT_BITS);
+ } else if (addr < PT_AR_LC + 8) {
+ /* preserved state: */
+ unsigned long nat_bits, scratch_unat, dummy = 0;
+ struct unw_frame_info info;
+ char nat = 0;
+ int ret;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ switch (addr) {
+ case PT_NAT_BITS:
+ if (write_access) {
+ nat_bits = *data;
+ scratch_unat = ia64_put_scratch_nat_bits(pt, nat_bits);
+ if (unw_set_ar(&info, UNW_AR_UNAT, scratch_unat) < 0) {
+ dprintk("ptrace: failed to set ar.unat\n");
+ return -1;
+ }
+ for (regnum = 4; regnum <= 7; ++regnum) {
+ unw_get_gr(&info, regnum, &dummy, &nat);
+ unw_set_gr(&info, regnum, dummy, (nat_bits >> regnum) & 1);
+ }
+ } else {
+ if (unw_get_ar(&info, UNW_AR_UNAT, &scratch_unat) < 0) {
+ dprintk("ptrace: failed to read ar.unat\n");
+ return -1;
+ }
+ nat_bits = ia64_get_scratch_nat_bits(pt, scratch_unat);
+ for (regnum = 4; regnum <= 7; ++regnum) {
+ unw_get_gr(&info, regnum, &dummy, &nat);
+ nat_bits |= (nat != 0) << regnum;
+ }
+ *data = nat_bits;
+ }
+ return 0;
+
+ case PT_R4: case PT_R5: case PT_R6: case PT_R7:
+ if (write_access) {
+ /* read NaT bit first: */
+ ret = unw_get_gr(&info, (addr - PT_R4)/8 + 4, data, &nat);
+ if (ret < 0)
+ return ret;
+ }
+ return unw_access_gr(&info, (addr - PT_R4)/8 + 4, data, &nat,
+ write_access);
+
+ case PT_B1: case PT_B2: case PT_B3: case PT_B4: case PT_B5:
+ return unw_access_br(&info, (addr - PT_B1)/8 + 1, data, write_access);
+
+ case PT_AR_LC:
+ return unw_access_ar(&info, UNW_AR_LC, data, write_access);
+
+ default:
+ if (addr >= PT_F2 && addr < PT_F5 + 16)
+ return access_fr(&info, (addr - PT_F2)/16 + 2, (addr & 8) != 0,
+ data, write_access);
+ else if (addr >= PT_F16 && addr < PT_F31 + 16)
+ return access_fr(&info, (addr - PT_F16)/16 + 16, (addr & 8) != 0,
+ data, write_access);
+ else {
+ dprintk("ptrace: rejecting access to register address 0x%lx\n",
+ addr);
+ return -1;
+ }
+ }
+ } else if (addr < PT_F9+16) {
+ /* scratch state */
+ switch (addr) {
+ case PT_AR_BSP:
+ if (write_access)
+ /* FIXME? Account for lack of ``cover'' in the syscall case */
+ return sync_kernel_register_backing_store(child, *data, 1);
+ else {
+ rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ bspstore = (unsigned long *) pt->ar_bspstore;
+ ndirty = ia64_rse_num_regs(rbs, rbs + (pt->loadrs >> 19));
+
+ /*
+ * If we're in a system call, no ``cover'' was done. So to
+ * make things uniform, we'll add the appropriate displacement
+ * onto bsp if we're in a system call.
+ */
+ if (!(pt->cr_ifs & (1UL << 63))) {
+ struct unw_frame_info info;
+ unsigned long cfm;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ unw_get_cfm(&info, &cfm);
+ ndirty += cfm & 0x7f;
+ }
+ *data = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
+ return 0;
+ }
+
+ case PT_CFM:
+ if (pt->cr_ifs & (1UL << 63)) {
+ if (write_access)
+ pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
+ | (*data & 0x3fffffffffUL));
+ else
+ *data = pt->cr_ifs & 0x3fffffffffUL;
+ } else {
+ /* kernel was entered through a system call */
+ unsigned long cfm;
+
+ if (unwind_to_user(&info, child) < 0)
+ return -1;
+
+ unw_get_cfm(&info, &cfm);
+ if (write_access)
+ unw_set_cfm(&info, ((cfm & ~0x3fffffffffU)
+ | (*data & 0x3fffffffffUL)));
+ else
+ *data = cfm;
+ }
+ return 0;
+
+ case PT_CR_IPSR:
+ if (write_access)
+ pt->cr_ipsr = ((*data & IPSR_WRITE_MASK)
+ | (pt->cr_ipsr & ~IPSR_WRITE_MASK));
+ else
+ *data = (pt->cr_ipsr & IPSR_READ_MASK);
+ return 0;
+
+ case PT_R1: case PT_R2: case PT_R3:
+ case PT_R8: case PT_R9: case PT_R10: case PT_R11:
+ case PT_R12: case PT_R13: case PT_R14: case PT_R15:
+ case PT_R16: case PT_R17: case PT_R18: case PT_R19:
+ case PT_R20: case PT_R21: case PT_R22: case PT_R23:
+ case PT_R24: case PT_R25: case PT_R26: case PT_R27:
+ case PT_R28: case PT_R29: case PT_R30: case PT_R31:
+ case PT_B0: case PT_B6: case PT_B7:
+ case PT_F6: case PT_F6+8: case PT_F7: case PT_F7+8:
+ case PT_F8: case PT_F8+8: case PT_F9: case PT_F9+8:
+ case PT_AR_BSPSTORE:
+ case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
+ case PT_AR_CCV: case PT_AR_FPSR: case PT_CR_IIP: case PT_PR:
+ /* scratch register */
+ ptr = (unsigned long *) ((long) pt + addr - PT_CR_IPSR);
+ break;
+
+ default:
+ /* disallow accessing anything else... */
+ dprintk("ptrace: rejecting access to register address 0x%lx\n",
+ addr);
+ return -1;
+ }
+ } else {
+ /* access debug registers */
+
+ if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
+ child->thread.flags |= IA64_THREAD_DBG_VALID;
+ memset(child->thread.dbr, 0, sizeof(child->thread.dbr));
+ memset(child->thread.ibr, 0, sizeof(child->thread.ibr));
+ }
+ if (addr >= PT_IBR) {
+ regnum = (addr - PT_IBR) >> 3;
+ ptr = &child->thread.ibr[0];
+ } else {
+ regnum = (addr - PT_DBR) >> 3;
+ ptr = &child->thread.dbr[0];
+ }
+
+ if (regnum >= 8) {
+ dprintk("ptrace: rejecting access to register address 0x%lx\n", addr);
+ return -1;
+ }
+
+ ptr += regnum;
+ }
+ if (write_access)
+ *ptr = *data;
+ else
+ *data = *ptr;
+ return 0;
+}
+
+#else /* !CONFIG_IA64_NEW_UNWIND */
+
static int
access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
{
@@ -486,6 +822,13 @@
sw = (struct switch_stack *) pt - 1;
switch (addr) {
+ case PT_NAT_BITS:
+ if (write_access)
+ ia64_put_nat_bits(pt, sw, *data);
+ else
+ *data = ia64_get_nat_bits(pt, sw);
+ return 0;
+
case PT_AR_BSP:
if (write_access)
/* FIXME? Account for lack of ``cover'' in the syscall case */
@@ -508,9 +851,6 @@
case PT_CFM:
if (write_access) {
- pt = ia64_task_regs(child);
- sw = (struct switch_stack *) pt - 1;
-
if (pt->cr_ifs & (1UL << 63))
pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
| (*data & 0x3fffffffffUL));
@@ -545,18 +885,26 @@
case PT_R28: case PT_R29: case PT_R30: case PT_R31:
case PT_B0: case PT_B1: case PT_B2: case PT_B3:
case PT_B4: case PT_B5: case PT_B6: case PT_B7:
- case PT_F2: case PT_F3:
- case PT_F4: case PT_F5: case PT_F6: case PT_F7:
- case PT_F8: case PT_F9: case PT_F10: case PT_F11:
- case PT_F12: case PT_F13: case PT_F14: case PT_F15:
- case PT_F16: case PT_F17: case PT_F18: case PT_F19:
- case PT_F20: case PT_F21: case PT_F22: case PT_F23:
- case PT_F24: case PT_F25: case PT_F26: case PT_F27:
- case PT_F28: case PT_F29: case PT_F30: case PT_F31:
- case PT_AR_LC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
- case PT_AR_CCV: case PT_AR_FPSR:
- case PT_CR_IIP: case PT_PR:
- ptr = (unsigned long *) ((long) sw + addr - PT_PRI_UNAT);
+ case PT_F2: case PT_F2+8: case PT_F3: case PT_F3+8:
+ case PT_F4: case PT_F4+8: case PT_F5: case PT_F5+8:
+ case PT_F6: case PT_F6+8: case PT_F7: case PT_F7+8:
+ case PT_F8: case PT_F8+8: case PT_F9: case PT_F9+8:
+ case PT_F10: case PT_F10+8: case PT_F11: case PT_F11+8:
+ case PT_F12: case PT_F12+8: case PT_F13: case PT_F13+8:
+ case PT_F14: case PT_F14+8: case PT_F15: case PT_F15+8:
+ case PT_F16: case PT_F16+8: case PT_F17: case PT_F17+8:
+ case PT_F18: case PT_F18+8: case PT_F19: case PT_F19+8:
+ case PT_F20: case PT_F20+8: case PT_F21: case PT_F21+8:
+ case PT_F22: case PT_F22+8: case PT_F23: case PT_F23+8:
+ case PT_F24: case PT_F24+8: case PT_F25: case PT_F25+8:
+ case PT_F26: case PT_F26+8: case PT_F27: case PT_F27+8:
+ case PT_F28: case PT_F28+8: case PT_F29: case PT_F29+8:
+ case PT_F30: case PT_F30+8: case PT_F31: case PT_F31+8:
+ case PT_AR_BSPSTORE:
+ case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
+ case PT_AR_CCV: case PT_AR_FPSR: case PT_CR_IIP: case PT_PR:
+ case PT_AR_LC:
+ ptr = (unsigned long *) ((long) sw + addr - PT_NAT_BITS);
break;
default:
@@ -591,6 +939,8 @@
return 0;
}
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
asmlinkage long
sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data,
long arg4, long arg5, long arg6, long arg7, long stack)
@@ -613,17 +963,21 @@
ret = -ESRCH;
read_lock(&tasklist_lock);
- child = find_task_by_pid(pid);
+ {
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ }
read_unlock(&tasklist_lock);
if (!child)
goto out;
ret = -EPERM;
if (pid == 1) /* no messing around with init! */
- goto out;
+ goto out_tsk;
if (request == PTRACE_ATTACH) {
if (child == current)
- goto out;
+ goto out_tsk;
if ((!child->dumpable ||
(current->uid != child->euid) ||
(current->uid != child->suid) ||
@@ -632,10 +986,10 @@
(current->gid != child->sgid) ||
(!cap_issubset(child->cap_permitted, current->cap_permitted)) ||
(current->gid != child->gid)) && !capable(CAP_SYS_PTRACE))
- goto out;
+ goto out_tsk;
/* the same process cannot be attached many times */
if (child->flags & PF_PTRACED)
- goto out;
+ goto out_tsk;
child->flags |= PF_PTRACED;
if (child->p_pptr != current) {
unsigned long flags;
@@ -648,78 +1002,98 @@
}
send_sig(SIGSTOP, child, 1);
ret = 0;
- goto out;
+ goto out_tsk;
}
ret = -ESRCH;
if (!(child->flags & PF_PTRACED))
- goto out;
+ goto out_tsk;
if (child->state != TASK_STOPPED) {
if (request != PTRACE_KILL)
- goto out;
+ goto out_tsk;
}
if (child->p_pptr != current)
- goto out;
+ goto out_tsk;
switch (request) {
case PTRACE_PEEKTEXT:
case PTRACE_PEEKDATA: /* read word at location addr */
- if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)
- && atomic_read(&child->mm->mm_users) > 1)
- sync_thread_rbs(child, 0);
+ if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)) {
+ struct mm_struct *mm;
+ long do_sync;
+
+ task_lock(child);
+ {
+ mm = child->mm;
+ do_sync = mm && (atomic_read(&mm->mm_users) > 1);
+ }
+ task_unlock(child);
+ if (do_sync)
+ sync_thread_rbs(child, mm, 0);
+ }
ret = ia64_peek(regs, child, addr, &data);
if (ret == 0) {
ret = data;
regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
}
- goto out;
+ goto out_tsk;
case PTRACE_POKETEXT:
case PTRACE_POKEDATA: /* write the word at location addr */
- if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)
- && atomic_read(&child->mm->mm_users) > 1)
- sync_thread_rbs(child, 1);
+ if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)) {
+ struct mm_struct *mm;
+ long do_sync;
+
+ task_lock(child);
+ {
+ mm = child->mm;
+ do_sync = mm && (atomic_read(&mm->mm_users) > 1);
+ }
+ task_unlock(child);
+ if (do_sync)
+ sync_thread_rbs(child, mm, 1);
+ }
ret = ia64_poke(regs, child, addr, data);
- goto out;
+ goto out_tsk;
case PTRACE_PEEKUSR: /* read the word at addr in the USER area */
if (access_uarea(child, addr, &data, 0) < 0) {
ret = -EIO;
- goto out;
+ goto out_tsk;
}
ret = data;
regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
- goto out;
+ goto out_tsk;
case PTRACE_POKEUSR: /* write the word at addr in the USER area */
if (access_uarea(child, addr, &data, 1) < 0) {
ret = -EIO;
- goto out;
+ goto out_tsk;
}
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_GETSIGINFO:
ret = -EIO;
if (!access_ok(VERIFY_WRITE, data, sizeof (siginfo_t))
|| child->thread.siginfo == 0)
- goto out;
+ goto out_tsk;
copy_to_user((siginfo_t *) data, child->thread.siginfo, sizeof (siginfo_t));
ret = 0;
- goto out;
+ goto out_tsk;
break;
case PTRACE_SETSIGINFO:
ret = -EIO;
if (!access_ok(VERIFY_READ, data, sizeof (siginfo_t))
|| child->thread.siginfo == 0)
- goto out;
+ goto out_tsk;
copy_from_user(child->thread.siginfo, (siginfo_t *) data, sizeof (siginfo_t));
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
case PTRACE_CONT: /* restart after signal. */
ret = -EIO;
if (data > _NSIG)
- goto out;
+ goto out_tsk;
if (request == PTRACE_SYSCALL)
child->flags |= PF_TRACESYS;
else
@@ -735,7 +1109,7 @@
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_KILL:
/*
@@ -744,7 +1118,7 @@
* that it wants to exit.
*/
if (child->state == TASK_ZOMBIE) /* already dead */
- goto out;
+ goto out_tsk;
child->exit_code = SIGKILL;
/* make sure the single step/taken-branch trap bits are not set: */
@@ -756,13 +1130,13 @@
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_SINGLESTEP: /* let child execute for one instruction */
case PTRACE_SINGLEBLOCK:
ret = -EIO;
if (data > _NSIG)
- goto out;
+ goto out_tsk;
child->flags &= ~PF_TRACESYS;
if (request == PTRACE_SINGLESTEP) {
@@ -778,12 +1152,12 @@
/* give it a chance to run. */
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
case PTRACE_DETACH: /* detach a process that was attached. */
ret = -EIO;
if (data > _NSIG)
- goto out;
+ goto out_tsk;
child->flags &= ~(PF_PTRACED|PF_TRACESYS);
child->exit_code = data;
@@ -802,12 +1176,14 @@
wake_up_process(child);
ret = 0;
- goto out;
+ goto out_tsk;
default:
ret = -EIO;
- goto out;
+ goto out_tsk;
}
+ out_tsk:
+ free_task_struct(child);
out:
unlock_kernel();
return ret;
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-test1-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/setup.c Fri Jun 9 17:16:12 2000
@@ -36,6 +36,10 @@
#include <asm/efi.h>
#include <asm/mca.h>
+#ifdef CONFIG_BLK_DEV_RAM
+# include <linux/blk.h>
+#endif
+
extern char _end;
/* cpu_data[bootstrap_processor] is data for the bootstrap processor: */
@@ -127,11 +131,22 @@
* change APIs, they'd do things for the better. Grumble...
*/
bootmap_start = PAGE_ALIGN(__pa(&_end));
+ if (ia64_boot_param.initrd_size)
+ bootmap_start = PAGE_ALIGN(bootmap_start + ia64_boot_param.initrd_size);
bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn);
efi_memmap_walk(free_available_memory, 0);
reserve_bootmem(bootmap_start, bootmap_size);
+#ifdef CONFIG_BLK_DEV_INITRD
+ initrd_start = ia64_boot_param.initrd_start;
+ if (initrd_start) {
+ initrd_end = initrd_start+ia64_boot_param.initrd_size;
+ printk("Initial ramdisk at: 0x%p (%lu bytes)\n",
+ (void *) initrd_start, ia64_boot_param.initrd_size);
+ reserve_bootmem(virt_to_phys(initrd_start), ia64_boot_param.initrd_size);
+ }
+#endif
#if 0
/* XXX fix me */
init_mm.start_code = (unsigned long) &_stext;
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.4.0-test1-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/signal.c Fri Jun 9 17:16:43 2000
@@ -37,16 +37,26 @@
# define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0])
#endif
+struct sigscratch {
+#ifdef CONFIG_IA64_NEW_UNWIND
+ unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
+ unsigned long pad;
+#else
+ struct switch_stack sw;
+#endif
+ struct pt_regs pt;
+};
+
struct sigframe {
struct siginfo info;
struct sigcontext sc;
};
extern long sys_wait4 (int, int *, int, struct rusage *);
-extern long ia64_do_signal (sigset_t *, struct pt_regs *, long); /* forward decl */
+extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* forward decl */
long
-ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct pt_regs *pt)
+ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct sigscratch *scr)
{
sigset_t oldset, set;
@@ -72,18 +82,18 @@
* get saved in sigcontext by ia64_do_signal.
*/
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
- pt->r8 = -EINTR;
+ if (IS_IA32_PROCESS(&scr->pt)) {
+ scr->pt.r8 = -EINTR;
} else
#endif
{
- pt->r8 = EINTR;
- pt->r10 = -1;
+ scr->pt.r8 = EINTR;
+ scr->pt.r10 = -1;
}
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
- if (ia64_do_signal(&oldset, pt, 1))
+ if (ia64_do_signal(&oldset, scr, 1))
return -EINTR;
}
}
@@ -98,9 +108,8 @@
}
static long
-restore_sigcontext (struct sigcontext *sc, struct pt_regs *pt)
+restore_sigcontext (struct sigcontext *sc, struct sigscratch *scr)
{
- struct switch_stack *sw = (struct switch_stack *) pt - 1;
unsigned long ip, flags, nat, um, cfm;
long err;
@@ -111,28 +120,32 @@
err |= __get_user(ip, &sc->sc_ip); /* instruction pointer */
err |= __get_user(cfm, &sc->sc_cfm);
err |= __get_user(um, &sc->sc_um); /* user mask */
- err |= __get_user(pt->ar_rsc, &sc->sc_ar_rsc);
- err |= __get_user(pt->ar_ccv, &sc->sc_ar_ccv);
- err |= __get_user(pt->ar_unat, &sc->sc_ar_unat);
- err |= __get_user(pt->ar_fpsr, &sc->sc_ar_fpsr);
- err |= __get_user(pt->ar_pfs, &sc->sc_ar_pfs);
- err |= __get_user(pt->pr, &sc->sc_pr); /* predicates */
- err |= __get_user(pt->b0, &sc->sc_br[0]); /* b0 (rp) */
- err |= __get_user(pt->b6, &sc->sc_br[6]); /* b6 */
- err |= __get_user(pt->b7, &sc->sc_br[7]); /* b7 */
- err |= __copy_from_user(&pt->r1, &sc->sc_gr[1], 3*8); /* r1-r3 */
- err |= __copy_from_user(&pt->r8, &sc->sc_gr[8], 4*8); /* r8-r11 */
- err |= __copy_from_user(&pt->r12, &sc->sc_gr[12], 4*8); /* r12-r15 */
- err |= __copy_from_user(&pt->r16, &sc->sc_gr[16], 16*8); /* r16-r31 */
+ err |= __get_user(scr->pt.ar_rsc, &sc->sc_ar_rsc);
+ err |= __get_user(scr->pt.ar_ccv, &sc->sc_ar_ccv);
+ err |= __get_user(scr->pt.ar_unat, &sc->sc_ar_unat);
+ err |= __get_user(scr->pt.ar_fpsr, &sc->sc_ar_fpsr);
+ err |= __get_user(scr->pt.ar_pfs, &sc->sc_ar_pfs);
+ err |= __get_user(scr->pt.pr, &sc->sc_pr); /* predicates */
+ err |= __get_user(scr->pt.b0, &sc->sc_br[0]); /* b0 (rp) */
+ err |= __get_user(scr->pt.b6, &sc->sc_br[6]); /* b6 */
+ err |= __get_user(scr->pt.b7, &sc->sc_br[7]); /* b7 */
+ err |= __copy_from_user(&scr->pt.r1, &sc->sc_gr[1], 3*8); /* r1-r3 */
+ err |= __copy_from_user(&scr->pt.r8, &sc->sc_gr[8], 4*8); /* r8-r11 */
+ err |= __copy_from_user(&scr->pt.r12, &sc->sc_gr[12], 4*8); /* r12-r15 */
+ err |= __copy_from_user(&scr->pt.r16, &sc->sc_gr[16], 16*8); /* r16-r31 */
- pt->cr_ifs = cfm | (1UL << 63);
+ scr->pt.cr_ifs = cfm | (1UL << 63);
/* establish new instruction pointer: */
- pt->cr_iip = ip & ~0x3UL;
- ia64_psr(pt)->ri = ip & 0x3;
- pt->cr_ipsr = (pt->cr_ipsr & ~IA64_PSR_UM) | (um & IA64_PSR_UM);
-
- ia64_put_nat_bits (pt, sw, nat); /* restore the original scratch NaT bits */
+ scr->pt.cr_iip = ip & ~0x3UL;
+ ia64_psr(&scr->pt)->ri = ip & 0x3;
+ scr->pt.cr_ipsr = (scr->pt.cr_ipsr & ~IA64_PSR_UM) | (um & IA64_PSR_UM);
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+ scr->scratch_unat = ia64_put_scratch_nat_bits(&scr->pt, nat);
+#else
+ ia64_put_nat_bits(&scr->pt, &scr->sw, nat); /* restore the original scratch NaT bits */
+#endif
if (flags & IA64_SC_FLAG_FPH_VALID) {
struct task_struct *fpu_owner = ia64_get_fpu_owner();
@@ -186,15 +199,8 @@
}
}
-/*
- * When we get here, ((struct switch_stack *) pt - 1) is a
- * switch_stack frame that has no defined value. Upon return, we
- * expect sw->caller_unat to contain the new unat value. The reason
- * we use a full switch_stack frame is so everything is symmetric
- * with ia64_do_signal().
- */
long
-ia64_rt_sigreturn (struct pt_regs *pt)
+ia64_rt_sigreturn (struct sigscratch *scr)
{
extern char ia64_strace_leave_kernel, ia64_leave_kernel;
struct sigcontext *sc;
@@ -202,7 +208,7 @@
sigset_t set;
long retval;
- sc = &((struct sigframe *) (pt->r12 + 16))->sc;
+ sc = &((struct sigframe *) (scr->pt.r12 + 16))->sc;
/*
* When we return to the previously executing context, r8 and
@@ -234,18 +240,18 @@
recalc_sigpending(current);
spin_unlock_irq(&current->sigmask_lock);
- if (restore_sigcontext(sc, pt))
+ if (restore_sigcontext(sc, scr))
goto give_sigsegv;
#if DEBUG_SIG
printk("SIG return (%s:%d): sp=%lx ip=%lx\n",
- current->comm, current->pid, pt->r12, pt->cr_iip);
+ current->comm, current->pid, scr->pt.r12, scr->pt.cr_iip);
#endif
/*
* It is more difficult to avoid calling this function than to
* call it and ignore errors.
*/
- do_sigaltstack(&sc->sc_stack, 0, pt->r12);
+ do_sigaltstack(&sc->sc_stack, 0, scr->pt.r12);
return retval;
give_sigsegv:
@@ -266,14 +272,13 @@
* trampoline starts. Everything else is done at the user-level.
*/
static long
-setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct pt_regs *pt)
+setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct sigscratch *scr)
{
- struct switch_stack *sw = (struct switch_stack *) pt - 1;
struct task_struct *fpu_owner = ia64_get_fpu_owner();
unsigned long flags = 0, ifs, nat;
long err;
- ifs = pt->cr_ifs;
+ ifs = scr->pt.cr_ifs;
if (on_sig_stack((unsigned long) sc))
flags |= IA64_SC_FLAG_ONSTACK;
@@ -293,46 +298,49 @@
* Note: sw->ar_unat is UNDEFINED unless the process is being
* PTRACED. However, this is OK because the NaT bits of the
* preserved registers (r4-r7) are never being looked at by
- * the signal handler (register r4-r7 are used instead).
+ * the signal handler (registers r4-r7 are used instead).
*/
- nat = ia64_get_nat_bits(pt, sw);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ nat = ia64_get_scratch_nat_bits(&scr->pt, scr->scratch_unat);
+#else
+ nat = ia64_get_nat_bits(&scr->pt, &scr->sw);
+#endif
err = __put_user(flags, &sc->sc_flags);
err |= __put_user(nat, &sc->sc_nat);
err |= PUT_SIGSET(mask, &sc->sc_mask);
- err |= __put_user(pt->cr_ipsr & IA64_PSR_UM, &sc->sc_um);
- err |= __put_user(pt->ar_rsc, &sc->sc_ar_rsc);
- err |= __put_user(pt->ar_ccv, &sc->sc_ar_ccv);
- err |= __put_user(pt->ar_unat, &sc->sc_ar_unat); /* ar.unat */
- err |= __put_user(pt->ar_fpsr, &sc->sc_ar_fpsr); /* ar.fpsr */
- err |= __put_user(pt->ar_pfs, &sc->sc_ar_pfs);
- err |= __put_user(pt->pr, &sc->sc_pr); /* predicates */
- err |= __put_user(pt->b0, &sc->sc_br[0]); /* b0 (rp) */
- err |= __put_user(pt->b6, &sc->sc_br[6]); /* b6 */
- err |= __put_user(pt->b7, &sc->sc_br[7]); /* b7 */
-
- err |= __copy_to_user(&sc->sc_gr[1], &pt->r1, 3*8); /* r1-r3 */
- err |= __copy_to_user(&sc->sc_gr[8], &pt->r8, 4*8); /* r8-r11 */
- err |= __copy_to_user(&sc->sc_gr[12], &pt->r12, 4*8); /* r12-r15 */
- err |= __copy_to_user(&sc->sc_gr[16], &pt->r16, 16*8); /* r16-r31 */
+ err |= __put_user(scr->pt.cr_ipsr & IA64_PSR_UM, &sc->sc_um);
+ err |= __put_user(scr->pt.ar_rsc, &sc->sc_ar_rsc);
+ err |= __put_user(scr->pt.ar_ccv, &sc->sc_ar_ccv);
+ err |= __put_user(scr->pt.ar_unat, &sc->sc_ar_unat); /* ar.unat */
+ err |= __put_user(scr->pt.ar_fpsr, &sc->sc_ar_fpsr); /* ar.fpsr */
+ err |= __put_user(scr->pt.ar_pfs, &sc->sc_ar_pfs);
+ err |= __put_user(scr->pt.pr, &sc->sc_pr); /* predicates */
+ err |= __put_user(scr->pt.b0, &sc->sc_br[0]); /* b0 (rp) */
+ err |= __put_user(scr->pt.b6, &sc->sc_br[6]); /* b6 */
+ err |= __put_user(scr->pt.b7, &sc->sc_br[7]); /* b7 */
+
+ err |= __copy_to_user(&sc->sc_gr[1], &scr->pt.r1, 3*8); /* r1-r3 */
+ err |= __copy_to_user(&sc->sc_gr[8], &scr->pt.r8, 4*8); /* r8-r11 */
+ err |= __copy_to_user(&sc->sc_gr[12], &scr->pt.r12, 4*8); /* r12-r15 */
+ err |= __copy_to_user(&sc->sc_gr[16], &scr->pt.r16, 16*8); /* r16-r31 */
- err |= __put_user(pt->cr_iip + ia64_psr(pt)->ri, &sc->sc_ip);
- err |= __put_user(pt->r12, &sc->sc_gr[12]); /* r12 */
+ err |= __put_user(scr->pt.cr_iip + ia64_psr(&scr->pt)->ri, &sc->sc_ip);
return err;
}
static long
-setup_frame (int sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *set, struct pt_regs *pt)
+setup_frame (int sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *set,
+ struct sigscratch *scr)
{
- struct switch_stack *sw = (struct switch_stack *) pt - 1;
extern char ia64_sigtramp[], __start_gate_section[];
unsigned long tramp_addr, new_rbs = 0;
struct sigframe *frame;
struct siginfo si;
long err;
- frame = (void *) pt->r12;
+ frame = (void *) scr->pt.r12;
tramp_addr = GATE_ADDR + (ia64_sigtramp - __start_gate_section);
if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && !on_sig_stack((unsigned long) frame)) {
new_rbs = (current->sas_ss_sp + sizeof(long) - 1) & ~(sizeof(long) - 1);
@@ -348,31 +356,39 @@
err |= __put_user(current->sas_ss_sp, &frame->sc.sc_stack.ss_sp);
err |= __put_user(current->sas_ss_size, &frame->sc.sc_stack.ss_size);
- err |= __put_user(sas_ss_flags(pt->r12), &frame->sc.sc_stack.ss_flags);
- err |= setup_sigcontext(&frame->sc, set, pt);
+ err |= __put_user(sas_ss_flags(scr->pt.r12), &frame->sc.sc_stack.ss_flags);
+ err |= setup_sigcontext(&frame->sc, set, scr);
if (err)
goto give_sigsegv;
- pt->r12 = (unsigned long) frame - 16; /* new stack pointer */
- pt->r2 = sig; /* signal number */
- pt->r3 = (unsigned long) ka->sa.sa_handler; /* addr. of handler's proc. descriptor */
- pt->r15 = new_rbs;
- pt->ar_fpsr = FPSR_DEFAULT; /* reset fpsr for signal handler */
- pt->cr_iip = tramp_addr;
- ia64_psr(pt)->ri = 0; /* start executing in first slot */
+ scr->pt.r12 = (unsigned long) frame - 16; /* new stack pointer */
+ scr->pt.r2 = sig; /* signal number */
+ scr->pt.r3 = (unsigned long) ka->sa.sa_handler; /* addr. of handler's proc desc */
+ scr->pt.r15 = new_rbs;
+ scr->pt.ar_fpsr = FPSR_DEFAULT; /* reset fpsr for signal handler */
+ scr->pt.cr_iip = tramp_addr;
+ ia64_psr(&scr->pt)->ri = 0; /* start executing in first slot */
+#ifdef CONFIG_IA64_NEW_UNWIND
+ /*
+ * Note: this affects only the NaT bits of the scratch regs
+ * (the ones saved in pt_regs), which is exactly what we want.
+ */
+ scr->scratch_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+#else
/*
* Note: this affects only the NaT bits of the scratch regs
- * (the ones saved in pt_regs, which is exactly what we want.
+ * (the ones saved in pt_regs), which is exactly what we want.
* The NaT bits for the preserved regs (r4-r7) are in
* sw->ar_unat iff this process is being PTRACED.
*/
- sw->caller_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+ scr->sw.caller_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+#endif
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sig=%d sp=%lx ip=%lx handler=%lx\n",
- current->comm, current->pid, sig, pt->r12, pt->cr_iip, pt->r3);
+ current->comm, current->pid, sig, scr->pt.r12, scr->pt.cr_iip, scr->pt.r3);
#endif
return 1;
@@ -391,17 +407,17 @@
static long
handle_signal (unsigned long sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *oldset,
- struct pt_regs *pt)
+ struct sigscratch *scr)
{
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
+ if (IS_IA32_PROCESS(&scr->pt)) {
/* send signal to IA-32 process */
- if (!ia32_setup_frame1(sig, ka, info, oldset, pt))
+ if (!ia32_setup_frame1(sig, ka, info, oldset, &scr->pt))
return 0;
} else
#endif
/* send signal to IA-64 process */
- if (!setup_frame(sig, ka, info, oldset, pt))
+ if (!setup_frame(sig, ka, info, oldset, scr))
return 0;
if (ka->sa.sa_flags & SA_ONESHOT)
@@ -418,12 +434,6 @@
}
/*
- * When we get here, `pt' points to struct pt_regs and ((struct
- * switch_stack *) pt - 1) points to a switch stack structure.
- * HOWEVER, in the normal case, the ONLY value valid in the
- * switch_stack is the caller_unat field. The entire switch_stack is
- * valid ONLY if current->flags has PF_PTRACED set.
- *
* Note that `init' is a special process: it doesn't get signals it
* doesn't want to handle. Thus you cannot kill init even with a
* SIGKILL even by mistake.
@@ -433,26 +443,26 @@
* user-level signal handling stack-frames in one go after that.
*/
long
-ia64_do_signal (sigset_t *oldset, struct pt_regs *pt, long in_syscall)
+ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
{
struct k_sigaction *ka;
siginfo_t info;
long restart = in_syscall;
- long errno = pt->r8;
+ long errno = scr->pt.r8;
/*
* In the ia64_leave_kernel code path, we want the common case
* to go fast, which is why we may in certain cases get here
* from kernel mode. Just return without doing anything if so.
*/
- if (!user_mode(pt))
+ if (!user_mode(&scr->pt))
return 0;
if (!oldset)
oldset = &current->blocked;
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
+ if (IS_IA32_PROCESS(&scr->pt)) {
if (in_syscall) {
if (errno >= 0)
restart = 0;
@@ -461,7 +471,7 @@
}
} else
#endif
- if (pt->r10 != -1) {
+ if (scr->pt.r10 != -1) {
/*
* A system call has to be restarted only if one of
* the error codes ERESTARTNOHAND, ERESTARTSYS, or
@@ -555,7 +565,7 @@
case SIGQUIT: case SIGILL: case SIGTRAP:
case SIGABRT: case SIGFPE: case SIGSEGV:
case SIGBUS: case SIGSYS: case SIGXCPU: case SIGXFSZ:
- if (do_coredump(signr, pt))
+ if (do_coredump(signr, &scr->pt))
exit_code |= 0x80;
/* FALLTHRU */
@@ -575,29 +585,29 @@
if ((ka->sa.sa_flags & SA_RESTART) == 0) {
case ERESTARTNOHAND:
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt))
- pt->r8 = -EINTR;
+ if (IS_IA32_PROCESS(&scr->pt))
+ scr->pt.r8 = -EINTR;
else
#endif
- pt->r8 = EINTR;
- /* note: pt->r10 is already -1 */
+ scr->pt.r8 = EINTR;
+ /* note: scr->pt.r10 is already -1 */
break;
}
case ERESTARTNOINTR:
-#ifdef CONFIG_IA32_SUPPOT
- if (IS_IA32_PROCESS(pt)) {
- pt->r8 = pt->r1;
- pt->cr_iip -= 2;
+#ifdef CONFIG_IA32_SUPPORT
+ if (IS_IA32_PROCESS(&scr->pt)) {
+ scr->pt.r8 = scr->pt.r1;
+ scr->pt.cr_iip -= 2;
} else
#endif
- ia64_decrement_ip(pt);
+ ia64_decrement_ip(&scr->pt);
}
}
/* Whee! Actually deliver the signal. If the
delivery failed, we need to continue to iterate in
this loop so we can deliver the SIGSEGV... */
- if (handle_signal(signr, ka, &info, oldset, pt))
+ if (handle_signal(signr, ka, &info, oldset, scr))
return 1;
}
@@ -606,9 +616,9 @@
/* Restart the system call - no handlers present */
if (errno == ERESTARTNOHAND || errno == ERESTARTSYS || errno == ERESTARTNOINTR) {
#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(pt)) {
- pt->r8 = pt->r1;
- pt->cr_iip -= 2;
+ if (IS_IA32_PROCESS(&scr->pt)) {
+ scr->pt.r8 = scr->pt.r1;
+ scr->pt.cr_iip -= 2;
} else
#endif
/*
@@ -617,7 +627,7 @@
* is adjust ip so that the "break"
* instruction gets re-executed.
*/
- ia64_decrement_ip(pt);
+ ia64_decrement_ip(&scr->pt);
}
}
return 0;
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-test1-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/smp.c Fri Jun 9 17:49:36 2000
@@ -81,7 +81,6 @@
#ifndef CONFIG_ITANIUM_PTCG
# define IPI_FLUSH_TLB 3
#endif /*!CONFIG_ITANIUM_PTCG */
-#define IPI_KDB_INTERRUPT 4
/*
* Setup routine for controlling SMP activation
@@ -133,24 +132,17 @@
}
-static inline void
-pointer_unlock(void **lock, void **data)
-{
- *data = *lock;
- *lock = NULL;
-}
-
static inline int
pointer_lock(void *lock, void *data, int retry)
{
again:
- if (cmpxchg_acq(lock, 0, data) == 0)
+ if (cmpxchg_acq((void **) lock, 0, data) == 0)
return 0;
if (!retry)
return -EBUSY;
- while (*(void**) lock)
+ while (*(void **) lock)
;
goto again;
@@ -191,13 +183,13 @@
int wait;
/* release the 'pointer lock' */
- pointer_unlock((void **) &smp_call_function_data, (void **) &data);
+ data = smp_call_function_data;
func = data->func;
info = data->info;
wait = data->wait;
mb();
- atomic_dec (&data->unstarted_count);
+ atomic_dec(&data->unstarted_count);
/* At this point the structure may be gone unless wait is true. */
(*func)(info);
@@ -205,7 +197,7 @@
/* Notify the sending CPU that the task is done. */
mb();
if (wait)
- atomic_dec (&data->unfinished_count);
+ atomic_dec(&data->unfinished_count);
}
break;
@@ -344,41 +336,35 @@
{
struct smp_call_struct data;
long timeout;
- static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+ int cpus = smp_num_cpus - 1;
+
+ if (cpus == 0)
+ return 0;
data.func = func;
data.info = info;
data.wait = wait;
- atomic_set(&data.unstarted_count, smp_num_cpus - 1);
- atomic_set(&data.unfinished_count, smp_num_cpus - 1);
+ atomic_set(&data.unstarted_count, cpus);
+ atomic_set(&data.unfinished_count, cpus);
if (pointer_lock(&smp_call_function_data, &data, retry))
return -EBUSY;
- smp_call_function_data = &data;
- spin_unlock (&lock);
- data.func = func;
- data.info = info;
- atomic_set (&data.unstarted_count, smp_num_cpus - 1);
- data.wait = wait;
- if (wait)
- atomic_set (&data.unfinished_count, smp_num_cpus - 1);
-
/* Send a message to all other CPUs and wait for them to respond */
send_IPI_allbutself(IPI_CALL_FUNC);
/* Wait for response */
timeout = jiffies + HZ;
- while ( (atomic_read (&data.unstarted_count) > 0) &&
- time_before (jiffies, timeout) )
- barrier ();
- if (atomic_read (&data.unstarted_count) > 0) {
+ while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
+ barrier();
+ if (atomic_read(&data.unstarted_count) > 0) {
smp_call_function_data = NULL;
return -ETIMEDOUT;
}
if (wait)
- while (atomic_read (&data.unfinished_count) > 0)
- barrier ();
+ while (atomic_read(&data.unfinished_count) > 0)
+ barrier();
+ /* unlock pointer */
smp_call_function_data = NULL;
return 0;
}
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c linux-2.4.0-test1-lia/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/sys_ia64.c Fri Jun 9 17:18:00 2000
@@ -156,6 +156,9 @@
{
struct pt_regs *regs = (struct pt_regs *) &stack;
+ if ((off & ~PAGE_MASK) != 0)
+ return -EINVAL;
+
addr = do_mmap2(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
if (!IS_ERR(addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.0-test1-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/time.c Fri Jun 9 17:18:07 2000
@@ -162,6 +162,11 @@
*/
write_lock(&xtime_lock);
new_itm = itm.next[cpu].count;
+
+ if (!time_after(ia64_get_itc(), new_itm))
+ printk("Oops: timer tick before it's due (itc=%lx,itm=%lx)\n",
+ ia64_get_itc(), new_itm);
+
while (1) {
/*
* Do kernel PC profiling here. We multiply the
@@ -220,7 +225,7 @@
ia64_set_itm(new_itm);
}
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_IA64_SOFTSDV_HACKS)
/*
* Interrupts must be disabled before calling this routine.
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.0-test1-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Fri Jun 9 17:38:58 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/traps.c Fri Jun 9 17:49:52 2000
@@ -36,10 +36,6 @@
#include <linux/init.h>
#include <linux/sched.h>
-#ifdef CONFIG_KDB
-#include <linux/kdb.h>
-#endif
-
#include <asm/ia32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
@@ -92,13 +88,6 @@
}
printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
-
-#ifdef CONFIG_KDB
- while (1) {
- kdb(KDB_REASON_PANIC, 0, regs);
- printk("Cant go anywhere from Panic!\n");
- }
-#endif
show_regs(regs);
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-test1-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Fri Jun 9 17:38:59 2000
+++ linux-2.4.0-test1-lia/arch/ia64/kernel/unwind.c Fri Jun 9 17:18:53 2000
@@ -49,7 +49,7 @@
#define UNW_LOG_HASH_SIZE (UNW_LOG_CACHE_SIZE + 1)
#define UNW_HASH_SIZE (1 << UNW_LOG_HASH_SIZE)
-#define UNW_DEBUG 0
+#define UNW_DEBUG 1
#define UNW_STATS 0 /* WARNING: this disabled interrupts for long time-spans!! */
#if UNW_DEBUG
@@ -191,8 +191,10 @@
struct unw_ireg *ireg;
struct pt_regs *pt;
- if ((unsigned) regnum - 1 >= 127)
+ if ((unsigned) regnum - 1 >= 127) {
+ dprintk("unwind: trying to access non-existent r%u\n", regnum);
return -1;
+ }
if (regnum < 32) {
if (regnum >= 4 && regnum <= 7) {
@@ -238,7 +240,12 @@
nat_addr = ia64_rse_rnat_addr(addr);
if ((unsigned long) addr < info->regstk.limit
|| (unsigned long) addr >= info->regstk.top)
+ {
+ dprintk("unwind: %lx outside of regstk "
+ "[0x%lx-0x%lx)\n", addr,
+ info->regstk.limit, info->regstk.top);
return -1;
+ }
if ((unsigned long) nat_addr >= info->regstk.top)
nat_addr = &info->sw->ar_rnat;
nat_mask = (1UL << ia64_rse_slot_num(addr));
@@ -307,9 +314,12 @@
/* preserved: */
case 1: case 2: case 3: case 4: case 5:
addr = *(&info->b1 + (regnum - 1));
+ if (!addr)
+ addr = &info->sw->b1 + (regnum - 1);
break;
default:
+ dprintk("unwind: trying to access non-existent b%u\n", regnum);
return -1;
}
if (write)
@@ -325,8 +335,10 @@
struct ia64_fpreg *addr = 0;
struct pt_regs *pt;
- if ((unsigned) (regnum - 2) >= 30)
+ if ((unsigned) (regnum - 2) >= 30) {
+ dprintk("unwind: trying to access non-existent f%u\n", regnum);
return -1;
+ }
pt = (struct pt_regs *) info->sp - 1;
@@ -412,6 +424,7 @@
break;
default:
+ dprintk("unwind: trying to access non-existent ar%u\n", regnum);
return -1;
}
@@ -1327,7 +1340,8 @@
}
if (!e) {
/* no info, return default unwinder (leaf proc, no mem stack, no saved regs) */
- dprintk("unwind: no unwind info for ip=0x%lx\n", ip);
+ dprintk("unwind: no unwind info for ip=0x%lx (prev ip=0x%lx)\n", ip,
+ unw.cache[info->prev_script].ip);
sr.curr.reg[UNW_REG_RP].where = UNW_WHERE_BR;
sr.curr.reg[UNW_REG_RP].when = -1;
sr.curr.reg[UNW_REG_RP].val = 0;
@@ -1338,7 +1352,7 @@
return script;
}
- sr.when_target = (3*((ip & ~0xfUL) - (table->segment_base + e->start_offset))
+ sr.when_target = (3*((ip & ~0xfUL) - (table->segment_base + e->start_offset))/16
+ (ip & 0xfUL));
hdr = *(u64 *) (table->segment_base + e->info_offset);
dp = (u8 *) (table->segment_base + e->info_offset + 8);
@@ -1383,7 +1397,7 @@
case UNW_WHERE_FR: printk("f%lu", r->val); break;
case UNW_WHERE_BR: printk("b%lu", r->val); break;
case UNW_WHERE_SPREL: printk("[sp+0x%lx]", r->val); break;
- case UNW_WHERE_PSPREL: printk("[psp+0x%lx]", 0x10 - r->val); break;
+ case UNW_WHERE_PSPREL: printk("[psp+0x%lx]", r->val); break;
case UNW_WHERE_NONE:
printk("%s+0x%lx", unw.preg_name[r - sr.curr.reg], r->val);
break;
@@ -1531,6 +1545,7 @@
if (info->ip & (my_cpu_data.unimpl_va_mask | 0xf)) {
/* don't let obviously bad addresses pollute the cache */
+ dprintk("unwind: rejecting bad ip=0x%lx\n", info->ip);
info->rp = 0;
return -1;
}
@@ -1590,18 +1605,18 @@
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
- info->cfm = *info->pfs;
+ info->cfm = info->pfs;
/* restore the bsp: */
pr = info->pr_val;
num_regs = 0;
if ((info->flags & UNW_FLAG_INTERRUPT_FRAME)) {
if ((pr & (1UL << pNonSys)) != 0)
- num_regs = info->cfm & 0x7f; /* size of frame */
+ num_regs = *info->cfm & 0x7f; /* size of frame */
info->pfs = (unsigned long *) (info->sp + 16 + struct_offset(struct pt_regs, ar_pfs));
} else
- num_regs = (info->cfm >> 7) & 0x7f; /* size of locals */
+ num_regs = (*info->cfm >> 7) & 0x7f; /* size of locals */
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -num_regs);
if (info->bsp < info->regstk.limit || info->bsp > info->regstk.top) {
dprintk("unwind: bsp (0x%lx) out of range [0x%lx-0x%lx]\n",
@@ -1669,8 +1684,8 @@
info->memstk.top = stktop;
info->sw = sw;
info->sp = info->psp = (unsigned long) (sw + 1) - 16;
- info->cfm = sw->ar_pfs;
- sol = (info->cfm >> 7) & 0x7f;
+ info->cfm = &sw->ar_pfs;
+ sol = (*info->cfm >> 7) & 0x7f;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
info->ip = sw->b0;
info->pr_val = sw->pr;
@@ -1704,7 +1719,7 @@
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
- info->cfm = sw->ar_pfs;
+ info->cfm = &sw->ar_pfs;
info->ip = sw->b0;
#endif
}
@@ -1741,7 +1756,7 @@
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs(bsp, -sof);
- info->cfm = regs->cr_ifs;
+ info->cfm = &regs->cr_ifs;
info->ip = regs->cr_iip;
#endif
}
@@ -1777,7 +1792,7 @@
int
unw_unwind (struct unw_frame_info *info)
{
- unsigned long sol, cfm = info->cfm;
+ unsigned long sol, cfm = *info->cfm;
int is_nat;
sol = (cfm >> 7) & 0x7f; /* size of locals */
@@ -1796,16 +1811,16 @@
info->ip = read_reg(info, sol - 2, &is_nat);
if (is_nat || (info->ip & (my_cpu_data.unimpl_va_mask | 0xf)))
- /* don't let obviously bad addresses pollute the cache */
+ /* reject obviously bad addresses */
return -1;
+ info->cfm = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
cfm = read_reg(info, sol - 1, &is_nat);
if (is_nat)
return -1;
sol = (cfm >> 7) & 0x7f;
- info->cfm = cfm;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -sol);
return 0;
}
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-test1-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Fri Jun 9 17:38:59 2000
+++ linux-2.4.0-test1-lia/arch/ia64/mm/init.c Fri Jun 9 17:19:34 2000
@@ -183,6 +183,19 @@
}
void
+free_initrd_mem(unsigned long start, unsigned long end)
+{
+ if (start < end)
+ printk ("Freeing initrd memory: %ldkB freed\n", (end - start) >> 10);
+ for (; start < end; start += PAGE_SIZE) {
+ clear_bit(PG_reserved, &mem_map[MAP_NR(start)].flags);
+ set_page_count(&mem_map[MAP_NR(start)], 1);
+ free_page(start);
+ ++totalram_pages;
+ }
+}
+
+void
si_meminfo (struct sysinfo *val)
{
val->totalram = totalram_pages;
diff -urN linux-davidm/arch/ia64/vmlinux.lds.S linux-2.4.0-test1-lia/arch/ia64/vmlinux.lds.S
--- linux-davidm/arch/ia64/vmlinux.lds.S Fri Jun 9 17:38:59 2000
+++ linux-2.4.0-test1-lia/arch/ia64/vmlinux.lds.S Fri Jun 9 17:20:07 2000
@@ -46,6 +46,11 @@
{ *(__ex_table) }
__stop___ex_table = .;
+ __start___ksymtab = .; /* Kernel symbol table */
+ __ksymtab : AT(ADDR(__ksymtab) - PAGE_OFFSET)
+ { *(__ksymtab) }
+ __stop___ksymtab = .;
+
/* Unwind table */
ia64_unw_start = .;
.IA_64.unwind : AT(ADDR(.IA_64.unwind) - PAGE_OFFSET)
@@ -56,6 +61,8 @@
.rodata : AT(ADDR(.rodata) - PAGE_OFFSET)
{ *(.rodata) }
+ .kstrtab : AT(ADDR(.kstrtab) - PAGE_OFFSET)
+ { *(.kstrtab) }
.opd : AT(ADDR(.opd) - PAGE_OFFSET)
{ *(.opd) }
diff -urN linux-davidm/fs/proc/generic.c linux-2.4.0-test1-lia/fs/proc/generic.c
--- linux-davidm/fs/proc/generic.c Sun May 21 20:34:37 2000
+++ linux-2.4.0-test1-lia/fs/proc/generic.c Fri Jun 9 17:24:08 2000
@@ -42,7 +42,7 @@
#endif
/* 4K page size but our output routines use some slack for overruns */
-#define PROC_BLOCK_SIZE (3*1024)
+#define PROC_BLOCK_SIZE (PAGE_SIZE - 1024)
static ssize_t
proc_file_read(struct file * file, char * buf, size_t nbytes, loff_t *ppos)
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-test1-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/processor.h Fri Jun 9 17:24:33 2000
@@ -658,13 +658,16 @@
thread_saved_pc (struct thread_struct *t)
{
struct unw_frame_info info;
+ unsigned long ip;
+
/* XXX ouch: Linus, please pass the task pointer to thread_saved_pc() instead! */
struct task_struct *p = (void *) ((unsigned long) t - IA64_TASK_THREAD_OFFSET);
unw_init_from_blocked_task(&info, p);
if (unw_unwind(&info) < 0)
return 0;
- return unw_get_ip(&info);
+ unw_get_ip(&info, &ip);
+ return ip;
}
/*
diff -urN linux-davidm/include/asm-ia64/ptrace.h linux-2.4.0-test1-lia/include/asm-ia64/ptrace.h
--- linux-davidm/include/asm-ia64/ptrace.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/ptrace.h Fri Jun 9 17:24:41 2000
@@ -220,10 +220,17 @@
extern long ia64_peek (struct pt_regs *, struct task_struct *, unsigned long addr, long *val);
extern long ia64_poke (struct pt_regs *, struct task_struct *, unsigned long addr, long val);
+#ifdef CONFIG_IA64_NEW_UNWIND
+ /* get nat bits for scratch registers such that bit N=1 iff scratch register rN is a NaT */
+ extern unsigned long ia64_get_scratch_nat_bits (struct pt_regs *pt, unsigned long scratch_unat);
+ /* put nat bits for scratch registers such that scratch register rN is a NaT iff bit N=1 */
+ extern unsigned long ia64_put_scratch_nat_bits (struct pt_regs *pt, unsigned long nat);
+#else
/* get nat bits for r1-r31 such that bit N=1 iff rN is a NaT */
extern long ia64_get_nat_bits (struct pt_regs *pt, struct switch_stack *sw);
/* put nat bits for r1-r31 such that rN is a NaT iff bit N=1 */
extern void ia64_put_nat_bits (struct pt_regs *pt, struct switch_stack *sw, unsigned long nat);
+#endif
extern void ia64_increment_ip (struct pt_regs *pt);
extern void ia64_decrement_ip (struct pt_regs *pt);
diff -urN linux-davidm/include/asm-ia64/ptrace_offsets.h linux-2.4.0-test1-lia/include/asm-ia64/ptrace_offsets.h
--- linux-davidm/include/asm-ia64/ptrace_offsets.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/ptrace_offsets.h Fri Jun 9 17:25:05 2000
@@ -118,7 +118,7 @@
#define PT_F126 0x05e0
#define PT_F127 0x05f0
/* switch stack: */
-#define PT_PRI_UNAT 0x0600
+#define PT_NAT_BITS 0x0600
#define PT_F2 0x0610
#define PT_F3 0x0620
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.4.0-test1-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/system.h Fri Jun 9 17:25:19 2000
@@ -54,6 +54,8 @@
__u16 num_pci_vectors; /* number of ACPI derived PCI IRQ's*/
__u64 pci_vectors; /* physical address of PCI data (pci_vector_struct)*/
__u64 fpswa; /* physical address of the fpswa interface */
+ __u64 initrd_start;
+ __u64 initrd_size;
} ia64_boot_param;
extern inline void
diff -urN linux-davidm/include/asm-ia64/unwind.h linux-2.4.0-test1-lia/include/asm-ia64/unwind.h
--- linux-davidm/include/asm-ia64/unwind.h Fri Jun 9 17:39:00 2000
+++ linux-2.4.0-test1-lia/include/asm-ia64/unwind.h Fri Jun 9 17:25:28 2000
@@ -54,9 +54,9 @@
unsigned long bsp;
unsigned long sp; /* stack pointer */
unsigned long psp; /* previous sp */
- unsigned long cfm;
unsigned long ip; /* instruction pointer */
unsigned long pr_val; /* current predicates */
+ unsigned long *cfm;
struct switch_stack *sw;
@@ -123,11 +123,21 @@
*/
extern int unw_unwind (struct unw_frame_info *info);
-#define unw_get_ip(info) ((info)->ip)
-#define unw_get_sp(info) ((unsigned long) (info)->sp)
-#define unw_get_psp(info) ((unsigned long) (info)->psp)
-#define unw_get_bsp(info) ((unsigned long) (info)->bsp)
-#define unw_get_cfm(info) ((info)->cfm)
+#define unw_get_ip(info,vp) ({*(vp) = (info)->ip; 0;})
+#define unw_get_sp(info,vp) ({*(vp) = (unsigned long) (info)->sp; 0;})
+#define unw_get_psp(info,vp) ({*(vp) = (unsigned long) (info)->psp; 0;})
+#define unw_get_bsp(info,vp) ({*(vp) = (unsigned long) (info)->bsp; 0;})
+#define unw_get_cfm(info,vp) ({*(vp) = *(info)->cfm; 0;})
+#define unw_set_cfm(info,val) ({*(info)->cfm = (val); 0;})
+
+static inline int
+unw_get_rp (struct unw_frame_info *info, unsigned long *val)
+{
+ if (!info->rp)
+ return -1;
+ *val = *info->rp;
+ return 0;
+}
extern int unw_access_gr (struct unw_frame_info *, int, unsigned long *, char *, int);
extern int unw_access_br (struct unw_frame_info *, int, unsigned long *, int);
@@ -135,11 +145,35 @@
extern int unw_access_ar (struct unw_frame_info *, int, unsigned long *, int);
extern int unw_access_pr (struct unw_frame_info *, unsigned long *, int);
-#define unw_set_gr(i,n,v,nat) unw_access_gr(i,n,v,nat,1)
-#define unw_set_br(i,n,v) unw_access_br(i,n,v,1)
-#define unw_set_fr(i,n,v) unw_access_fr(i,n,v,1)
-#define unw_set_ar(i,n,v) unw_access_ar(i,n,v,1)
-#define unw_set_pr(i,v) unw_access_ar(i,v,1)
+static inline int
+unw_set_gr (struct unw_frame_info *i, int n, unsigned long v, char nat)
+{
+ return unw_access_gr(i, n, &v, &nat, 1);
+}
+
+static inline int
+unw_set_br (struct unw_frame_info *i, int n, unsigned long v)
+{
+ return unw_access_br(i, n, &v, 1);
+}
+
+static inline int
+unw_set_fr (struct unw_frame_info *i, int n, struct ia64_fpreg v)
+{
+ return unw_access_fr(i, n, &v, 1);
+}
+
+static inline int
+unw_set_ar (struct unw_frame_info *i, int n, unsigned long v)
+{
+ return unw_access_ar(i, n, &v, 1);
+}
+
+static inline int
+unw_set_pr (struct unw_frame_info *i, unsigned long v)
+{
+ return unw_access_pr(i, &v, 1);
+}
#define unw_get_gr(i,n,v,nat) unw_access_gr(i,n,v,nat,0)
#define unw_get_br(i,n,v) unw_access_br(i,n,v,0)
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.0-test4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (2 preceding siblings ...)
2000-06-10 1:11 ` David Mosberger
@ 2000-07-14 21:37 ` David Mosberger
2000-08-12 5:02 ` [Linux-ia64] kernel update (relative to v2.4.0-test6) David Mosberger
` (211 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-07-14 21:37 UTC (permalink / raw)
To: linux-ia64
Here is a long overdue kernel update. The relative diff I usually
provide would have been too big (and hard to read) to include in this
mail, so I'm just providing a summary of changes; the full diff,
relative to the official 2.4.0-test4 kernel, can be found at:
ftp://ftp.kernel.org/pub/linux/kernel/linux/ports/ia64/
as usual. The filename for the current patch is:
linux-2.4.0-test4-ia64-000714.diff.gz
Summary of changes:
- latest IA-32 fixes by Don Dugger (Don, can you check sys_ia32.c? I
changed the way utime() is implemented. Also, I still think it would
be a good idea to try to make the mmap emulation good enough so that
it can be used for loading IA-32 ELF binaries. Some code for doing
that is there now, though it's probably not complete.)
- clone2 system call is there now (works like clone, except that it
specifies the stack memory area explicitly via a starting address
and size)
- simplified kernel configuration for HP Simulator (Ski) by removing
options that don't make sense
- probably a couple of other things I already forgot about---check patch
for a complete reference... ;-)
Enjoy,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to v2.4.0-test6)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (3 preceding siblings ...)
2000-07-14 21:37 ` [Linux-ia64] kernel update (relative to 2.4.0-test4) David Mosberger
@ 2000-08-12 5:02 ` David Mosberger
2000-08-14 11:35 ` Andreas Schwab
` (210 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-08-12 5:02 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 kernel diff is now available at:
ftp.kernel.org:/pub/linux/kernel/ports/ia64/
in file linux-2.4.0-test6-ia64-000811.diff*.
This update features:
- Stephane's PAL call and /proc/palinfo updates.
- Asit's updates to the sw I/O TLB.
- Perfmon updates from NEC. Per-process performance monitoring
should work now.
- Stephane's fixes for initrd. The current version tolerates older
versions of eli, but if you're seeing a warning message of the form:
Warning: boot loader passed virtual address for initrd
you should consider updating your bootloader.
- A fix from Goutham that avoids pointer_lock() getting into an
endless loop.
- Finished optimizing memcpy.
- Asit's new SMP-safe region-id allocation code.
- Couple of small fixes to the AGP and DRM code.
- Added PCI ids for the 460gx chipset.
- Asit/Jes's siginfo and sigevent padding fixes. Note: this could
break certain applications (though I don't know of any that
actually do break).
- Documentation for Configure.help (yeah, nobody else volunteered, so
I figured I had to bite the bullet; however, I'd still appreciate it
if someone else could keep an eye on this to make sure it doesn't
fall behind the code too much...)
- Cleaned up (simplified?) the configuration a little. In
particular, I dropped the CONFIG_IA64_FW_EMU option. I don't think
anybody was using that anymore. Don, holler if I'm wrong... ;-)
- Small fix to make IA-32 version of vfork() work again (acrobat
didn't like it otherwise...).
- Don/Kanoj's IA-32 fixes to readv/writev and other stuff.
- Implement workarounds for lfetch and semaphore errata.
- Started a new spinlock implementation. It's currently commented
out (#ifdef NEW_LOCK), not least because it doesn't work yet.
The idea behind this new version is to have a tiny amount of code
inline for each spinlock; if there is contention, it branches to
out-of-line code. What is different compared to other
platforms is that the out-of-line handler is shared, which keeps
code-size inflation down and also allows the use of a fairly
sophisticated handler for the contention case. The current handler
is using an exponential backoff to try to keep the load on the bus
low. The reason I believe we can make this work on IA-64 is
because there are enough scratch registers so that stealing two or
three of them for the spinlock won't hurt much.
As usual, the relative diff below is only for your convenience. Get
the full diff from kernel.org for the real thing.
This kernel has been compiled and tested on the HP simulator, UP
BigSur and 4-way Lions. It feels to me like the best kernel ever. I
compiled the kernel 5 times from scratch in a row with "make -j16" and
didn't get a single failure. Of course, YMMV.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.0-test6-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Thu Aug 10 19:56:17 2000
+++ linux-2.4.0-test6-lia/Documentation/Configure.help Fri Aug 11 17:13:35 2000
@@ -16466,6 +16466,108 @@
another UltraSPARC-IIi-cEngine boardset with a 7-segment display,
you should say N to this option.
+IA-64 system type
+CONFIG_IA64_GENERIC
+ This selects the system type of your hardware. A "generic" kernel
+ will run on any supported IA-64 system. However, if you configure
+ a kernel for your specific system, it will be faster and smaller.
+
+ To find out what type of IA-64 system you have, you may want to
+ check the IA-64 Linux web site at http://www.linux-ia64.org/.
+ As of the time of this writing, most hardware is DIG compliant,
+ so the "DIG-compliant" option is usually the right choice.
+
+ HP-simulator For the HP simulator (http://software.hp.com/ia64linux/).
+ SN1-simulator For the SGI SN1 simulator.
+ DIG-compliant For DIG ("Developer's Interface Guide") compliant system.
+
+ If you don't know what to do, choose "generic".
+
+Kernel page size
+CONFIG_IA64_PAGE_SIZE_4KB
+
+ This lets you select the page size of the kernel. For best IA-64
+ performance, a page size of 8KB or 16KB is recommended. For best
+ IA-32 compatibility, a page size of 4KB should be selected (the vast
+ majority of IA-32 binaries work perfectly fine with a larger page
+ size). For Itanium systems, do NOT choose a page size larger than
+ 16KB.
+
+ 4KB For best IA-32 compatibility
+ 8KB For best IA-64 performance
+ 16KB For best IA-64 performance
+ 64KB Not for Itanium.
+
+ If you don't know what to do, choose 8KB.
+
+Enable Itanium A-step specific code
+CONFIG_ITANIUM_ASTEP_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with an A-step CPU. You have an A-step CPU if the "revision" field in
+ /proc/cpuinfo is 0.
+
+Enable Itanium A1-step specific code
+CONFIG_ITANIUM_A1_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with an A1-step CPU. If you don't know whether you have an A1-step CPU,
+ you probably don't and you can answer "no" here.
+
+Enable Itanium B-step specific code
+CONFIG_ITANIUM_BSTEP_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with a B-step CPU. You have a B-step CPU if the "revision" field in
+ /proc/cpuinfo has a value in the range from 1 to 4.
+
+Enable Itanium B0-step specific code
+CONFIG_ITANIUM_B0_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with a B0-step CPU. You have a B0-step CPU if the "revision" field in
+ /proc/cpuinfo is 1.
+
+Force interrupt redirection
+CONFIG_IA64_HAVE_IRQREDIR
+ Select this option if you know that your system has the ability to
+ redirect interrupts to different CPUs. Select N here if you're
+ unsure.
+
+Enable use of global TLB purge instruction (ptc.g)
+CONFIG_ITANIUM_PTCG
+ Say Y here if you want the kernel to use the IA-64 "ptc.g"
+ instruction to flush the TLB on all CPUs. Select N here if
+ you're unsure.
+
+Enable SoftSDV hacks
+CONFIG_IA64_SOFTSDV_HACKS
+ Say Y here to enable hacks to make the kernel work on the Intel
+ SoftSDV simulator. Select N here if you're unsure.
+
+Enable AzusA hacks
+CONFIG_IA64_AZUSA_HACKS
+ Say Y here to enable hacks to make the kernel work on the NEC
+ AzusA platform. Select N here if you're unsure.
+
+Enable IA-64 Machine Check Abort
+CONFIG_IA64_MCA
+ Say Y here to enable machine check support for IA-64. If you're
+ unsure, answer Y.
+
+Performance monitor support
+CONFIG_PERFMON
+ Selects whether support for the IA-64 performance monitor hardware
+ is included in the kernel. This makes some kernel data-structures a
+ little bigger and slows down execution a bit, but it is still
+ usually a good idea to turn this on. If you're unsure, say N.
+
+/proc/pal support
+CONFIG_IA64_PALINFO
+ If you say Y here, you are able to get PAL (Processor Abstraction
+ Layer) information in /proc/pal. This contains useful information
+ about the processors in your systems, such as cache and TLB sizes
+ and the PAL firmware version in use.
+
+ To use this option, you have to check that the "/proc file system
+ support" (CONFIG_PROC_FS) is enabled, too.
+
#
# A couple of things I keep forgetting:
# capitalize: AppleTalk, Ethernet, DOS, DMA, FAT, FTP, Internet,
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-test6-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Aug 2 18:54:01 2000
+++ linux-2.4.0-test6-lia/arch/ia64/config.in Fri Aug 11 16:59:01 2000
@@ -18,15 +18,16 @@
comment 'General setup'
define_bool CONFIG_IA64 y
+define_bool CONFIG_SWIOTLB y # for now...
define_bool CONFIG_ISA n
define_bool CONFIG_SBUS n
choice 'IA-64 system type' \
- "Generic CONFIG_IA64_GENERIC \
+ "generic CONFIG_IA64_GENERIC \
+ DIG-compliant CONFIG_IA64_DIG \
HP-simulator CONFIG_IA64_HP_SIM \
- SN1-simulator CONFIG_IA64_SGI_SN1_SIM \
- DIG-compliant CONFIG_IA64_DIG" Generic
+ SN1-simulator CONFIG_IA64_SGI_SN1_SIM" generic
choice 'Kernel page size' \
"4KB CONFIG_IA64_PAGE_SIZE_4KB \
@@ -38,16 +39,18 @@
define_bool CONFIG_ITANIUM y
define_bool CONFIG_IA64_BRL_EMU y
bool ' Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
- bool ' Enable Itanium A1-step specific code' CONFIG_ITANIUM_A1_SPECIFIC
+ if [ "$CONFIG_ITANIUM_ASTEP_SPECIFIC" = "y" ]; then
+ bool ' Enable Itanium A1-step specific code' CONFIG_ITANIUM_A1_SPECIFIC
+ fi
+ bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
+ if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
+ bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
+ fi
+ bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
- bool ' Emulate PAL/SAL/EFI firmware' CONFIG_IA64_FW_EMU
- bool ' Enable IA64 Machine Check Abort' CONFIG_IA64_MCA
-fi
-
-if [ "$CONFIG_IA64_GENERIC" = "y" ]; then
- define_bool CONFIG_IA64_SOFTSDV_HACKS y
+ bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
fi
if [ "$CONFIG_IA64_SGI_SN1_SIM" = "y" ]; then
@@ -59,7 +62,7 @@
bool 'SMP support' CONFIG_SMP
bool 'Performance monitor support' CONFIG_PERFMON
-bool '/proc/palinfo support' CONFIG_IA64_PALINFO
+bool '/proc/pal support' CONFIG_IA64_PALINFO
bool 'Networking support' CONFIG_NET
bool 'System V IPC' CONFIG_SYSVIPC
@@ -162,8 +165,6 @@
#source drivers/misc/Config.in
source fs/Config.in
-
-source fs/nls/Config.in
if [ "$CONFIG_VT" = "y" ]; then
mainmenu_option next_comment
diff -urN linux-davidm/arch/ia64/dig/setup.c linux-2.4.0-test6-lia/arch/ia64/dig/setup.c
--- linux-davidm/arch/ia64/dig/setup.c Wed Aug 2 18:54:01 2000
+++ linux-2.4.0-test6-lia/arch/ia64/dig/setup.c Fri Aug 11 16:58:37 2000
@@ -24,10 +24,6 @@
#include <asm/machvec.h>
#include <asm/system.h>
-#ifdef CONFIG_IA64_FW_EMU
-# include "../../kernel/fw-emu.c"
-#endif
-
/*
* This is here so we can use the CMOS detection in ide-probe.c to
* determine what drives are present. In theory, we don't need this
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.0-test6-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Wed Aug 2 18:54:01 2000
+++ linux-2.4.0-test6-lia/arch/ia64/ia32/ia32_entry.S Wed Aug 2 12:32:26 2000
@@ -73,7 +73,7 @@
END(ia32_trace_syscall)
GLOBAL_ENTRY(sys32_vfork)
- alloc r16=ar.pfs,2,2,3,0;;
+ alloc r16=ar.pfs,2,2,4,0;;
mov out0=IA64_CLONE_VFORK|IA64_CLONE_VM|SIGCHLD // out0 = clone_flags
br.cond.sptk.few .fork1 // do the work
END(sys32_vfork)
@@ -105,7 +105,7 @@
.align 8
.globl ia32_syscall_table
ia32_syscall_table:
- data8 sys_ni_syscall /* 0 - old "setup()" system call*/
+ data8 sys32_ni_syscall /* 0 - old "setup()" system call*/
data8 sys_exit
data8 sys32_fork
data8 sys_read
@@ -122,25 +122,25 @@
data8 sys_mknod
data8 sys_chmod /* 15 */
data8 sys_lchown
- data8 sys_ni_syscall /* old break syscall holder */
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall /* old break syscall holder */
+ data8 sys32_ni_syscall
data8 sys_lseek
data8 sys_getpid /* 20 */
data8 sys_mount
data8 sys_oldumount
data8 sys_setuid
data8 sys_getuid
- data8 sys_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
+ data8 sys32_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
data8 sys32_ptrace
data8 sys32_alarm
- data8 sys_ni_syscall
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall
+ data8 sys32_ni_syscall
data8 ia32_utime /* 30 */
- data8 sys_ni_syscall /* old stty syscall holder */
- data8 sys_ni_syscall /* old gtty syscall holder */
+ data8 sys32_ni_syscall /* old stty syscall holder */
+ data8 sys32_ni_syscall /* old gtty syscall holder */
data8 sys_access
data8 sys_nice
- data8 sys_ni_syscall /* 35 */ /* old ftime syscall holder */
+ data8 sys32_ni_syscall /* 35 */ /* old ftime syscall holder */
data8 sys_sync
data8 sys_kill
data8 sys_rename
@@ -149,22 +149,22 @@
data8 sys_dup
data8 sys32_pipe
data8 sys32_times
- data8 sys_ni_syscall /* old prof syscall holder */
+ data8 sys32_ni_syscall /* old prof syscall holder */
data8 sys_brk /* 45 */
data8 sys_setgid
data8 sys_getgid
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall
data8 sys_geteuid
data8 sys_getegid /* 50 */
data8 sys_acct
data8 sys_umount /* recycled never used phys() */
- data8 sys_ni_syscall /* old lock syscall holder */
+ data8 sys32_ni_syscall /* old lock syscall holder */
data8 ia32_ioctl
- data8 sys_fcntl /* 55 */
- data8 sys_ni_syscall /* old mpx syscall holder */
+ data8 sys32_fcntl /* 55 */
+ data8 sys32_ni_syscall /* old mpx syscall holder */
data8 sys_setpgid
- data8 sys_ni_syscall /* old ulimit syscall holder */
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall /* old ulimit syscall holder */
+ data8 sys32_ni_syscall
data8 sys_umask /* 60 */
data8 sys_chroot
data8 sys_ustat
@@ -172,12 +172,12 @@
data8 sys_getppid
data8 sys_getpgrp /* 65 */
data8 sys_setsid
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
+ data8 sys32_sigaction
+ data8 sys32_ni_syscall
+ data8 sys32_ni_syscall
data8 sys_setreuid /* 70 */
data8 sys_setregid
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall
data8 sys_sigpending
data8 sys_sethostname
data8 sys32_setrlimit /* 75 */
@@ -189,7 +189,7 @@
data8 sys_setgroups
data8 old_select
data8 sys_symlink
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall
data8 sys_readlink /* 85 */
data8 sys_uselib
data8 sys_swapon
@@ -203,7 +203,7 @@
data8 sys_fchown /* 95 */
data8 sys_getpriority
data8 sys_setpriority
- data8 sys_ni_syscall /* old profil syscall holder */
+ data8 sys32_ni_syscall /* old profil syscall holder */
data8 sys32_statfs
data8 sys32_fstatfs /* 100 */
data8 sys_ioperm
@@ -214,11 +214,11 @@
data8 sys32_newstat
data8 sys32_newlstat
data8 sys32_newfstat
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall
data8 sys_iopl /* 110 */
data8 sys_vhangup
- data8 sys_ni_syscall // used to be sys_idle
- data8 sys_ni_syscall
+ data8 sys32_ni_syscall // used to be sys_idle
+ data8 sys32_ni_syscall
data8 sys32_wait4
data8 sys_swapoff /* 115 */
data8 sys_sysinfo
@@ -242,7 +242,7 @@
data8 sys_bdflush
data8 sys_sysfs /* 135 */
data8 sys_personality
- data8 sys_ni_syscall /* for afs_syscall */
+ data8 sys32_ni_syscall /* for afs_syscall */
data8 sys_setfsuid
data8 sys_setfsgid
data8 sys_llseek /* 140 */
@@ -293,8 +293,8 @@
data8 sys_capset /* 185 */
data8 sys_sigaltstack
data8 sys_sendfile
- data8 sys_ni_syscall /* streams1 */
- data8 sys_ni_syscall /* streams2 */
+ data8 sys32_ni_syscall /* streams1 */
+ data8 sys32_ni_syscall /* streams2 */
data8 sys32_vfork /* 190 */
/*
* CAUTION: If any system calls are added beyond this point
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.0-test6-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Wed Aug 2 18:54:01 2000
+++ linux-2.4.0-test6-lia/arch/ia64/ia32/sys_ia32.c Mon Jul 31 14:01:22 2000
@@ -74,10 +74,14 @@
n = 0;
do {
- if ((err = get_user(addr, (int *)A(arg))) != 0)
- return(err);
- if (ap)
- *ap++ = (char *)A(addr);
+ err = get_user(addr, (int *)A(arg));
+ if (IS_ERR(err))
+ return err;
+ if (ap) { /* no access_ok needed, we allocated */
+ err = __put_user((char *)A(addr), ap++);
+ if (IS_ERR(err))
+ return err;
+ }
arg += sizeof(unsigned int);
n++;
} while (addr);
@@ -101,7 +105,11 @@
int na, ne, r, len;
na = nargs(argv, NULL);
+ if (IS_ERR(na))
+ return(na);
ne = nargs(envp, NULL);
+ if (IS_ERR(ne))
+ return(ne);
len = (na + ne + 2) * sizeof(*av);
/*
* kmalloc won't work because the `sys_exec' code will attempt
@@ -121,12 +129,21 @@
if (IS_ERR(av))
return (long)av;
ae = av + na + 1;
- av[na] = (char *)0;
- ae[ne] = (char *)0;
- (void)nargs(argv, av);
- (void)nargs(envp, ae);
+ r = __put_user(0, (av + na));
+ if (IS_ERR(r))
+ goto out;
+ r = __put_user(0, (ae + ne));
+ if (IS_ERR(r))
+ goto out;
+ r = nargs(argv, av);
+ if (IS_ERR(r))
+ goto out;
+ r = nargs(envp, ae);
+ if (IS_ERR(r))
+ goto out;
r = sys_execve(filename, av, ae, regs);
if (IS_ERR(r))
+out:
sys_munmap((unsigned long) av, len);
return(r);
}
@@ -959,150 +976,85 @@
}
struct iovec32 { unsigned int iov_base; int iov_len; };
+asmlinkage ssize_t sys_readv(unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_writev(unsigned long,const struct iovec *,unsigned long);
-typedef ssize_t (*IO_fn_t)(struct file *, char *, size_t, loff_t *);
-
-static long
-do_readv_writev32(int type, struct file *file, const struct iovec32 *vector,
- u32 count)
+static struct iovec *
+get_iovec32(struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
{
- unsigned long tot_len;
- struct iovec iovstack[UIO_FASTIOV];
- struct iovec *iov=iovstack, *ivp;
- struct inode *inode;
- long retval, i;
- IO_fn_t fn;
+ int i;
+ u32 buf, len;
+ struct iovec *ivp, *iov;
+
+ /* Get the "struct iovec" from user memory */
- /* First get the "struct iovec" from user memory and
- * verify all the pointers
- */
if (!count)
return 0;
- if(verify_area(VERIFY_READ, vector, sizeof(struct iovec32)*count))
- return -EFAULT;
+ if(verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
+ return(struct iovec *)0;
if (count > UIO_MAXIOV)
- return -EINVAL;
+ return(struct iovec *)0;
if (count > UIO_FASTIOV) {
iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL);
if (!iov)
- return -ENOMEM;
- }
+ return((struct iovec *)0);
+ } else
+ iov = iov_buf;
- tot_len = 0;
- i = count;
ivp = iov;
- while(i > 0) {
- u32 len;
- u32 buf;
-
- __get_user(len, &vector->iov_len);
- __get_user(buf, &vector->iov_base);
- tot_len += len;
+ for (i = 0; i < count; i++) {
+ if (__get_user(len, &iov32->iov_len) ||
+ __get_user(buf, &iov32->iov_base)) {
+ if (iov != iov_buf)
+ kfree(iov);
+ return((struct iovec *)0);
+ }
+ if (verify_area(type, (void *)A(buf), len)) {
+ if (iov != iov_buf)
+ kfree(iov);
+ return((struct iovec *)0);
+ }
ivp->iov_base = (void *)A(buf);
- ivp->iov_len = (__kernel_size_t) len;
- vector++;
- ivp++;
- i--;
- }
-
- inode = file->f_dentry->d_inode;
- /* VERIFY_WRITE actually means a read, as we write to user space */
- retval = locks_verify_area((type == VERIFY_WRITE
- ? FLOCK_VERIFY_READ : FLOCK_VERIFY_WRITE),
- inode, file, file->f_pos, tot_len);
- if (retval) {
- if (iov != iovstack)
- kfree(iov);
- return retval;
- }
-
- /* Then do the actual IO. Note that sockets need to be handled
- * specially as they have atomicity guarantees and can handle
- * iovec's natively
- */
- if (inode->i_sock) {
- int err;
- err = sock_readv_writev(type, inode, file, iov, count, tot_len);
- if (iov != iovstack)
- kfree(iov);
- return err;
- }
-
- if (!file->f_op) {
- if (iov != iovstack)
- kfree(iov);
- return -EINVAL;
- }
- /* VERIFY_WRITE actually means a read, as we write to user space */
- fn = file->f_op->read;
- if (type == VERIFY_READ)
- fn = (IO_fn_t) file->f_op->write;
- ivp = iov;
- while (count > 0) {
- void * base;
- int len, nr;
-
- base = ivp->iov_base;
- len = ivp->iov_len;
+ ivp->iov_len = (__kernel_size_t)len;
+ iov32++;
ivp++;
- count--;
- nr = fn(file, base, len, &file->f_pos);
- if (nr < 0) {
- if (retval)
- break;
- retval = nr;
- break;
- }
- retval += nr;
- if (nr != len)
- break;
}
- if (iov != iovstack)
- kfree(iov);
- return retval;
+ return(iov);
}
asmlinkage long
sys32_readv(int fd, struct iovec32 *vector, u32 count)
{
- struct file *file;
- long ret = -EBADF;
-
- file = fget(fd);
- if(!file)
- goto bad_file;
-
- if(!(file->f_mode & 1))
- goto out;
+ struct iovec iovstack[UIO_FASTIOV];
+ struct iovec *iov;
+ int ret;
+ mm_segment_t old_fs = get_fs();
- ret = do_readv_writev32(VERIFY_WRITE, file,
- vector, count);
-out:
- fput(file);
-bad_file:
+ if ((iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE)) == (struct iovec *)0)
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_readv(fd, iov, count);
+ set_fs(old_fs);
+ if (iov != iovstack)
+ kfree(iov);
return ret;
}
asmlinkage long
sys32_writev(int fd, struct iovec32 *vector, u32 count)
{
- struct file *file;
- int ret = -EBADF;
-
- file = fget(fd);
- if(!file)
- goto bad_file;
-
- if(!(file->f_mode & 2))
- goto out;
+ struct iovec iovstack[UIO_FASTIOV];
+ struct iovec *iov;
+ int ret;
+ mm_segment_t old_fs = get_fs();
- down(&file->f_dentry->d_inode->i_sem);
- ret = do_readv_writev32(VERIFY_READ, file,
- vector, count);
- up(&file->f_dentry->d_inode->i_sem);
-out:
- fput(file);
-bad_file:
+ if ((iov = get_iovec32(vector, iovstack, count, VERIFY_READ)) == (struct iovec *)0)
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_writev(fd, iov, count);
+ set_fs(old_fs);
+ if (iov != iovstack)
+ kfree(iov);
return ret;
}
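[Editorial aside: the two wrappers above widen the 32-bit iovec array once, then replay it through the native sys_readv/sys_writev under KERNEL_DS. The core conversion can be sketched in plain user-space C; `struct iovec32` below is a hypothetical stand-in for the kernel's layout, and the in-kernel verify_area/kfree handling is deliberately omitted.]

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/uio.h>

/* 32-bit iovec layout as an IA-32 task would see it (hypothetical
 * stand-in for the kernel's struct iovec32). */
struct iovec32 {
    uint32_t iov_base;  /* 32-bit user pointer */
    uint32_t iov_len;
};

/* Widen a 32-bit iovec array into native struct iovec entries,
 * mirroring the loop in get_iovec32() above. Returns NULL on
 * allocation failure; caller frees the result. */
struct iovec *widen_iovec32(const struct iovec32 *iov32, size_t count)
{
    struct iovec *iov = malloc(count * sizeof(*iov));
    if (!iov)
        return NULL;
    for (size_t i = 0; i < count; i++) {
        /* zero-extend the 32-bit pointer into a native pointer */
        iov[i].iov_base = (void *)(uintptr_t)iov32[i].iov_base;
        iov[i].iov_len  = iov32[i].iov_len;
    }
    return iov;
}
```

Once widened, the array can be handed to the 64-bit I/O path unchanged, which is exactly what the set_fs(KERNEL_DS) dance above arranges.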
@@ -1173,21 +1125,22 @@
static inline int
shape_msg(struct msghdr *mp, struct msghdr32 *mp32)
{
+ int ret;
unsigned int i;
if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
return(-EFAULT);
- __get_user(i, &mp32->msg_name);
+ ret = __get_user(i, &mp32->msg_name);
mp->msg_name = (void *)A(i);
- __get_user(mp->msg_namelen, &mp32->msg_namelen);
- __get_user(i, &mp32->msg_iov);
+ ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
+ ret |= __get_user(i, &mp32->msg_iov);
mp->msg_iov = (struct iovec *)A(i);
- __get_user(mp->msg_iovlen, &mp32->msg_iovlen);
- __get_user(i, &mp32->msg_control);
+ ret |= __get_user(mp->msg_iovlen, &mp32->msg_iovlen);
+ ret |= __get_user(i, &mp32->msg_control);
mp->msg_control = (void *)A(i);
- __get_user(mp->msg_controllen, &mp32->msg_controllen);
- __get_user(mp->msg_flags, &mp32->msg_flags);
- return(0);
+ ret |= __get_user(mp->msg_controllen, &mp32->msg_controllen);
+ ret |= __get_user(mp->msg_flags, &mp32->msg_flags);
+ return(ret ? -EFAULT : 0);
}
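[Editorial aside: the fix above replaces unchecked `__get_user` calls with the OR-accumulation idiom, so a single fault anywhere in the structure yields one -EFAULT. A minimal sketch of the pattern, with `fake_get_user` standing in for the kernel's `__get_user`:]

```c
#include <errno.h>

/* Hypothetical stand-in for __get_user(): returns 0 on success,
 * nonzero on a simulated fault; stores the value only on success. */
static int fake_get_user(int *dst, const int *src, int fault)
{
    if (fault)
        return -1;
    *dst = *src;
    return 0;
}

/* The shape_msg() pattern: OR together per-field results, report a
 * single -EFAULT if any fetch failed. */
int copy_two_fields(int *a, int *b, const int *ua, const int *ub,
                    int fault_a, int fault_b)
{
    int ret;

    ret  = fake_get_user(a, ua, fault_a);
    ret |= fake_get_user(b, ub, fault_b);
    return ret ? -EFAULT : 0;
}
```

The accumulation keeps the straight-line field-copy code while still honoring the uaccess contract that every fetch can fault.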
/*
@@ -2341,17 +2294,17 @@
{
struct switch_stack *swp;
struct pt_regs *ptp;
- int i, tos;
+ int i, tos, ret;
int fsrlo, fsrhi;
if (!access_ok(VERIFY_READ, save, sizeof(*save)))
return(-EIO);
- __get_user(tsk->thread.fcr, (unsigned int *)&save->cw);
- __get_user(fsrlo, (unsigned int *)&save->sw);
- __get_user(fsrhi, (unsigned int *)&save->tag);
+ ret = __get_user(tsk->thread.fcr, (unsigned int *)&save->cw);
+ ret |= __get_user(fsrlo, (unsigned int *)&save->sw);
+ ret |= __get_user(fsrhi, (unsigned int *)&save->tag);
tsk->thread.fsr = ((long)fsrhi << 32) | (long)fsrlo;
- __get_user(tsk->thread.fir, (unsigned int *)&save->ipoff);
- __get_user(tsk->thread.fdr, (unsigned int *)&save->dataoff);
+ ret |= __get_user(tsk->thread.fir, (unsigned int *)&save->ipoff);
+ ret |= __get_user(tsk->thread.fdr, (unsigned int *)&save->dataoff);
/*
* Stack frames start with 16-bytes of temp space
*/
@@ -2360,7 +2313,7 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
get_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(0);
+ return(ret ? -EFAULT : 0);
}
asmlinkage long sys_ptrace(long, pid_t, unsigned long, unsigned long, long, long, long, long, long);
@@ -2492,6 +2445,105 @@
return ret;
}
+static inline int
+get_flock32(struct flock *kfl, struct flock32 *ufl)
+{
+ int err;
+
+ err = get_user(kfl->l_type, &ufl->l_type);
+ err |= __get_user(kfl->l_whence, &ufl->l_whence);
+ err |= __get_user(kfl->l_start, &ufl->l_start);
+ err |= __get_user(kfl->l_len, &ufl->l_len);
+ err |= __get_user(kfl->l_pid, &ufl->l_pid);
+ return err;
+}
+
+static inline int
+put_flock32(struct flock *kfl, struct flock32 *ufl)
+{
+ int err;
+
+ err = __put_user(kfl->l_type, &ufl->l_type);
+ err |= __put_user(kfl->l_whence, &ufl->l_whence);
+ err |= __put_user(kfl->l_start, &ufl->l_start);
+ err |= __put_user(kfl->l_len, &ufl->l_len);
+ err |= __put_user(kfl->l_pid, &ufl->l_pid);
+ return err;
+}
+
+extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd,
+ unsigned long arg);
+
+asmlinkage long
+sys32_fcntl(unsigned int fd, unsigned int cmd, int arg)
+{
+ struct flock f;
+ mm_segment_t old_fs;
+ long ret;
+
+ switch (cmd) {
+ case F_GETLK:
+ case F_SETLK:
+ case F_SETLKW:
+ if(cmd != F_GETLK && get_flock32(&f, (struct flock32 *)((long)arg)))
+ return -EFAULT;
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ ret = sys_fcntl(fd, cmd, (unsigned long)&f);
+ set_fs(old_fs);
+ if(cmd == F_GETLK && put_flock32(&f, (struct flock32 *)((long)arg)))
+ return -EFAULT;
+ return ret;
+ default:
+ /*
+ * `sys_fcntl' lies about arg, for the F_SETOWN
+ * sub-function arg can have a negative value.
+ */
+ return sys_fcntl(fd, cmd, (unsigned long)((long)arg));
+ }
+}
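[Editorial aside: sys32_fcntl above works by widening the 32-bit flock into a kernel flock, calling the native sys_fcntl under KERNEL_DS, and narrowing the result back for F_GETLK. The widen/narrow step alone looks like this; the struct layouts are illustrative stand-ins, not the kernel's exact definitions.]

```c
#include <stdint.h>

/* Illustrative 32-bit flock layout (stand-in for struct flock32). */
struct flock32_s {
    int16_t l_type;
    int16_t l_whence;
    int32_t l_start;
    int32_t l_len;
    int32_t l_pid;
};

/* Illustrative native layout with 64-bit offsets. */
struct flock_s {
    int16_t l_type;
    int16_t l_whence;
    int64_t l_start;
    int64_t l_len;
    int32_t l_pid;
};

/* Widen, mirroring get_flock32() above (sign-extending offsets). */
void flock_widen(struct flock_s *k, const struct flock32_s *u)
{
    k->l_type   = u->l_type;
    k->l_whence = u->l_whence;
    k->l_start  = u->l_start;
    k->l_len    = u->l_len;
    k->l_pid    = u->l_pid;
}

/* Narrow, mirroring put_flock32(); offsets beyond 32 bits truncate. */
void flock_narrow(const struct flock_s *k, struct flock32_s *u)
{
    u->l_type   = k->l_type;
    u->l_whence = k->l_whence;
    u->l_start  = (int32_t)k->l_start;
    u->l_len    = (int32_t)k->l_len;
    u->l_pid    = k->l_pid;
}
```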
+
+asmlinkage long
+sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
+ if (act) {
+ old_sigset32_t mask;
+
+ ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
+ ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= __get_user(mask, &act->sa_mask);
+ if (ret)
+ return ret;
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
+ ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+asmlinkage long sys_ni_syscall(void);
+
+asmlinkage long
+sys32_ni_syscall(int dummy0, int dummy1, int dummy2, int dummy3,
+ int dummy4, int dummy5, int dummy6, int dummy7, int stack)
+{
+ struct pt_regs *regs = (struct pt_regs *)&stack;
+
+ printk("IA32 syscall #%d issued, maybe we should implement it\n",
+ (int)regs->r1);
+ return(sys_ni_syscall());
+}
+
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
/* In order to reduce some races, while at the same time doing additional
@@ -2545,61 +2597,6 @@
return sys_ioperm((unsigned long)from, (unsigned long)num, on);
}
-static inline int
-get_flock(struct flock *kfl, struct flock32 *ufl)
-{
- int err;
-
- err = get_user(kfl->l_type, &ufl->l_type);
- err |= __get_user(kfl->l_whence, &ufl->l_whence);
- err |= __get_user(kfl->l_start, &ufl->l_start);
- err |= __get_user(kfl->l_len, &ufl->l_len);
- err |= __get_user(kfl->l_pid, &ufl->l_pid);
- return err;
-}
-
-static inline int
-put_flock(struct flock *kfl, struct flock32 *ufl)
-{
- int err;
-
- err = __put_user(kfl->l_type, &ufl->l_type);
- err |= __put_user(kfl->l_whence, &ufl->l_whence);
- err |= __put_user(kfl->l_start, &ufl->l_start);
- err |= __put_user(kfl->l_len, &ufl->l_len);
- err |= __put_user(kfl->l_pid, &ufl->l_pid);
- return err;
-}
-
-extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd,
- unsigned long arg);
-
-asmlinkage long
-sys32_fcntl(unsigned int fd, unsigned int cmd, unsigned long arg)
-{
- switch (cmd) {
- case F_GETLK:
- case F_SETLK:
- case F_SETLKW:
- {
- struct flock f;
- mm_segment_t old_fs;
- long ret;
-
- if(get_flock(&f, (struct flock32 *)arg))
- return -EFAULT;
- old_fs = get_fs(); set_fs (KERNEL_DS);
- ret = sys_fcntl(fd, cmd, (unsigned long)&f);
- set_fs (old_fs);
- if(put_flock(&f, (struct flock32 *)arg))
- return -EFAULT;
- return ret;
- }
- default:
- return sys_fcntl(fd, cmd, (unsigned long)arg);
- }
-}
-
struct dqblk32 {
__u32 dqb_bhardlimit;
__u32 dqb_bsoftlimit;
@@ -3861,40 +3858,6 @@
}
extern void check_pending(int signum);
-
-asmlinkage long
-sys32_sigaction (int sig, struct old_sigaction32 *act,
- struct old_sigaction32 *oact)
-{
- struct k_sigaction new_ka, old_ka;
- int ret;
-
- if(sig < 0) {
- current->tss.new_signal = 1;
- sig = -sig;
- }
-
- if (act) {
- old_sigset_t32 mask;
-
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
- ret |= __get_user(mask, &act->sa_mask);
- if (ret)
- return ret;
- siginitset(&new_ka.sa.sa_mask, mask);
- }
-
- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
-
- if (!ret && oact) {
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
- ret |= __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
- }
-
- return ret;
-}
#ifdef CONFIG_MODULES
diff -urN linux-davidm/arch/ia64/kernel/Makefile linux-2.4.0-test6-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Fri Aug 11 19:01:13 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/Makefile Wed Aug 2 18:57:03 2000
@@ -9,8 +9,8 @@
all: kernel.o head.o init_task.o
-obj-y := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_ia64.o irq_sapic.o ivt.o \
- pal.o pci-dma.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
+obj-y := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_ia64.o irq_sapic.o ivt.o \
+ machvec.o pal.o pci-dma.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
obj-$(CONFIG_IA64_GENERIC) += machvec.o
diff -urN linux-davidm/arch/ia64/kernel/efi.c linux-2.4.0-test6-lia/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Fri Aug 11 19:01:13 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/efi.c Fri Aug 11 18:01:55 2000
@@ -246,7 +246,7 @@
printk(KERN_ERR "Too many EFI Pal Code memory ranges, dropped @ %lx\n",
md->phys_addr);
continue;
- }
+ }
mask = ~((1 << _PAGE_SIZE_4M)-1); /* XXX should be dynamic? */
vaddr = PAGE_OFFSET + md->phys_addr;
@@ -289,7 +289,7 @@
for (cp = saved_command_line; *cp; ) {
if (memcmp(cp, "mem=", 4) == 0) {
cp += 4;
- mem_limit = memparse(cp, &end);
+ mem_limit = memparse(cp, &end) - 1;
if (end != cp)
break;
cp = end;
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test6-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Thu Aug 10 19:56:18 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/entry.S Fri Aug 11 14:56:27 2000
@@ -106,29 +106,19 @@
alloc r16=ar.pfs,1,0,0,0
DO_SAVE_SWITCH_STACK
UNW(.body)
- // disable interrupts to ensure atomicity for next few instructions:
- mov r17=psr // M-unit
- ;;
- rsm psr.i // M-unit
- dep r18=-1,r0,0,61 // build mask 0x1fffffffffffffff
- ;;
- srlz.d
- ;;
+
adds r22=IA64_TASK_THREAD_KSP_OFFSET,r13
+ dep r18=-1,r0,0,61 // build mask 0x1fffffffffffffff
adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
;;
st8 [r22]=sp // save kernel stack pointer of old task
ld8 sp=[r21] // load kernel stack pointer of new task
and r20=in0,r18 // physical address of "current"
;;
+ mov ar.k6=r20 // copy "current" into ar.k6
mov r8=r13 // return pointer to previously running task
mov r13=in0 // set "current" pointer
- mov ar.k6=r20 // copy "current" into ar.k6
- ;;
- // restore interrupts
- mov psr.l=r17
;;
- srlz.d
DO_LOAD_SWITCH_STACK( )
br.ret.sptk.few rp
END(ia64_switch_to)
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.4.0-test6-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Wed Aug 2 18:54:01 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/head.S Sat Aug 5 19:27:13 2000
@@ -181,7 +181,9 @@
GLOBAL_ENTRY(ia64_load_debug_regs)
alloc r16=ar.pfs,1,0,0,0
+#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
lfetch.nta [in0]
+#endif
mov r20=ar.lc // preserve ar.lc
add r19=IA64_NUM_DBG_REGS*8,in0
mov ar.lc=IA64_NUM_DBG_REGS-1
@@ -702,3 +704,74 @@
SET_REG(b5);
#endif /* CONFIG_IA64_BRL_EMU */
+
+#ifdef CONFIG_SMP
+
+ /*
+ * This routine handles spinlock contention. It uses a simple exponential backoff
+ * algorithm to reduce unnecessary bus traffic. The initial delay is selected from
+ * the low-order bits of the cycle counter (a cheap "randomizer"). I'm sure this
+ * could use additional tuning, especially on systems with a large number of CPUs.
+ * Also, I think the maximum delay should be made a function of the number of CPUs in
+ * the system. --davidm 00/08/05
+ *
+ * WARNING: This is not a normal procedure. It gets called from C code without
+ * the compiler knowing about it. Thus, we must not use any scratch registers
+ * beyond those that were declared "clobbered" at the call-site (see spin_lock()
+ * macro). We may not even use the stacked registers, because that could overwrite
+ * output registers. Similarly, we can't use the scratch stack area as it may be
+ * in use, too.
+ *
+ * Inputs:
+ * ar.ccv = 0 (and available for use)
+ * r28 = available for use
+ * r29 = available for use
+ * r30 = non-zero (and available for use)
+ * r31 = address of lock we're trying to acquire
+ * p15 = available for use
+ */
+
+# define delay r28
+# define timeout r29
+# define tmp r30
+
+GLOBAL_ENTRY(ia64_spinlock_contention)
+ mov tmp=ar.itc
+ ;;
+ and delay=0x3f,tmp
+ ;;
+
+.retry: add timeout=tmp,delay
+ shl delay=delay,1
+ ;;
+ dep delay=delay,r0,0,13 // limit delay to 8192 cycles
+ ;;
+ // delay a little...
+.wait: sub tmp=tmp,timeout
+ or delay=0xf,delay // make sure delay is non-zero (otherwise we get stuck with 0)
+ ;;
+ cmp.lt p15,p0=tmp,r0
+ mov tmp=ar.itc
+(p15) br.cond.sptk .wait
+ ;;
+ ld1 tmp=[r31]
+ ;;
+ cmp.ne p15,p0=tmp,r0
+ mov tmp=ar.itc
+(p15) br.cond.sptk.few .retry // lock is still busy
+ ;;
+ // try acquiring lock (we know ar.ccv is still zero!):
+ mov tmp=1
+ ;;
+ IA64_SEMFIX_INSN
+ cmpxchg1.acq tmp=[r31],tmp,ar.ccv
+ ;;
+ cmp.eq p15,p0=tmp,r0
+
+ mov tmp=ar.itc
+(p15) br.ret.sptk.many b7 // got lock -> return
+ br .retry // still no luck, retry
+
+END(ia64_spinlock_contention)
+
+#endif
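[Editorial aside: the ia64_spinlock_contention routine above implements exponential backoff with a cycle-counter "randomizer" and an 8192-cycle cap. The same idea in portable C11 atomics, as a minimal sketch (the constants mirror the assembly but are illustrative, and the empty spin loop stands in for the timed `ar.itc` wait):]

```c
#include <stdatomic.h>

#define MAX_DELAY 8192  /* matches the dep ...,0,13 cap above */

/* Acquire a test-and-set lock with exponential backoff. The seed
 * plays the role of ar.itc's low bits: a cheap per-CPU randomizer
 * that de-synchronizes contending CPUs. */
void backoff_spin_lock(atomic_int *lock, unsigned seed)
{
    unsigned delay = (seed & 0x3f) | 0xf;  /* nonzero initial delay */

    for (;;) {
        int expected = 0;
        /* analogue of cmpxchg1.acq: try to swap 0 -> 1 */
        if (atomic_compare_exchange_strong(lock, &expected, 1))
            return;                         /* got the lock */
        for (volatile unsigned i = 0; i < delay; i++)
            ;                               /* back off for a while */
        delay = (delay << 1) & (MAX_DELAY - 1);
        if (!delay)
            delay = 0xf;                    /* never get stuck at 0 */
    }
}

void backoff_spin_unlock(atomic_int *lock)
{
    atomic_store(lock, 0);
}
```

As the comment in the assembly notes, the maximum delay would ideally scale with the CPU count; this sketch keeps it fixed for clarity.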
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.0-test6-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Thu Aug 10 19:56:18 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/ia64_ksyms.c Mon Jul 31 14:01:22 2000
@@ -18,6 +18,7 @@
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(strncpy);
+EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strtok);
#include <linux/pci.h>
@@ -37,6 +38,7 @@
EXPORT_SYMBOL(kernel_thread);
#ifdef CONFIG_SMP
+#include <asm/hardirq.h>
EXPORT_SYMBOL(synchronize_irq);
#include <asm/smplock.h>
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.0-test6-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Fri Aug 11 19:01:13 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/ivt.S Fri Aug 11 14:55:22 2000
@@ -170,33 +170,27 @@
* The ITLB basically does the same as the VHPT handler except
* that we always insert exactly one instruction TLB entry.
*/
-#if 1
/*
* Attempt to lookup PTE through virtual linear page table.
* The speculative access will fail if there is no TLB entry
* for the L3 page table page we're trying to access.
*/
- mov r31=pr // save predicates
- ;;
- thash r17=r16 // compute virtual address of L3 PTE
+ mov r16=cr.iha // get virtual address of L3 PTE
;;
- ld8.s r18=[r17] // try to read L3 PTE
+ ld8.s r16=[r16] // try to read L3 PTE
+ mov r31=pr // save predicates
;;
- tnat.nz p6,p0=r18 // did read succeed?
+ tnat.nz p6,p0=r16 // did read succeed?
(p6) br.cond.spnt.many 1f
;;
- itc.i r18
+ itc.i r16
;;
mov pr=r31,-1
rfi
-1: rsm psr.dt // use physical addressing for data
-#else
- mov r16=cr.ifa // get address that caused the TLB miss
+1: mov r16=cr.ifa // get address that caused the TLB miss
;;
rsm psr.dt // use physical addressing for data
-#endif
- mov r31=pr // save the predicate registers
mov r19=ar.k7 // get page table base address
shl r21=r16,3 // shift bit 60 into sign bit
shr.u r17=r16,61 // get the region number into r17
@@ -244,33 +238,27 @@
* The DTLB basically does the same as the VHPT handler except
* that we always insert exactly one data TLB entry.
*/
- mov r16=cr.ifa // get address that caused the TLB miss
-#if 1
/*
* Attempt to lookup PTE through virtual linear page table.
* The speculative access will fail if there is no TLB entry
* for the L3 page table page we're trying to access.
*/
- mov r31=pr // save predicates
+ mov r16=cr.iha // get virtual address of L3 PTE
;;
- thash r17=r16 // compute virtual address of L3 PTE
- ;;
- ld8.s r18=[r17] // try to read L3 PTE
+ ld8.s r16=[r16] // try to read L3 PTE
+ mov r31=pr // save predicates
;;
- tnat.nz p6,p0=r18 // did read succeed?
+ tnat.nz p6,p0=r16 // did read succeed?
(p6) br.cond.spnt.many 1f
;;
- itc.d r18
+ itc.d r16
;;
mov pr=r31,-1
rfi
-1: rsm psr.dt // use physical addressing for data
-#else
- rsm psr.dt // use physical addressing for data
- mov r31=pr // save the predicate registers
+1: mov r16=cr.ifa // get address that caused the TLB miss
;;
-#endif
+ rsm psr.dt // use physical addressing for data
mov r19=ar.k7 // get page table base address
shl r21=r16,3 // shift bit 60 into sign bit
shr.u r17=r16,61 // get the region number into r17
diff -urN linux-davidm/arch/ia64/kernel/pal.S linux-2.4.0-test6-lia/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/pal.S Fri Jul 28 09:04:50 2000
@@ -191,3 +191,57 @@
srlz.d // serialize restoration of psr.l
br.ret.sptk.few b0
END(ia64_pal_call_phys_static)
+
+/*
+ * Make a PAL call using the stacked registers in physical mode.
+ *
+ * Inputs:
+ * in0 Index of PAL service
+ * in2 - in3 Remaining PAL arguments
+ */
+GLOBAL_ENTRY(ia64_pal_call_phys_stacked)
+ UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5))
+ alloc loc1 = ar.pfs,5,5,86,0
+ movl loc2 = pal_entry_point
+1: {
+ mov r28 = in0 // copy procedure index
+ mov loc0 = rp // save rp
+ }
+ .body
+ ;;
+ ld8 loc2 = [loc2] // loc2 <- entry point
+ mov out0 = in0 // first argument
+ mov out1 = in1 // copy arg2
+ mov out2 = in2 // copy arg3
+ mov out3 = in3 // copy arg4
+ ;;
+ mov loc3 = psr // save psr
+ ;;
+ mov loc4=ar.rsc // save RSE configuration
+ dep.z loc2=loc2,0,61 // convert pal entry point to physical
+ ;;
+ mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ movl r16=PAL_PSR_BITS_TO_CLEAR
+ movl r17=PAL_PSR_BITS_TO_SET
+ ;;
+ or loc3=loc3,r17 // add in psr the bits to set
+ mov b7 = loc2 // install target to branch reg
+ ;;
+ andcm r16=loc3,r16 // removes bits to clear from psr
+ br.call.sptk.few rp=ia64_switch_mode
+.ret6:
+ br.call.sptk.many rp=b7 // now make the call
+.ret7:
+ mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov r16=loc3 // r16= original psr
+ br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+
+.ret8: mov psr.l = loc3 // restore init PSR
+ mov ar.pfs = loc1
+ mov rp = loc0
+ ;;
+ mov ar.rsc=loc4 // restore RSE configuration
+ srlz.d // serialize restoration of psr.l
+ br.ret.sptk.few b0
+END(ia64_pal_call_phys_stacked)
+
diff -urN linux-davidm/arch/ia64/kernel/palinfo.c linux-2.4.0-test6-lia/arch/ia64/kernel/palinfo.c
--- linux-davidm/arch/ia64/kernel/palinfo.c Fri Aug 11 19:01:14 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/palinfo.c Fri Aug 11 18:12:58 2000
@@ -21,6 +21,10 @@
#include <linux/init.h>
#include <linux/proc_fs.h>
#include <linux/mm.h>
+#include <linux/module.h>
+#if defined(MODVERSIONS)
+#include <linux/modversions.h>
+#endif
#include <asm/pal.h>
#include <asm/sal.h>
@@ -31,12 +35,15 @@
#include <linux/smp.h>
#endif
+MODULE_AUTHOR("Stephane Eranian <eranian@hpl.hp.com>");
+MODULE_DESCRIPTION("/proc interface to IA-64 PAL");
+
/*
- * Hope to get rid of these in a near future
+ * Hope to get rid of this one in the near future
*/
#define IA64_PAL_VERSION_BUG 1
-#define PALINFO_VERSION "0.2"
+#define PALINFO_VERSION "0.3"
#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
@@ -129,64 +136,31 @@
/*
* Allocate a buffer suitable for calling PAL code in Virtual mode
*
- * The documentation (PAL2.6) requires this buffer to have a pinned
- * translation to avoid any DTLB faults. For this reason we allocate
- * a page (large enough to hold any possible reply) and use a DTC
- * to hold the translation during the call. A call the free_palbuffer()
- * is required to release ALL resources (page + translation).
+ * The documentation (PAL2.6) allows DTLB misses on the buffer. So
+ * using the TC is enough, no need to pin the entry.
*
- * The size of the page allocated is based on the PAGE_SIZE defined
- * at compile time for the kernel, i.e. >= 4Kb.
- *
- * Return: a pointer to the newly allocated page (virtual address)
+ * We allocate a kernel-sized page (at least 4KB). This is enough to
+ * hold any possible reply.
*/
-static void *
+static inline void *
get_palcall_buffer(void)
{
void *tmp;
tmp = (void *)__get_free_page(GFP_KERNEL);
if (tmp == 0) {
- printk(KERN_ERR "%s: can't get a buffer page\n", __FUNCTION__);
- } else if ( ((u64)tmp - PAGE_OFFSET) > (1<<_PAGE_SIZE_256M) ) { /* XXX: temporary hack */
- unsigned long flags;
-
- /* PSR.ic must be zero to insert new DTR */
- ia64_clear_ic(flags);
-
- /*
- * we only insert of DTR
- *
- * XXX: we need to figure out a way to "allocate" TR(s) to avoid
- * conflicts. Maybe something in an include file like pgtable.h
- * page.h or processor.h
- *
- * ITR0/DTR0: used for kernel code/data
- * ITR1/DTR1: used by HP simulator
- * ITR2/DTR2: used to map PAL code
- */
- ia64_itr(0x2, 3, (u64)tmp,
- pte_val(mk_pte_phys(__pa(tmp), __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RW))), PAGE_SHIFT);
-
- ia64_srlz_d ();
-
- __restore_flags(flags);
- }
-
+ printk(KERN_ERR __FUNCTION__" : can't get a buffer page\n");
+ }
return tmp;
}
/*
* Free a palcall buffer allocated with the previous call
- *
- * The translation is also purged.
*/
-static void
+static inline void
free_palcall_buffer(void *addr)
{
__free_page(addr);
- ia64_ptr(0x2, (u64)addr, PAGE_SHIFT);
- ia64_srlz_d ();
}
/*
@@ -672,17 +646,23 @@
if (status != 0) return 0;
p += sprintf(p, "PAL_vendor : 0x%02x (min=0x%02x)\n" \
- "PAL_A : %02x.%02x (min=%02x.%02x)\n" \
- "PAL_B : %02x.%02x (min=%02x.%02x)\n",
+ "PAL_A : %x.%x.%x (min=%x.%x.%x)\n" \
+ "PAL_B : %x.%x.%x (min=%x.%x.%x)\n",
cur_ver.pal_version_s.pv_pal_vendor,
min_ver.pal_version_s.pv_pal_vendor,
- cur_ver.pal_version_s.pv_pal_a_model,
+
+ cur_ver.pal_version_s.pv_pal_a_model>>4,
+ cur_ver.pal_version_s.pv_pal_a_model&0xf,
cur_ver.pal_version_s.pv_pal_a_rev,
- min_ver.pal_version_s.pv_pal_a_model,
+ min_ver.pal_version_s.pv_pal_a_model>>4,
+ min_ver.pal_version_s.pv_pal_a_model&0xf,
min_ver.pal_version_s.pv_pal_a_rev,
- cur_ver.pal_version_s.pv_pal_b_model,
+
+ cur_ver.pal_version_s.pv_pal_b_model>>4,
+ cur_ver.pal_version_s.pv_pal_b_model&0xf,
cur_ver.pal_version_s.pv_pal_b_rev,
- min_ver.pal_version_s.pv_pal_b_model,
+ min_ver.pal_version_s.pv_pal_b_model>>4,
+ min_ver.pal_version_s.pv_pal_b_model&0xf,
min_ver.pal_version_s.pv_pal_b_rev);
return p - page;
@@ -704,6 +684,9 @@
}
#ifdef IA64_PAL_PERF_MON_INFO_BUG
+ /*
+ * This bug has been fixed in PAL 2.2.9 and higher
+ */
pm_buffer[5]=0x3;
pm_info.pal_perf_mon_info_s.cycles = 0x12;
pm_info.pal_perf_mon_info_s.retired = 0x08;
@@ -990,6 +973,7 @@
int len=0;
pal_func_cpu_u_t *f = (pal_func_cpu_u_t *)&data;
+ MOD_INC_USE_COUNT;
/*
* in SMP mode, we may need to call another CPU to get correct
* information. PAL, by definition, is processor specific
@@ -1007,6 +991,8 @@
if (len>count) len = count;
if (len<0) len = 0;
+ MOD_DEC_USE_COUNT;
+
return len;
}
@@ -1015,7 +1001,6 @@
{
# define CPUSTR "cpu%d"
- palinfo_entry_t *p;
pal_func_cpu_u_t f;
struct proc_dir_entry **pdir = palinfo_proc_entries;
struct proc_dir_entry *palinfo_dir, *cpu_dir;
@@ -1052,7 +1037,7 @@
return 0;
}
-static int __exit
+static void __exit
palinfo_exit(void)
{
int i = 0;
@@ -1061,8 +1046,6 @@
for (i=0; i< NR_PALINFO_PROC_ENTRIES ; i++) {
remove_proc_entry (palinfo_proc_entries[i]->name, NULL);
}
-
- return 0;
}
module_init(palinfo_init);
diff -urN linux-davidm/arch/ia64/kernel/pci-dma.c linux-2.4.0-test6-lia/arch/ia64/kernel/pci-dma.c
--- linux-davidm/arch/ia64/kernel/pci-dma.c Fri Aug 11 19:01:14 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/pci-dma.c Mon Jul 31 14:01:22 2000
@@ -3,7 +3,8 @@
*
* This implementation is for IA-64 platforms that do not support
* I/O TLBs (aka DMA address translation hardware).
- * Goutham Rao <goutham.rao@intel.com>: Implemented the PCI DMA mapping API.
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
*/
#include <linux/config.h>
@@ -23,46 +24,50 @@
#include <linux/init.h>
#include <linux/bootmem.h>
-#define ALIGN(val, align) ((void *) (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
+#define ALIGN(val, align) ((unsigned long) (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
-typedef struct io_tlb_sizes {
- size_t size;
- int log_size;
- int n_buffers;
- int curr_index;
- spinlock_t lock;
- char *base;
- unsigned long *orig_addr;
- unsigned int *free_list;
-} io_tlb_sizes_t;
-
-/*
- * List entries in order of size (low to high)
- */
-static io_tlb_sizes_t io_tlb[] = {
- {2048, 11, 128, 127, SPIN_LOCK_UNLOCKED, 0, 0, 0},
- {PAGE_SIZE, PAGE_SHIFT, 128, 127, SPIN_LOCK_UNLOCKED, 0, 0, 0},
- /*
- * Indicated end of entries
- */
- {0, 0, 0, 0, SPIN_LOCK_UNLOCKED, 0, 0, 0}
-};
+/*
+ * log of the size of each IO TLB slab. The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
/*
- * Used to do a quick range check in pci_unmap_single and pci_sync_single
+ * Used to do a quick range check in pci_unmap_single and pci_sync_single, to see if the
+ * memory was in fact allocated by this API.
*/
static char *io_tlb_start, *io_tlb_end;
-static unsigned long swiotlb_buf_count;
+/*
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and io_tlb_end.
+ * This is command line adjustable via setup_io_tlb_npages.
+ */
+unsigned long io_tlb_nslabs = 1024;
+
+/*
+ * This is a free list describing the number of free entries available from each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry for the sync
+ * operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+spinlock_t io_tlb_lock = SPIN_LOCK_UNLOCKED;
static int __init
-setup_swiotlb_buf_count (char *str)
+setup_io_tlb_npages (char *str)
{
- swiotlb_buf_count = simple_strtoul(str, NULL, 0);
+ io_tlb_nslabs = simple_strtoul(str, NULL, 0) << (PAGE_SHIFT - IO_TLB_SHIFT);
return 1;
}
-
-__setup("swiotlb=", setup_swiotlb_buf_count);
+__setup("swiotlb=", setup_io_tlb_npages);
/*
* Statically reserve bounce buffer space and initialize bounce buffer
@@ -71,58 +76,27 @@
void
setup_swiotlb (void)
{
- unsigned long entry_size, size = 0;
- struct io_tlb_sizes *itp;
-
- for (itp = io_tlb; itp->size; ++itp) {
- /*
- * Let user override number of buffers needed
- */
- if (swiotlb_buf_count)
- itp->n_buffers = swiotlb_buf_count;
- itp->curr_index = itp->n_buffers - 1;
- /*
- * size needed for buffers + size needed for offset
- * table + size needed for mapping:
- */
- entry_size = itp->size + sizeof(int) + sizeof(long);
- /* the +1 makes room for the worst-case alignment... */
- size += (itp->n_buffers + 1)*entry_size;
- }
+ int i;
/*
- * Now get IO TLB memory from the low pages
+ * Get IO TLB memory from the low pages
*/
- io_tlb_start = io_tlb_end = alloc_bootmem_low_pages(size);
+ io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs * (1 << IO_TLB_SHIFT));
if (!io_tlb_start)
BUG();
+ io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
/*
- * For every io tlb size entry, allocate the required amount of memory
- * and initialize the free list array to mark all entries as available
+ * Allocate and initialize the free list array. This array is used
+ * to find contiguous free memory regions of size 2^IO_TLB_SHIFT between
+ * io_tlb_start and io_tlb_end.
*/
- for (itp = io_tlb; itp->size; ++itp) {
- int j;
-
- /*
- * Reserve memory for the IO TLB buffers and the
- * offsets array for these size chunks
- */
- itp->base = ALIGN(io_tlb_end, itp->size);
- io_tlb_end = itp->base + itp->size * itp->n_buffers;
-
- itp->orig_addr = ALIGN(io_tlb_end, sizeof(long));
- io_tlb_end = ((char *)itp->orig_addr) + itp->n_buffers * sizeof(long);
+ io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+ for (i = 0; i < io_tlb_nslabs; i++)
+ io_tlb_list[i] = io_tlb_nslabs - i;
+ io_tlb_index = 0;
+ io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
- itp->free_list = (unsigned int *)io_tlb_end;
- io_tlb_end = ((char *)itp->free_list) + itp->n_buffers * sizeof(int);
-
- /*
- * Initialize free list array, marking all entries available
- */
- for (j = 0; j < itp->n_buffers; j++)
- itp->free_list[j] = (unsigned int)(j * itp->size);
- }
printk("Placing software IO TLB between 0x%p - 0x%p\n", io_tlb_start, io_tlb_end);
}
@@ -132,51 +106,71 @@
static void *
__pci_map_single (struct pci_dev *hwdev, char *buffer, size_t size, int direction)
{
- struct io_tlb_sizes *itp;
- char *dma_addr = 0;
- int index;
+ unsigned long flags;
+ char *dma_addr;
+ unsigned int i, nslots, stride, index, wrap;
+
+ /*
+ * For mappings greater than a page size, we limit the stride (and hence alignment)
+ * to a page size.
+ */
+ nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+ if (size > (1 << PAGE_SHIFT))
+ stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+ else
+ stride = nslots;
+
+ if (!nslots)
+ BUG();
/*
- * Find a IO TLB size that will fit this request and allocate a buffer
+ * Find a suitable number of contiguous IO TLB entries to fit this request and allocate a buffer
* from that IO TLB pool.
*/
- for (itp = io_tlb; itp->size; ++itp) {
- if (size <= itp->size) {
- unsigned long flags;
-
- spin_lock_irqsave(&itp->lock, flags);
- {
- if (!itp->curr_index) {
- /*
- * Get buffer from next IO TLB... this will
- * waste memory though.
- */
- spin_unlock_irqrestore(&itp->lock, flags);
- continue;
- }
- dma_addr = (itp->base + itp->free_list[itp->curr_index--]);
- }
- spin_unlock_irqrestore(&itp->lock, flags);
+ spin_lock_irqsave(&io_tlb_lock, flags);
+ {
+ wrap = index = ALIGN(io_tlb_index, stride);
+ do {
/*
- * Save the mapping from original address to DMA address
- * because the map_single API doesn't have a mapping
- * like the map_sg API.
+ * If we find a slot that indicates we have 'nslots' number of
+ * contiguous buffers, we allocate the buffers from that slot and mark the
+ * entries as '0' indicating unavailable.
*/
- index = (dma_addr - itp->base) >> itp->log_size;
- itp->orig_addr[index] = (unsigned long) buffer;
+ if (io_tlb_list[index] >= nslots) {
+ for (i = index; i < index + nslots; i++)
+ io_tlb_list[i] = 0;
+ dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
- if (direction == PCI_DMA_TODEVICE || direction == PCI_DMA_BIDIRECTIONAL)
- memcpy(dma_addr, buffer, size);
+ /*
+ * Update the indices to avoid searching in the next round.
+ */
+ io_tlb_index = (index + nslots) < io_tlb_nslabs ? (index + nslots) : 0;
- return dma_addr;
- }
+ goto found;
+ }
+ index += stride;
+ if (index >= io_tlb_nslabs)
+ index = 0;
+ } while (index != wrap);
+
+ /*
+ * XXX What is a suitable recovery mechanism here? We cannot
+ * sleep because we are called from within interrupts!
+ */
+ panic("__pci_map_single: could not allocate software IO TLB (%ld bytes)", size);
+found:
}
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
/*
- * XXX What is a suitable recovery mechanism here? We cannot
- * sleep because we are called from within interrupts!
+ * Save away the mapping from the original address to the DMA address. This is needed
+ * when we sync the memory. Then we sync the buffer if needed.
*/
- panic("__pci_map_single: could not allocate software IO TLB (%ld bytes)", size);
+ io_tlb_orig_addr[index] = buffer;
+ if (direction == PCI_DMA_TODEVICE || direction == PCI_DMA_BIDIRECTIONAL)
+ memcpy(dma_addr, buffer, size);
+
+ return dma_addr;
}
/*
@@ -185,71 +179,60 @@
static void
__pci_unmap_single (struct pci_dev *hwdev, char *dma_addr, size_t size, int direction)
{
- struct io_tlb_sizes *itp;
+ unsigned long flags;
+ int i, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+ int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+ char *buffer = io_tlb_orig_addr[index];
/*
- * Return the buffer to the free list
+ * First, sync the memory before unmapping the entry
*/
- for (itp = io_tlb; itp->size; ++itp) {
- if (size <= itp->size) {
- unsigned long flags;
- char *buffer;
- int index;
-
- /*
- * Get the mapping (IO address to original address)...
- */
- index = (dma_addr - itp->base) >> itp->log_size;
- buffer = (char *) itp->orig_addr[index];
- if ((direction == PCI_DMA_FROMDEVICE)
- || (direction == PCI_DMA_BIDIRECTIONAL))
- /*
- * bounce... copy the data back into the original buffer
- * and delete the bounce buffer.
- */
- memcpy(buffer, dma_addr, size);
+ if ((direction == PCI_DMA_FROMDEVICE) || (direction == PCI_DMA_BIDIRECTIONAL))
+ /*
+ * bounce... copy the data back into the original buffer
+ * and delete the bounce buffer.
+ */
+ memcpy(buffer, dma_addr, size);
- /*
- * Return the entry to the list
- */
- spin_lock_irqsave(&itp->lock, flags);
- {
- itp->free_list[++itp->curr_index] = (dma_addr - itp->base);
- }
- spin_unlock_irqrestore(&itp->lock, flags);
- return;
- }
+ /*
+ * Return the buffer to the free list by setting the corresponding entries to indicate
+ * the number of contiguous entries available.
+ * While returning the entries to the free list, we merge the entries with slots below
+ * and above the pool being returned.
+ */
+ spin_lock_irqsave(&io_tlb_lock, flags);
+ {
+ int count = ((index + nslots) < io_tlb_nslabs ? io_tlb_list[index + nslots] : 0);
+ /*
+ * Step 1: return the slots to the free list, merging the slots with succeeding slots
+ */
+ for (i = index + nslots - 1; i >= index; i--)
+ io_tlb_list[i] = ++count;
+ /*
+ * Step 2: merge the returned slots with the preceding slots, if available (non zero)
+ */
+ for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
+ io_tlb_list[i] += io_tlb_list[index];
}
- BUG();
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
}
static void
__pci_sync_single (struct pci_dev *hwdev, char *dma_addr, size_t size, int direction)
{
- struct io_tlb_sizes *itp;
- char *buffer;
- int index;
+ int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+ char *buffer = io_tlb_orig_addr[index];
/*
* bounce... copy the data back into/from the original buffer
* XXX How do you handle PCI_DMA_BIDIRECTIONAL here ?
*/
- for (itp = io_tlb; itp->size; ++itp) {
- if (size <= itp->size) {
- /*
- * Get the mapping (IO address to original address)...
- */
- index = (dma_addr - itp->base) >> itp->log_size;
- buffer = (char *) itp->orig_addr[index];
- if (direction == PCI_DMA_FROMDEVICE)
- memcpy(buffer, dma_addr, size);
- else if (direction == PCI_DMA_TODEVICE)
- memcpy(dma_addr, buffer, size);
- else
- BUG();
- break;
- }
- }
+ if (direction == PCI_DMA_FROMDEVICE)
+ memcpy(buffer, dma_addr, size);
+ else if (direction == PCI_DMA_TODEVICE)
+ memcpy(dma_addr, buffer, size);
+ else
+ BUG();
}
/*
@@ -276,9 +259,11 @@
*/
return pci_addr;
- /* get a bounce buffer: */
-
+ /*
+ * get a bounce buffer:
+ */
pci_addr = virt_to_phys(__pci_map_single(hwdev, ptr, size, direction));
+
/*
* Ensure that the address returned is DMA'ble:
*/
@@ -399,6 +384,106 @@
for (i = 0; i < nelems; i++, sg++)
if (sg->orig_address != sg->address)
__pci_sync_single(hwdev, sg->address, sg->length, direction);
+}
+
+#else
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.
+ * The 32-bit bus address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory
+ * until either pci_unmap_single or pci_dma_sync_single is performed.
+ */
+extern inline dma_addr_t
+pci_map_single (struct pci_dev *hwdev, void *ptr, size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ return virt_to_bus(ptr);
+}
+
+/*
+ * Unmap a single streaming mode DMA translation. The dma_addr and size
+ * must match what was provided for in a previous pci_map_single call. All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+extern inline void
+pci_unmap_single (struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
+}
+/*
+ * Map a set of buffers described by scatterlist in streaming
+ * mode for DMA. This is the scather-gather version of the
+ * above pci_map_single interface. Here the scatter gather list
+ * elements are each tagged with the appropriate dma address
+ * and length. They are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for pci_map_single are
+ * the same here.
+ */
+extern inline int
+pci_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ return nents;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+extern inline void
+pci_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
+}
+/*
+ * Make physical memory consistent for a single
+ * streaming mode DMA translation after a transfer.
+ *
+ * If you perform a pci_map_single() but wish to interrogate the
+ * buffer using the cpu, yet do not wish to teardown the PCI dma
+ * mapping, you must call this function before doing so. At the
+ * next point you give the PCI dma address back to the card, the
+ * device again owns the buffer.
+ */
+extern inline void
+pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA
+ * translations after a transfer.
+ *
+ * The same as pci_dma_sync_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+extern inline void
+pci_dma_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
}
#endif /* CONFIG_SWIOTLB */
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.0-test6-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Fri Mar 10 15:24:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/perfmon.c Fri Aug 11 18:19:21 2000
@@ -11,6 +11,7 @@
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/sched.h>
+#include <linux/interrupt.h>
#include <linux/smp_lock.h>
#include <asm/errno.h>
@@ -55,24 +56,23 @@
#define WRITE_PMCS 0xa1
#define READ_PMDS 0xa2
#define STOP_PMCS 0xa3
-#define IA64_COUNTER_MASK 0xffffffffffffff6f
-#define PERF_OVFL_VAL 0xffffffff
+#define IA64_COUNTER_MASK 0xffffffffffffff6fL
+#define PERF_OVFL_VAL 0xffffffffL
+
+volatile int used_by_system;
struct perfmon_counter {
unsigned long data;
unsigned long counter_num;
};
-unsigned long pmds[MAX_PERF_COUNTER];
-struct task_struct *perf_owner=NULL;
+unsigned long pmds[NR_CPUS][MAX_PERF_COUNTER];
asmlinkage unsigned long
sys_perfmonctl (int cmd1, int cmd2, void *ptr)
{
struct perfmon_counter tmp, *cptr = ptr;
- unsigned long pmd, cnum, dcr, flags;
- struct task_struct *p;
- struct pt_regs *regs;
+ unsigned long cnum, dcr, flags;
struct perf_counter;
int i;
@@ -80,22 +80,24 @@
case WRITE_PMCS: /* Writes to PMC's and clears PMDs */
case WRITE_PMCS_AND_START: /* Also starts counting */
- if (!access_ok(VERIFY_READ, cptr, sizeof(struct perf_counter)*cmd2))
- return -EFAULT;
+ if (cmd2 <= 0 || cmd2 > MAX_PERF_COUNTER - used_by_system)
+ return -EINVAL;
- if (cmd2 > MAX_PERF_COUNTER)
+ if (!access_ok(VERIFY_READ, cptr, sizeof(struct perf_counter)*cmd2))
return -EFAULT;
- if (perf_owner && perf_owner != current)
- return -EBUSY;
- perf_owner = current;
+ current->thread.flags |= IA64_THREAD_PM_VALID;
for (i = 0; i < cmd2; i++, cptr++) {
copy_from_user(&tmp, cptr, sizeof(tmp));
/* XXX need to check validity of counter_num and perhaps data!! */
+ if (tmp.counter_num < 4
+ || tmp.counter_num >= 4 + MAX_PERF_COUNTER - used_by_system)
+ return -EFAULT;
+
ia64_set_pmc(tmp.counter_num, tmp.data);
ia64_set_pmd(tmp.counter_num, 0);
- pmds[tmp.counter_num - 4] = 0;
+ pmds[smp_processor_id()][tmp.counter_num - 4] = 0;
}
if (cmd1 == WRITE_PMCS_AND_START) {
@@ -104,26 +106,13 @@
dcr |= IA64_DCR_PP;
ia64_set_dcr(dcr);
local_irq_restore(flags);
-
- /*
- * This is a no can do. It obviously wouldn't
- * work on SMP where another process may not
- * be blocked at all. We need to put in a perfmon
- * IPI to take care of MP systems. See blurb above.
- */
- lock_kernel();
- for_each_task(p) {
- regs = (struct pt_regs *) (((char *)p) + IA64_STK_OFFSET) -1 ;
- ia64_psr(regs)->pp = 1;
- }
- unlock_kernel();
ia64_set_pmc(0, 0);
}
break;
case READ_PMDS:
- if (cmd2 > MAX_PERF_COUNTER)
- return -EFAULT;
+ if (cmd2 <= 0 || cmd2 > MAX_PERF_COUNTER - used_by_system)
+ return -EINVAL;
if (!access_ok(VERIFY_WRITE, cptr, sizeof(struct perf_counter)*cmd2))
return -EFAULT;
@@ -153,9 +142,13 @@
* when we re-enabled interrupts. When I muck with dcr,
* is the irq_save/restore needed?
*/
- for (i = 0, cnum = 4;i < MAX_PERF_COUNTER; i++, cnum++, cptr++){
- pmd = pmds[i] + (ia64_get_pmd(cnum) & PERF_OVFL_VAL);
- put_user(pmd, &cptr->data);
+ for (i = 0, cnum = 4;i < cmd2; i++, cnum++, cptr++) {
+ tmp.data = (pmds[smp_processor_id()][i]
+ + (ia64_get_pmd(cnum) & PERF_OVFL_VAL));
+ tmp.counter_num = cnum;
+ if (copy_to_user(cptr, &tmp, sizeof(tmp)))
+ return -EFAULT;
+ //put_user(pmd, &cptr->data);
}
local_irq_save(flags);
__asm__ __volatile__("ssm psr.pp");
@@ -167,30 +160,22 @@
case STOP_PMCS:
ia64_set_pmc(0, 1);
- for (i = 0; i < MAX_PERF_COUNTER; ++i)
- ia64_set_pmc(i, 0);
+ ia64_srlz_d();
+ for (i = 0; i < MAX_PERF_COUNTER - used_by_system; ++i)
+ ia64_set_pmc(4+i, 0);
- local_irq_save(flags);
- dcr = ia64_get_dcr();
- dcr &= ~IA64_DCR_PP;
- ia64_set_dcr(dcr);
- local_irq_restore(flags);
- /*
- * This is a no can do. It obviously wouldn't
- * work on SMP where another process may not
- * be blocked at all. We need to put in a perfmon
- * IPI to take care of MP systems. See blurb above.
- */
- lock_kernel();
- for_each_task(p) {
- regs = (struct pt_regs *) (((char *)p) + IA64_STK_OFFSET) - 1;
- ia64_psr(regs)->pp = 0;
+ if (!used_by_system) {
+ local_irq_save(flags);
+ dcr = ia64_get_dcr();
+ dcr &= ~IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
}
- unlock_kernel();
- perf_owner = NULL;
+ current->thread.flags &= ~(IA64_THREAD_PM_VALID);
break;
default:
+ return -EINVAL;
break;
}
return 0;
@@ -202,13 +187,13 @@
unsigned long mask, i, cnum, val;
mask = ia64_get_pmc(0) >> 4;
- for (i = 0, cnum = 4; i < MAX_PERF_COUNTER; cnum++, i++, mask >>= 1) {
+ for (i = 0, cnum = 4; i < MAX_PERF_COUNTER - used_by_system; cnum++, i++, mask >>= 1) {
+ val = 0;
if (mask & 0x1)
- val = PERF_OVFL_VAL;
- else
+ val += PERF_OVFL_VAL + 1;
/* since we got an interrupt, might as well clear every pmd. */
- val = ia64_get_pmd(cnum) & PERF_OVFL_VAL;
- pmds[i] += val;
+ val += ia64_get_pmd(cnum) & PERF_OVFL_VAL;
+ pmds[smp_processor_id()][i] += val;
ia64_set_pmd(cnum, 0);
}
}
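The overflow handler above extends the 32-bit hardware counters in software: on an overflow interrupt the per-CPU total is credited with one full wrap (`PERF_OVFL_VAL + 1`) plus whatever the hardware has accumulated since, and the PMD is then cleared. A minimal sketch of that arithmetic, with illustrative names:

```c
/*
 * The hardware PMD holds only the low 32 bits (OVFL_VAL); the software
 * total carries the rest.  Caller zeroes the PMD after accounting.
 */
#include <assert.h>
#include <stdint.h>

#define OVFL_VAL 0xffffffffULL

static uint64_t account(uint64_t sw_total, uint64_t hw_pmd, int overflowed)
{
	uint64_t val = 0;

	if (overflowed)
		val += OVFL_VAL + 1;	/* one full 2^32 wrap */
	val += hw_pmd & OVFL_VAL;	/* counts accumulated since the wrap */
	return sw_total + val;
}
```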
@@ -221,20 +206,61 @@
ia64_srlz_d();
}
+static struct irqaction perfmon_irqaction = {
+ handler: perfmon_interrupt,
+ flags: SA_INTERRUPT,
+ name: "perfmon"
+};
+
void
perfmon_init (void)
{
- if (request_irq(PERFMON_IRQ, perfmon_interrupt, 0, "perfmon", NULL)) {
- printk("perfmon_init: could not allocate performance monitor vector %u\n",
- PERFMON_IRQ);
- return;
- }
+ irq_desc[PERFMON_IRQ].status |= IRQ_PER_CPU;
+ irq_desc[PERFMON_IRQ].handler = &irq_type_ia64_sapic;
+ setup_irq(PERFMON_IRQ, &perfmon_irqaction);
+
ia64_set_pmv(PERFMON_IRQ);
ia64_srlz_d();
printk("Initialized perfmon vector to %u\n",PERFMON_IRQ);
}
+void
+perfmon_init_percpu (void)
+{
+ ia64_set_pmv(PERFMON_IRQ);
+ ia64_srlz_d();
+}
+
+void
+ia64_save_pm_regs (struct thread_struct *t)
+{
+ int i;
+
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
+ for (i=0; i< IA64_NUM_PM_REGS - used_by_system ; i++) {
+ t->pmd[i] = ia64_get_pmd(4+i);
+ t->pmod[i] = pmds[smp_processor_id()][i];
+ t->pmc[i] = ia64_get_pmc(4+i);
+ }
+}
+
+void
+ia64_load_pm_regs (struct thread_struct *t)
+{
+ int i;
+
+ for (i=0; i< IA64_NUM_PM_REGS - used_by_system ; i++) {
+ ia64_set_pmd(4+i, t->pmd[i]);
+ pmds[smp_processor_id()][i] = t->pmod[i];
+ ia64_set_pmc(4+i, t->pmc[i]);
+ }
+ ia64_set_pmc(0, 0);
+ ia64_srlz_d();
+}
+
#else /* !CONFIG_PERFMON */
+
asmlinkage unsigned long
sys_perfmonctl (int cmd1, int cmd2, void *ptr)
{
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.0-test6-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/process.c Fri Aug 11 17:20:17 2000
@@ -27,6 +27,8 @@
#include <asm/unwind.h>
#include <asm/user.h>
+#ifdef CONFIG_IA64_NEW_UNWIND
+
static void
do_show_stack (struct unw_frame_info *info, void *arg)
{
@@ -44,6 +46,8 @@
} while (unw_unwind(info) >= 0);
}
+#endif
+
void
show_stack (struct task_struct *task)
{
@@ -118,15 +122,14 @@
current->nice = 20;
current->counter = -100;
-#ifdef CONFIG_SMP
- if (!current->need_resched)
- min_xtp();
-#endif
while (1) {
- while (!current->need_resched) {
+#ifdef CONFIG_SMP
+ if (!current->need_resched)
+ min_xtp();
+#endif
+ while (!current->need_resched)
continue;
- }
#ifdef CONFIG_SMP
normal_xtp();
#endif
@@ -157,11 +160,12 @@
void
ia64_save_extra (struct task_struct *task)
{
- extern void ia64_save_debug_regs (unsigned long *save_area);
- extern void ia32_save_state (struct thread_struct *thread);
-
if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0)
ia64_save_debug_regs(&task->thread.dbr[0]);
+#ifdef CONFIG_PERFMON
+ if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
+ ia64_save_pm_regs(&task->thread);
+#endif
if (IS_IA32_PROCESS(ia64_task_regs(task)))
ia32_save_state(&task->thread);
}
@@ -169,11 +173,12 @@
void
ia64_load_extra (struct task_struct *task)
{
- extern void ia64_load_debug_regs (unsigned long *save_area);
- extern void ia32_load_state (struct thread_struct *thread);
-
if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0)
ia64_load_debug_regs(&task->thread.dbr[0]);
+#ifdef CONFIG_PERFMON
+ if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
+ ia64_load_pm_regs(&task->thread);
+#endif
if (IS_IA32_PROCESS(ia64_task_regs(task)))
ia32_load_state(&task->thread);
}
@@ -530,17 +535,6 @@
if (ia64_get_fpu_owner() == current) {
ia64_set_fpu_owner(0);
}
-}
-
-/*
- * Free remaining state associated with DEAD_TASK. This is called
- * after the parent of DEAD_TASK has collected the exist status of the
- * task via wait().
- */
-void
-release_thread (struct task_struct *dead_task)
-{
- /* nothing to do */
}
unsigned long
diff -urN linux-davidm/arch/ia64/kernel/sal.c linux-2.4.0-test6-lia/arch/ia64/kernel/sal.c
--- linux-davidm/arch/ia64/kernel/sal.c Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/sal.c Mon Jul 31 14:01:22 2000
@@ -156,6 +156,14 @@
struct ia64_sal_desc_platform_feature *pf = (void *) p;
printk("SAL: Platform features ");
+#ifdef CONFIG_IA64_HAVE_IRQREDIR
+ /*
+ * Early versions of SAL say we don't have
+ * IRQ redirection, even though we do...
+ */
+ pf->feature_mask |= (1 << 1);
+#endif
+
if (pf->feature_mask & (1 << 0))
printk("BusLock ");
diff -urN linux-davidm/arch/ia64/kernel/semaphore.c linux-2.4.0-test6-lia/arch/ia64/kernel/semaphore.c
--- linux-davidm/arch/ia64/kernel/semaphore.c Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/semaphore.c Fri Aug 11 17:20:34 2000
@@ -222,9 +222,6 @@
void
__down_read_failed (struct rw_semaphore *sem, long count)
{
- struct task_struct *tsk = current;
- DECLARE_WAITQUEUE(wait, tsk);
-
while (1) {
if (count == -1) {
down_read_failed_biased(sem);
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-test6-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Fri Aug 11 19:01:14 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/setup.c Mon Jul 31 14:01:22 2000
@@ -137,19 +137,65 @@
*/
bootmap_start = PAGE_ALIGN(__pa(&_end));
if (ia64_boot_param.initrd_size)
- bootmap_start = PAGE_ALIGN(bootmap_start + ia64_boot_param.initrd_size);
+ bootmap_start = PAGE_ALIGN(bootmap_start
+ + ia64_boot_param.initrd_size);
bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn);
efi_memmap_walk(free_available_memory, 0);
reserve_bootmem(bootmap_start, bootmap_size);
+
#ifdef CONFIG_BLK_DEV_INITRD
initrd_start = ia64_boot_param.initrd_start;
+
if (initrd_start) {
+ u64 start, size;
+# define is_same_page(a,b) (((a)&PAGE_MASK) == ((b)&PAGE_MASK))
+
+#if 1
+ /* XXX for now some backwards compatibility... */
+ if (initrd_start >= PAGE_OFFSET)
+ printk("Warning: boot loader passed virtual address "
+ "for initrd, please upgrade the loader\n");
+ } else
+#endif
+ /*
+ * The loader ONLY passes physical addresses
+ */
+ initrd_start = (unsigned long)__va(initrd_start);
initrd_end = initrd_start+ia64_boot_param.initrd_size;
+ start = initrd_start;
+ size = ia64_boot_param.initrd_size;
+
printk("Initial ramdisk at: 0x%p (%lu bytes)\n",
(void *) initrd_start, ia64_boot_param.initrd_size);
- reserve_bootmem(virt_to_phys(initrd_start), ia64_boot_param.initrd_size);
+
+ /*
+ * The kernel end and the beginning of initrd can be
+ * on the same page. This would cause the page to be
+ * reserved twice. While not harmful, it does lead to
+ * a warning message which can cause confusion. Thus,
+ * we make sure that in this case we only reserve new
+ * pages, i.e., initrd only pages. We need to:
+ *
+ * - align up start
+ * - adjust size of reserved section accordingly
+ *
+ * It should be noted that this operation is only
+ * valid for the reserve_bootmem() call and does not
+ * affect the integrity of the initrd itself.
+ *
+ * reserve_bootmem() considers partial pages as reserved.
+ */
+ if (is_same_page(initrd_start, (unsigned long)&_end)) {
+ start = PAGE_ALIGN(start);
+ size -= start-initrd_start;
+
+ printk("Initial ramdisk & kernel on the same page: "
+ "reserving start=%lx size=%ld bytes\n",
+ start, size);
+ }
+ reserve_bootmem(__pa(start), size);
}
#endif
#if 0
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-test6-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/smp.c Fri Aug 11 20:40:15 2000
@@ -135,6 +135,7 @@
static inline int
pointer_lock(void *lock, void *data, int retry)
{
+ volatile long *ptr = lock;
again:
if (cmpxchg_acq((void **) lock, 0, data) == 0)
return 0;
@@ -142,7 +143,7 @@
if (!retry)
return -EBUSY;
- while (*(void **) lock)
+ while (*ptr)
;
goto again;
@@ -320,6 +321,58 @@
#endif /* !CONFIG_ITANIUM_PTCG */
/*
+ * Run a function on another CPU
+ * <func> The function to run. This must be fast and non-blocking.
+ * <info> An arbitrary pointer to pass to the function.
+ * <retry> If true, keep retrying until ready.
+ * <wait> If true, wait until function has completed on other CPUs.
+ * [RETURNS] 0 on success, else a negative status code.
+ *
+ * Does not return until the remote CPU is nearly ready to execute <func>
+ * or is or has executed.
+ */
+
+int
+smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int retry, int wait)
+{
+ struct smp_call_struct data;
+ long timeout;
+ int cpus = 1;
+
+ if (cpuid == smp_processor_id()) {
+ printk(__FUNCTION__" trying to call self\n");
+ return -EBUSY;
+ }
+
+ data.func = func;
+ data.info = info;
+ data.wait = wait;
+ atomic_set(&data.unstarted_count, cpus);
+ atomic_set(&data.unfinished_count, cpus);
+
+ if (pointer_lock(&smp_call_function_data, &data, retry))
+ return -EBUSY;
+
+ /* Send a message to all other CPUs and wait for them to respond */
+ send_IPI_single(cpuid, IPI_CALL_FUNC);
+
+ /* Wait for response */
+ timeout = jiffies + HZ;
+ while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
+ barrier();
+ if (atomic_read(&data.unstarted_count) > 0) {
+ smp_call_function_data = NULL;
+ return -ETIMEDOUT;
+ }
+ if (wait)
+ while (atomic_read(&data.unfinished_count) > 0)
+ barrier();
+ /* unlock pointer */
+ smp_call_function_data = NULL;
+ return 0;
+}
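The bounded wait in `smp_call_function_single()` above (spin on an atomic counter until it drains or a deadline passes, then bail out with `-ETIMEDOUT`) can be modeled in isolation. A toy user-space sketch, with a spin budget standing in for the jiffies deadline and `-1` playing the role of the error code:

```c
/* The kernel uses barrier() inside the loop; a plain busy-wait here. */
static int wait_until_zero(volatile int *counter, int max_spins)
{
	while (*counter > 0 && max_spins-- > 0)
		;				/* busy-wait */
	return (*counter > 0) ? -1 : 0;		/* -1 ~ -ETIMEDOUT */
}
```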
+
+/*
* Run a function on all other CPUs.
* <func> The function to run. This must be fast and non-blocking.
* <info> An arbitrary pointer to pass to the function.
@@ -396,13 +449,19 @@
smp_do_timer(struct pt_regs *regs)
{
int cpu = smp_processor_id();
+ int user = user_mode(regs);
struct cpuinfo_ia64 *data = &cpu_data[cpu];
- if (!--data->prof_counter) {
- irq_enter(cpu, TIMER_IRQ);
- update_process_times(user_mode(regs));
+ if (--data->prof_counter <= 0) {
data->prof_counter = data->prof_multiplier;
- irq_exit(cpu, TIMER_IRQ);
+ /*
+ * update_process_times() expects us to have done irq_enter().
+ * Besides, if we don't, timer interrupts ignore the global
+ * interrupt lock, which is the WrongThing (tm) to do.
+ */
+ irq_enter(cpu, 0);
+ update_process_times(user);
+ irq_exit(cpu, 0);
}
}
@@ -473,6 +532,11 @@
extern void ia64_rid_init(void);
extern void ia64_init_itm(void);
extern void ia64_cpu_local_tick(void);
+#ifdef CONFIG_PERFMON
+ extern void perfmon_init_percpu(void);
+#endif
+
+ efi_map_pal_code();
cpu_init();
@@ -480,6 +544,10 @@
/* setup the CPU local timer tick */
ia64_init_itm();
+
+#ifdef CONFIG_PERFMON
+ perfmon_init_percpu();
+#endif
/* Disable all local interrupts */
ia64_set_lrr0(0, 1);
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.0-test6-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/time.c Mon Jul 31 14:01:22 2000
@@ -150,11 +150,13 @@
static void
timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
- static unsigned long last_time;
- static unsigned char count;
int cpu = smp_processor_id();
unsigned long new_itm;
+#if 0
+ static unsigned long last_time;
+ static unsigned char count;
int printed = 0;
+#endif
/*
* Here we are in the timer irq handler. We have irqs locally
@@ -192,7 +194,7 @@
if (time_after(new_itm, ia64_get_itc()))
break;
-#if !(defined(CONFIG_IA64_SOFTSDV_HACKS) && defined(CONFIG_SMP))
+#if 0
/*
* SoftSDV in SMP mode is _slow_, so we do "lose" ticks,
* but it's really OK...
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-test6-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/kernel/unwind.c Fri Aug 11 15:52:00 2000
@@ -62,7 +62,7 @@
#define UNW_LOG_HASH_SIZE (UNW_LOG_CACHE_SIZE + 1)
#define UNW_HASH_SIZE (1 << UNW_LOG_HASH_SIZE)
-#define UNW_DEBUG 1
+#define UNW_DEBUG 0
#define UNW_STATS 0 /* WARNING: this disabled interrupts for long time-spans!! */
#if UNW_DEBUG
diff -urN linux-davidm/arch/ia64/lib/memcpy.S linux-2.4.0-test6-lia/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/lib/memcpy.S Sat Aug 5 13:19:26 2000
@@ -1,3 +1,20 @@
+/*
+ *
+ * Optimized version of the standard memcpy() function
+ *
+ * Inputs:
+ * in0: destination address
+ * in1: source address
+ * in2: number of bytes to copy
+ * Output:
+ * no return value
+ *
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/config.h>
+
#include <asm/asmmacro.h>
GLOBAL_ENTRY(bcopy)
@@ -10,77 +27,254 @@
// FALL THROUGH
GLOBAL_ENTRY(memcpy)
-# define MEM_LAT 4
-
-# define N MEM_LAT-1
-# define Nrot ((MEM_LAT + 7) & ~7)
+# define MEM_LAT 2 /* latency to L1 cache */
# define dst r2
# define src r3
-# define len r9
-# define saved_pfs r10
-# define saved_lc r11
-# define saved_pr r16
-# define t0 r17
-# define cnt r18
-
+# define retval r8
+# define saved_pfs r9
+# define saved_lc r10
+# define saved_pr r11
+# define cnt r16
+# define src2 r17
+# define t0 r18
+# define t1 r19
+# define t2 r20
+# define t3 r21
+# define t4 r22
+# define src_end r23
+
+# define N (MEM_LAT + 4)
+# define Nrot ((N + 7) & ~7)
+
+ /*
+ * First, check if everything (src, dst, len) is a multiple of eight. If
+ * so, we handle everything with no taken branches (other than the loop
+ * itself) and a small icache footprint. Otherwise, we jump off to
+ * the more general copy routine handling arbitrary
+ * sizes/alignment etc.
+ */
UNW(.prologue)
UNW(.save ar.pfs, saved_pfs)
alloc saved_pfs=ar.pfs,3,Nrot,0,Nrot
+#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
lfetch [in1]
+#else
+ nop.m 0
+#endif
+ or t0=in0,in1
+ ;;
- .rotr val[MEM_LAT]
- .rotp p[MEM_LAT]
-
+ or t0=t0,in2
UNW(.save ar.lc, saved_lc)
mov saved_lc=ar.lc
-
- or t0=in0,in1
UNW(.save pr, saved_pr)
mov saved_pr=pr
- UNW(.body)
-
- mov ar.ec=MEM_LAT
+ cmp.eq p6,p0=in2,r0 // zero length?
+ mov retval=in0 // return dst
+(p6) br.ret.spnt.many rp // zero length, return immediately
+ ;;
- mov r8=in0 // return dst
- shr cnt=in2,3 // number of 8-byte words to copy
+ mov dst=in0 // copy because of rotation
+ shr.u cnt=in2,3 // number of 8-byte words to copy
mov pr.rot=1<<16
;;
- cmp.eq p6,p0=in2,r0 // zero length?
- or t0=t0,in2
-(p6) br.ret.spnt.many rp // yes, return immediately
- mov dst=in0 // copy because of rotation
- mov src=in1 // copy because of rotation
adds cnt=-1,cnt // br.ctop is repeat/until
+ cmp.gtu p7,p0=16,in2 // copying less than 16 bytes?
+ UNW(.body)
+ mov ar.ec=N
;;
+
and t0=0x7,t0
mov ar.lc=cnt
;;
cmp.ne p6,p0=t0,r0
-(p6) br.cond.spnt.few slow_memcpy
+ mov src=in1 // copy because of rotation
+(p7) br.cond.spnt.few memcpy_short
+(p6) br.cond.spnt.few memcpy_long
+ ;;
+ .rotr val[N]
+ .rotp p[N]
1:
(p[0]) ld8 val[0]=[src],8
-(p[N]) st8 [dst]=val[N],8
- br.ctop.sptk.few 1b
+(p[N-1])st8 [dst]=val[N-1],8
+ br.ctop.dptk.few 1b
;;
-.exit:
mov ar.lc=saved_lc
- mov pr=saved_pr,0xffffffffffff0000
+ mov pr=saved_pr,-1
mov ar.pfs=saved_pfs
br.ret.sptk.many rp
-slow_memcpy:
- adds cnt=-1,in2
+ /*
+ * Small (<16 bytes) unaligned copying is done via a simple byte-at-a-time
+ * copy loop. This performs relatively poorly on Itanium, but it doesn't
+ * get used very often (gcc inlines small copies) and due to atomicity
+ * issues, we want to avoid read-modify-write of entire words.
+ */
+ .align 32
+memcpy_short:
+ adds cnt=-1,in2 // br.ctop is repeat/until
+ mov ar.ec=MEM_LAT
;;
mov ar.lc=cnt
;;
+ /*
+ * It is faster to put a stop bit in the loop here because it makes
+ * the pipeline shorter (and latency is what matters on short copies).
+ */
1:
(p[0]) ld1 val[0]=[src],1
-(p[N]) st1 [dst]=val[N],1
- br.ctop.sptk.few 1b
- br.sptk.few .exit
+ ;;
+(p[MEM_LAT-1])st1 [dst]=val[MEM_LAT-1],1
+ br.ctop.dptk.few 1b
+ ;;
+ mov ar.lc=saved_lc
+ mov pr=saved_pr,-1
+ mov ar.pfs=saved_pfs
+ br.ret.sptk.many rp
+
+ /*
+ * Large (>= 16 bytes) copying is done in a fancy way. Latency isn't
+ * an overriding concern here, but throughput is. We first do
+ * sub-word copying until the destination is aligned, then we check
+ * if the source is also aligned. If so, we do a simple load/store-loop
+ * until there are less than 8 bytes left over and then we do the tail,
+ * by storing the last few bytes using sub-word copying. If the source
+ * is not aligned, we branch off to the non-congruent loop.
+ *
+ * stage: op:
+ * 0 ld
+ * :
+ * MEM_LAT+3 shrp
+ * MEM_LAT+4 st
+ *
+ * On Itanium, the pipeline itself runs without stalls. However, br.ctop
+ * seems to introduce an unavoidable bubble in the pipeline so the overall
+ * latency is 2 cycles/iteration. This gives us a _copy_ throughput
+ * of 4 byte/cycle. Still not bad.
+ */
+# undef N
+# undef Nrot
+# define N (MEM_LAT + 5) /* number of stages */
+# define Nrot ((N+1 + 2 + 7) & ~7) /* number of rotating regs */
+
+#define LOG_LOOP_SIZE 6
+
+memcpy_long:
+ alloc t3=ar.pfs,3,Nrot,0,Nrot // resize register frame
+ and t0=-8,src // t0 = src & ~7
+ and t2=7,src // t2 = src & 7
+ ;;
+ ld8 t0=[t0] // t0 = 1st source word
+ adds src2=7,src // src2 = (src + 7)
+ sub t4=r0,dst // t4 = -dst
+ ;;
+ and src2=-8,src2 // src2 = (src + 7) & ~7
+ shl t2=t2,3 // t2 = 8*(src & 7)
+ shl t4=t4,3 // t4 = 8*(dst & 7)
+ ;;
+ ld8 t1=[src2] // t1 = 1st source word if src is 8-byte aligned, 2nd otherwise
+ sub t3=64,t2 // t3 = 64-8*(src & 7)
+ shr.u t0=t0,t2
+ ;;
+ add src_end=src,in2
+ shl t1=t1,t3
+ mov pr=t4,0x38 // (p5,p4,p3)=(dst & 7)
+ ;;
+ or t0=t0,t1
+ mov cnt=r0
+ adds src_end=-1,src_end
+ ;;
+(p3) st1 [dst]=t0,1
+(p3) shr.u t0=t0,8
+(p3) adds cnt=1,cnt
+ ;;
+(p4) st2 [dst]=t0,2
+(p4) shr.u t0=t0,16
+(p4) adds cnt=2,cnt
+ ;;
+(p5) st4 [dst]=t0,4
+(p5) adds cnt=4,cnt
+ and src_end=-8,src_end // src_end = last word of source buffer
+ ;;
+
+ // At this point, dst is aligned to 8 bytes and there are at least 16-7=9 bytes left to copy:
+
+1:{ add src=cnt,src // make src point to remainder of source buffer
+ sub cnt=in2,cnt // cnt = number of bytes left to copy
+ mov t4=ip
+ } ;;
+ and src2=-8,src // align source pointer
+ adds t4=memcpy_loops-1b,t4
+ mov ar.ec=N
+
+ and t0=7,src // t0 = src & 7
+ shr.u t2=cnt,3 // t2 = number of 8-byte words left to copy
+ shl cnt=cnt,3 // move bits 0-2 to 3-5
+ ;;
+
+ .rotr val[N+1], w[2]
+ .rotp p[N]
+
+ cmp.ne p6,p0=t0,r0 // is src aligned, too?
+ shl t0=t0,LOG_LOOP_SIZE // t0 = 8*(src & 7)
+ adds t2=-1,t2 // br.ctop is repeat/until
+ ;;
+ add t4=t0,t4
+ mov pr=cnt,0x38 // set (p5,p4,p3) to # of last-word bytes to copy
+ mov ar.lc=t2
+ ;;
+(p6) ld8 val[1]=[src2],8 // prime the pump...
+ mov b6=t4
+ br.sptk.few b6
+ ;;
+
+memcpy_tail:
+ // At this point, (p5,p4,p3) are set to the number of bytes left to copy (which is
+ // less than 8) and t0 contains the last few bytes of the src buffer:
+(p5) st4 [dst]=t0,4
+(p5) shr.u t0=t0,32
+ mov ar.lc=saved_lc
+ ;;
+(p4) st2 [dst]=t0,2
+(p4) shr.u t0=t0,16
+ mov ar.pfs=saved_pfs
+ ;;
+(p3) st1 [dst]=t0
+ mov pr=saved_pr,-1
+ br.ret.sptk.many rp
+
+///////////////////////////////////////////////////////
+ .align 64
+
+#define COPY(shift,index) \
+ 1: \
+ { .mfi \
+ (p[0]) ld8 val[0]=[src2],8; \
+ nop.f 0; \
+ (p[MEM_LAT+3]) shrp w[0]=val[MEM_LAT+3],val[MEM_LAT+4-index],shift; \
+ }; \
+ { .mbb \
+ (p[MEM_LAT+4]) st8 [dst]=w[1],8; \
+ nop.b 0; \
+ br.ctop.dptk.few 1b; \
+ }; \
+ ;; \
+ ld8 val[N-1]=[src_end]; /* load last word (may be same as val[N]) */ \
+ ;; \
+ shrp t0=val[N-1],val[N-index],shift; \
+ br memcpy_tail
+memcpy_loops:
+ COPY(0, 1) /* no point special casing this---it doesn't go any faster without shrp */
+ COPY(8, 0)
+ COPY(16, 0)
+ COPY(24, 0)
+ COPY(32, 0)
+ COPY(40, 0)
+ COPY(48, 0)
+ COPY(56, 0)
END(memcpy)
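The non-congruent `COPY` loops above use `shrp` to assemble each aligned destination word from two adjacent aligned source words. A C analogue of that funnel shift, purely as an illustration of the idea (little-endian, shift = 8 * (src & 7)), not the kernel code:

```c
#include <stdint.h>

static uint64_t funnel(uint64_t lo, uint64_t hi, unsigned int shift)
{
	if (shift == 0)			/* avoid the undefined 64-bit shift */
		return lo;
	return (lo >> shift) | (hi << (64 - shift));
}
```

Each loop iteration reads one new aligned word and funnels it with the previous one, so a source misaligned by any byte offset still streams through 8 bytes per store.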
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-test6-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Fri Aug 11 19:01:15 2000
+++ linux-2.4.0-test6-lia/arch/ia64/mm/init.c Mon Jul 31 14:01:22 2000
@@ -185,8 +185,42 @@
void
free_initrd_mem(unsigned long start, unsigned long end)
{
+ /*
+ * EFI uses 4KB pages while the kernel can use 4KB or bigger.
+ * Thus EFI and the kernel may have different page sizes. It is
+ * therefore possible to have the initrd share the same page as
+ * the end of the kernel (given current setup).
+ *
+ * To avoid freeing/using the wrong page (kernel sized) we:
+ * - align up the beginning of initrd
+ * - keep the end untouched
+ *
+ * |             |
+ * |=============| a000
+ * |             |
+ * |             |
+ * |             | 9000
+ * |/////////////|
+ * |/////////////|
+ * |=============| 8000
+ * |///INITRD////|
+ * |/////////////|
+ * |/////////////| 7000
+ * |             |
+ * |KKKKKKKKKKKKK|
+ * |=============| 6000
+ * |KKKKKKKKKKKKK|
+ * |KKKKKKKKKKKKK|
+ * K=kernel using 8KB pages
+ *
+ * In this example, we must free page 8000 ONLY. So we must align up
+ * initrd_start and keep initrd_end as is.
+ */
+ start = PAGE_ALIGN(start);
+
if (start < end)
printk ("Freeing initrd memory: %ldkB freed\n", (end - start) >> 10);
+
for (; start < end; start += PAGE_SIZE) {
clear_bit(PG_reserved, &virt_to_page(start)->flags);
set_page_count(virt_to_page(start), 1);
diff -urN linux-davidm/arch/ia64/mm/tlb.c linux-2.4.0-test6-lia/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test6-lia/arch/ia64/mm/tlb.c Mon Jul 31 14:01:22 2000
@@ -1,8 +1,11 @@
/*
* TLB support routines.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 08/02/00 A. Mallick <asit.k.mallick@intel.com>
+ * Modified RID allocation for SMP
*/
#include <linux/config.h>
#include <linux/init.h>
@@ -27,9 +30,11 @@
1 << _PAGE_SIZE_8K | \
1 << _PAGE_SIZE_4K )
-static void wrap_context (struct mm_struct *mm);
-
-unsigned long ia64_next_context = (1UL << IA64_HW_CONTEXT_BITS) + 1;
+struct ia64_ctx ia64_ctx = {
+ lock: SPIN_LOCK_UNLOCKED,
+ next: 1,
+ limit: (1UL << IA64_HW_CONTEXT_BITS)
+};
/*
* Put everything in a struct so we avoid the global offset table whenever
@@ -106,49 +111,43 @@
#endif /* CONFIG_SMP && !CONFIG_ITANIUM_PTCG */
-void
-get_new_mmu_context (struct mm_struct *mm)
-{
- if ((ia64_next_context & IA64_HW_CONTEXT_MASK) == 0) {
- wrap_context(mm);
- }
- mm->context = ia64_next_context++;
-}
-
/*
- * This is where we handle the case where (ia64_next_context &
- * IA64_HW_CONTEXT_MASK) == 0. Whenever this happens, we need to
- * flush the entire TLB and skip over region id number 0, which is
- * used by the kernel.
+ * Acquire the ia64_ctx.lock before calling this function!
*/
-static void
-wrap_context (struct mm_struct *mm)
+void
+wrap_mmu_context (struct mm_struct *mm)
{
- struct task_struct *task;
+ struct task_struct *tsk;
+ unsigned long tsk_context;
+
+ if (ia64_ctx.next >= (1UL << IA64_HW_CONTEXT_BITS))
+ ia64_ctx.next = 300; /* skip daemons */
+ ia64_ctx.limit = (1UL << IA64_HW_CONTEXT_BITS);
/*
- * We wrapped back to the first region id so we nuke the TLB
- * so we can switch to the next generation of region ids.
+ * Scan all tasks' mm->context and set the proper safe range
*/
- __flush_tlb_all();
- if (ia64_next_context++ == 0) {
- /*
- * Oops, we've used up all 64 bits of the context
- * space---walk through task table to ensure we don't
- * get tricked into using an old context. If this
- * happens, the machine has been running for a long,
- * long time!
- */
- ia64_next_context = (1UL << IA64_HW_CONTEXT_BITS) + 1;
-
- read_lock(&tasklist_lock);
- for_each_task (task) {
- if (task->mm == mm)
- continue;
- flush_tlb_mm(mm);
+
+ read_lock(&tasklist_lock);
+ repeat:
+ for_each_task(tsk) {
+ if (!tsk->mm)
+ continue;
+ tsk_context = tsk->mm->context;
+ if (tsk_context == ia64_ctx.next) {
+ if (++ia64_ctx.next >= ia64_ctx.limit) {
+ /* empty range: reset the range limit and start over */
+ if (ia64_ctx.next >= (1UL << IA64_HW_CONTEXT_BITS))
+ ia64_ctx.next = 300;
+ ia64_ctx.limit = (1UL << IA64_HW_CONTEXT_BITS);
+ goto repeat;
+ }
}
- read_unlock(&tasklist_lock);
+ if ((tsk_context > ia64_ctx.next) && (tsk_context < ia64_ctx.limit))
+ ia64_ctx.limit = tsk_context;
}
+ read_unlock(&tasklist_lock);
+ flush_tlb_all();
}
void
diff -urN linux-davidm/arch/ia64/sn/sn1/irq.c linux-2.4.0-test6-lia/arch/ia64/sn/sn1/irq.c
--- linux-davidm/arch/ia64/sn/sn1/irq.c Fri Aug 11 19:01:15 2000
+++ linux-2.4.0-test6-lia/arch/ia64/sn/sn1/irq.c Mon Jul 31 14:01:22 2000
@@ -1,6 +1,6 @@
#include <linux/kernel.h>
-#include <linux/irq.h>
#include <linux/sched.h>
+#include <linux/irq.h>
#include <asm/ptrace.h>
diff -urN linux-davidm/drivers/char/Makefile linux-2.4.0-test6-lia/drivers/char/Makefile
--- linux-davidm/drivers/char/Makefile Fri Aug 11 19:01:15 2000
+++ linux-2.4.0-test6-lia/drivers/char/Makefile Thu Aug 10 20:29:27 2000
@@ -109,7 +109,17 @@
endif
obj-$(CONFIG_MAGIC_SYSRQ) += sysrq.o
+
obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
+ifeq ($(CONFIG_ATARI_DSP56K),y)
+S = y
+else
+ ifeq ($(CONFIG_ATARI_DSP56K),m)
+ SM = y
+ endif
+endif
+
+obj-$(CONFIG_SIM_SERIAL) += simserial.o
obj-$(CONFIG_ROCKETPORT) += rocket.o
obj-$(CONFIG_MOXA_SMARTIO) += mxser.o
obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
diff -urN linux-davidm/drivers/char/agp/agpgart_be.c linux-2.4.0-test6-lia/drivers/char/agp/agpgart_be.c
--- linux-davidm/drivers/char/agp/agpgart_be.c Thu Aug 10 19:56:21 2000
+++ linux-2.4.0-test6-lia/drivers/char/agp/agpgart_be.c Mon Jul 31 14:01:22 2000
@@ -67,14 +67,16 @@
{
#if defined(__i386__)
asm volatile ("wbinvd":::"memory");
-#elif defined(__alpha__)
+#elif defined(__alpha__) || defined(__ia64__)
/* ??? I wonder if we'll really need to flush caches, or if the
core logic can manage to keep the system coherent. The ARM
speaks only of using `cflush' to get things in memory in
preparation for power failure.
If we do need to call `cflush', we'll need a target page,
- as we can only flush one page at a time. */
+ as we can only flush one page at a time.
+
+ Ditto for IA-64. --davidm 00/08/07 */
mb();
#else
#error "Please define flush_cache."
diff -urN linux-davidm/drivers/char/drm/agpsupport.c linux-2.4.0-test6-lia/drivers/char/drm/agpsupport.c
--- linux-davidm/drivers/char/drm/agpsupport.c Thu Aug 10 19:56:21 2000
+++ linux-2.4.0-test6-lia/drivers/char/drm/agpsupport.c Mon Jul 31 14:01:22 2000
@@ -322,7 +322,7 @@
case ALI_M1541: head->chipset = "ALi M1541"; break;
default: head->chipset = "Unknown"; break;
}
- DRM_INFO("AGP %d.%d on %s @ 0x%08lx %dMB\n",
+ DRM_INFO("AGP %d.%d on %s @ 0x%08lx %ZuMB\n",
head->agp_info.version.major,
head->agp_info.version.minor,
head->chipset,
diff -urN linux-davidm/drivers/char/drm/lists.c linux-2.4.0-test6-lia/drivers/char/drm/lists.c
--- linux-davidm/drivers/char/drm/lists.c Wed Aug 2 18:54:13 2000
+++ linux-2.4.0-test6-lia/drivers/char/drm/lists.c Mon Jul 31 14:01:22 2000
@@ -153,6 +153,7 @@
#endif
buf->list = DRM_LIST_FREE;
do {
+ /* XXX this is wrong due to the ABA problem! --davidm 00/08/07 */
old = bl->next;
buf->next = old;
prev = cmpxchg(&bl->next, old, buf);
@@ -185,6 +186,7 @@
/* Get buffer */
do {
+ /* XXX this is wrong due to the ABA problem! --davidm 00/08/07 */
old = bl->next;
if (!old) return NULL;
new = bl->next->next;
diff -urN linux-davidm/drivers/char/drm/vm.c linux-2.4.0-test6-lia/drivers/char/drm/vm.c
--- linux-davidm/drivers/char/drm/vm.c Thu Aug 10 19:56:21 2000
+++ linux-2.4.0-test6-lia/drivers/char/drm/vm.c Fri Aug 11 15:38:46 2000
@@ -250,7 +250,7 @@
vma->vm_start, vma->vm_end, VM_OFFSET(vma));
/* Length must match exact page count */
- if ((length >> PAGE_SHIFT) != dma->page_count) {
+ if (!dma || (length >> PAGE_SHIFT) != dma->page_count) {
unlock_kernel();
return -EINVAL;
}
@@ -323,6 +323,9 @@
pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
}
+#elif defined(__ia64__)
+ if (map->type != _DRM_AGP)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
#endif
vma->vm_flags |= VM_IO; /* not in core dump */
}
diff -urN linux-davidm/drivers/char/efirtc.c linux-2.4.0-test6-lia/drivers/char/efirtc.c
--- linux-davidm/drivers/char/efirtc.c Wed Aug 2 18:54:14 2000
+++ linux-2.4.0-test6-lia/drivers/char/efirtc.c Fri Aug 11 17:21:50 2000
@@ -395,11 +395,10 @@
return 0;
}
-static int __exit
+static void __exit
efi_rtc_exit(void)
{
/* not yet used */
- return 0;
}
module_init(efi_rtc_init);
diff -urN linux-davidm/drivers/char/simserial.c linux-2.4.0-test6-lia/drivers/char/simserial.c
--- linux-davidm/drivers/char/simserial.c Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/drivers/char/simserial.c Fri Aug 11 14:56:04 2000
@@ -742,7 +742,7 @@
info->tqueue.data = info;
info->state = sstate;
if (sstate->info) {
- kfree_s(info, sizeof(struct async_struct));
+ kfree(info);
*ret_info = sstate->info;
return 0;
}
diff -urN linux-davidm/drivers/pci/pci.ids linux-2.4.0-test6-lia/drivers/pci/pci.ids
--- linux-davidm/drivers/pci/pci.ids Thu Aug 10 19:56:23 2000
+++ linux-2.4.0-test6-lia/drivers/pci/pci.ids Thu Aug 10 20:29:30 2000
@@ -4635,7 +4635,12 @@
84c4 450KX/GX [Orion] - 82454KX/GX PCI bridge
84c5 450KX/GX [Orion] - 82453KX/GX Memory controller
84ca 450NX - 82451NX Memory & I/O Controller
- 84cb 450NX - 82454NX PCI Expander Bridge
+ 84cb 450NX - 82454NX/84460GX PCI Expander Bridge
+ 84e0 460GX - 84460GX System Address Controller (SAC)
+ 84e1 460GX - 84460GX System Data Controller (SDC)
+ 84e2 460GX - 84460GX AGP Bridge (GXB)
+ 84e3 460GX - 84460GX Memory Address Controller (MAC)
+ 84e4 460GX - 84460GX Memory Data Controller (MDC)
ffff 450NX/GX [Orion] - 82453KX/GX Memory controller [BUG]
8800 Trigem Computer Inc.
2008 Video assistent component
diff -urN linux-davidm/fs/dcache.c linux-2.4.0-test6-lia/fs/dcache.c
--- linux-davidm/fs/dcache.c Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/fs/dcache.c Fri Aug 11 17:23:00 2000
@@ -1189,8 +1189,9 @@
if (!dentry_cache)
panic("Cannot create dentry cache");
- if (PAGE_SHIFT < 13)
- mempages >>= (13 - PAGE_SHIFT);
+#if PAGE_SHIFT < 13
+ mempages >>= (13 - PAGE_SHIFT);
+#endif
mempages *= sizeof(struct list_head);
for (order = 0; ((1UL << order) << PAGE_SHIFT) < mempages; order++)
;
diff -urN linux-davidm/include/asm-ia64/asmmacro.h linux-2.4.0-test6-lia/include/asm-ia64/asmmacro.h
--- linux-davidm/include/asm-ia64/asmmacro.h Thu Jun 22 07:09:45 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/asmmacro.h Fri Aug 11 16:43:58 2000
@@ -23,7 +23,7 @@
#endif
#define ENTRY(name) \
- .align 16; \
+ .align 32; \
.proc name; \
name:
diff -urN linux-davidm/include/asm-ia64/machvec.h linux-2.4.0-test6-lia/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/machvec.h Fri Aug 11 16:43:55 2000
@@ -40,7 +40,7 @@
# elif defined (CONFIG_IA64_DIG)
# include <asm/machvec_dig.h>
# elif defined (CONFIG_IA64_SGI_SN1_SIM)
-# include <asm/machvec_sgi_sn1.h>
+# include <asm/machvec_sn1.h>
# elif defined (CONFIG_IA64_GENERIC)
# ifdef MACHVEC_PLATFORM_HEADER
diff -urN linux-davidm/include/asm-ia64/mmu_context.h linux-2.4.0-test6-lia/include/asm-ia64/mmu_context.h
--- linux-davidm/include/asm-ia64/mmu_context.h Sun Feb 13 10:31:06 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/mmu_context.h Fri Aug 11 16:43:56 2000
@@ -2,12 +2,13 @@
#define _ASM_IA64_MMU_CONTEXT_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
#include <linux/sched.h>
+#include <linux/spinlock.h>
#include <asm/processor.h>
@@ -26,21 +27,6 @@
* architecture manual guarantees this number to be in the range
* 18-24.
*
- * A context number has the following format:
- *
- * +--------------------+---------------------+
- * | generation number | region id |
- * +--------------------+---------------------+
- *
- * A context number of 0 is considered "invalid".
- *
- * The generation number is incremented whenever we end up having used
- * up all available region ids. At that point with flush the entire
- * TLB and reuse the first region id. The new generation number
- * ensures that when we context switch back to an old process, we do
- * not inadvertently end up using its possibly reused region id.
- * Instead, we simply allocate a new region id for that process.
- *
* Copyright (C) 1998 David Mosberger-Tang <davidm@hpl.hp.com>
*/
@@ -56,9 +42,15 @@
#define IA64_HW_CONTEXT_MASK ((1UL << IA64_HW_CONTEXT_BITS) - 1)
-extern unsigned long ia64_next_context;
+struct ia64_ctx {
+ spinlock_t lock;
+ unsigned int next; /* next context number to use */
+ unsigned int limit; /* next >= limit => must call wrap_mmu_context() */
+};
+
+extern struct ia64_ctx ia64_ctx;
-extern void get_new_mmu_context (struct mm_struct *mm);
+extern void wrap_mmu_context (struct mm_struct *mm);
static inline void
enter_lazy_tlb (struct mm_struct *mm, struct task_struct *tsk, unsigned cpu)
@@ -76,12 +68,24 @@
}
extern inline void
+get_new_mmu_context (struct mm_struct *mm)
+{
+ spin_lock(&ia64_ctx.lock);
+ {
+ if (ia64_ctx.next >= ia64_ctx.limit)
+ wrap_mmu_context(mm);
+ mm->context = ia64_ctx.next++;
+ }
+ spin_unlock(&ia64_ctx.lock);
+
+}
+
+extern inline void
get_mmu_context (struct mm_struct *mm)
{
/* check if our ASN is of an older generation and thus invalid: */
- if (((mm->context ^ ia64_next_context) & ~IA64_HW_CONTEXT_MASK) != 0) {
+ if (mm->context == 0)
get_new_mmu_context(mm);
- }
}
extern inline void
@@ -103,7 +107,7 @@
unsigned long rid_incr = 0;
unsigned long rr0, rr1, rr2, rr3, rr4;
- rid = (mm->context & IA64_HW_CONTEXT_MASK);
+ rid = mm->context;
#ifndef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
rid <<= 3; /* make space for encoding the region number */
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.0-test6-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Wed Aug 2 18:54:53 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/offsets.h Fri Aug 11 15:53:25 2000
@@ -11,10 +11,10 @@
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 2768 /* 0xad0 */
+#define IA64_TASK_SIZE 2864 /* 0xb30 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
-#define IA64_SIGINFO_SIZE 136 /* 0x88 */
+#define IA64_SIGINFO_SIZE 128 /* 0x80 */
#define UNW_FRAME_INFO_SIZE 448 /* 0x1c0 */
#define IA64_TASK_PTRACE_OFFSET 48 /* 0x30 */
@@ -23,7 +23,7 @@
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
#define IA64_TASK_THREAD_OFFSET 896 /* 0x380 */
#define IA64_TASK_THREAD_KSP_OFFSET 896 /* 0x380 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 2648 /* 0xa58 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 2744 /* 0xab8 */
#define IA64_TASK_PID_OFFSET 188 /* 0xbc */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/page.h linux-2.4.0-test6-lia/include/asm-ia64/page.h
--- linux-davidm/include/asm-ia64/page.h Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/page.h Fri Aug 11 16:43:55 2000
@@ -100,13 +100,14 @@
#define MAP_NR_SN1(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
#ifdef CONFIG_IA64_GENERIC
-# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
+# include <asm/machvec.h>
+# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
#elif defined (CONFIG_IA64_SN_SN1_SIM)
-# define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr))
+# define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr))
#else
-# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
+# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
#endif
-#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
+#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
# endif /* __KERNEL__ */
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-test6-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/pgtable.h Fri Aug 11 16:43:56 2000
@@ -287,34 +287,29 @@
* contains the memory attribute bits, dirty bits, and various other
* bits as well.
*/
-#define pgprot_noncached(prot) __pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_UC)
+#define pgprot_noncached(prot) __pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_UC)
/*
- * Return the region index for virtual address ADDRESS.
+ * Macro to mark a page protection value as "write-combining".
+ * Note that "protection" is really a misnomer here as the protection
+ * value contains the memory attribute bits, dirty bits, and various
+ * other bits as well. Accesses through a write-combining translation
+ * bypass the caches, but do allow for consecutive writes to
+ * be combined into single (but larger) write transactions.
*/
-extern __inline__ unsigned long
-rgn_index (unsigned long address)
-{
- ia64_va a;
-
- a.l = address;
- return a.f.reg;
-}
+#define pgprot_writecombine(prot) __pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_WC)
/*
- * Return the region offset for virtual address ADDRESS.
+ * Return the region index for virtual address ADDRESS.
*/
extern __inline__ unsigned long
-rgn_offset (unsigned long address)
+rgn_index (unsigned long address)
{
ia64_va a;
a.l = address;
- return a.f.off;
+ return a.f.reg;
}
-
-#define RGN_SIZE (1UL << 61)
-#define RGN_KERNEL 7
/*
* Return the region offset for virtual address ADDRESS.
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-test6-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/processor.h Fri Aug 11 16:43:56 2000
@@ -19,6 +19,7 @@
#include <asm/types.h>
#define IA64_NUM_DBG_REGS 8
+#define IA64_NUM_PM_REGS 4
/*
* TASK_SIZE really is a misnomer. It really is the maximum user
@@ -152,12 +153,13 @@
#define IA64_THREAD_FPH_VALID (__IA64_UL(1) << 0) /* floating-point high state valid? */
#define IA64_THREAD_DBG_VALID (__IA64_UL(1) << 1) /* debug registers valid? */
-#define IA64_THREAD_UAC_NOPRINT (__IA64_UL(1) << 2) /* don't log unaligned accesses */
-#define IA64_THREAD_UAC_SIGBUS (__IA64_UL(1) << 3) /* generate SIGBUS on unaligned acc. */
-#define IA64_THREAD_KRBS_SYNCED (__IA64_UL(1) << 4) /* krbs synced with process vm? */
+#define IA64_THREAD_PM_VALID (__IA64_UL(1) << 2) /* performance registers valid? */
+#define IA64_THREAD_UAC_NOPRINT (__IA64_UL(1) << 3) /* don't log unaligned accesses */
+#define IA64_THREAD_UAC_SIGBUS (__IA64_UL(1) << 4) /* generate SIGBUS on unaligned acc. */
+#define IA64_THREAD_KRBS_SYNCED (__IA64_UL(1) << 5) /* krbs synced with process vm? */
#define IA64_KERNEL_DEATH (__IA64_UL(1) << 63) /* see die_if_kernel()... */
-#define IA64_THREAD_UAC_SHIFT 2
+#define IA64_THREAD_UAC_SHIFT 3
#define IA64_THREAD_UAC_MASK (IA64_THREAD_UAC_NOPRINT | IA64_THREAD_UAC_SIGBUS)
#ifndef __ASSEMBLY__
@@ -285,6 +287,14 @@
struct ia64_fpreg fph[96]; /* saved/loaded on demand */
__u64 dbr[IA64_NUM_DBG_REGS];
__u64 ibr[IA64_NUM_DBG_REGS];
+#ifdef CONFIG_PERFMON
+ __u64 pmc[IA64_NUM_PM_REGS];
+ __u64 pmd[IA64_NUM_PM_REGS];
+ __u64 pmod[IA64_NUM_PM_REGS];
+# define INIT_THREAD_PM {0, }, {0, }, {0, },
+#else
+# define INIT_THREAD_PM
+#endif
__u64 map_base; /* base address for mmap() */
#ifdef CONFIG_IA32_SUPPORT
__u64 eflag; /* IA32 EFLAGS reg */
@@ -316,6 +326,7 @@
{{{{0}}}, }, /* fph */ \
{0, }, /* dbr */ \
{0, }, /* ibr */ \
+ INIT_THREAD_PM \
0x2000000000000000 /* map_base */ \
INIT_THREAD_IA32, \
0 /* siginfo */ \
@@ -396,6 +407,18 @@
extern void __ia64_init_fpu (void);
extern void __ia64_save_fpu (struct ia64_fpreg *fph);
extern void __ia64_load_fpu (struct ia64_fpreg *fph);
+extern void ia64_save_debug_regs (unsigned long *save_area);
+extern void ia64_load_debug_regs (unsigned long *save_area);
+
+#ifdef CONFIG_IA32_SUPPORT
+extern void ia32_save_state (struct thread_struct *thread);
+extern void ia32_load_state (struct thread_struct *thread);
+#endif
+
+#ifdef CONFIG_PERFMON
+extern void ia64_save_pm_regs (struct thread_struct *thread);
+extern void ia64_load_pm_regs (struct thread_struct *thread);
+#endif
#define ia64_fph_enable() __asm__ __volatile__ (";; rsm psr.dfh;; srlz.d;;" ::: "memory");
#define ia64_fph_disable() __asm__ __volatile__ (";; ssm psr.dfh;; srlz.d;;" ::: "memory");
diff -urN linux-davidm/include/asm-ia64/siginfo.h linux-2.4.0-test6-lia/include/asm-ia64/siginfo.h
--- linux-davidm/include/asm-ia64/siginfo.h Thu Jun 22 07:09:45 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/siginfo.h Mon Jul 31 14:01:22 2000
@@ -14,12 +14,13 @@
} sigval_t;
#define SI_MAX_SIZE 128
-#define SI_PAD_SIZE ((SI_MAX_SIZE/sizeof(int)) - 3)
+#define SI_PAD_SIZE ((SI_MAX_SIZE/sizeof(int)) - 4)
typedef struct siginfo {
int si_signo;
int si_errno;
int si_code;
+ int __pad0;
union {
int _pad[SI_PAD_SIZE];
@@ -212,7 +213,7 @@
#define SIGEV_THREAD 2 /* deliver via thread creation */
#define SIGEV_MAX_SIZE 64
-#define SIGEV_PAD_SIZE ((SIGEV_MAX_SIZE/sizeof(int)) - 3)
+#define SIGEV_PAD_SIZE ((SIGEV_MAX_SIZE/sizeof(int)) - 4)
typedef struct sigevent {
sigval_t sigev_value;
diff -urN linux-davidm/include/asm-ia64/spinlock.h linux-2.4.0-test6-lia/include/asm-ia64/spinlock.h
--- linux-davidm/include/asm-ia64/spinlock.h Thu Jun 22 07:09:45 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/spinlock.h Fri Aug 11 16:43:56 2000
@@ -15,8 +15,11 @@
#include <asm/bitops.h>
#include <asm/atomic.h>
+#undef NEW_LOCK
+
+#ifdef NEW_LOCK
typedef struct {
- volatile unsigned int lock;
+ volatile unsigned char lock;
} spinlock_t;
#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 }
@@ -26,44 +29,86 @@
* Streamlined test_and_set_bit(0, (x)). We use test-and-test-and-set
* rather than a simple xchg to avoid writing the cache-line when
* there is contention.
+ *
+ * XXX Fix me: instead of preserving ar.pfs, we should just mark it
+ * XXX as "clobbered". Unfortunately, the Mar 2000 release of the compiler
+ * XXX doesn't let us do that. The August release fixes that.
*/
-#if 1 /* Bad code generation? */
-#define spin_lock(x) __asm__ __volatile__ ( \
- "mov ar.ccv = r0\n" \
- "mov r29 = 1\n" \
- ";;\n" \
- "1:\n" \
- "ld4 r2 = %0\n" \
- ";;\n" \
- "cmp4.eq p0,p7 = r0,r2\n" \
- "(p7) br.cond.spnt.few 1b \n" \
- "cmpxchg4.acq r2 = %0, r29, ar.ccv\n" \
- ";;\n" \
- "cmp4.eq p0,p7 = r0, r2\n" \
- "(p7) br.cond.spnt.few 1b\n" \
- ";;\n" \
- :: "m" __atomic_fool_gcc((x)) : "r2", "r29", "memory")
-
-#else
-#define spin_lock(x) \
-{ \
- spinlock_t *__x = (x); \
- \
- do { \
- while (__x->lock); \
- } while (cmpxchg_acq(&__x->lock, 0, 1)); \
+#define spin_lock(x) \
+{ \
+ register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
+ long saved_pfs; \
+ \
+ __asm__ __volatile__ ( \
+ "mov r30=1\n" \
+ "mov ar.ccv=r0\n" \
+ ";;\n" \
+ IA64_SEMFIX"cmpxchg1.acq r30=[%1],r30,ar.ccv\n" \
+ ";;\n" \
+ "cmp.ne p15,p0=r30,r0\n" \
+ "mov %0=ar.pfs\n" \
+ "(p15) br.call.spnt.few b7=ia64_spinlock_contention\n" \
+ ";;\n" \
+ "1: (p15) mov ar.pfs=%0;;\n" /* force a new bundle */ \
+ : "=&r"(saved_pfs) : "r"(addr) \
+ : "p15", "r28", "r29", "r30", "memory"); \
}
-#endif
+
+#define spin_trylock(x) \
+({ \
+ register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
+ register long result; \
+ \
+ __asm__ __volatile__ ( \
+ "mov r30=1\n" \
+ "mov ar.ccv=r0\n" \
+ ";;\n" \
+ IA64_SEMFIX"cmpxchg1.acq %0=[%1],r30,ar.ccv\n" \
+ : "=r"(result) : "r"(addr) : "r30", "memory"); \
+ (result == 0); \
+})
#define spin_is_locked(x) ((x)->lock != 0)
+#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0;})
+#define spin_unlock_wait(x) ({ while ((x)->lock); })
-#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0; barrier();})
+#else /* !NEW_LOCK */
-/* Streamlined !test_and_set_bit(0, (x)) */
-#define spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0)
+typedef struct {
+ volatile unsigned int lock;
+} spinlock_t;
+
+#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 }
+#define spin_lock_init(x) ((x)->lock = 0)
+/*
+ * Streamlined test_and_set_bit(0, (x)). We use test-and-test-and-set
+ * rather than a simple xchg to avoid writing the cache-line when
+ * there is contention.
+ */
+#define spin_lock(x) __asm__ __volatile__ ( \
+ "mov ar.ccv = r0\n" \
+ "mov r29 = 1\n" \
+ ";;\n" \
+ "1:\n" \
+ "ld4 r2 = %0\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0,r2\n" \
+ "(p7) br.cond.spnt.few 1b \n" \
+ IA64_SEMFIX"cmpxchg4.acq r2 = %0, r29, ar.ccv\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0, r2\n" \
+ "(p7) br.cond.spnt.few 1b\n" \
+ ";;\n" \
+ :: "m" __atomic_fool_gcc((x)) : "r2", "r29", "memory")
+
+#define spin_is_locked(x) ((x)->lock != 0)
+#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0; barrier();})
+#define spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0)
#define spin_unlock_wait(x) ({ do { barrier(); } while ((x)->lock); })
+#endif /* !NEW_LOCK */
+
typedef struct {
volatile int read_counter:31;
volatile int write_lock:1;
@@ -73,12 +118,12 @@
#define read_lock(rw) \
do { \
int tmp = 0; \
- __asm__ __volatile__ ("1:\tfetchadd4.acq %0 = %1, 1\n" \
+ __asm__ __volatile__ ("1:\t"IA64_SEMFIX"fetchadd4.acq %0 = %1, 1\n" \
";;\n" \
"tbit.nz p6,p0 = %0, 31\n" \
"(p6) br.cond.sptk.few 2f\n" \
".section .text.lock,\"ax\"\n" \
- "2:\tfetchadd4.rel %0 = %1, -1\n" \
+ "2:\t"IA64_SEMFIX"fetchadd4.rel %0 = %1, -1\n" \
";;\n" \
"3:\tld4.acq %0 = %1\n" \
";;\n" \
@@ -94,7 +139,7 @@
#define read_unlock(rw) \
do { \
int tmp = 0; \
- __asm__ __volatile__ ("fetchadd4.rel %0 = %1, -1\n" \
+ __asm__ __volatile__ (IA64_SEMFIX"fetchadd4.rel %0 = %1, -1\n" \
: "=r" (tmp) \
: "m" (__atomic_fool_gcc(rw)) \
: "memory"); \
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.4.0-test6-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/include/asm-ia64/system.h Fri Aug 11 16:43:55 2000
@@ -27,6 +27,15 @@
#define GATE_ADDR (0xa000000000000000 + PAGE_SIZE)
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
+ /* Workaround for Errata 97. */
+# define IA64_SEMFIX_INSN mf;
+# define IA64_SEMFIX "mf;"
+#else
+# define IA64_SEMFIX_INSN
+# define IA64_SEMFIX ""
+#endif
+
#ifndef __ASSEMBLY__
#include <linux/types.h>
@@ -231,13 +240,13 @@
({ \
switch (sz) { \
case 4: \
- __asm__ __volatile__ ("fetchadd4.rel %0=%1,%3" \
+ __asm__ __volatile__ (IA64_SEMFIX"fetchadd4.rel %0=%1,%3" \
: "=r"(tmp), "=m"(__atomic_fool_gcc(v)) \
: "m" (__atomic_fool_gcc(v)), "i"(n)); \
break; \
\
case 8: \
- __asm__ __volatile__ ("fetchadd8.rel %0=%1,%3" \
+ __asm__ __volatile__ (IA64_SEMFIX"fetchadd8.rel %0=%1,%3" \
: "=r"(tmp), "=m"(__atomic_fool_gcc(v)) \
: "m" (__atomic_fool_gcc(v)), "i"(n)); \
break; \
@@ -280,22 +289,22 @@
switch (size) {
case 1:
- __asm__ __volatile ("xchg1 %0=%1,%2" : "=r" (result)
+ __asm__ __volatile (IA64_SEMFIX"xchg1 %0=%1,%2" : "=r" (result)
: "m" (*(char *) ptr), "r" (x) : "memory");
return result;
case 2:
- __asm__ __volatile ("xchg2 %0=%1,%2" : "=r" (result)
+ __asm__ __volatile (IA64_SEMFIX"xchg2 %0=%1,%2" : "=r" (result)
: "m" (*(short *) ptr), "r" (x) : "memory");
return result;
case 4:
- __asm__ __volatile ("xchg4 %0=%1,%2" : "=r" (result)
+ __asm__ __volatile (IA64_SEMFIX"xchg4 %0=%1,%2" : "=r" (result)
: "m" (*(int *) ptr), "r" (x) : "memory");
return result;
case 8:
- __asm__ __volatile ("xchg8 %0=%1,%2" : "=r" (result)
+ __asm__ __volatile (IA64_SEMFIX"xchg8 %0=%1,%2" : "=r" (result)
: "m" (*(long *) ptr), "r" (x) : "memory");
return result;
}
@@ -305,7 +314,6 @@
#define xchg(ptr,x) \
((__typeof__(*(ptr))) __xchg ((unsigned long) (x), (ptr), sizeof(*(ptr))))
-#define tas(ptr) (xchg ((ptr), 1))
/*
* Atomic compare and exchange. Compare OLD with MEM, if identical,
@@ -324,50 +332,50 @@
struct __xchg_dummy { unsigned long a[100]; };
#define __xg(x) (*(struct __xchg_dummy *)(x))
-#define ia64_cmpxchg(sem,ptr,old,new,size) \
-({ \
- __typeof__(ptr) _p_ = (ptr); \
- __typeof__(new) _n_ = (new); \
- __u64 _o_, _r_; \
- \
- switch (size) { \
- case 1: _o_ = (__u8 ) (old); break; \
- case 2: _o_ = (__u16) (old); break; \
- case 4: _o_ = (__u32) (old); break; \
- case 8: _o_ = (__u64) (old); break; \
- default: \
- } \
- __asm__ __volatile__ ("mov ar.ccv=%0;;" :: "rO"(_o_)); \
- switch (size) { \
- case 1: \
- __asm__ __volatile__ ("cmpxchg1."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
- break; \
- \
- case 2: \
- __asm__ __volatile__ ("cmpxchg2."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
- break; \
- \
- case 4: \
- __asm__ __volatile__ ("cmpxchg4."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
- break; \
- \
- case 8: \
- __asm__ __volatile__ ("cmpxchg8."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
- break; \
- \
- default: \
- _r_ = __cmpxchg_called_with_bad_pointer(); \
- break; \
- } \
- (__typeof__(old)) _r_; \
+#define ia64_cmpxchg(sem,ptr,old,new,size) \
+({ \
+ __typeof__(ptr) _p_ = (ptr); \
+ __typeof__(new) _n_ = (new); \
+ __u64 _o_, _r_; \
+ \
+ switch (size) { \
+ case 1: _o_ = (__u8 ) (long) (old); break; \
+ case 2: _o_ = (__u16) (long) (old); break; \
+ case 4: _o_ = (__u32) (long) (old); break; \
+ case 8: _o_ = (__u64) (long) (old); break; \
+ default: \
+ } \
+ __asm__ __volatile__ ("mov ar.ccv=%0;;" :: "rO"(_o_)); \
+ switch (size) { \
+ case 1: \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg1."sem" %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ case 2: \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg2."sem" %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ case 4: \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg4."sem" %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ case 8: \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg8."sem" %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ default: \
+ _r_ = __cmpxchg_called_with_bad_pointer(); \
+ break; \
+ } \
+ (__typeof__(old)) _r_; \
})
#define cmpxchg_acq(ptr,o,n) ia64_cmpxchg("acq", (ptr), (o), (n), sizeof(*(ptr)))
@@ -418,15 +426,15 @@
extern void ia64_save_extra (struct task_struct *task);
extern void ia64_load_extra (struct task_struct *task);
-#define __switch_to(prev,next,last) do { \
- if (((prev)->thread.flags & IA64_THREAD_DBG_VALID) \
- || IS_IA32_PROCESS(ia64_task_regs(prev))) \
- ia64_save_extra(prev); \
- if (((next)->thread.flags & IA64_THREAD_DBG_VALID) \
- || IS_IA32_PROCESS(ia64_task_regs(next))) \
- ia64_load_extra(next); \
- ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
- (last) = ia64_switch_to((next)); \
+#define __switch_to(prev,next,last) do { \
+ if (((prev)->thread.flags & (IA64_THREAD_DBG_VALID|IA64_THREAD_PM_VALID)) \
+ || IS_IA32_PROCESS(ia64_task_regs(prev))) \
+ ia64_save_extra(prev); \
+ if (((next)->thread.flags & (IA64_THREAD_DBG_VALID|IA64_THREAD_PM_VALID)) \
+ || IS_IA32_PROCESS(ia64_task_regs(next))) \
+ ia64_load_extra(next); \
+ ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
+ (last) = ia64_switch_to((next)); \
} while (0)
#ifdef CONFIG_SMP
diff -urN linux-davidm/init/main.c linux-2.4.0-test6-lia/init/main.c
--- linux-davidm/init/main.c Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/init/main.c Fri Aug 11 14:55:42 2000
@@ -584,6 +584,7 @@
#endif
check_bugs();
printk("POSIX conformance testing by UNIFIX\n");
+
/*
* We count on the initial thread going ok
* Like idlers init is an unlocked kernel thread, which will
diff -urN linux-davidm/kernel/timer.c linux-2.4.0-test6-lia/kernel/timer.c
--- linux-davidm/kernel/timer.c Fri Aug 11 19:01:16 2000
+++ linux-2.4.0-test6-lia/kernel/timer.c Mon Jul 31 14:01:22 2000
@@ -680,7 +680,7 @@
void do_timer(struct pt_regs *regs)
{
- (*(unsigned long *)&jiffies)++;
+ (*(volatile unsigned long *)&jiffies)++;
#ifndef CONFIG_SMP
/* SMP process accounting uses the local APIC timer */
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.4.0-test6)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (4 preceding siblings ...)
2000-08-12 5:02 ` [Linux-ia64] kernel update (relative to v2.4.0-test6) David Mosberger
@ 2000-08-14 11:35 ` Andreas Schwab
2000-08-14 17:00 ` David Mosberger
` (209 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2000-08-14 11:35 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> - Stephane's fixes for initrd. The current version tolerates older
|> version of eli, but if you're seeing a warning message of the form:
|>
|> Warning: boot loader passed virtual address for initrd
|>
|> you should consider updating your bootloader.
This patch is needed to make it actually compile.
--- arch/ia64/kernel/setup.c 2000/08/14 09:17:15 1.1
+++ arch/ia64/kernel/setup.c 2000/08/14 11:30:51
@@ -157,7 +157,7 @@
if (initrd_start >= PAGE_OFFSET)
printk("Warning: boot loader passed virtual address "
"for initrd, please upgrade the loader\n");
- } else
+ else
#endif
/*
* The loader ONLY passes physical addresses
Andreas.
--
Andreas Schwab "And now for something
SuSE Labs completely different."
Andreas.Schwab@suse.de
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test6)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (5 preceding siblings ...)
2000-08-14 11:35 ` Andreas Schwab
@ 2000-08-14 17:00 ` David Mosberger
2000-09-09 6:51 ` [Linux-ia64] kernel update (relative to v2.4.0-test8) David Mosberger
` (208 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-08-14 17:00 UTC (permalink / raw)
To: linux-ia64
Yes, I goofed when merging in Stephane's changes (and Stephane told me
just about 10 minutes after I mailed out the patch). It obviously
compiles fine if you don't turn on INITRD support, which is why I
didn't notice.
--david
Andreas> David Mosberger <davidm@hpl.hp.com> writes:
Andreas> |> - Stephane's fixes for initrd. The current version tolerates older
Andreas> |> version of eli, but if you're seeing a warning message of the form:
Andreas> |>
Andreas> |> Warning: boot loader passed virtual address for initrd
Andreas> |>
Andreas> |> you should consider updating your bootloader.
Andreas> This patch is needed to make it actually compile.
--- arch/ia64/kernel/setup.c 2000/08/14 09:17:15 1.1
+++ arch/ia64/kernel/setup.c 2000/08/14 11:30:51
@@ -157,7 +157,7 @@
if (initrd_start >= PAGE_OFFSET)
printk("Warning: boot loader passed virtual address "
"for initrd, please upgrade the loader\n");
- } else
+ else
#endif
/*
* The loader ONLY passes physical addresses
Andreas.
--
Andreas Schwab "And now for something
SuSE Labs completely different."
Andreas.Schwab@suse.de
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
* [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (6 preceding siblings ...)
2000-08-14 17:00 ` David Mosberger
@ 2000-09-09 6:51 ` David Mosberger
2000-09-09 19:07 ` H . J . Lu
` (207 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-09-09 6:51 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 kernel diff is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.0-test8-ia64-000908.diff.gz. The most important
changes since the previous version:
- SMP should be MUCH more stable again, thanks to Asit's fixes
for fph and the deadlock fix to read/write-lock. Note that
for SMP, I eliminated the notion of "fpu_owner" completely,
as it made no sense with the current setup. Longer term, I
think it would be better to manage the fph partition in
a completely lazy fashion as we do on UP. IMHO, if you have
four threads and four CPUs, the fph contents should never
have to be switched.
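The lazy scheme described above can be sketched in plain C. This is a simulation of the idea only, under assumed names (`task_t`, `lazy_fph_fault`, a 4-element stand-in for the 96 high FP registers); the real kernel operates on f32-f127 via the psr.dfh/mfh bits:

```c
#include <assert.h>

/*
 * Sketch of lazily managing the fph (high floating-point) partition:
 * a context switch only marks fph as disabled for the incoming task;
 * the expensive register save/restore is deferred until the task
 * actually touches the high FP registers, which raises a fault.
 */
typedef struct task {
    int fph_valid;   /* thread.fph holds this task's high FP state */
    int fph[4];      /* stand-in for the 96 high FP registers */
} task_t;

static task_t *fpu_owner;   /* task whose state is live in the fph regs */
static int fph_regs[4];     /* stand-in for the physical fph registers */

static void save_fph(task_t *t)
{
    for (int i = 0; i < 4; i++)
        t->fph[i] = fph_regs[i];
    t->fph_valid = 1;
}

static void load_fph(task_t *t)
{
    for (int i = 0; i < 4; i++)
        fph_regs[i] = t->fph[i];
}

/* Called from the disabled-FP fault handler when `t` touches high FP regs. */
void lazy_fph_fault(task_t *t)
{
    if (fpu_owner == t)
        return;                 /* fph still holds t's state: no work */
    if (fpu_owner)
        save_fph(fpu_owner);    /* spill the previous owner's state */
    if (t->fph_valid)
        load_fph(t);            /* restore t's saved state */
    else
        for (int i = 0; i < 4; i++)
            fph_regs[i] = 0;    /* fresh task: init the partition */
    fpu_owner = t;
}
```

With one task per CPU, `lazy_fph_fault` returns immediately after the first fault, which is exactly the "four threads on four CPUs never switch fph" observation above.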
- Don's IA-32 updates (support for inb/outb et al...)
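The inb/outb emulation relies on IA-64's memory-mapped I/O port space, where each group of 4 legacy ports lives in its own 4KB uncached page. A rough sketch of the port-to-address translation (my reading of the 4-ports-per-page scheme used by the sys_iopl() code in the patch below; the exact formula is an assumption, not taken from the patch):

```c
#include <stdint.h>

/*
 * Sketch: map an x86-style I/O port number into the IA-64 uncached
 * I/O window.  Every 4 consecutive ports get their own 4KB page, so
 * the full 64K port space needs (65536 / 4) * 4096 bytes of address
 * space -- the IOLEN constant in the iopl() code below.
 */
static inline uint64_t mk_io_addr(uint64_t iobase, uint64_t port)
{
    /* page index = port / 4; low bits of the port select the byte */
    return iobase | ((port >> 2) << 12) | (port & 0xfff);
}
```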
- Updated qla1280 driver from Qlogic. This one uses the PCI
DMA interface. It should boot on any machine independent of
whether >4GB addressing is enabled. Of course, for best
performance you'll want to make sure >4GB addressing is
enabled on machines with memory above 4GB.
- Stephane's efi_map_pal_code() updates to avoid overlapping
TLB translations. Also some updates to /proc/pal.
- Bill added a bunch of symbols to ia64_ksyms.
- Rob's fix for ia64_pal_cache_flush (psr.ic needs to be off)
and addition of ia64_pal_prefetch_visibility() call.
- ar.ec is now accessible via ptrace() again (got dropped
accidentally..).
- Hacked the eepro100 driver to make it possible to DMA
incoming packets in a way that will yield properly aligned
IP headers. This results in a very significant reduction in
CPU utilization during heavy Ethernet I/O. I also added an
option to force device socket buffer allocation to be below
4GB, which should avoid bounce buffers on systems with
memory above 4GB. It won't eliminate them completely
though. Haven't had a chance to determine the performance
effect of this. It's just a hack until 2.5 comes along and
gives us an all-improved memory allocation scheme... ;-)
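The alignment trick at work here can be shown in a few lines of C (an illustration of the idea, not the eepro100 patch itself; the 2-byte pad constant is the same one later kernels call NET_IP_ALIGN): the Ethernet header is 14 bytes, so a frame DMA'd to a naturally aligned buffer leaves the IP header at offset 14, misaligned for 4-byte loads. Reserving 2 bytes at the head of the buffer moves it to offset 16.

```c
#include <stdint.h>

#define ETH_HLEN 14   /* Ethernet header size */
#define RX_PAD    2   /* pad so the IP header lands 4-byte aligned */

/*
 * Sketch: start the receive DMA RX_PAD bytes into an aligned buffer,
 * so that buf + RX_PAD + ETH_HLEN (the IP header) is 4-byte aligned.
 */
uint8_t *rx_dma_start(uint8_t *buf)
{
    return buf + RX_PAD;
}
```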
The patch below is once again a rough approximation of what changed
since the last update. This kernel has been tested on the HP Ski
simulator, Big Sur (UP and MP), and Lion (MP).
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help lia64/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Fri Sep 8 22:36:14 2000
+++ lia64/Documentation/Configure.help Fri Sep 8 16:19:57 2000
@@ -16607,6 +16607,17 @@
Say Y here to enable hacks to make the kernel work on the NEC
AzusA platform. Select N here if you're unsure.
+Force socket buffers below 4GB?
+CONFIG_SKB_BELOW_4GB
+ Most of today's network interface cards (NICs) support DMA to
+ the low 32 bits of the address space only. On machines with
+ more than 4GB of memory, this can cause the system to slow
+ down if there is no I/O TLB hardware. Turning this option on
+ avoids the slow-down by forcing socket buffers to be allocated
+ from memory below 4GB. The downside is that your system could
+ run out of memory below 4GB before all memory has been used up.
+ If you're unsure how to answer this question, answer Y.
+
Enable IA-64 Machine Check Abort
CONFIG_IA64_MCA
Say Y here to enable machine check support for IA-64. If you're
diff -urN linux-davidm/arch/ia64/config.in lia64/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/config.in Fri Sep 8 16:21:07 2000
@@ -51,6 +51,7 @@
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
+ bool ' Force socket buffers below 4GB?' CONFIG_SKB_BELOW_4GB
bool ' ACPI kernel configuration manager (EXPERIMENTAL)' CONFIG_ACPI_KERNEL_CONFIG
if [ "$CONFIG_ACPI_KERNEL_CONFIG" = "y" ]; then
diff -urN linux-davidm/arch/ia64/ia32/binfmt_elf32.c lia64/arch/ia64/ia32/binfmt_elf32.c
--- linux-davidm/arch/ia64/ia32/binfmt_elf32.c Thu Aug 10 19:56:18 2000
+++ lia64/arch/ia64/ia32/binfmt_elf32.c Fri Sep 8 16:21:30 2000
@@ -52,7 +52,7 @@
pte_t * pte;
if (page_count(page) != 1)
- printk("mem_map disagrees with %p at %08lx\n", page, address);
+ printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
pgd = pgd_offset(tsk->mm, address);
pmd = pmd_alloc(pgd, address);
if (!pmd) {
@@ -120,6 +120,8 @@
: "r" ((ulong)IA32_FCR_DEFAULT));
__asm__("mov ar.fir = r0");
__asm__("mov ar.fdr = r0");
+ __asm__("mov %0=ar.k0 ;;" : "=r" (current->thread.old_iob));
+ __asm__("mov ar.k0=%0 ;;" :: "r"(IA32_IOBASE));
/* TSS */
__asm__("mov ar.k1 = %0"
: /* no outputs */
diff -urN linux-davidm/arch/ia64/ia32/ia32_signal.c lia64/arch/ia64/ia32/ia32_signal.c
--- linux-davidm/arch/ia64/ia32/ia32_signal.c Thu Jun 22 07:09:44 2000
+++ lia64/arch/ia64/ia32/ia32_signal.c Fri Sep 8 16:21:40 2000
@@ -278,7 +278,7 @@
err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
- if (_NSIG_WORDS > 1) {
+ if (_IA32_NSIG_WORDS > 1) {
err |= __copy_to_user(frame->extramask, &set->sig[1],
sizeof(frame->extramask));
}
@@ -310,7 +310,7 @@
#if 0
printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
- current->comm, current->pid, sig, frame, regs->cr_iip, frame->pretcode);
+ current->comm, current->pid, sig, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
return 1;
@@ -380,7 +380,7 @@
#if 0
printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
- current->comm, current->pid, frame, regs->cr_iip, frame->pretcode);
+ current->comm, current->pid, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
return 1;
diff -urN linux-davidm/arch/ia64/ia32/ia32_support.c lia64/arch/ia64/ia32/ia32_support.c
--- linux-davidm/arch/ia64/ia32/ia32_support.c Thu Jun 22 07:09:44 2000
+++ lia64/arch/ia64/ia32/ia32_support.c Fri Sep 8 16:21:50 2000
@@ -42,6 +42,7 @@
thread->csd = csd;
thread->ssd = ssd;
thread->tssd = tssd;
+ asm ("mov ar.k0=%0 ;;" :: "r"(thread->old_iob));
}
void
@@ -68,6 +69,8 @@
"mov ar.k1=%7"
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr),
"r"(csd), "r"(ssd), "r"(tssd));
+ asm ("mov %0=ar.k0 ;;" : "=r"(thread->old_iob));
+ asm ("mov ar.k0=%0 ;;" :: "r"(IA32_IOBASE));
}
/*
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c lia64/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/ia32/sys_ia32.c Fri Sep 8 16:22:05 2000
@@ -75,11 +75,11 @@
n = 0;
do {
err = get_user(addr, (int *)A(arg));
- if (IS_ERR(err))
+ if (err)
return err;
if (ap) { /* no access_ok needed, we allocated */
err = __put_user((char *)A(addr), ap++);
- if (IS_ERR(err))
+ if (err)
return err;
}
arg += sizeof(unsigned int);
@@ -102,13 +102,14 @@
{
struct pt_regs *regs = (struct pt_regs *)&stack;
char **av, **ae;
- int na, ne, r, len;
+ int na, ne, len;
+ long r;
na = nargs(argv, NULL);
- if (IS_ERR(na))
+ if (na < 0)
return(na);
ne = nargs(envp, NULL);
- if (IS_ERR(ne))
+ if (ne < 0)
return(ne);
len = (na + ne + 2) * sizeof(*av);
/*
@@ -130,19 +131,19 @@
return (long)av;
ae = av + na + 1;
r = __put_user(0, (av + na));
- if (IS_ERR(r))
+ if (r)
goto out;
r = __put_user(0, (ae + ne));
- if (IS_ERR(r))
+ if (r)
goto out;
r = nargs(argv, av);
- if (IS_ERR(r))
+ if (r < 0)
goto out;
r = nargs(envp, ae);
- if (IS_ERR(r))
+ if (r < 0)
goto out;
r = sys_execve(filename, av, ae, regs);
- if (IS_ERR(r))
+ if (r < 0)
out:
sys_munmap((unsigned long) av, len);
return(r);
@@ -297,7 +298,7 @@
error = do_mmap(file, addr, len, prot, flags, poff);
up(&current->mm->mmap_sem);
- if (!IS_ERR(error))
+ if (!IS_ERR((void *) error))
error += offset - poff;
} else {
down(&current->mm->mmap_sem);
@@ -2544,6 +2545,78 @@
printk("IA32 syscall #%d issued, maybe we should implement it\n",
(int)regs->r1);
return(sys_ni_syscall());
+}
+
+/*
+ * The IA64 maps 4 I/O ports for each 4K page
+ */
+#define IOLEN ((65536 / 4) * 4096)
+
+asmlinkage long
+sys_iopl (int level, long arg1, long arg2, long arg3)
+{
+ extern unsigned long ia64_iobase;
+ int fd;
+ struct file * file;
+ unsigned int old;
+ unsigned long addr;
+ mm_segment_t old_fs = get_fs ();
+
+ if (level != 3)
+ return(-EINVAL);
+ /* Trying to gain more privileges? */
+ __asm__ __volatile__("mov %0=ar.eflag ;;" : "=r"(old));
+ if (level > ((old >> 12) & 3)) {
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
+ }
+ set_fs(KERNEL_DS);
+ fd = sys_open("/dev/mem", O_SYNC | O_RDWR, 0);
+ set_fs(old_fs);
+ if (fd < 0)
+ return fd;
+ file = fget(fd);
+ if (file == NULL) {
+ sys_close(fd);
+ return(-EFAULT);
+ }
+
+ down(&current->mm->mmap_sem);
+ lock_kernel();
+
+ addr = do_mmap_pgoff(file, IA32_IOBASE,
+ IOLEN, PROT_READ|PROT_WRITE, MAP_SHARED,
+ (ia64_iobase & ~PAGE_OFFSET) >> PAGE_SHIFT);
+
+ unlock_kernel();
+ up(&current->mm->mmap_sem);
+
+ if (addr >= 0) {
+ __asm__ __volatile__("mov ar.k0=%0 ;;" :: "r"(addr));
+ old = (old & ~0x3000) | (level << 12);
+ __asm__ __volatile__("mov ar.eflag=%0 ;;" :: "r"(old));
+ }
+
+ fput(file);
+ sys_close(fd);
+ return 0;
+}
+
+asmlinkage long
+sys_ioperm (unsigned long from, unsigned long num, int on)
+{
+
+ /*
+ * Since IA64 doesn't have permission bits we'd have to go to
+ * a lot of trouble to simulate them in software. There's
+ * no point, only trusted programs can make this call so we'll
+ * just turn it into an iopl call and let the process have
+ * access to all I/O ports.
+ *
+ * XXX proper ioperm() support should be emulated by
+ * manipulating the page protections...
+ */
+ return(sys_iopl(3, 0, 0, 0));
}
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
diff -urN linux-davidm/arch/ia64/kernel/efi.c lia64/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/kernel/efi.c Fri Sep 8 16:22:33 2000
@@ -247,12 +247,41 @@
md->phys_addr);
continue;
}
- mask = ~((1 << _PAGE_SIZE_4M)-1); /* XXX should be dynamic? */
+ /*
+ * We must use the same page size as the one used
+ * for the kernel region when we map the PAL code.
+ * This way, we avoid overlapping TRs if code is
+ * executed nearby. The Alt I-TLB installs 256MB
+ * page sizes as defined for region 7.
+ *
+ * XXX Fixme: should be dynamic here (for page size)
+ */
+ mask = ~((1 << _PAGE_SIZE_256M)-1);
vaddr = PAGE_OFFSET + md->phys_addr;
- printk(__FUNCTION__": mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
- md->phys_addr, md->phys_addr + (md->num_pages << 12),
- vaddr & mask, (vaddr & mask) + 4*1024*1024);
+ /*
+ * We must check that the PAL mapping won't overlap
+ * with the kernel mapping on ITR1.
+ *
+ * PAL code is guaranteed to be aligned on a power of 2
+ * between 4k and 256KB.
+ * Also from the documentation, it seems like there is an
+ * implicit guarantee that you will need only ONE ITR to
+ * map it. This implies that the PAL code is always aligned
+ * on its size, i.e., the closest matching page size supported
+ * by the TLB. Therefore PAL code is guaranteed never to cross
+ * a 256MB boundary unless it is bigger than 256MB (very unlikely!).
+ * So for now the following test is enough to determine whether
+ * or not we need a dedicated ITR for the PAL code.
+ */
+ if ((vaddr & mask) == (PAGE_OFFSET & mask)) {
+ printk(__FUNCTION__ " : no need to install ITR for PAL Code\n");
+ continue;
+ }
+
+ printk(__FUNCTION__": CPU %d mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
+ smp_processor_id(), md->phys_addr, md->phys_addr + (md->num_pages << 12),
+ vaddr & mask, (vaddr & mask) + 256*1024*1024);
/*
* Cannot write to CRx with PSR.ic=1
@@ -263,12 +292,11 @@
* ITR0/DTR0: used for kernel code/data
* ITR1/DTR1: used by HP simulator
* ITR2/DTR2: map PAL code
- * ITR3/DTR3: used to map PAL calls buffer
*/
ia64_itr(0x1, 2, vaddr & mask,
pte_val(mk_pte_phys(md->phys_addr,
__pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX))),
- _PAGE_SIZE_4M);
+ _PAGE_SIZE_256M);
local_irq_restore(flags);
ia64_srlz_i ();
}
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c lia64/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/kernel/ia64_ksyms.c Fri Sep 8 16:22:52 2000
@@ -18,6 +18,8 @@
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(strncpy);
+EXPORT_SYMBOL(strnlen);
+EXPORT_SYMBOL(strrchr);
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strtok);
@@ -28,11 +30,25 @@
#include <linux/in6.h>
#include <asm/checksum.h>
EXPORT_SYMBOL(csum_partial_copy_nocheck);
+EXPORT_SYMBOL(csum_tcpudp_magic);
+EXPORT_SYMBOL(ip_compute_csum);
+EXPORT_SYMBOL(ip_fast_csum);
#include <asm/irq.h>
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
+#include <asm/page.h>
+EXPORT_SYMBOL(clear_page);
+
+#include <asm/pci.h>
+EXPORT_SYMBOL(pci_dma_sync_sg);
+EXPORT_SYMBOL(pci_dma_sync_single);
+EXPORT_SYMBOL(pci_map_sg);
+EXPORT_SYMBOL(pci_map_single);
+EXPORT_SYMBOL(pci_unmap_sg);
+EXPORT_SYMBOL(pci_unmap_single);
+
#include <asm/processor.h>
EXPORT_SYMBOL(cpu_data);
EXPORT_SYMBOL(kernel_thread);
@@ -40,6 +56,10 @@
#ifdef CONFIG_SMP
#include <asm/hardirq.h>
EXPORT_SYMBOL(synchronize_irq);
+
+#include <asm/smp.h>
+EXPORT_SYMBOL(smp_call_function);
+EXPORT_SYMBOL(smp_num_cpus);
#include <asm/smplock.h>
EXPORT_SYMBOL(kernel_flag);
diff -urN linux-davidm/arch/ia64/kernel/irq.c lia64/arch/ia64/kernel/irq.c
--- linux-davidm/arch/ia64/kernel/irq.c Thu Aug 10 19:56:18 2000
+++ lia64/arch/ia64/kernel/irq.c Fri Sep 8 16:23:06 2000
@@ -536,8 +536,7 @@
desc->depth--;
break;
case 0:
- printk("enable_irq() unbalanced from %p\n",
- __builtin_return_address(0));
+ printk("enable_irq() unbalanced from %p\n", (void *) __builtin_return_address(0));
}
spin_unlock_irqrestore(&desc->lock, flags);
}
diff -urN linux-davidm/arch/ia64/kernel/minstate.h lia64/arch/ia64/kernel/minstate.h
--- linux-davidm/arch/ia64/kernel/minstate.h Thu Jun 22 07:09:44 2000
+++ lia64/arch/ia64/kernel/minstate.h Fri Sep 8 16:23:20 2000
@@ -192,13 +192,3 @@
#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs,) STOPS
#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs, mov r15=r19) STOPS
#define SAVE_MIN DO_SAVE_MIN(mov rCRIFS=r0,) STOPS
-
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
-# define STOPS nop.i 0x0;; nop.i 0x0;; nop.i 0x0;;
-#else
-# define STOPS
-#endif
-
-#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs,) STOPS
-#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs, mov r15=r19) STOPS
-#define SAVE_MIN DO_SAVE_MIN(mov rCRIFS=r0,) STOPS
diff -urN linux-davidm/arch/ia64/kernel/pal.S lia64/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/kernel/pal.S Fri Sep 8 16:23:31 2000
@@ -54,7 +54,8 @@
*
* in0 Pointer to struct ia64_pal_retval
* in1 Index of PAL service
- * in2 - in4 Remaning PAL arguments
+ * in2 - in4 Remaining PAL arguments
+ * in5 1 => clear psr.ic, 0 => don't clear psr.ic
*
*/
GLOBAL_ENTRY(ia64_pal_call_static)
@@ -68,18 +69,22 @@
}
;;
ld8 loc2 = [loc2] // loc2 <- entry point
- mov r30 = in2
- mov r31 = in3
+ tbit.nz p6,p7 = in5, 0
+ adds r8 = 1f-1b,r8
;;
mov loc3 = psr
mov loc0 = rp
UNW(.body)
- adds r8 = 1f-1b,r8
- ;;
- rsm psr.i
+ mov r30 = in2
+
+(p6) rsm psr.i | psr.ic
+ mov r31 = in3
mov b7 = loc2
+
+(p7) rsm psr.i
+ ;;
+(p6) srlz.i
mov rp = r8
- ;;
br.cond.sptk.few b7
1: mov psr.l = loc3
mov ar.pfs = loc1
diff -urN linux-davidm/arch/ia64/kernel/palinfo.c lia64/arch/ia64/kernel/palinfo.c
--- linux-davidm/arch/ia64/kernel/palinfo.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/kernel/palinfo.c Fri Sep 8 16:23:41 2000
@@ -131,38 +131,6 @@
"NaTPage" /* 111 */
};
-
-
-/*
- * Allocate a buffer suitable for calling PAL code in Virtual mode
- *
- * The documentation (PAL2.6) allows DTLB misses on the buffer. So
- * using the TC is enough, no need to pin the entry.
- *
- * We allocate a kernel-sized page (at least 4KB). This is enough to
- * hold any possible reply.
- */
-static inline void *
-get_palcall_buffer(void)
-{
- void *tmp;
-
- tmp = (void *)__get_free_page(GFP_KERNEL);
- if (tmp == 0) {
- printk(KERN_ERR __FUNCTION__" : can't get a buffer page\n");
- }
- return tmp;
-}
-
-/*
- * Free a palcall buffer allocated with the previous call
- */
-static inline void
-free_palcall_buffer(void *addr)
-{
- __free_page(addr);
-}
-
/*
* Take a 64bit vector and produces a string such that
* if bit n is set then 2^n in clear text is generated. The adjustment
@@ -242,17 +210,12 @@
{
s64 status;
char *p = page;
- pal_power_mgmt_info_u_t *halt_info;
+ u64 halt_info_buffer[8];
+ pal_power_mgmt_info_u_t *halt_info =(pal_power_mgmt_info_u_t *)halt_info_buffer;
int i;
- halt_info = get_palcall_buffer();
- if (halt_info == 0) return 0;
-
status = ia64_pal_halt_info(halt_info);
- if (status != 0) {
- free_palcall_buffer(halt_info);
- return 0;
- }
+ if (status != 0) return 0;
for (i=0; i < 8 ; i++ ) {
if (halt_info[i].pal_power_mgmt_info_s.im == 1) {
@@ -269,9 +232,6 @@
p += sprintf(p,"Power level %d: not implemented\n",i);
}
}
-
- free_palcall_buffer(halt_info);
-
return p - page;
}
@@ -674,16 +634,10 @@
perfmon_info(char *page)
{
char *p = page;
- u64 *pm_buffer;
+ u64 pm_buffer[16];
pal_perf_mon_info_u_t pm_info;
- pm_buffer = (u64 *)get_palcall_buffer();
- if (pm_buffer == 0) return 0;
-
- if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) {
- free_palcall_buffer(pm_buffer);
- return 0;
- }
+ if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) return 0;
#ifdef IA64_PAL_PERF_MON_INFO_BUG
/*
@@ -719,8 +673,6 @@
p = bitregister_process(p, pm_buffer+12, 256);
p += sprintf(p, "\n");
-
- free_palcall_buffer(pm_buffer);
return p - page;
}
diff -urN linux-davidm/arch/ia64/kernel/pci-dma.c lia64/arch/ia64/kernel/pci-dma.c
--- linux-davidm/arch/ia64/kernel/pci-dma.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/kernel/pci-dma.c Fri Sep 8 16:24:04 2000
@@ -97,7 +97,8 @@
io_tlb_index = 0;
io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
- printk("Placing software IO TLB between 0x%p - 0x%p\n", io_tlb_start, io_tlb_end);
+ printk("Placing software IO TLB between 0x%p - 0x%p\n",
+ (void *) io_tlb_start, (void *) io_tlb_end);
}
/*
diff -urN linux-davidm/arch/ia64/kernel/process.c lia64/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/kernel/process.c Fri Sep 8 16:24:19 2000
@@ -370,7 +370,6 @@
void
do_dump_fpu (struct unw_frame_info *info, void *arg)
{
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
elf_fpreg_t *dst = arg;
int i;
@@ -384,10 +383,9 @@
for (i = 2; i < 32; ++i)
unw_get_fr(info, i, dst + i);
- if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
- ia64_sync_fph(current);
+ ia64_flush_fph(current);
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
memcpy(dst + 32, current->thread.fph, 96*16);
- }
}
#endif /* CONFIG_IA64_NEW_UNWIND */
@@ -463,7 +461,6 @@
unw_init_running(do_dump_fpu, dst);
#else
struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
memset(dst, 0, sizeof (dst)); /* don't leak any "random" bits */
@@ -472,12 +469,9 @@
dst[8] = pt->f8; dst[9] = pt->f9;
memcpy(dst + 10, &sw->f10, 22*16); /* f10-f31 are contiguous */
- if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
- if (fpu_owner == current) {
- __ia64_save_fpu(current->thread.fph);
- }
+ ia64_flush_fph(current);
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
memcpy(dst + 32, current->thread.fph, 96*16);
- }
#endif
return 1; /* f0-f31 are always valid so we always return 1 */
}
@@ -520,9 +514,10 @@
/* drop floating-point and debug-register state if it exists: */
current->thread.flags &= ~(IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID);
- if (ia64_get_fpu_owner() == current) {
+#ifndef CONFIG_SMP
+ if (ia64_get_fpu_owner() == current)
ia64_set_fpu_owner(0);
- }
+#endif
}
/*
@@ -532,9 +527,10 @@
void
exit_thread (void)
{
- if (ia64_get_fpu_owner() == current) {
+#ifndef CONFIG_SMP
+ if (ia64_get_fpu_owner() == current)
ia64_set_fpu_owner(0);
- }
+#endif
}
unsigned long
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c lia64/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/kernel/ptrace.c Fri Sep 8 16:24:37 2000
@@ -376,7 +376,8 @@
ret = 0;
} else {
if ((unsigned long) laddr >= (unsigned long) high_memory) {
- printk("yikes: trying to access long at %p\n", laddr);
+ printk("yikes: trying to access long at %p\n",
+ (void *) laddr);
return -EIO;
}
ret = *laddr;
@@ -542,18 +543,28 @@
child->thread.flags |= IA64_THREAD_KRBS_SYNCED;
}
-/*
- * Ensure the state in child->thread.fph is up-to-date.
- */
void
-ia64_sync_fph (struct task_struct *child)
+ia64_flush_fph (struct task_struct *child)
{
- if (ia64_psr(ia64_task_regs(child))->mfh && ia64_get_fpu_owner() == child) {
- ia64_psr(ia64_task_regs(child))->mfh = 0;
+ struct ia64_psr *psr = ia64_psr(ia64_task_regs(child));
+
+ if (psr->mfh) {
+ psr->mfh = 0;
+#ifndef CONFIG_SMP
ia64_set_fpu_owner(0);
+#endif
ia64_save_fpu(&child->thread.fph[0]);
child->thread.flags |= IA64_THREAD_FPH_VALID;
}
+}
+
+/*
+ * Ensure the state in child->thread.fph is up-to-date.
+ */
+void
+ia64_sync_fph (struct task_struct *child)
+{
+ ia64_flush_fph(child);
if (!(child->thread.flags & IA64_THREAD_FPH_VALID)) {
memset(&child->thread.fph, 0, sizeof(child->thread.fph));
child->thread.flags |= IA64_THREAD_FPH_VALID;
@@ -656,6 +667,9 @@
case PT_B1: case PT_B2: case PT_B3: case PT_B4: case PT_B5:
return unw_access_br(&info, (addr - PT_B1)/8 + 1, data, write_access);
+ case PT_AR_EC:
+ return unw_access_ar(&info, UNW_AR_EC, data, write_access);
+
case PT_AR_LC:
return unw_access_ar(&info, UNW_AR_LC, data, write_access);
@@ -863,6 +877,14 @@
else
*data = (pt->cr_ipsr & IPSR_READ_MASK);
return 0;
+
+ case PT_AR_EC:
+ if (write_access)
+ sw->ar_pfs = (((*data & 0x3f) << 52)
+ | (sw->ar_pfs & ~(0x3fUL << 52)));
+ else
+ *data = (sw->ar_pfs >> 52) & 0x3f;
+ break;
case PT_R1: case PT_R2: case PT_R3:
case PT_R4: case PT_R5: case PT_R6: case PT_R7:
diff -urN linux-davidm/arch/ia64/kernel/setup.c lia64/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/kernel/setup.c Fri Sep 8 16:24:55 2000
@@ -57,6 +57,8 @@
volatile unsigned long cpu_online_map;
#endif
+unsigned long ia64_iobase; /* virtual address for I/O accesses */
+
#define COMMAND_LINE_SIZE 512
char saved_command_line[COMMAND_LINE_SIZE]; /* used in proc filesystem */
@@ -112,6 +114,7 @@
void __init
setup_arch (char **cmdline_p)
{
+ extern unsigned long ia64_iobase;
unsigned long max_pfn, bootmap_start, bootmap_size;
unw_init();
@@ -219,6 +222,13 @@
current->processor = 0;
cpu_physical_id(0) = hard_smp_processor_id();
#endif
+ /*
+ * Set `iobase' to the appropriate address in region 6
+ * (uncached access range)
+ */
+ __asm__ ("mov %0=ar.k0;;" : "=r"(ia64_iobase));
+ ia64_iobase = __IA64_UNCACHED_OFFSET | (ia64_iobase & ~PAGE_OFFSET);
+
cpu_init(); /* initialize the bootstrap CPU */
#ifdef CONFIG_IA64_GENERIC
@@ -408,7 +418,9 @@
* particular setting of these bits.
*/
ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_PP);
+#ifndef CONFIG_SMP
ia64_set_fpu_owner(0); /* initialize ar.k5 */
+#endif
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
diff -urN linux-davidm/arch/ia64/kernel/signal.c lia64/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Fri Sep 8 14:34:53 2000
+++ lia64/arch/ia64/kernel/signal.c Fri Sep 8 16:25:17 2000
@@ -147,13 +147,12 @@
ia64_put_nat_bits(&scr->pt, &scr->sw, nat); /* restore the original scratch NaT bits */
#endif
- if (flags & IA64_SC_FLAG_FPH_VALID) {
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
+ if ((flags & IA64_SC_FLAG_FPH_VALID)) {
+ struct ia64_psr *psr = ia64_psr(ia64_task_regs(current));
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
- if (fpu_owner == current) {
+ if (!psr->dfh)
__ia64_load_fpu(current->thread.fph);
- }
}
return err;
}
@@ -235,9 +234,12 @@
goto give_sigsegv;
sigdelsetmask(&set, ~_BLOCKABLE);
+
spin_lock_irq(&current->sigmask_lock);
- current->blocked = set;
- recalc_sigpending(current);
+ {
+ current->blocked = set;
+ recalc_sigpending(current);
+ }
spin_unlock_irq(&current->sigmask_lock);
if (restore_sigcontext(sc, scr))
@@ -274,7 +276,6 @@
static long
setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct sigscratch *scr)
{
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
unsigned long flags = 0, ifs, nat;
long err;
@@ -286,11 +287,9 @@
/* if cr_ifs isn't valid, we got here through a syscall */
flags |= IA64_SC_FLAG_IN_SYSCALL;
}
- if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
+ ia64_flush_fph(current);
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID)) {
flags |= IA64_SC_FLAG_FPH_VALID;
- if (fpu_owner == current) {
- __ia64_save_fpu(current->thread.fph);
- }
__copy_to_user(&sc->sc_fr[32], current->thread.fph, 96*16);
}
@@ -425,9 +424,11 @@
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sigmask_lock);
- sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
- sigaddset(&current->blocked, sig);
- recalc_sigpending(current);
+ {
+ sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ sigaddset(&current->blocked, sig);
+ recalc_sigpending(current);
+ }
spin_unlock_irq(&current->sigmask_lock);
}
return 1;
diff -urN linux-davidm/arch/ia64/kernel/smp.c lia64/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/kernel/smp.c Fri Sep 8 16:25:51 2000
@@ -453,14 +453,7 @@
if (--data->prof_counter <= 0) {
data->prof_counter = data->prof_multiplier;
- /*
- * update_process_times() expects us to have done irq_enter().
- * Besides, if we don't timer interrupts ignore the global
- * interrupt lock, which is the WrongThing (tm) to do.
- */
- irq_enter(cpu, 0);
update_process_times(user);
- irq_exit(cpu, 0);
}
}
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c lia64/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Fri Apr 21 15:21:24 2000
+++ lia64/arch/ia64/kernel/smpboot.c Wed Dec 31 16:00:00 1969
@@ -1,2 +0,0 @@
-unsigned long cpu_online_map;
-
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c lia64/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Wed Aug 2 18:54:02 2000
+++ lia64/arch/ia64/kernel/sys_ia64.c Fri Sep 8 16:26:15 2000
@@ -147,7 +147,7 @@
struct pt_regs *regs = (struct pt_regs *) &stack;
addr = do_mmap2(addr, len, prot, flags, fd, pgoff);
- if (!IS_ERR(addr))
+ if (!IS_ERR((void *) addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
return addr;
}
@@ -162,26 +162,12 @@
return -EINVAL;
addr = do_mmap2(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
- if (!IS_ERR(addr))
+ if (!IS_ERR((void *) addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
return addr;
}
asmlinkage long
-sys_ioperm (unsigned long from, unsigned long num, int on)
-{
- printk(KERN_ERR "sys_ioperm(from=%lx, num=%lx, on=%d)\n", from, num, on);
- return -EIO;
-}
-
-asmlinkage long
-sys_iopl (int level, long arg1, long arg2, long arg3)
-{
- printk(KERN_ERR "sys_iopl(level=%d)!\n", level);
- return -ENOSYS;
-}
-
-asmlinkage long
sys_vm86 (long arg0, long arg1, long arg2, long arg3)
{
printk(KERN_ERR "sys_vm86(%lx, %lx, %lx, %lx)!\n", arg0, arg1, arg2, arg3);
@@ -204,7 +190,7 @@
unsigned long addr;
addr = sys_create_module (name_user, size);
- if (!IS_ERR(addr))
+ if (!IS_ERR((void *) addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
return addr;
}
diff -urN linux-davidm/arch/ia64/kernel/traps.c lia64/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/kernel/traps.c Fri Sep 8 16:26:57 2000
@@ -202,29 +202,44 @@
static inline void
disabled_fph_fault (struct pt_regs *regs)
{
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
-
/* first, clear psr.dfh and psr.mfh: */
regs->cr_ipsr &= ~(IA64_PSR_DFH | IA64_PSR_MFH);
- if (fpu_owner != current) {
- ia64_set_fpu_owner(current);
+#ifdef CONFIG_SMP
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
+ __ia64_load_fpu(current->thread.fph);
+ else {
+ __ia64_init_fpu();
+ /*
+ * Set mfh because the state in thread.fph does not match
+ * the state in the fph partition.
+ */
+ ia64_psr(regs)->mfh = 1;
+ }
+#else /* !CONFIG_SMP */
+ {
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
- if (fpu_owner && ia64_psr(ia64_task_regs(fpu_owner))->mfh) {
- ia64_psr(ia64_task_regs(fpu_owner))->mfh = 0;
- fpu_owner->thread.flags |= IA64_THREAD_FPH_VALID;
- __ia64_save_fpu(fpu_owner->thread.fph);
- }
- if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
- __ia64_load_fpu(current->thread.fph);
- } else {
- __ia64_init_fpu();
- /*
- * Set mfh because the state in thread.fph does not match
- * the state in the fph partition.
- */
- ia64_psr(regs)->mfh = 1;
+ if (fpu_owner != current) {
+ ia64_set_fpu_owner(current);
+
+ if (fpu_owner && ia64_psr(ia64_task_regs(fpu_owner))->mfh) {
+ ia64_psr(ia64_task_regs(fpu_owner))->mfh = 0;
+ fpu_owner->thread.flags |= IA64_THREAD_FPH_VALID;
+ __ia64_save_fpu(fpu_owner->thread.fph);
+ }
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
+ __ia64_load_fpu(current->thread.fph);
+ } else {
+ __ia64_init_fpu();
+ /*
+ * Set mfh because the state in thread.fph does not match
+ * the state in the fph partition.
+ */
+ ia64_psr(regs)->mfh = 1;
+ }
}
}
+#endif /* !CONFIG_SMP */
}
static inline int
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c lia64/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Wed Aug 2 18:54:02 2000
+++ lia64/arch/ia64/kernel/unaligned.c Fri Sep 8 16:27:16 2000
@@ -278,9 +278,9 @@
bspstore = (unsigned long *)regs->ar_bspstore;
DPRINT(("rse_slot_num=0x%lx\n",ia64_rse_slot_num((unsigned long *)sw->ar_bspstore)));
- DPRINT(("kbs=%p nlocals=%ld\n", kbs, nlocals));
+ DPRINT(("kbs=%p nlocals=%ld\n", (void *) kbs, nlocals));
DPRINT(("bspstore next rnat slot %p\n",
- ia64_rse_rnat_addr((unsigned long *)sw->ar_bspstore)));
+ (void *) ia64_rse_rnat_addr((unsigned long *)sw->ar_bspstore)));
DPRINT(("on_kbs=%ld rnats=%ld\n",
on_kbs, ((sw->ar_bspstore-(unsigned long)kbs)>>3) - on_kbs));
@@ -292,7 +292,7 @@
addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
- ubs_end, bsp, addr, ia64_rse_slot_num(addr)));
+ (void *) ubs_end, (void *) bsp, (void *) addr, ia64_rse_slot_num(addr)));
ia64_poke(regs, current, (unsigned long)addr, val);
@@ -303,7 +303,7 @@
ia64_peek(regs, current, (unsigned long)addr, &rnats);
DPRINT(("rnat @%p = 0x%lx nat=%d rnatval=%lx\n",
- addr, rnats, nat, rnats &ia64_rse_slot_num(slot)));
+ (void *) addr, rnats, nat, rnats &ia64_rse_slot_num(slot)));
if (nat) {
rnats |= __IA64_UL(1) << ia64_rse_slot_num(slot);
@@ -312,7 +312,7 @@
}
ia64_poke(regs, current, (unsigned long)addr, rnats);
- DPRINT(("rnat changed to @%p = 0x%lx\n", addr, rnats));
+ DPRINT(("rnat changed to @%p = 0x%lx\n", (void *) addr, rnats));
}
@@ -373,7 +373,7 @@
addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
- ubs_end, bsp, addr, ia64_rse_slot_num(addr)));
+ (void *) ubs_end, (void *) bsp, (void *) addr, ia64_rse_slot_num(addr)));
ia64_peek(regs, current, (unsigned long)addr, val);
@@ -383,7 +383,7 @@
addr = ia64_rse_rnat_addr(addr);
ia64_peek(regs, current, (unsigned long)addr, &rnats);
- DPRINT(("rnat @%p = 0x%lx\n", addr, rnats));
+ DPRINT(("rnat @%p = 0x%lx\n", (void *) addr, rnats));
if (nat)
*nat = rnats >> ia64_rse_slot_num(slot) & 0x1;
@@ -437,13 +437,13 @@
* UNAT bit_pos = GR[r3]{8:3} from EAS-2.4
*/
bitmask = __IA64_UL(1) << (addr >> 3 & 0x3f);
- DPRINT(("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, unat, *unat));
+ DPRINT(("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, (void *) unat, *unat));
if (nat) {
*unat |= bitmask;
} else {
*unat &= ~bitmask;
}
- DPRINT(("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, unat,*unat));
+ DPRINT(("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, (void *) unat,*unat));
}
#define IA64_FPH_OFFS(r) (r - IA64_FIRST_ROTATING_FR)
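[The hunks above all add `(void *)` casts to `%p` arguments. A small standalone illustration of why (hypothetical helper, not kernel code): `%p` is only defined for `void *`, so passing an `unsigned long`, as the old DPRINT calls did, is formally undefined on ABIs that pass the two differently, and trips `-Wformat`.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* %p is only defined for void *; casting an integer address makes
 * the call well-defined and silences -Wformat.  Hypothetical helper
 * for illustration, not kernel code. */
int format_addr(char *buf, size_t n, unsigned long addr)
{
    return snprintf(buf, n, "addr=%p", (void *)addr);
}
```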
diff -urN linux-davidm/arch/ia64/kernel/unwind.c lia64/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Fri Sep 8 22:36:14 2000
+++ lia64/arch/ia64/kernel/unwind.c Fri Sep 8 16:27:28 2000
@@ -268,9 +268,10 @@
if ((unsigned long) addr < info->regstk.limit
|| (unsigned long) addr >= info->regstk.top)
{
- dprintk("unwind: 0x%p outside of regstk "
+ dprintk("unwind: %p outside of regstk "
"[0x%lx-0x%lx)\n", addr,
- info->regstk.limit, info->regstk.top);
+ (void *) info->regstk.limit,
+ info->regstk.top);
return -1;
}
if ((unsigned long) nat_addr >= info->regstk.top)
@@ -2005,7 +2006,7 @@
if (prevt->next == table)
break;
if (!prevt) {
- dprintk("unwind: failed to find unwind table %p\n", table);
+ dprintk("unwind: failed to find unwind table %p\n", (void *) table);
spin_unlock_irqrestore(&unw.lock, flags);
return;
}
diff -urN linux-davidm/drivers/acpi/acpiconf.c lia64/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Fri Sep 8 22:36:14 2000
+++ lia64/drivers/acpi/acpiconf.c Fri Sep 8 16:28:09 2000
@@ -43,6 +43,7 @@
status = acpi_load_firmware_tables ();
if (ACPI_FAILURE(status)) {
printk ("Acpi cfg:acpi load firmware tables error=0x%x\n", status);
+ acpi_terminate();
return status;
} else
printk ("Acpi cfg:acpi load firmware tables pass\n");
@@ -50,6 +51,7 @@
status = acpi_load_namespace ();
if (ACPI_FAILURE(status)) {
printk ("Acpi cfg:acpi load namespace error=0x%x\n", status);
+ acpi_terminate();
return status;
} else
printk ("Acpi cfg:acpi load namespace pass\n");
@@ -217,7 +219,7 @@
switch (ext_obj->type) {
case ACPI_TYPE_NUMBER:
busnum = (NATIVE_UINT) ext_obj->number.value;
- next_busnum = busnum;
+ next_busnum = busnum + 1;
break;
default:
printk("Acpi cfg:_BBN object type incorrect: set busnum to %ld\n ", next_busnum);
@@ -301,7 +303,7 @@
)
{
struct pci_vector_struct *pvec;
- PCI_ROUTING_TABLE **pprts, *prt;
+ PCI_ROUTING_TABLE **pprts, *prt, *prtf;
int nvec = 0;
int i;
@@ -309,7 +311,7 @@
pprts = (PCI_ROUTING_TABLE **)prts;
for ( i = 0; i < PCI_MAX_BUS; i++) {
- prt = *pprts++;
+ prt = prtf = *pprts++;
if (prt) {
for ( ; prt->length > 0; nvec++) {
prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
@@ -340,7 +342,7 @@
prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
prt = (PCI_ROUTING_TABLE *) ROUND_UP_TO_4BYTES(prt);
}
- acpi_os_free((void *)prt);
+ acpi_os_free((void *)prtf);
}
}
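[The `prt`/`prtf` change above fixes a classic cursor-vs-base bug: the loop advances `prt` through the routing table, so by the time `acpi_os_free()` ran it no longer pointed at the start of the allocation. A minimal standalone illustration, with a hypothetical record layout and the C library's `free()` standing in for `acpi_os_free()`:]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical variable-length record, for illustration only. */
struct rec { unsigned short length; char data[6]; };

unsigned count_records(struct rec *base)
{
    struct rec *cur = base;  /* advancing cursor */
    unsigned n = 0;
    while (cur->length > 0) {
        cur = (struct rec *)((char *)cur + cur->length);
        n++;
    }
    free(base);              /* free the saved start, never the cursor */
    return n;
}
```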
diff -urN linux-davidm/drivers/char/efirtc.c lia64/drivers/char/efirtc.c
--- linux-davidm/drivers/char/efirtc.c Thu Aug 24 08:17:32 2000
+++ lia64/drivers/char/efirtc.c Fri Sep 8 16:28:26 2000
@@ -249,7 +249,7 @@
convert_from_efi_time(&eft, &wtime);
- return copy_to_user((void *)&ewp->time, &wtime, sizeof(struct rtc_time));
+ return copy_to_user((void *)&ewp->time, &wtime, sizeof(struct rtc_time)) ? -EFAULT : 0;
}
return -EINVAL;
}
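[The efirtc fix above exists because `copy_to_user()` returns the number of bytes it could NOT copy, not an errno, so a nonzero result must be converted to `-EFAULT` before being handed back to user space. A standalone sketch with a hypothetical stand-in for `copy_to_user()`:]

```c
#include <string.h>

#define EFAULT 14

/* Hypothetical stand-in for copy_to_user(): returns the number of
 * bytes NOT copied (0 on full success), as the real function does. */
static unsigned long fake_copy_to_user(void *dst, const void *src,
                                       unsigned long n)
{
    if (!dst)
        return n;            /* simulate a faulting user pointer */
    memcpy(dst, src, n);
    return 0;
}

/* The fixed pattern: map any shortfall to -EFAULT. */
long rtc_read_time(void *user_buf, const void *wtime, unsigned long n)
{
    return fake_copy_to_user(user_buf, wtime, n) ? -EFAULT : 0;
}
```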
diff -urN linux-davidm/drivers/net/eepro100.c lia64/drivers/net/eepro100.c
--- linux-davidm/drivers/net/eepro100.c Fri Sep 8 22:36:14 2000
+++ lia64/drivers/net/eepro100.c Fri Sep 8 16:29:55 2000
@@ -25,6 +25,8 @@
Disabled FC and ER, to avoid lockups when we get FCP interrupts.
2000 Jul 17 Goutham Rao <goutham.rao@intel.com>
PCI DMA API fixes, adding pci_dma_sync_single calls where necessary
+ 2000 Aug 31 David Mosberger <davidm@hpl.hp.com>
+ RX_ALIGN support: enables rx DMA without causing unaligned accesses.
*/
static const char *version
@@ -42,17 +44,16 @@
static int rxdmacount = 0;
#ifdef __ia64__
-/*
- * Bug: this driver may generate unaligned accesses when not copying
- * an incoming packet. Setting rx_copybreak to a large value force a
- * copy and prevents unaligned accesses.
- */
-static int rx_copybreak = 0x10000;
+ /* align rx buffers to 2 bytes so that IP header is aligned */
+# define RX_ALIGN
+# define RxFD_ALIGNMENT __attribute__ ((aligned (2), packed))
#else
+# define RxFD_ALIGNMENT
+#endif
+
/* Set the copy breakpoint for the copy-only-tiny-buffer Rx method.
Lower values use more memory, but are faster. */
static int rx_copybreak = 200;
-#endif
/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 20;
@@ -449,7 +450,7 @@
u32 link; /* struct RxFD * */
u32 rx_buf_addr; /* void * */
u32 count;
-};
+} RxFD_ALIGNMENT;
/* Selected elements of the Tx/RxFD.status word. */
enum RxFD_bits {
@@ -1216,6 +1217,9 @@
for (i = 0; i < RX_RING_SIZE; i++) {
struct sk_buff *skb;
skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD));
+#ifdef RX_ALIGN
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundary */
+#endif
sp->rx_skbuff[i] = skb;
if (skb == NULL)
break; /* OK. Just initially short of Rx bufs. */
@@ -1665,6 +1669,9 @@
struct sk_buff *skb;
/* Get a fresh skbuff to replace the consumed one. */
skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD));
+#ifdef RX_ALIGN
+ skb_reserve(skb, 2); /* Align IP on 16 byte boundary */
+#endif
sp->rx_skbuff[entry] = skb;
if (skb == NULL) {
sp->rx_ringp[entry] = NULL;
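[The `skb_reserve(skb, 2)` calls above implement the alignment trick the RX_ALIGN comment describes: the Ethernet header is 14 bytes, so a buffer that starts word-aligned puts the IP header at a 2-byte-aligned offset; shifting the packet start by 2 (what later kernels call NET_IP_ALIGN) realigns it. A small arithmetic sketch, with a hypothetical helper:]

```c
/* Why reserve 2 bytes before DMAing a frame: the 14-byte Ethernet
 * header leaves the IP header misaligned unless the packet start is
 * shifted by 2.  Hypothetical helper for illustration. */
#define ETH_HLEN 14

unsigned long ip_header_addr(unsigned long buf_start, unsigned reserve)
{
    return buf_start + reserve + ETH_HLEN;
}
```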
diff -urN linux-davidm/drivers/scsi/qla1280.c lia64/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Fri Sep 8 22:36:14 2000
+++ lia64/drivers/scsi/qla1280.c Fri Sep 8 16:31:05 2000
@@ -1,162 +1,72 @@
/********************************************************************************
- * QLOGIC LINUX SOFTWARE
- *
- * QLogic ISP1x80/1x160 device driver for Linux 2.3.x (redhat 6.X).
- *
- * COPYRIGHT (C) 1999-2000 QLOGIC CORPORATION
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the Qlogic's Linux Software License. See below.
- *
- * This program is WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * 1. Redistribution's or source code must retain the above copyright
- * notice, this list of conditions, and the following disclaimer,
- * without modification, immediately at the beginning of the file.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- ********************************************************************************/
-
-/*****************************************************************************************
- QLOGIC CORPORATION SOFTWARE
- "GNU" GENERAL PUBLIC LICENSE
- TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION
- AND MODIFICATION
-
-This GNU General Public License ("License") applies solely to QLogic Linux
-Software ("Software") and may be distributed under the terms of this License.
-
-1. You may copy and distribute verbatim copies of the Software's source code as
-you receive it, in any medium, provided that you conspicuously and appropriately
-publish on each copy an appropriate copyright notice and disclaimer of warranty;
-keep intact all the notices that refer to this License and to the absence of any
-warranty; and give any other recipients of the Software a copy of this License along
-with the Software.
-
-You may charge a fee for the physical act of transferring a copy, and you may at your
-option offer warranty protection in exchange for a fee.
-
-2. You may modify your copy or copies of the Software or any portion of it, thus forming
-a work based on the Software, and copy and distribute such modifications or work under
-the terms of Section 1 above, provided that you also meet all of these conditions:
-
-* a) You must cause the modified files to carry prominent notices stating that you
-changed the files and the date of any change.
-
-* b) You must cause any work that you distribute or publish that in whole or in part
-contains or is derived from the Software or any part thereof, to be licensed as a
-whole at no charge to all third parties under the terms of this License.
-
-* c) If the modified Software normally reads commands interactively when run, you
-must cause it, when started running for such interactive use in the most ordinary way,
-to print or display an announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide a warranty) and that
-users may redistribute the Software under these conditions, and telling the user how to
-view a copy of this License. (Exception:if the Software itself is interactive but does
-not normally print such an announcement, your work based on the Software is not required
-to print an announcement.)
-
-These requirements apply to the modified work as a whole. If identifiable sections of
-that work are not derived from the Software, and can be reasonably considered independent
-and separate works in themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works. But when you distribute the same
-sections as part of a whole which is a work based on the Software, the distribution of the
-whole must be on the terms of this License, whose permissions for other licensees extend
-to the entire whole, and thus to each and every part regardless of who wrote it.
-
-3. You may copy and distribute the Software (or a work based on it, under Section 2) in
-object code or executable form under the terms of Sections 1 and 2 above provided that
-you also do one of the following:
-
-* a) Accompany it with the complete corresponding machine-readable source code, which must
-be distributed under the terms of Sections 1 and 2 above on a medium customarily used for
-software interchange; or,
-
-* b) Accompany it with a written offer, valid for at least three years, to give any third
-party, for a charge no more than your cost of physically performing source distribution,
-a complete machine-readable copy of the corresponding source code, to be distributed under
-the terms of Sections 1 and 2 above on a medium customarily used for software interchange;
-or,
-
-* c) Accompany it with the information you received as to the offer to distribute
-corresponding source code. (This alternative is allowed only for noncommercial distribution
-and only if you received the Software in object code or executable form with such an offer,
-in accord with Subsection b above.)
-
-The source code for a work means the preferred form of the work for making modifications
-to it. For an executable work, complete source code means all the source code for all
-modules it contains, plus any associated interface definition files, plus the scripts used
-to control compilation and installation of the executable.
-
-If distribution of executable or object code is made by offering access to copy from a
-designated place, then offering equivalent access to copy the source code from the same
-place counts as distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-4. You may not copy, modify, sublicense, or distribute the Software except as expressly
-provided under this License. Any attempt otherwise to copy, modify, sublicense or
-distribute the Software is void, and will automatically terminate your rights under this
-License. However, parties who have received copies, or rights, from you under this License
-will not have their licenses terminated so long as such parties remain in full compliance.
-
-5. This license grants you world wide, royalty free non-exclusive rights to modify or
-distribute the Software or its derivative works. These actions are prohibited by law
-if you do not accept this License. Therefore, by modifying or distributing the Software
-(or any work based on the Software), you indicate your acceptance of this License to do
-so, and all its terms and conditions for copying, distributing or modifying the Software
-or works based on it.
-
-6. Each time you redistribute the Software (or any work based on the Software), the
-recipient automatically receives a license from the original licensor to copy, distribute
-or modify the Software subject to these terms and conditions. You may not impose any
-further restrictions on the recipients' exercise of the rights granted herein. You are
-not responsible for enforcing compliance by third parties to this License.
-
-7. If, as a consequence of a court judgment or allegation of patent infringement or for
-any other reason (not limited to patent issues), conditions are imposed on you
-(whether by court order, agreement or otherwise) that contradict the conditions of this
-License, they do not excuse you from the conditions of this License. If you cannot
-distribute so as to satisfy simultaneously your obligations under this License
-and any other pertinent obligations, then as a consequence you may not distribute the
-Software at all.
-
-If any portion of this section is held invalid or unenforceable under any particular
-circumstance, the balance of the section is intended to apply and the section as a whole
-is intended to apply in other circumstances.
-NO WARRANTY
-
-11. THE SOFTWARE IS PROVIDED WITHOUT A WARRANTY OF ANY KIND. THERE IS NO
-WARRANTY FOR THE SOFTWARE, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
-EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
-OTHER PARTIES PROVIDE THE SOFTWARE "AS IS" WITHOUT WARRANTY OF ANY KIND,
-EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
-ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE SOFTWARE IS WITH YOU.
-SHOULD THE SOFTWARE PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
-NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE SOFTWARE AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
-DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL
-DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE SOFTWARE (INCLUDING
-BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
-LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE SOFTWARE TO
-OPERATE WITH ANY OTHER SOFTWARES), EVEN IF SUCH HOLDER OR OTHER PARTY HAS
-BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
-END OF TERMS AND CONDITIONS
-
-*******************************************************************************************/
+* QLOGIC LINUX SOFTWARE
+*
+* QLogic QLA1280 (Ultra2) and QLA12160 (Ultra3) SCSI driver
+* Copyright (C) 2000 Qlogic Corporation
+* (www.qlogic.com)
+*
+* This program is free software; you can redistribute it and/or modify it
+* under the terms of the GNU General Public License as published by the
+* Free Software Foundation; either version 2, or (at your option) any
+* later version.
+*
+* This program is distributed in the hope that it will be useful, but
+* WITHOUT ANY WARRANTY; without even the implied warranty of
+* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+* General Public License for more details.
+**
+******************************************************************************/
/****************************************************************************
Revision History:
- Rev. 3.00 Jan 17, 1999 DG Qlogic
+ Rev. 3.16 Beta August 25, 2000 BN Qlogic
+ - Corrected 64 bit addressing issue on IA-64
+ where the upper 32 bits were not properly
+ passed to the RISC engine.
+ Rev. 3.15 Beta August 22, 2000 BN Qlogic
+ - Modified qla1280_setup_chip to properly load
+ ISP firmware for greater than 4 Gig memory on IA-64
+ Rev. 3.14 Beta August 16, 2000 BN Qlogic
+ - Added setting of dma_mask to full 64 bit
+ if flags.enable_64bit_addressing is set in NVRAM
+ Rev. 3.13 Beta August 16, 2000 BN Qlogic
+ - Use new PCI DMA mapping APIs for 2.4.x kernel
+ Rev. 3.12 July 18, 2000 Redhat & BN Qlogic
+ - Added check of pci_enable_device to detect() for 2.3.x
+ - Use pci_resource_start() instead of
+ pdev->resource[0].start in detect() for 2.3.x
+ - Updated driver version
+ Rev. 3.11 July 14, 2000 BN Qlogic
+ - Updated SCSI Firmware to following versions:
+ qla1x80: 8.13.08
+ qla1x160: 10.04.08
+ - Updated driver version to 3.11
+ Rev. 3.10 June 23, 2000 BN Qlogic
+ - Added filtering of AMI SubSys Vendor ID devices
+ Rev. 3.9
+ - DEBUG_QLA1280 undefined and new version BN Qlogic
+ Rev. 3.08b May 9, 2000 MD Dell
+ - Added logic to check against AMI subsystem vendor ID
+ Rev. 3.08 May 4, 2000 DG Qlogic
+ - Added logic to check for PCI subsystem ID.
+ Rev. 3.07 Apr 24, 2000 DG & BN Qlogic
+ - Updated SCSI Firmware to following versions:
+ qla12160: 10.01.19
+ qla1280: 8.09.00
+ Rev. 3.06 Apr 12, 2000 DG & BN Qlogic
+ - Internal revision; not released
+ Rev. 3.05 Mar 28, 2000 DG & BN Qlogic
+ - Edit correction for virt_to_bus and PROC.
+ Rev. 3.04 Mar 28, 2000 DG & BN Qlogic
+ - Merge changes from ia64 port.
+ Rev. 3.03 Mar 28, 2000 BN Qlogic
+ - Increase version to reflect new code drop with compile fix
+ of issue with inclusion of linux/spinlock for 2.3 kernels
+ Rev. 3.02 Mar 15, 2000 BN Qlogic
+ - Merge qla1280_proc_info from 2.10 code base
+ Rev. 3.01 Feb 10, 2000 BN Qlogic
+ - Corrected code to compile on a 2.2.x kernel.
+ Rev. 3.00 Jan 17, 2000 DG Qlogic
- Added 64-bit support.
Rev. 2.07 Nov 9, 1999 DG Qlogic
- Added new routine to set target parameters for ISP12160.
@@ -183,11 +93,12 @@
*****************************************************************************/
+#include <linux/config.h>
#ifdef MODULE
#include <linux/module.h>
#endif
-#define QLA1280_VERSION " 3.00-Beta"
+#define QLA1280_VERSION "3.16 Beta"
#include <stdarg.h>
#include <asm/io.h>
@@ -207,8 +118,16 @@
#include <linux/proc_fs.h>
#include <linux/blk.h>
#include <linux/tqueue.h>
-/* MRS #include <linux/tasks.h> */
+#ifndef KERNEL_VERSION
+# define KERNEL_VERSION(x,y,z) (((x)<<16)+((y)<<8)+(z))
+#endif
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
+#include <linux/pci_ids.h>
+#endif
+
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,95)
+#include <linux/tasks.h>
# include <linux/bios32.h>
#endif
#include "sd.h"
@@ -216,23 +135,16 @@
#include "hosts.h"
#define UNIQUE_FW_NAME
#include "qla1280.h"
-#include "ql12160_fw.h" /* ISP RISC code */
+#include "ql12160_fw.h" /* ISP RISC codes */
#include "ql1280_fw.h"
#include <linux/stat.h>
-#include <linux/malloc.h> /* for kmalloc() */
-
-
-#ifndef KERNEL_VERSION
-# define KERNEL_VERSION(x,y,z) (((x)<<16)+((y)<<8)+(z))
-#endif
-
+#include <linux/malloc.h>
/*
* Compile time Options:
* 0 - Disable and 1 - Enable
*/
-#define QLA1280_64BIT_SUPPORT 1 /* 64-bit Support */
#define QL1280_TARGET_MODE_SUPPORT 0 /* Target mode support */
#define WATCHDOGTIMER 0
#define MEMORY_MAPPED_IO 0
@@ -244,15 +156,16 @@
#define AUTO_ESCALATE_ABORT 0 /* Automatically escalate aborts */
#define STOP_ON_ERROR 0 /* Stop on aborts and resets */
#define STOP_ON_RESET 0
-#define STOP_ON_ABORT 0
-#undef DYNAMIC_MEM_ALLOC
-
-#define DEBUG_QLA1280 0 /* Debugging */
-/* #define CHECKSRBSIZE */
+#define STOP_ON_ABORT 0
+
+#define DEBUG_QLA1280 0
-/*
- * These macros to assist programming
- */
+/*************** 64 BIT PCI DMA ******************************************/
+#define FORCE_64BIT_PCI_DMA 0 /* set to one for testing only */
+/* Applicable to 64 version of the Linux 2.4.x and above only */
+/* NVRAM bit nv->cntr_flags_1.enable_64bit_addressing should be used for */
+/* administrator control of PCI DMA width size per system configuration */
+/*************************************************************************/
#define BZERO(ptr, amt) memset(ptr, 0, amt)
#define BCOPY(src, dst, amt) memcpy(dst, src, amt)
@@ -260,19 +173,23 @@
#define KMFREE(ip,siz) kfree((ip))
#define SYS_DELAY(x) udelay(x);barrier()
#define QLA1280_DELAY(sec) mdelay(sec * 1000)
-#define VIRT_TO_BUS(a) virt_to_bus((a))
-#if QLA1280_64BIT_SUPPORT
+
+/* 3.16 */
+#define pci_dma_lo32(a) (a & 0xffffffff)
+#define pci_dma_hi32(a) ((a >> 32) & 0xffffffff)
+
+#define VIRT_TO_BUS(a) virt_to_bus(((void *)a))
+
#if BITS_PER_LONG <= 32
-#define VIRT_TO_BUS_LOW(a) (uint32_t)virt_to_bus((a))
+#define VIRT_TO_BUS_LOW(a) (uint32_t)virt_to_bus(((void *)a))
#define VIRT_TO_BUS_HIGH(a) (uint32_t)(0x0)
#else
-#define VIRT_TO_BUS_LOW(a) (uint32_t)(0xffffffff & virt_to_bus((a)))
-#define VIRT_TO_BUS_HIGH(a) (uint32_t)(0xffffffff & (virt_to_bus((a))>>32))
+#define VIRT_TO_BUS_LOW(a) (uint32_t)(0xffffffff & virt_to_bus((void *)(a)))
+#define VIRT_TO_BUS_HIGH(a) (uint32_t)(0xffffffff & (virt_to_bus((void *)(a))>>32))
#endif
-#endif /* QLA1280_64BIT_SUPPORT */
-#define STATIC
+#define STATIC
#define NVRAM_DELAY() udelay(500) /* 2 microsecond delay */
void qla1280_device_queue_depth(scsi_qla_host_t *, Scsi_Device *);
@@ -285,11 +202,11 @@
#define LSB(x) (uint8_t)(x)
#if BITS_PER_LONG <= 32
-#define LS_64BITS(x) (uint32_t)(x)
-#define MS_64BITS(x) (uint32_t)(0x0)
+#define LS_64BITS(x) (uint32_t)((unsigned long) x)
+#define MS_64BITS(x) (uint32_t)((unsigned long) 0x0)
#else
-#define LS_64BITS(x) (uint32_t)(0xffffffff & (x))
-#define MS_64BITS(x) (uint32_t)(0xffffffff & ((x)>>32) )
+#define LS_64BITS(x) (uint32_t)(0xffffffff & ((unsigned long)x))
+#define MS_64BITS(x) (uint32_t)(0xffffffff & (((unsigned long)x)>>32) )
#endif
/*
@@ -300,9 +217,6 @@
STATIC void qla1280_putq_t(scsi_lu_t *, srb_t *);
STATIC void qla1280_done_q_put(srb_t *, srb_t **, srb_t **);
STATIC void qla1280_select_queue_depth(struct Scsi_Host *, Scsi_Device *);
-#ifdef QLA1280_UNUSED
-static void qla1280_dump_regs(struct Scsi_Host *host);
-#endif
#if STOP_ON_ERROR
static void qla1280_panic(char *, struct Scsi_Host *host);
#endif
@@ -313,9 +227,6 @@
STATIC void qla1280_removeq(scsi_lu_t *q, srb_t *sp);
STATIC void qla1280_mem_free(scsi_qla_host_t *ha);
void qla1280_do_dpc(void *p);
-#ifdef QLA1280_UNUSED
-static void qla1280_set_flags(char * s);
-#endif
static char *qla1280_get_token(char *, char *);
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0)
STATIC inline void mdelay(int);
@@ -339,9 +250,7 @@
STATIC uint8_t qla1280_device_reset(scsi_qla_host_t *, uint8_t, uint32_t);
STATIC uint8_t qla1280_abort_device(scsi_qla_host_t *, uint8_t, uint32_t, uint32_t);
STATIC uint8_t qla1280_abort_command(scsi_qla_host_t *, srb_t *),
-#if QLA1280_64BIT_SUPPORT
qla1280_64bit_start_scsi(scsi_qla_host_t *, srb_t *),
-#endif
qla1280_32bit_start_scsi(scsi_qla_host_t *, srb_t *),
qla1280_abort_isp(scsi_qla_host_t *);
STATIC void qla1280_nv_write(scsi_qla_host_t *, uint16_t),
@@ -374,12 +283,12 @@
qla1280_notify_ack(scsi_qla_host_t *, notify_entry_t *),
qla1280_immed_notify(scsi_qla_host_t *, notify_entry_t *),
qla1280_accept_io(scsi_qla_host_t *, ctio_ret_entry_t *),
-#if QLA1280_64BIT_SUPPORT
- qla1280_64bit_continue_io(scsi_qla_host_t *, atio_entry_t *, uint32_t,
- paddr32_t *),
-#endif
- qla1280_32bit_continue_io(scsi_qla_host_t *, atio_entry_t *, uint32_t,
- paddr32_t *),
+ qla1280_64bit_continue_io(scsi_qla_host_t *,
+ atio_entry_t *, uint32_t,
+ paddr32_t *),
+ qla1280_32bit_continue_io(scsi_qla_host_t *,
+ atio_entry_t *, uint32_t,
+ paddr32_t *),
qla1280_atio_entry(scsi_qla_host_t *, atio_entry_t *),
qla1280_notify_entry(scsi_qla_host_t *, notify_entry_t *);
#endif /* QLA1280_TARGET_MODE_SUPPORT */
@@ -400,7 +309,7 @@
qla1280_dump_buffer(caddr_t, uint32_t);
char debug_buff[80];
-#if DEBUG_QLA1280
+#if DEBUG_QLA1280
STATIC uint8_t ql_debug_print = 1;
#else
STATIC uint8_t ql_debug_print = 0;
@@ -426,6 +335,22 @@
#endif
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+/*
+ * Our directory Entry in /proc/scsi for the user to
+ * access the driver.
+ */
+/* Need to add in proc_fs.h PROC_SCSI_QL1280 */
+#define PROC_SCSI_QL1280 PROC_SCSI_QLOGICISP
+
+struct proc_dir_entry proc_scsi_qla1280 = {
+ PROC_SCSI_QL1280, 7, "qla1280",
+ S_IFDIR | S_IRUGO | S_IXUGO, 2,
+ 0, 0, 0, NULL, NULL, NULL, NULL, NULL, NULL, NULL
+};
+#endif
+
/* We use the Scsi_Pointer structure that's included with each command
* SCSI_Cmnd as a scratchpad for our SRB.
*
@@ -471,7 +396,7 @@
unsigned char *fwver; /* Ptr to F/W version array */
} qla_boards_t;
-struct _qlaboards QLBoardTbl[NUM_OF_ISP_DEVICES] =
+struct _qlaboards QL1280BoardTbl[NUM_OF_ISP_DEVICES] =
{
/* Name , Board PCI Device ID, Number of ports */
{"QLA1080 ", QLA1080_DEVICE_ID, 1,
@@ -567,152 +492,149 @@
*
* Returns:
*************************************************************************/
-#ifdef QLA1280_PROFILE
-#define PROC_BUF (&qla1280_buffer[size])
-#define LUN_ID (targ_lun>>(MAX_T_BITS+MAX_L_BITS)),((targ_lun>>MAX_L_BITS)&0xf), targ_lun&0x7
-#endif
+#define PROC_BUF (&qla1280_buffer[len])
int
-qla1280_proc_info ( char *buffer, char **start, off_t offset, int length,
- int hostno, int inout)
-{
-#ifdef QLA1280_PROFILE
+qla1280_proc_info( char *buffer, char **start, off_t offset, int length,
+ int hostno, int inout) {
+#if QLA1280_PROFILE
struct Scsi_Host *host;
scsi_qla_host_t *ha;
int size = 0;
- int targ_lun;
scsi_lu_t *up;
- int no_devices;
+ int len = 0;
+ qla_boards_t *bdp;
+ uint32_t b, t, l;
- printk("Entering proc_info 0x%p,0x%lx,0x%x,0x%x\n",buffer,offset,length,hostno);
host = NULL;
- /* find the host they want to look at */
- for(ha=qla1280_hostlist; (ha != NULL) && ha->host->host_no != hostno; ha=ha->next)
+
+ /* Find the host that was specified */
+ for( ha=qla1280_hostlist; (ha != NULL) && ha->host->host_no != hostno; ha=ha->next )
;
- if (!ha)
- {
- size += sprintf(buffer, "Can't find adapter for host number %d\n", hostno);
- if (size > length)
- {
+ /* if host wasn't found then exit */
+ if( !ha ) {
+ size = sprintf(buffer, "Can't find adapter for host number %d\n", hostno);
+ if( size > length ) {
return (size);
- }
- else
- {
- return (length);
+ } else {
+ return (0);
}
}
host = ha->host;
- if (inout == TRUE) /* Has data been written to the file? */
- {
- return (qla1280_set_info(buffer, length, host));
- }
- /* compute number of active devices */
- no_devices = 0;
- for (targ_lun = 0; targ_lun < MAX_EQ; targ_lun++)
+ if( inout == TRUE ) /* Has data been written to the file? */
{
- if( (up = ha->dev[targ_lun]) == NULL )
- continue;
- no_devices++;
+ printk("qla1280_proc: has data been written to the file. \n");
+ return (qla1280_set_info(buffer, length, host));
}
- /* size = 112 * no_devices; */
- size = 4096;
- /* round up to the next page */
/*
* if our old buffer is the right size use it otherwise
* allocate a new one.
*/
- if (qla1280_buffer_size != size)
- {
+ size = 4096; /* get a page */
+ if( qla1280_buffer_size != size ) {
/* deallocate this buffer and get a new one */
- if (qla1280_buffer != NULL)
- {
+ if( qla1280_buffer != NULL ) {
kfree(qla1280_buffer);
qla1280_buffer_size = 0;
}
qla1280_buffer = kmalloc(size, GFP_KERNEL);
}
- if (qla1280_buffer == NULL)
- {
+ if( qla1280_buffer == NULL ) {
size = sprintf(buffer, "qla1280 - kmalloc error at line %d\n",
__LINE__);
return size;
}
+ /* save the size of our buffer */
qla1280_buffer_size = size;
- size = 0;
- size += sprintf(PROC_BUF, "Qlogic 1280/1080 SCSI driver version: "); /* 43 bytes */
- size += sprintf(PROC_BUF, "%5s, ", QLA1280_VERSION); /* 5 */
- size += sprintf(PROC_BUF, "Qlogic Firmware version: "); /* 25 */
size += sprintf(PROC_BUF, "%2d.%2d.%2d",ql12_firmware_version[0], /* 8 */
- ql12_firmware_version[1],
- ql12_firmware_version[2]);
- size += sprintf(PROC_BUF, "\n"); /* 1 */
-
- size += sprintf(PROC_BUF, "SCSI Host Adapter Information: %s\n", QLBoardTbl[ha->devnum].bdName);
- size += sprintf(PROC_BUF, "Request Queue = 0x%lx, Response Queue = 0x%lx\n",
+ /* start building the print buffer */
+ bdp = &QL1280BoardTbl[ha->devnum];
+ size = sprintf(PROC_BUF,
+ "QLogic PCI to SCSI Adapter for ISP 1280/12160:\n"
+ " Firmware version: %2d.%02d.%02d, Driver version %s\n", bdp->fwver[0], bdp->fwver[1], bdp->fwver[2], QLA1280_VERSION);
+
+ len += size;
+
+ size = sprintf(PROC_BUF, "SCSI Host Adapter Information: %s\n", bdp->bdName);
+ len += size;
+ size = sprintf(PROC_BUF, "Request Queue = 0x%lx, Response Queue = 0x%lx\n",
ha->request_dma,
ha->response_dma);
- size += sprintf(PROC_BUF, "Request Queue count= 0x%x, Response Queue count= 0x%x\n",
+ len += size;
+ size = sprintf(PROC_BUF, "Request Queue count= 0x%lx, Response Queue count= 0x%lx\n",
REQUEST_ENTRY_CNT,
RESPONSE_ENTRY_CNT);
- size += sprintf(PROC_BUF,"Number of pending commands = 0x%lx\n", ha->actthreads);
- size += sprintf(PROC_BUF,"Number of queued commands = 0x%lx\n", ha->qthreads);
- size += sprintf(PROC_BUF,"Number of free request entries = %d\n",ha->req_q_cnt);
- size += sprintf(PROC_BUF, "\n"); /* 1 */
+ len += size;
+ size = sprintf(PROC_BUF, "Number of pending commands = 0x%lx\n", ha->actthreads);
+ len += size;
+ size = sprintf(PROC_BUF, "Number of queued commands = 0x%lx\n", ha->qthreads);
+ len += size;
+ size = sprintf(PROC_BUF, "Number of free request entries = %d\n",ha->req_q_cnt);
+ len += size;
+ size = sprintf(PROC_BUF, "\n"); /* 1 */
+ len += size;
- size += sprintf(PROC_BUF, "Attached devices:\n");
+ size = sprintf(PROC_BUF, "SCSI device Information:\n");
+ len += size;
/* scan for all equipment stats */
- for (targ_lun = 0; targ_lun < MAX_EQ; targ_lun++)
- {
- if( (up = ha->dev[targ_lun]) == NULL )
+ for (b = 0; b < MAX_BUSES; b++)
+ for (t = 0; t < MAX_TARGETS; t++) {
+ for( l = 0; l < MAX_LUNS; l++ ) {
+ up = (scsi_lu_t *) LU_Q(ha, b, t, l);
+ if( up == NULL )
continue;
- if( up->io_cnt = 0 )
- {
- size += sprintf(PROC_BUF,"(%2d:%2d:%2d) No stats\n",LUN_ID);
+ /* unused device/lun */
+ if( up->io_cnt == 0 || up->io_cnt < 2 )
continue;
- }
/* total reads since boot */
/* total writes since boot */
/* total requests since boot */
- size += sprintf(PROC_BUF, "Total requests %ld,",up->io_cnt);
+ size = sprintf(PROC_BUF, "(%2d:%2d:%2d): Total reqs %ld,",b,t,l,up->io_cnt);
+ len += size;
/* current number of pending requests */
- size += sprintf(PROC_BUF, "(%2d:%2d:%2d) pending requests %d,",LUN_ID,up->q_outcnt);
+ size = sprintf(PROC_BUF, " Pend reqs %d,",up->q_outcnt);
+ len += size;
+#if 0
/* avg response time */
- size += sprintf(PROC_BUF, "Avg response time %ld%%,",(up->resp_time/up->io_cnt)*100);
+ size = sprintf(PROC_BUF, " Avg resp time %ld%%,",(up->resp_time/up->io_cnt)*100);
+ len += size;
/* avg active time */
- size += sprintf(PROC_BUF, "Avg active time %ld%%\n",(up->act_time/up->io_cnt)*100);
+ size = sprintf(PROC_BUF, " Avg active time %ld%%\n",(up->act_time/up->io_cnt)*100);
+#else
+ size = sprintf(PROC_BUF, "\n");
+#endif
+ len += size;
+ }
+ if( len >= qla1280_buffer_size )
+ break;
}
- if (size >= qla1280_buffer_size)
- {
+ if( len >= qla1280_buffer_size ) {
printk(KERN_WARNING "qla1280: Overflow buffer in qla1280_proc.c\n");
}
- if (offset > size - 1)
- {
+ if( offset > len - 1 ) {
kfree(qla1280_buffer);
qla1280_buffer = NULL;
qla1280_buffer_size = length = 0;
*start = NULL;
- }
- else
- {
+ } else {
*start = &qla1280_buffer[offset]; /* Start of wanted data */
- if (size - offset < length)
- {
- length = size - offset;
+ if( len - offset < length ) {
+ length = len - offset;
}
}
+ return (length);
+#else
+ return (0);
#endif
- return (length);
}
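The proc_info rewrite above switches from `size +=` to the `len += size` idiom so the running output length survives each sprintf call. A plain-C userspace sketch of that pattern (PROC_BUF is assumed to expand to something like `&buffer[len]`; the names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static char buffer[256];
static int len;

/* Assumed shape of the driver's PROC_BUF macro: write at the
 * current end of the accumulated output. */
#define PROC_BUF (&buffer[len])

static int build_proc_output(void)
{
    int size;

    len = 0;
    size = sprintf(PROC_BUF, "Pending commands = 0x%x\n", 0x2a);
    len += size;                  /* advance past what we just wrote */
    size = sprintf(PROC_BUF, "Queued commands  = 0x%x\n", 0x07);
    len += size;
    return len;                   /* total bytes now in buffer */
}
```

Each sprintf returns the number of characters written, so accumulating into `len` after every call keeps PROC_BUF pointed at the first unused byte, which is exactly why the patch adds a `len += size;` after each formatting step.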
-
/**************************************************************************
* qla1280_detect
* This routine will probe for Qlogic 1280 SCSI host adapters.
@@ -735,6 +657,9 @@
scsi_qla_host_t *ha, *cur_ha;
struct _qlaboards *bdp;
int i, j;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ unsigned short subsys;
+#endif
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,1,95)
unsigned int piobase;
unsigned char pci_bus, pci_devfn, pci_irq;
@@ -747,13 +672,19 @@
#else
int index;
#endif
+#ifndef PCI_VENDOR_ID_AMI
+#define PCI_VENDOR_ID_AMI 0x101e
+#endif
ENTER("qla1280_detect");
+ if (sizeof(srb_t) > sizeof(Scsi_Pointer) )
+ printk("qla1280_detect: [WARNING] srb_t Must Be Redefined");
+
#ifdef CHECKSRBSIZE
if (sizeof(srb_t) > sizeof(Scsi_Pointer) )
{
- printk("Redefine SRB - its too big");
- printk("qla1280_detect: srb_t Must Be Redefined - it's too big");
return 0;
}
#endif
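Nearly every hunk in this patch branches on `LINUX_VERSION_CODE` against `KERNEL_VERSION(2,3,18)`, the boundary where the PCI DMA mapping API appeared. For readers following along, the macro is just a packed integer, sketched here as a userspace stand-in (not the kernel's `<linux/version.h>` itself):

```c
#include <assert.h>

/* Userspace stand-in: each version component packed into one byte,
 * so ordinary integer comparison orders kernel versions. */
#define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))

/* Pretend we are building against 2.3.18 for this sketch. */
#define LINUX_VERSION_CODE KERNEL_VERSION(2, 3, 18)

int has_pci_dma_api(void)
{
    /* pci_map_sg()/pci_alloc_consistent() arrived around 2.3.18,
     * which is why the driver branches on this exact boundary. */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 3, 18)
    return 1;
#else
    return 0;
#endif
}
```

Because the packing is monotonic, `KERNEL_VERSION(2,1,95) < KERNEL_VERSION(2,3,18)` holds as a plain integer comparison, and the preprocessor can evaluate it at compile time.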
@@ -784,41 +715,77 @@
"qla1280: insmod or else it might trash certain memory areas.\n");
#endif
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
if ((int) !pcibios_present())
+#else
+ if (!pci_present())
+#endif
{
- printk("scsi: PCI not present\n");
- return 0;
- } /* end of IF */
- bdp = &QLBoardTbl[0];
+ printk("scsi: PCI not present\n");
+ return 0;
+ }
+
+ bdp = &QL1280BoardTbl[0];
qla1280_hostlist = NULL;
-#if 0
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
template->proc_dir = &proc_scsi_qla1280;
#else
- template->proc_name = "qla1280";
+ template->proc_name = "qla1x80";
#endif
-
/* Try and find each different type of adapter we support */
for( i=0; bdp->device_id != 0 && i < NUM_OF_ISP_DEVICES; i++, bdp++ ) {
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,95)
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
+ /* PCI_SUBSYSTEM_IDS supported */
+ while ((pdev = pci_find_subsys(QLA1280_VENDOR_ID,
+ bdp->device_id, PCI_ANY_ID, PCI_ANY_ID, pdev) )) {
+ if (pci_enable_device(pdev)) continue;
+#else
while ((pdev = pci_find_device(QLA1280_VENDOR_ID,
bdp->device_id, pdev ) )) {
- if (pci_enable_device(pdev)) continue;
-#else
+#endif /* 2,3,18 */
+#else /* less than 2,1,95 */
while (!(pcibios_find_device(QLA1280_VENDOR_ID,
bdp->device_id,
index++, &pci_bus, &pci_devfn)) ) {
-#endif
+#endif /* 2,1,95 */
/* found an adapter */
- template->unchecked_isa_dma = 1;
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
+ printk("qla1280: detect() found an HBA\n");
+ printk("qla1280: VID=%x DID=%x SSVID=%x SSDID=%x\n",
+ pdev->vendor, pdev->device,
+ pdev->subsystem_vendor, pdev->subsystem_device);
+ /* If it's an AMI SubSys Vendor ID adapter, skip it. */
+ if (pdev->subsystem_vendor == PCI_VENDOR_ID_AMI)
+ {
+ printk("qla1280: Skip AMI SubSys Vendor ID Chip\n");
+ continue;
+ }
+#else
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,95)
+ pci_read_config_word(pdev, PCI_SUBSYSTEM_VENDOR_ID,
+ &subsys);
+ /* Bypass all AMI SUBSYS VENDOR IDs */
+ if (subsys == PCI_VENDOR_ID_AMI)
+ {
+ printk("qla1280: Skip AMI SubSys Vendor ID Chip\n");
+ continue;
+ }
+#endif /* 2,1,95 */
+#endif /* 2,3,18 */
host = scsi_register(template, sizeof(scsi_qla_host_t));
ha = (scsi_qla_host_t *) host->hostdata;
/* Clear our data area */
for( j =0, cp = (char *)ha; j < sizeof(scsi_qla_host_t); j++)
- *cp = 0;
+ *cp++ = 0;
/* Sanitize the information from PCI BIOS. */
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,95)
host->irq = pdev->irq;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ host->io_port = (unsigned int) pdev->base_address[0];
+#else
host->io_port = pci_resource_start(pdev, 0);
+#endif
ha->pci_bus = pdev->bus->number;
ha->pci_device_fn = pdev->devfn;
ha->pdev = pdev;
@@ -835,35 +802,35 @@
ha->devnum = i;
if( qla1280_mem_alloc(ha) ) {
- printk(KERN_INFO "qla1280: Failed to allocate memory for adapter\n");
+ printk(KERN_INFO "qla1280: Failed to get memory\n");
}
ha->ports = bdp->numPorts;
+ /* following needed for all cases of OS versions */
+ host->io_port &= PCI_BASE_ADDRESS_IO_MASK;
ha->iobase = (device_reg_t *) host->io_port;
ha->host = host;
ha->host_no = host->host_no;
/* load the F/W, read parameters, and init the H/W */
+ ha->instance = num_hosts;
if (qla1280_initialize_adapter(ha))
{
-
- printk(KERN_INFO "qla1280: Failed to initialized adapter\n");
- qla1280_mem_free(ha);
- scsi_unregister(host);
- continue;
+ printk(KERN_INFO "qla1280: Failed to initialize adapter\n");
+ qla1280_mem_free(ha);
+ scsi_unregister(host);
+ continue;
}
host->max_channel = bdp->numPorts-1;
- ha->instance = num_hosts;
/* Register our resources with Linux */
if( qla1280_register_with_Linux(ha, bdp->numPorts-1) ) {
- printk(KERN_INFO "qla1280: Failed to register our resources\n");
- qla1280_mem_free(ha);
- scsi_unregister(host);
- continue;
+ printk(KERN_INFO "qla1280: Failed to register resources\n");
+ qla1280_mem_free(ha);
+ scsi_unregister(host);
+ continue;
}
-
reg = ha->iobase;
/* Disable ISP interrupts. */
qla1280_disable_intrs(ha);
@@ -920,8 +887,11 @@
host->can_queue = 0xfffff; /* unlimited */
host->cmd_per_lun = 1;
host->select_queue_depths = qla1280_select_queue_depth;
- host->n_io_port = 0xFF;
- host->base = (unsigned long) ha->mmpbase;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ host->base = (unsigned char *) ha->mmpbase;
+#else
+ host->base = (u_long) ha->mmpbase;
+#endif
host->max_channel = maxchannels;
host->max_lun = MAX_LUNS-1;
host->unique_id = ha->instance;
@@ -1012,10 +982,10 @@
bp = &qla1280_buffer[0];
ha = (scsi_qla_host_t *)host->hostdata;
- bdp = &QLBoardTbl[ha->devnum];
+ bdp = &QL1280BoardTbl[ha->devnum];
memset(bp, 0, sizeof(qla1280_buffer));
sprintf(bp,
- "QLogic %sPCI to SCSI Host Adapter: bus %d device %d irq %d\n"
+ "QLogic %s PCI to SCSI Host Adapter: bus %d device %d irq %d\n"
" Firmware version: %2d.%02d.%02d, Driver version %s",
(char *)&bdp->bdName[0], ha->pci_bus, (ha->pci_device_fn & 0xf8) >> 3, host->irq,
bdp->fwver[0],bdp->fwver[1],bdp->fwver[2],
@@ -1047,8 +1017,8 @@
scsi_lu_t *q;
u_long handle;
- ENTER("qla1280_queuecommand");
- COMTRACE('C')
+ /*ENTER("qla1280_queuecommand");
+ COMTRACE('C')*/
host = cmd->host;
ha = (scsi_qla_host_t *) host->hostdata;
@@ -1075,7 +1045,7 @@
{
LU_Q(ha, b, t, l) = q;
BZERO(q,sizeof(struct scsi_lu));
- DEBUG(sprintf(debug_buff,"Allocate new device queue 0x%x\n",q));
+ DEBUG(sprintf(debug_buff,"Allocate new device queue 0x%x\n\r",q));
DEBUG(qla1280_print(debug_buff));
DRIVER_UNLOCK
}
@@ -1094,15 +1064,13 @@
handle = INVALID_HANDLE;
CMD_HANDLE(cmd) = (unsigned char *)handle;
- /* Bookkeeping information */
- sp->r_start = jiffies; /* time the request was recieved */
- sp->u_start = 0;
-
/* add the command to our queue */
ha->qthreads++;
qla1280_putq_t(q,sp);
- DEBUG(sprintf(debug_buff,"qla1280_queuecmd: queue pid=%d, hndl=0x%x\n\r",cmd->pid,handle));
+ DEBUG(sprintf(debug_buff,
+ "qla1280_QC: t=%x CDB=%x I/OSize=0x%x haQueueCount=0x%x\n\r",
+ t,cmd->cmnd[0],CMD_XFRLEN(cmd),ha->qthreads));
DEBUG(qla1280_print(debug_buff));
/* send command to adapter */
@@ -1112,7 +1080,7 @@
DRIVER_UNLOCK
- LEAVE("qla1280_queuecommand");
+ /*LEAVE("qla1280_queuecommand");*/
return (0);
}
@@ -1723,6 +1691,7 @@
scsi_lu_t *q;
uint32_t b, t, l;
Scsi_Cmnd *cmd;
+
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,95)
unsigned long cpu_flags = 0;
#endif
@@ -1739,7 +1708,8 @@
*done_q_last = NULL;
else
(*done_q_first)->s_prev = NULL;
- cmd = sp->cmd;
+
+ cmd = sp->cmd;
b = SCSI_BUS_32(cmd);
t = SCSI_TCN_32(cmd);
l = SCSI_LUN_32(cmd);
@@ -1753,8 +1723,6 @@
q->q_flag &= ~QLA1280_QBUSY;
}
- q->resp_time += jiffies - sp->r_start; /* Lun bookkeeping information */
- q->act_time += jiffies - sp->u_start;
q->io_cnt++;
if( sp->dir & BIT_5 )
q->r_cnt++;
@@ -1777,7 +1745,28 @@
default:
break;
}
-
+ /* 3.13 64 and 32 bit */
+ /* Release memory used for this I/O */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
+ if (cmd->use_sg) {
+ DEBUG(sprintf(debug_buff,
+ "S/G unmap_sg cmd=%x\n\r",cmd);)
+ DEBUG(qla1280_print(debug_buff));
+ pci_unmap_sg(ha->pdev, cmd->request_buffer,
+ cmd->use_sg,
+ scsi_to_pci_dma_dir(cmd->sc_data_direction));
+ }
+ else if (cmd->request_bufflen) {
+ DEBUG(sprintf(debug_buff,
+ "No S/G unmap_single cmd=%x saved_dma_handle=%lx\n\r",
+ cmd,sp->saved_dma_handle);)
+ DEBUG(qla1280_print(debug_buff);)
+
+ pci_unmap_single(ha->pdev,sp->saved_dma_handle,
+ cmd->request_bufflen,
+ scsi_to_pci_dma_dir(cmd->sc_data_direction));
+ }
+#endif
/* Call the mid-level driver interrupt handler */
CMD_HANDLE(sp->cmd) = (unsigned char *) 0;
ha->actthreads--;
@@ -1791,8 +1780,6 @@
qla1280_next(ha, q, b);
}
DRIVER_UNLOCK
-
-
COMTRACE('d')
LEAVE("qla1280_done");
}
@@ -1964,7 +1951,7 @@
if (q->q_outcnt >= ha->bus_settings[b].hiwat)
q->q_flag |= QLA1280_QBUSY;
-#if QLA1280_64BIT_SUPPORT
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
if (ha->flags.enable_64bit_addressing)
status = qla1280_64bit_start_scsi(ha, sp);
else
@@ -1981,7 +1968,7 @@
/* Wait for 30 sec for command to be accepted. */
for (cnt = 6000000; cnt; cnt--)
{
-#if QLA1280_64BIT_SUPPORT
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
if (ha->flags.enable_64bit_addressing)
status = qla1280_64bit_start_scsi(ha, sp);
else
@@ -2072,7 +2059,7 @@
ENTER("qla1280_putq_t");
#endif
DRIVER_LOCK
- DEBUG(sprintf(debug_buff,"Adding to device 0x%p<-(0x%p)\n\r",q,sp));
+ DEBUG(sprintf(debug_buff,"Adding to device q=0x%p<-(0x%p)sp\n\r",q,sp));
DEBUG(qla1280_print(debug_buff));
sp->s_next = NULL;
if (!q->q_first) /* If queue empty */
@@ -2157,28 +2144,33 @@
{
uint8_t status = 1;
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
+ dma_addr_t dma_handle;
+#endif
#ifdef QL_DEBUG_LEVEL_3
ENTER("qla1280_mem_alloc");
#endif
-#ifdef DYNAMIC_MEM_ALLOC
- ha->request_ring = qla1280_alloc_phys(REQUEST_ENTRY_SIZE * REQUEST_ENTRY_CNT,
- &ha->request_dma);
- if(ha->request_ring) {
- ha->response_ring = qla1280_alloc_phys(RESPONSE_ENTRY_SIZE * RESPONSE_ENTRY_CNT,
- &ha->response_dma);
- if(ha->response_ring) {
- status = 0;
- }
- }
-#else
+ /* 3.13 */
+ /* get consistent memory allocated for request and response rings */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
ha->request_ring = &ha->req[0];
ha->request_dma = VIRT_TO_BUS(&ha->req[0]);
ha->response_ring = &ha->res[0];
ha->response_dma = VIRT_TO_BUS(&ha->res[0]);
status = 0;
-#endif
+#else
+ ha->request_ring = pci_alloc_consistent(ha->pdev,
+ ((REQUEST_ENTRY_CNT+1)*(sizeof(request_t))),
+ &dma_handle);
+ ha->request_dma = dma_handle;
+ ha->response_ring = pci_alloc_consistent(ha->pdev,
+ ((RESPONSE_ENTRY_CNT+1)*(sizeof(response_t))),
+ &dma_handle);
+ ha->response_dma = dma_handle;
+ status = 0;
+#endif
if(status) {
#if defined(QL_DEBUG_LEVEL_2) || defined(QL_DEBUG_LEVEL_3)
@@ -2222,6 +2214,16 @@
ha->dev[b] = (scsi_lu_t *)NULL;
}
+ /* 3.13 */
+ /* free consistent memory allocated for request and response rings */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
+ pci_free_consistent(ha->pdev, ((REQUEST_ENTRY_CNT+1)*(sizeof(request_t))),
+ ha->request_ring, ha->request_dma);
+
+ pci_free_consistent(ha->pdev,((RESPONSE_ENTRY_CNT+1)*(sizeof(response_t))),
+ ha->response_ring, ha->response_dma);
+#endif
+
LEAVE("qla1280_mem_free");
}
@@ -2482,7 +2484,7 @@
/* Verify checksum of loaded RISC code. */
mb[0] = MBC_VERIFY_CHECKSUM;
/* mb[1] = ql12_risc_code_addr01; */
- mb[1] = *QLBoardTbl[ha->devnum].fwstart;
+ mb[1] = *QL1280BoardTbl[ha->devnum].fwstart;
if (!(status = qla1280_mailbox_command(ha, BIT_1|BIT_0, &mb[0])))
{
@@ -2492,7 +2494,7 @@
#endif
mb[0] = MBC_EXECUTE_FIRMWARE;
/* mb[1] = ql12_risc_code_addr01; */
- mb[1] = *QLBoardTbl[ha->devnum].fwstart;
+ mb[1] = *QL1280BoardTbl[ha->devnum].fwstart;
qla1280_mailbox_command(ha, BIT_1|BIT_0, &mb[0]);
}
else
@@ -2527,18 +2529,69 @@
qla1280_pci_config(scsi_qla_host_t *ha)
{
uint8_t status = 1;
- uint32_t command;
#if MEMORY_MAPPED_IO
uint32_t page_offset, base;
uint32_t mmapbase;
#endif
- config_reg_t *creg = 0;
uint16_t buf_wd;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ uint32_t command;
+ config_reg_t *creg = 0;
+#endif
+
ENTER("qla1280_pci_config");
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
+ /*
+ * Set Bus Master Enable, Memory Address Space Enable and
+ * reset any error bits, in the command register.
+ */
+ pci_read_config_word(ha->pdev, PCI_COMMAND, &buf_wd);
+ buf_wd &= ~0x7;
+#if MEMORY_MAPPED_IO
+ DEBUG(printk("qla1280: MEMORY MAPPED IO is enabled.\n"));
+ buf_wd |= BIT_2 + BIT_1 + BIT_0;
+#else
+ buf_wd |= BIT_2 + BIT_0;
+#endif
+ pci_write_config_word(ha->pdev, PCI_COMMAND, buf_wd);
+ /*
+ * Reset expansion ROM address decode enable.
+ */
+ pci_read_config_word(ha->pdev, PCI_ROM_ADDRESS, &buf_wd);
+ buf_wd &= ~PCI_ROM_ADDRESS_ENABLE;
+ pci_write_config_word(ha->pdev, PCI_ROM_ADDRESS, buf_wd);
+#if MEMORY_MAPPED_IO
+ /*
+ * Get memory mapped I/O address.
+ */
+ pci_read_config_dword(ha->pdev, PCI_BASE_ADDRESS_1, &mmapbase);
+ mmapbase &= PCI_BASE_ADDRESS_MEM_MASK;
+
+ /*
+ * Find proper memory chunk for memory map I/O reg.
+ */
+ base = mmapbase & PAGE_MASK;
+ page_offset = mmapbase - base;
+ /*
+ * Get virtual address for I/O registers.
+ */
+ ha->mmpbase = ioremap_nocache(base, page_offset + 256);
+ if( ha->mmpbase )
+ {
+ ha->mmpbase += page_offset;
+ /* ha->iobase = ha->mmpbase; */
+ status = 0;
+ }
+#else /* MEMORY_MAPPED_IO */
+ status = 0;
+#endif /* MEMORY_MAPPED_IO */
+
+#else /*LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18) */
+
/* Get command register. */
- if (pci_read_config_word(ha->pdev,OFFSET(creg->command), &buf_wd) = PCIBIOS_SUCCESSFUL)
+ if (pcibios_read_config_word(ha->pci_bus,ha->pci_device_fn, OFFSET(creg->command), &buf_wd) == PCIBIOS_SUCCESSFUL)
{
command = buf_wd;
/*
@@ -2552,20 +2605,20 @@
#else
buf_wd |= BIT_2 + BIT_0;
#endif
- if( pci_write_config_word(ha->pdev,OFFSET(creg->command), buf_wd) )
+ if( pcibios_write_config_word(ha->pci_bus,ha->pci_device_fn, OFFSET(creg->command), buf_wd) )
{
printk(KERN_WARNING "qla1280: Could not write config word.\n");
}
/* Get expansion ROM address. */
- if (pci_read_config_word(ha->pdev,OFFSET(creg->expansion_rom), &buf_wd) = PCIBIOS_SUCCESSFUL)
+ if (pcibios_read_config_word(ha->pci_bus,ha->pci_device_fn, OFFSET(creg->expansion_rom), &buf_wd) == PCIBIOS_SUCCESSFUL)
{
/* Reset expansion ROM address decode enable. */
buf_wd &= ~BIT_0;
- if (pci_write_config_word(ha->pdev,OFFSET(creg->expansion_rom), buf_wd) = PCIBIOS_SUCCESSFUL)
+ if (pcibios_write_config_word(ha->pci_bus,ha->pci_device_fn, OFFSET(creg->expansion_rom), buf_wd) == PCIBIOS_SUCCESSFUL)
{
#if MEMORY_MAPPED_IO
/* Get memory mapped I/O address. */
- pci_read_config_dword(ha->pdev,OFFSET(cfgp->mem_base_addr), &mmapbase);
+ pcibios_read_config_dword(ha->pci_bus, ha->pci_device_fn,OFFSET(cfgp->mem_base_addr), &mmapbase);
mmapbase &= PCI_BASE_ADDRESS_MEM_MASK;
/* Find proper memory chunk for memory map I/O reg. */
@@ -2589,6 +2642,7 @@
}
}
}
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18) */
LEAVE("qla1280_pci_config");
return(status);
@@ -2719,6 +2773,7 @@
* Returns:
* 0 = success.
*/
+#define DUMP_IT_BACK 0 /* for debug of RISC loading */
STATIC uint8_t
qla1280_setup_chip(scsi_qla_host_t *ha)
{
@@ -2727,37 +2782,52 @@
uint16_t *risc_code_address;
long risc_code_size;
uint16_t mb[MAILBOX_REGISTER_COUNT];
-#ifdef QLA1280_UNUSED
- uint8_t *sp;
- int i;
-#endif
uint16_t cnt;
int num;
+#if DUMP_IT_BACK
+ int i;
+ uint8_t *sp;
uint8_t *tbuf;
+#if BITS_PER_LONG > 32
u_long p_tbuf;
+#else
+ uint32_t p_tbuf;
+#endif
+#endif
#ifdef QL_DEBUG_LEVEL_3
ENTER("qla1280_setup_chip");
#endif
- if( (tbuf = (uint8_t *)KMALLOC(8000) ) = NULL )
- {
- printk("setup_chip: couldn't alloacte memory\n");
- return(1);
- }
- p_tbuf = VIRT_TO_BUS(tbuf);
+ /* 3.13 */
+#if DUMP_IT_BACK
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ if( (tbuf = (uint8_t *)KMALLOC(8000) ) == NULL )
+ {
+ printk("setup_chip: couldn't allocate memory\n");
+ return(1);
+ }
+ p_tbuf = VIRT_TO_BUS(tbuf);
+#else
+ /* get consistent memory allocated for setup_chip */
+ tbuf = pci_alloc_consistent(ha->pdev, 8000, &p_tbuf);
+#endif
+#endif
+
/* Load RISC code. */
/*
risc_address = ql12_risc_code_addr01;
risc_code_address = &ql12_risc_code01[0];
risc_code_size = ql12_risc_code_length01;
*/
- risc_address = *QLBoardTbl[ha->devnum].fwstart;
- risc_code_address = QLBoardTbl[ha->devnum].fwcode;
- risc_code_size = (long)(*QLBoardTbl[ha->devnum].fwlen & 0xffff);
-
- DEBUG(printk("qla1280: DMAing RISC code (%d) words.\n",(int)risc_code_size));
- DEBUG(sprintf(debug_buff,"qla1280_setup_chip: Loading RISC code size =(%ld).\n\r",risc_code_size);)
+ risc_address = *QL1280BoardTbl[ha->devnum].fwstart;
+ risc_code_address = QL1280BoardTbl[ha->devnum].fwcode;
+ risc_code_size = (long)(*QL1280BoardTbl[ha->devnum].fwlen & 0xffff);
+
+ DEBUG(printk("qla1280_setup_chip: DMA RISC code (%d) words\n",
+ (int)risc_code_size));
+ DEBUG(sprintf(debug_buff,
+ "qla1280_setup_chip: DMA RISC code (%d) words\n\r",risc_code_size);)
DEBUG(qla1280_print(debug_buff));
num =0;
while (risc_code_size > 0 && !status)
@@ -2767,29 +2837,31 @@
if ( cnt > risc_code_size )
cnt = risc_code_size;
- DEBUG(sprintf(debug_buff,"qla1280_setup_chip: loading risc @ =(0x%p),%d,%d(0x%x).\n\r",risc_code_address,cnt,num,risc_address);)
+ DEBUG(sprintf(debug_buff,
+ "qla1280_setup_chip: loading risc @ =(0x%p),%d,%d(0x%x).\n\r",
+ risc_code_address,cnt,num,risc_address);)
DEBUG(qla1280_print(debug_buff));
- DEBUG(printk("qla1280_setup_chip: loading risc @ =code=(0x%p),cnt=%d,seg=%d,addr=0x%x\n\r",risc_code_address,cnt,num,risc_address));
- BCOPY((caddr_t) risc_code_address,(caddr_t) ha->request_ring, (cnt <<1));
+ BCOPY((caddr_t) risc_code_address,(caddr_t) ha->request_ring,
+ (cnt <<1));
+
+ flush_cache_all();
+
mb[0] = MBC_LOAD_RAM;
- /* mb[0] = MBC_LOAD_RAM_A64; */
mb[1] = risc_address;
mb[4] = cnt;
mb[3] = (uint16_t) ha->request_dma & 0xffff;
mb[2] = (uint16_t) (ha->request_dma >> 16) & 0xffff;
mb[7] = (uint16_t) (MS_64BITS(ha->request_dma) & 0xffff);
mb[6] = (uint16_t) (MS_64BITS(ha->request_dma) >> 16) & 0xffff;
- DEBUG(printk("qla1280_setup_chip: op=%d 0x%lx = 0x%4x,0x%4x,0x%4x,0x%4x\n",mb[0],ha->request_dma,mb[6],mb[7],mb[2],mb[3]));
+ DEBUG(printk("qla1280_setup_chip: op=%d 0x%p = 0x%4x,0x%4x,0x%4x,0x%4x\n",mb[0],ha->request_dma,mb[6],mb[7],mb[2],mb[3]));
if( (status = qla1280_mailbox_command(ha, BIT_4|BIT_3|BIT_2|BIT_1|BIT_0,
&mb[0])) )
{
printk("Failed to load partial segment of f/w\n");
break;
}
- /* dump it back */
-
-#if 0
- mb[0] = MBC_DUMP_RAM_A64;
+#if DUMP_IT_BACK
+ mb[0] = MBC_READ_RAM_WORD;
mb[1] = risc_address;
mb[4] = cnt;
mb[3] = (uint16_t) p_tbuf & 0xffff;
@@ -2797,10 +2869,13 @@
mb[7] = (uint16_t) (p_tbuf >> 32) & 0xffff;
mb[6] = (uint16_t) (p_tbuf >> 48) & 0xffff;
- if( (status = qla1280_mailbox_command(ha, BIT_4|BIT_3|BIT_2|BIT_1|BIT_0,
- &mb[0])) )
+ if( (status = qla1280_mailbox_command(ha,
+ BIT_4|BIT_3|BIT_2|BIT_1|BIT_0,&mb[0])) )
{
printk("Failed to dump partial segment of f/w\n");
+ DEBUG(sprintf(debug_buff,
+ "setup_chip: Failed to dump partial segment of f/w\n\r");)
+ DEBUG(qla1280_print(debug_buff));
break;
}
sp = (uint8_t *)ha->request_ring;
@@ -2808,51 +2883,20 @@
{
if( tbuf[i] != sp[i] )
{
- printk("qla1280 : firmware compare error @ byte (0x%x)\n",i);
- break;
+ printk("qla1280_setup_chip: FW compare error @ byte(0x%x) loop#=%x\n",i,num);
+ printk("setup_chip: FWbyte=%x FWfromChip=%x\n",sp[i],tbuf[i]);
+ DEBUG(sprintf(debug_buff,
+ "qla1280_setup_chip: FW compare error @ byte(0x%x) loop#=%x\n\r",i);)
+ DEBUG(qla1280_print(debug_buff);)
+ /*break;*/
}
}
-
#endif
risc_address += cnt;
risc_code_size = risc_code_size - cnt;
risc_code_address = risc_code_address + cnt;
num++;
}
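The MBC_LOAD_RAM setup above scatters the 64-bit DMA address of the request ring across four 16-bit mailbox registers: mb[3]/mb[2] carry the low 32 bits, mb[7]/mb[6] the high 32 (MS_64BITS is assumed to mean the upper 32 bits). A userspace sketch of that packing, with a reassembly check to show the split is lossless:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed meaning of the driver's MS_64BITS macro: upper 32 bits. */
#define MS_64BITS(x) ((uint32_t)((uint64_t)(x) >> 32))

/* Pack a 64-bit bus address into mailbox words the way
 * qla1280_setup_chip() does before issuing MBC_LOAD_RAM. */
static void pack_dma(uint64_t dma, uint16_t mb[8])
{
    mb[3] = (uint16_t)(dma & 0xffff);                  /* bits  0-15 */
    mb[2] = (uint16_t)((dma >> 16) & 0xffff);          /* bits 16-31 */
    mb[7] = (uint16_t)(MS_64BITS(dma) & 0xffff);       /* bits 32-47 */
    mb[6] = (uint16_t)((MS_64BITS(dma) >> 16) & 0xffff);/* bits 48-63 */
}

/* Reassemble the address, to verify no bits are lost in the split. */
static uint64_t unpack_dma(const uint16_t mb[8])
{
    return ((uint64_t)mb[6] << 48) | ((uint64_t)mb[7] << 32) |
           ((uint64_t)mb[2] << 16) | (uint64_t)mb[3];
}
```

The same low-word-first convention explains the `pci_dma_lo32`/`pci_dma_hi32` pairs used when loading scatter/gather segments elsewhere in the patch.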
-#ifdef QLA1280_UNUSED
- DEBUG(ql_debug_print = 0;)
- {
- for (i = 0; i < ql12_risc_code_length01; i++)
- {
- mb[0] = 0x4;
- mb[1] = ql12_risc_code_addr01 + i;
- mb[2] = ql12_risc_code01[i];
-
- status = qla1280_mailbox_command(ha, BIT_2|BIT_1|BIT_0,
- &mb[0]);
- if (status)
- {
- printk("qla1280 : firmware load failure\n");
- break;
- }
-
- mb[0] = 0x5;
- mb[1] = ql12_risc_code_addr01 + i;
- mb[2] = 0;
-
- status = qla1280_mailbox_command(ha, BIT_2|BIT_1|BIT_0,
- &mb[0]);
- if (status)
- {
- printk("qla1280 : firmware dump failure\n");
- break;
- }
- if( mb[2] != ql12_risc_code01[i] )
- printk("qla1280 : firmware compare error @ (0x%x)\n",ql12_risc_code_addr01+i);
- }
- }
- DEBUG(ql_debug_print = 1;)
-#endif
/* Verify checksum of loaded RISC code. */
if (!status)
@@ -2860,22 +2904,29 @@
DEBUG(printk("qla1280_setup_chip: Verifying checksum of loaded RISC code.\n");)
mb[0] = MBC_VERIFY_CHECKSUM;
/* mb[1] = ql12_risc_code_addr01; */
- mb[1] = *QLBoardTbl[ha->devnum].fwstart;
+ mb[1] = *QL1280BoardTbl[ha->devnum].fwstart;
if (!(status = qla1280_mailbox_command(ha, BIT_1|BIT_0, &mb[0])))
{
/* Start firmware execution. */
DEBUG(qla1280_print("qla1280_setup_chip: start firmware running.\n\r");)
mb[0] = MBC_EXECUTE_FIRMWARE;
- /* mb[1] = ql12_risc_code_addr01; */
- mb[1] = *QLBoardTbl[ha->devnum].fwstart;
+ mb[1] = *QL1280BoardTbl[ha->devnum].fwstart;
qla1280_mailbox_command(ha, BIT_1|BIT_0, &mb[0]);
}
else
printk("qla1280_setup_chip: Failed checksum.\n");
}
- KMFREE(tbuf,8000);
+ /* 3.13 */
+#if DUMP_IT_BACK
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ KMFREE(tbuf,8000);
+#else
+ /* free consistent memory allocated for setup_chip */
+ pci_free_consistent(ha->pdev, 8000, tbuf, p_tbuf);
+#endif
+#endif
#if defined(QL_DEBUG_LEVEL_2) || defined(QL_DEBUG_LEVEL_3)
if (status)
@@ -3152,10 +3203,28 @@
/* Disable RISC load of firmware. */
ha->flags.disable_risc_code_load = nv->cntr_flags_1.disable_loading_risc_code;
+
/* Enable 64bit addressing. */
ha->flags.enable_64bit_addressing = nv->cntr_flags_1.enable_64bit_addressing;
+#if FORCE_64BIT_PCI_DMA
+ ha->flags.enable_64bit_addressing = 1;
+#endif
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
+ if (ha->flags.enable_64bit_addressing) {
+ printk("[[[ qla1x160: 64 Bit PCI Addressing Enabled ]]]\n");
+
+#if BITS_PER_LONG > 32
+ /* Update our PCI device dma_mask for full 64 bit mask */
+ //ha->pdev->dma_mask = (pci_dma_t) 0xffffffffffffffffull;
+ ha->pdev->dma_mask = 0xffffffffffffffffULL;
+
+#endif
+ }
+#endif
+
/* Set ISP hardware DMA burst */
mb[0] = nv->isp_config.c;
WRT_REG_WORD(®->cfg_1, mb[0]);
@@ -3838,7 +3907,7 @@
#endif
}
-#if QLA1280_64BIT_SUPPORT
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
/*
* qla1280_64bit_start_scsi
* The start SCSI is responsible for building request packets on
@@ -3863,10 +3932,13 @@
uint16_t seg_cnt;
struct scatterlist *sg = (struct scatterlist *) NULL;
uint32_t *dword_ptr;
+ dma_addr_t dma_handle;
-#ifdef QL_DEBUG_LEVEL_3
ENTER("qla1280_64bit_start_scsi:");
-#endif
+
+ DEBUG(sprintf(debug_buff,
+ "64bit_start: cmd=%x sp=%x CDB=%x\n\r",cmd,sp,cmd->cmnd[0]);)
+ DEBUG(qla1280_print(debug_buff));
if( qla1280_check_for_dead_scsi_bus(ha, sp) )
{
@@ -3877,9 +3949,10 @@
seg_cnt = 0;
req_cnt = 1;
if (cmd->use_sg)
- {
- seg_cnt = cmd->use_sg;
+ { /* 3.13 64 bit */
sg = (struct scatterlist *) cmd->request_buffer;
+ seg_cnt = pci_map_sg(ha->pdev,sg,cmd->use_sg,
+ scsi_to_pci_dma_dir(cmd->sc_data_direction));
if (seg_cnt > 2)
{
@@ -3890,7 +3963,7 @@
}
else if (cmd->request_bufflen) /* If data transfer. */
{
- DEBUG(printk("Single data transfer (0x%x)\n",cmd->request_bufflen));
+ DEBUG(printk("Single data transfer len=0x%x\n",cmd->request_bufflen));
seg_cnt = 1;
}
@@ -3951,7 +4024,7 @@
/* Load SCSI command packet. */
pkt->cdb_len = (uint16_t)CMD_CDBLEN(cmd);
BCOPY(&(CMD_CDBP(cmd)), pkt->scsi_cdb, pkt->cdb_len);
- DEBUG(printk("Build packet for command[0]=0x%x\n",pkt->scsi_cdb[0]));
+ //DEBUG(printk("Build packet for command[0]=0x%x\n",pkt->scsi_cdb[0]));
/*
* Load data segments.
@@ -3977,12 +4050,17 @@
/* Load command entry data segments. */
for (cnt = 0; cnt < 2 && seg_cnt; cnt++, seg_cnt--)
{
- DEBUG(sprintf(debug_buff,"SG Segment ap=0x%p, len=0x%x\n\r",sg->address,sg->length));
- DEBUG(qla1280_print(debug_buff));
- *dword_ptr++ = cpu_to_le32(VIRT_TO_BUS_LOW(sg->address));
- *dword_ptr++ = cpu_to_le32(VIRT_TO_BUS_HIGH(sg->address));
- *dword_ptr++ = sg->length;
+ /* 3.13 64 bit */
+ *dword_ptr++ = cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
+ *dword_ptr++ = cpu_to_le32(pci_dma_hi32(sg_dma_address(sg)));
+ *dword_ptr++ = cpu_to_le32(sg_dma_len(sg));
sg++;
+ DEBUG(sprintf(debug_buff,
+ "S/G Segment phys_addr=%x %x, len=0x%x\n\r",
+ cpu_to_le32(pci_dma_hi32(sg_dma_address(sg))),
+ cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))),
+ cpu_to_le32(sg_dma_len(sg)));)
+ DEBUG(qla1280_print(debug_buff));
}
#ifdef QL_DEBUG_LEVEL_5
qla1280_print(
@@ -3999,6 +4077,10 @@
/*
* Build continuation packets.
*/
+ DEBUG(sprintf(debug_buff,
+ "S/G Building Continuation...seg_cnt=0x%x remains\n\r",
+ seg_cnt);)
+ DEBUG(qla1280_print(debug_buff));
while (seg_cnt > 0)
{
/* Adjust ring index. */
@@ -4032,10 +4114,17 @@
/* Load continuation entry data segments. */
for (cnt = 0; cnt < 5 && seg_cnt; cnt++, seg_cnt--)
{
- *dword_ptr++ = cpu_to_le32(VIRT_TO_BUS_LOW(sg->address));
- *dword_ptr++ = cpu_to_le32(VIRT_TO_BUS_HIGH(sg->address));
- *dword_ptr++ = sg->length;
- sg++;
+ /* 3.13 64 bit */
+ *dword_ptr++ = cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
+ *dword_ptr++ = cpu_to_le32(pci_dma_hi32(sg_dma_address(sg)));
+ *dword_ptr++ = cpu_to_le32(sg_dma_len(sg));
+ DEBUG(sprintf(debug_buff,
+ "S/G Segment Cont. phys_addr=%x %x, len=0x%x\n\r",
+ cpu_to_le32(pci_dma_hi32(sg_dma_address(sg))),
+ cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))),
+ cpu_to_le32(sg_dma_len(sg)));)
+ DEBUG(qla1280_print(debug_buff));
+ sg++;
}
#ifdef QL_DEBUG_LEVEL_5
qla1280_print(
@@ -4052,11 +4141,21 @@
#endif
}
}
- else /* No scatter gather data transfer */
- {
- *dword_ptr++ = cpu_to_le32(VIRT_TO_BUS_LOW(cmd->request_buffer));
- *dword_ptr++ = cpu_to_le32(VIRT_TO_BUS_HIGH(cmd->request_buffer));
- *dword_ptr = (uint32_t) cmd->request_bufflen;
+ else /* No scatter gather data transfer */
+ { /* 3.13 64 bit */
+ dma_handle = pci_map_single(ha->pdev,
+ cmd->request_buffer,
+ cmd->request_bufflen,
+ scsi_to_pci_dma_dir(cmd->sc_data_direction));
+ /* save dma_handle for pci_unmap_single */
+ sp->saved_dma_handle = dma_handle;
+
+ *dword_ptr++ = cpu_to_le32(pci_dma_lo32(dma_handle));
+ *dword_ptr++ = cpu_to_le32(pci_dma_hi32(dma_handle));
+ *dword_ptr = (uint32_t) cmd->request_bufflen;
+ DEBUG(sprintf(debug_buff,
+ "No S/G map_single saved_dma_handle=%lx\n\r",dma_handle));
+ DEBUG(qla1280_print(debug_buff));
#ifdef QL_DEBUG_LEVEL_5
qla1280_print(
"qla1280_64bit_start_scsi: No scatter/gather command packet data - c");
@@ -4140,7 +4239,7 @@
#endif
return(status);
}
-#endif /* QLA1280_64BIT_SUPPORT */
+#endif
/*
* qla1280_32bit_start_scsi
@@ -4175,8 +4274,15 @@
uint8_t *data_ptr;
uint32_t *dword_ptr;
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
+ dma_addr_t dma_handle;
+#endif
+
ENTER("qla1280_32bit_start_scsi");
+ DEBUG(sprintf(debug_buff,
+ "32bit_start: cmd=%x sp=%x CDB=%x\n\r",cmd,sp,cmd->cmnd[0]);)
+ DEBUG(qla1280_print(debug_buff));
if( qla1280_check_for_dead_scsi_bus(ha, sp) )
{
@@ -4193,8 +4299,15 @@
* differences and the kernel SG list uses virtual addresses where
* we need physical addresses.
*/
- seg_cnt = cmd->use_sg;
sg = (struct scatterlist *) cmd->request_buffer;
+ /* 3.13 32 bit */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ seg_cnt = cmd->use_sg;
+#else
+ seg_cnt = pci_map_sg(ha->pdev,sg,cmd->use_sg,
+ scsi_to_pci_dma_dir(cmd->sc_data_direction));
+#endif
+
/*
* if greater than four sg entries then we need to allocate
* continuation entries
@@ -4205,17 +4318,22 @@
if ((uint16_t)(seg_cnt - 4) % 7)
req_cnt++;
}
- DEBUG(sprintf(debug_buff,"S/G for data transfer -num segs(%d), req blk cnt(%d)\n\r",seg_cnt,req_cnt));
+ DEBUG(sprintf(debug_buff,
+ "S/G Transfer cmd=%x seg_cnt=0x%x, req_cnt=%x\n\r",
+ cmd,seg_cnt,req_cnt));
DEBUG(qla1280_print(debug_buff));
}
else if (cmd->request_bufflen) /* If data transfer. */
{
- DEBUG(printk("Single data transfer (0x%x)\n",cmd->request_bufflen));
+ DEBUG(sprintf(debug_buff,
+ "No S/G transfer t=%x cmd=%x len=%x CDB=%x\n\r",
+ SCSI_TCN_32(cmd),cmd,cmd->request_bufflen,cmd->cmnd[0]));
+ DEBUG(qla1280_print(debug_buff));
seg_cnt = 1;
}
else
{
- DEBUG(printk("No data transfer \n"));
+ //DEBUG(printk("No data transfer \n"));
seg_cnt = 0;
}
@@ -4232,7 +4350,8 @@
ha->req_q_cnt = REQUEST_ENTRY_CNT - (ha->req_ring_index - cnt);
}
- DEBUG(sprintf(debug_buff,"Number of free entries = (%d)\n\r",ha->req_q_cnt));
+ DEBUG(sprintf(debug_buff,"Number of free entries=(%d) seg_cnt=0x%x\n\r",
+ ha->req_q_cnt,seg_cnt));
DEBUG(qla1280_print(debug_buff));
/* If room for request in request ring. */
if ((uint16_t)(req_cnt + 2) < ha->req_q_cnt)
@@ -4280,20 +4399,15 @@
data_ptr = (uint8_t *) &(CMD_CDBP(cmd));
for (cnt = 0; cnt < pkt->cdb_len; cnt++)
pkt->scsi_cdb[cnt] = *data_ptr++;
- DEBUG(printk("Build packet for command[0]=0x%x\n",pkt->scsi_cdb[0]));
+ //DEBUG(printk("Build packet for command[0]=0x%x\n",pkt->scsi_cdb[0]));
/*
* Load data segments.
*/
if (seg_cnt)
{
- DEBUG(printk("loading data segments..\n"));
/* Set transfer direction (READ and WRITE) */
/* Linux doesn't tell us */
-
/*
- * 3/10 dg - Normally, we should need this check with our F/W
- * but because of a small issue with it we do.
- *
* For block devices, cmd->request.cmd has the operation
* For character devices, this isn't always set properly, so
* we need to check data_cmnd[0]. This catches the conditions
@@ -4319,15 +4433,32 @@
/* Load command entry data segments. */
for (cnt = 0; cnt < 4 && seg_cnt; cnt++, seg_cnt--)
{
+ /* 3.13 32 bit */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
*dword_ptr++ = (uint32_t) cpu_to_le32(VIRT_TO_BUS(sg->address));
*dword_ptr++ = sg->length;
- DEBUG(sprintf(debug_buff,"SG Segment ap=0x%p, len=0x%x\n\r",sg->address,sg->length));
+ DEBUG(sprintf(debug_buff,
+ "S/G Segment phys_addr=0x%x, len=0x%x\n\r",
+ cpu_to_le32(VIRT_TO_BUS(sg->address)),sg->length));
+ DEBUG(qla1280_print(debug_buff));
+#else
+ *dword_ptr++ = cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
+ *dword_ptr++ = cpu_to_le32(sg_dma_len(sg));
+ DEBUG(sprintf(debug_buff,
+ "S/G Segment phys_addr=0x%x, len=0x%x\n\r",
+ cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))),
+ cpu_to_le32(sg_dma_len(sg)));)
DEBUG(qla1280_print(debug_buff));
+#endif
sg++;
}
/*
* Build continuation packets.
*/
+ DEBUG(sprintf(debug_buff,
+ "S/G Building Continuation...seg_cnt=0x%x remains\n\r",
+ seg_cnt);)
+ DEBUG(qla1280_print(debug_buff));
while (seg_cnt > 0)
{
/* Adjust ring index. */
@@ -4362,9 +4493,25 @@
/* Load continuation entry data segments. */
for (cnt = 0; cnt < 7 && seg_cnt; cnt++, seg_cnt--)
{
- *dword_ptr++ = (u_int) cpu_to_le32(VIRT_TO_BUS(sg->address));
- *dword_ptr++ = sg->length;
- sg++;
+ /* 3.13 32 bit */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+ *dword_ptr++ = (u_int) cpu_to_le32(VIRT_TO_BUS(sg->address));
+ *dword_ptr++ = sg->length;
+ DEBUG(sprintf(debug_buff,
+ "S/G Segment Cont. phys_addr=0x%x, len=0x%x\n\r",
+ cpu_to_le32(pci_dma_lo32(VIRT_TO_BUS(sg->address))),
+ sg->length);)
+ DEBUG(qla1280_print(debug_buff));
+#else
+ *dword_ptr++ = cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
+ *dword_ptr++ = cpu_to_le32(sg_dma_len(sg));
+ DEBUG(sprintf(debug_buff,
+ "S/G Segment Cont. phys_addr=0x%x, len=0x%x\n\r",
+ cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))),
+ cpu_to_le32(sg_dma_len(sg)));)
+ DEBUG(qla1280_print(debug_buff));
+#endif
+ sg++;
}
#ifdef QL_DEBUG_LEVEL_5
qla1280_print(
@@ -4379,14 +4526,28 @@
#endif
}
}
- else /* No scatter gather data transfer */
+ else /* No S/G data transfer */
{
+ /* 3.13 32 bit */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
*dword_ptr++ = (uint32_t) cpu_to_le32(VIRT_TO_BUS(cmd->request_buffer));
*dword_ptr = (uint32_t) cmd->request_bufflen;
- DEBUG(printk("Single Segment ap=0x%p, len=0x%x\n",cmd->request_buffer,cmd->request_bufflen));
+#else
+ dma_handle = pci_map_single(ha->pdev,
+ cmd->request_buffer,
+ cmd->request_bufflen,
+ scsi_to_pci_dma_dir(cmd->sc_data_direction));
+ sp->saved_dma_handle = dma_handle;
+
+ *dword_ptr++ = cpu_to_le32(pci_dma_lo32(dma_handle));
+ *dword_ptr = (uint32_t) cmd->request_bufflen;
+ DEBUG(sprintf(debug_buff,
+ "No S/G map_single saved_dma_handle=%lx\n\r",dma_handle));
+ DEBUG(qla1280_print(debug_buff));
+#endif
}
}
- else /* No data transfer */
+ else /* No data transfer at all */
{
*dword_ptr++ = (uint32_t) 0;
*dword_ptr = (uint32_t) 0;
@@ -4414,7 +4575,6 @@
/* Set chip new ring index. */
DEBUG(qla1280_print("qla1280_32bit_start_scsi: Wakeup RISC for pending command\n\r"));
ha->qthreads--;
- sp->u_start = jiffies;
sp->flags |= SRB_SENT;
ha->actthreads++;
/* qla1280_output_number((uint32_t)ha->actthreads++, 16); */
@@ -4425,7 +4585,7 @@
status = 1;
#ifdef QL_DEBUG_LEVEL_2
qla1280_print(
- "qla1280_32bit_start_scsi: NO ROOM IN OUTSTANDING ARRAY\n\r");
+ "qla1280_32bit_start_scsi: NO ROOM IN OUTSTANDING ARRAY\n\r");
qla1280_print(" req_q_cnt=");
qla1280_output_number((uint32_t)ha->req_q_cnt, 16);
qla1280_print("\n\r");
@@ -4459,6 +4619,7 @@
return(status);
}
+
/*
* qla1280_req_pkt
* Function is responsible for locking ring and
@@ -4889,7 +5050,7 @@
{
device_reg_t *reg = ha->iobase;
response_t *pkt;
- srb_t *sp;
+ srb_t *sp = 0;
uint16_t mailbox[MAILBOX_REGISTER_COUNT];
uint16_t *wptr;
uint32_t index;
@@ -4903,9 +5064,11 @@
/* Check for mailbox interrupt. */
mailbox[0] = RD_REG_WORD(&reg->semaphore);
+
if (mailbox[0] & BIT_0)
{
/* Get mailbox data. */
+ //DEBUG(qla1280_print("qla1280_isr: In Get mailbox data \n\r");)
wptr = &mailbox[0];
*wptr++ = RD_REG_WORD(&reg->mailbox0);
@@ -4938,7 +5101,7 @@
{
case MBA_SCSI_COMPLETION: /* Response completion */
#ifdef QL_DEBUG_LEVEL_5
- qla1280_print("qla1280_isr: mailbox response completion\n\r");
+ qla1280_print("qla1280_isr: mailbox SCSI response completion\n\r");
#endif
if (ha->flags.online)
{
@@ -4967,9 +5130,11 @@
else
(*done_q_last)->s_next = sp;
*done_q_last = sp;
+
}
else
{
+
#ifdef QL_DEBUG_LEVEL_2
qla1280_print("qla1280_isr: ISP invalid handle\n\r");
#endif
@@ -5041,6 +5206,7 @@
#endif
break;
default:
+ //DEBUG(qla1280_print("qla1280_isr: default case of switch MB \n\r");)
if (mailbox[0] < MBA_ASYNC_EVENT)
{
wptr = &mailbox[0];
@@ -5057,9 +5223,9 @@
break;
}
}
- else
+ else {
WRT_REG_WORD(&reg->host_cmd, HC_CLR_RISC_INT);
-
+ }
/*
* Response ring
*/
@@ -5123,6 +5289,7 @@
qla1280_error_entry(ha, pkt,
done_q_first, done_q_last);
+
/* Adjust ring index. */
ha->rsp_ring_index++;
if (ha->rsp_ring_index == RESPONSE_ENTRY_CNT)
@@ -5306,9 +5473,12 @@
}
pkt->scsi_status = S_CKCON;
pkt->option_flags |= (uint32_t)OF_SSTS | (uint32_t)OF_NO_DATA;
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
if (ha->flags.enable_64bit_addressing)
qla1280_64bit_continue_io(ha, pkt, 0, 0);
else
+#endif
qla1280_32bit_continue_io(ha, pkt, 0, 0);
break;
case 0x16: /* Requested Capability Not Available */
@@ -5667,10 +5837,12 @@
(uint32_t)OF_NO_DATA;
break;
}
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
if (ha->flags.enable_64bit_addressing)
- qla1280_64bit_continue_io(ha, pkt, len, (paddr32_t *)&phy_addr);
+ qla1280_64bit_continue_io(ha, pkt, len, (paddr32_t *)&phy_addr);
else
- qla1280_32bit_continue_io(ha, pkt, len, (paddr32_t *)&phy_addr);
+#endif
+ qla1280_32bit_continue_io(ha, pkt, len, (paddr32_t *)&phy_addr);
break;
default:
break;
@@ -5744,11 +5916,13 @@
ha->outstanding_cmds[pkt->handle] = 0;
cp = sp->cmd;
+
/* Generate LU queue on cntrl, target, LUN */
b = SCSI_BUS_32(cp);
t = SCSI_TCN_32(cp);
l = SCSI_LUN_32(cp);
q = LU_Q(ha, b, t, l);
+
if( pkt->comp_status || pkt->scsi_status )
{
DEBUG(qla1280_print( "scsi: comp_status = ");)
@@ -5879,7 +6053,7 @@
/* Place command on done queue. */
qla1280_done_q_put(sp, done_q_first, done_q_last);
}
-#if QLA1280_64BIT_SUPPORT
+#if BITS_PER_LONG > 32
else if (pkt->entry_type == COMMAND_A64_TYPE)
{
#ifdef QL_DEBUG_LEVEL_2
@@ -5956,7 +6130,6 @@
sp->timeout += 2; */
/* Place request back on top of device queue. */
- /* sp->flags &= ~(SRB_SENT | SRB_TIMEOUT); */
sp->flags = 0;
qla1280_putq_t(q, sp);
}
@@ -6074,7 +6247,7 @@
}
}
#ifdef QL_DEBUG_LEVEL_3
- qla1280_print("qla1280_restart_queues: exiting normally\n");
+ qla1280_print("qla1280_restart_queues: exiting normally\n\r");
#endif
}
@@ -6279,13 +6452,13 @@
#if MEMORY_MAPPED_IO
ret = *port;
#else
- ret = inb((int)port);
+ ret = inb((long)port);
#endif
if (ql_debug_print)
{
qla1280_print("qla1280_getbyte: address = ");
- qla1280_output_number((uint32_t)port, 16);
+ qla1280_output_number((unsigned long)port, 16);
qla1280_print(" data = 0x");
qla1280_output_number((uint32_t)ret, 16);
qla1280_print("\n\r");
@@ -6305,13 +6478,13 @@
#if MEMORY_MAPPED_IO
ret = *port;
#else
- ret = inw((int)port);
+ ret = inw((unsigned long)port);
#endif
if (ql_debug_print)
{
qla1280_print("qla1280_getword: address = ");
- qla1280_output_number((uint32_t)port, 16);
+ qla1280_output_number((unsigned long)port, 16);
qla1280_print(" data = 0x");
qla1280_output_number((uint32_t)ret, 16);
qla1280_print("\n\r");
@@ -6331,13 +6504,13 @@
#if MEMORY_MAPPED_IO
ret = *port;
#else
- ret = inl((int)port);
+ ret = inl((unsigned long)port);
#endif
if (ql_debug_print)
{
qla1280_print("qla1280_getdword: address = ");
- qla1280_output_number((uint32_t)port, 16);
+ qla1280_output_number((unsigned long)port, 16);
qla1280_print(" data = 0x");
qla1280_output_number((uint32_t)ret, 16);
qla1280_print("\n\r");
@@ -6355,13 +6528,13 @@
#if MEMORY_MAPPED_IO
*port = data;
#else
- outb(data, (int)port);
+ outb(data, (unsigned long)port);
#endif
if (ql_debug_print)
{
qla1280_print("qla1280_putbyte: address = ");
- qla1280_output_number((uint32_t)port, 16);
+ qla1280_output_number((unsigned long)port, 16);
qla1280_print(" data = 0x");
qla1280_output_number((uint32_t)data, 16);
qla1280_print("\n\r");
@@ -6380,14 +6553,14 @@
#ifdef _LINUX_IOPORTS
outw(data, (int)port);
#else
- outw((int)port, data);
+ outw((unsigned long)port, data);
#endif
#endif
if (ql_debug_print)
{
qla1280_print("qla1280_putword: address = ");
- qla1280_output_number((uint32_t)port, 16);
+ qla1280_output_number((unsigned long)port, 16);
qla1280_print(" data = 0x");
qla1280_output_number((uint32_t)data, 16);
qla1280_print("\n\r");
@@ -6406,14 +6579,14 @@
#ifdef _LINUX_IOPORTS
outl(data,(int)port);
#else
- outl((int)port, data);
+ outl((unsigned long)port, data);
#endif
#endif
if (ql_debug_print)
{
qla1280_print("qla1280_putdword: address = ");
- qla1280_output_number((uint32_t)port, 16);
+ qla1280_output_number((unsigned long)port, 16);
qla1280_print(" data = 0x");
qla1280_output_number((uint32_t)data, 16);
qla1280_print("\n\r");
@@ -6437,8 +6610,7 @@
/*
* Out character to COM2 port.
- * PORT must be at standard address for COM2 = 0x2F8,
- * or COM1 = 0x3F8
+ * PORT must be at standard address for COM1 = 0x3f8
*/
#define OUTB(addr,data) outb((data),(addr))
@@ -6448,7 +6620,7 @@
#ifdef QL_DEBUG_CONSOLE
printk("%c", c);
#else
- int com_addr = 0x2f8;
+ int com_addr = 0x3f8;
int hardware_flow_control = 1;
int software_flow_control = 0;
uint8_t data;
@@ -6460,7 +6632,7 @@
}while (!(data & BIT_6));
/*
- * Set BAUD rate for COM2 to 19200 (0x6)
+ * Set BAUD rate for COM2 to 9600 (0x6)
*/
/* Select rate divisor. */
@@ -6656,8 +6828,6 @@
qla1280_print(debug_buff);
sprintf(debug_buff," Pid=%d, SP=0x%p\n\r", (int)cmd->pid, CMD_SP(cmd));
qla1280_print(debug_buff);
- sprintf(debug_buff," r_start=0x%lx, u_start=0x%lx\n\r",sp->r_start,sp->u_start);
- qla1280_print(debug_buff);
sprintf(debug_buff," underflow size = 0x%x, direction=0x%x, req.cmd=0x%x \n\r", cmd->underflow, sp->dir,cmd->request.cmd);
qla1280_print(debug_buff);
}
@@ -6685,23 +6855,6 @@
}
#endif
-#ifdef QLA1280_UNUSED
-/**************************************************************************
- * ql1280_dump_regs
- *
- **************************************************************************/
-static void qla1280_dump_regs(struct Scsi_Host *host)
-{
- printk("Mailbox registers:\n");
- printk("qla1280 : mbox 0 0x%04x \n", inw(host->io_port + 0x70));
- printk("qla1280 : mbox 1 0x%04x \n", inw(host->io_port + 0x72));
- printk("qla1280 : mbox 2 0x%04x \n", inw(host->io_port + 0x74));
- printk("qla1280 : mbox 3 0x%04x \n", inw(host->io_port + 0x76));
- printk("qla1280 : mbox 4 0x%04x \n", inw(host->io_port + 0x78));
- printk("qla1280 : mbox 5 0x%04x \n", inw(host->io_port + 0x7a));
-}
-#endif
-
#if STOP_ON_ERROR
@@ -6728,9 +6881,6 @@
printk("HA flags =0x%lx\n", *fp);
DEBUG2(ql_debug_print = 1;)
/* DEBUG2(ql1280_dump_device((scsi_qla_host_t *) host->hostdata)); */
-#ifdef QLA1280_UNUSED
- qla1280_dump_regs(host);
-#endif
sti();
panic("Ooops");
/* cli();
@@ -6743,11 +6893,6 @@
}
#endif
-#ifdef QLA1280_UNUSED
-static void qla1280_set_flags(char * s)
-{
-}
-#endif
/**************************************************************************
* qla1280_setup
@@ -6761,24 +6906,6 @@
{
char *end, *str, *cp;
-#ifdef QLA1280_UNUSED
- static struct
- {
- const char *name;
- int siz;
- void (*func)();
- int arg;
- } options[] = {
- { "dump_regs", 9, &qla1280_dump_regs, 0
- },
- { "verbose", 7, &qla1280_set_flags, 0x1
- },
- { "", 0, NULL, 0
- }
- };
-#endif
-
printk("scsi: Processing Option str = %s\n", s);
end = strchr(s, '\0');
/* locate command */
@@ -6827,4 +6954,3 @@
* tab-width: 8
* End:
*/
-
diff -urN linux-davidm/drivers/scsi/qla1280.h lia64/drivers/scsi/qla1280.h
--- linux-davidm/drivers/scsi/qla1280.h Fri Jul 14 17:24:15 2000
+++ lia64/drivers/scsi/qla1280.h Fri Sep 8 17:16:54 2000
@@ -1,169 +1,35 @@
-/*************************************************************************
- * QLOGIC LINUX SOFTWARE
- *
- * QLogic ISP1x80/1x160 device driver for Linux 2.3.x (redhat 6.x).
- *
- * COPYRIGHT (C) 1996-2000 QLOGIC CORPORATION
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the Qlogic's Linux Software License.
- *
- * This program is WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * 1. Redistribution's or source code must retain the above copyright
- * notice, this list of conditions, and the following disclaimer,
- * without modification, immediately at the beginning of the file.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- *****************************************************************************/
-
-/*************************************************************************************
- QLOGIC CORPORATION SOFTWARE
- "GNU" GENERAL PUBLIC LICENSE
- TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION
- AND MODIFICATION
-
-This GNU General Public License ("License") applies solely to QLogic Linux
-Software ("Software") and may be distributed under the terms of this License.
-
-1. You may copy and distribute verbatim copies of the Software's source code as
-you receive it, in any medium, provided that you conspicuously and appropriately
-publish on each copy an appropriate copyright notice and disclaimer of warranty;
-keep intact all the notices that refer to this License and to the absence of any
-warranty; and give any other recipients of the Software a copy of this License along
-with the Software.
-
-You may charge a fee for the physical act of transferring a copy, and you may at your
-option offer warranty protection in exchange for a fee.
-
-2. You may modify your copy or copies of the Software or any portion of it, thus forming
-a work based on the Software, and copy and distribute such modifications or work under
-the terms of Section 1 above, provided that you also meet all of these conditions:
-
-* a) You must cause the modified files to carry prominent notices stating that you
-changed the files and the date of any change.
-
-* b) You must cause any work that you distribute or publish that in whole or in part
-contains or is derived from the Software or any part thereof, to be licensed as a
-whole at no charge to all third parties under the terms of this License.
-
-* c) If the modified Software normally reads commands interactively when run, you
-must cause it, when started running for such interactive use in the most ordinary way,
-to print or display an announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide a warranty) and that
-users may redistribute the Software under these conditions, and telling the user how to
-view a copy of this License. (Exception:if the Software itself is interactive but does
-not normally print such an announcement, your work based on the Software is not required
-to print an announcement.)
-
-These requirements apply to the modified work as a whole. If identifiable sections of
-that work are not derived from the Software, and can be reasonably considered independent
-and separate works in themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works. But when you distribute the same
-sections as part of a whole which is a work based on the Software, the distribution of the
-whole must be on the terms of this License, whose permissions for other licensees extend
-to the entire whole, and thus to each and every part regardless of who wrote it.
-
-3. You may copy and distribute the Software (or a work based on it, under Section 2) in
-object code or executable form under the terms of Sections 1 and 2 above provided that
-you also do one of the following:
-
-* a) Accompany it with the complete corresponding machine-readable source code, which must
-be distributed under the terms of Sections 1 and 2 above on a medium customarily used for
-software interchange; or,
-
-* b) Accompany it with a written offer, valid for at least three years, to give any third
-party, for a charge no more than your cost of physically performing source distribution,
-a complete machine-readable copy of the corresponding source code, to be distributed under
-the terms of Sections 1 and 2 above on a medium customarily used for software interchange;
-or,
-
-* c) Accompany it with the information you received as to the offer to distribute
-corresponding source code. (This alternative is allowed only for noncommercial distribution
-and only if you received the Software in object code or executable form with such an offer,
-in accord with Subsection b above.)
-
-The source code for a work means the preferred form of the work for making modifications
-to it. For an executable work, complete source code means all the source code for all
-modules it contains, plus any associated interface definition files, plus the scripts used
-to control compilation and installation of the executable.
-
-If distribution of executable or object code is made by offering access to copy from a
-designated place, then offering equivalent access to copy the source code from the same
-place counts as distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-4. You may not copy, modify, sublicense, or distribute the Software except as expressly
-provided under this License. Any attempt otherwise to copy, modify, sublicense or
-distribute the Software is void, and will automatically terminate your rights under this
-License. However, parties who have received copies, or rights, from you under this License
-will not have their licenses terminated so long as such parties remain in full compliance.
-
-5. This license grants you world wide, royalty free non-exclusive rights to modify or
-distribute the Software or its derivative works. These actions are prohibited by law
-if you do not accept this License. Therefore, by modifying or distributing the Software
-(or any work based on the Software), you indicate your acceptance of this License to do
-so, and all its terms and conditions for copying, distributing or modifying the Software
-or works based on it.
-
-6. Each time you redistribute the Software (or any work based on the Software), the
-recipient automatically receives a license from the original licensor to copy, distribute
-or modify the Software subject to these terms and conditions. You may not impose any
-further restrictions on the recipients' exercise of the rights granted herein. You are
-not responsible for enforcing compliance by third parties to this License.
-
-7. If, as a consequence of a court judgment or allegation of patent infringement or for
-any other reason (not limited to patent issues), conditions are imposed on you
-(whether by court order, agreement or otherwise) that contradict the conditions of this
-License, they do not excuse you from the conditions of this License. If you cannot
-distribute so as to satisfy simultaneously your obligations under this License
-and any other pertinent obligations, then as a consequence you may not distribute the
-Software at all.
-
-If any portion of this section is held invalid or unenforceable under any particular
-circumstance, the balance of the section is intended to apply and the section as a whole
-is intended to apply in other circumstances.
-NO WARRANTY
-
-11. THE SOFTWARE IS PROVIDED WITHOUT A WARRANTY OF ANY KIND. THERE IS NO
-WARRANTY FOR THE SOFTWARE, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
-EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
-OTHER PARTIES PROVIDE THE SOFTWARE "AS IS" WITHOUT WARRANTY OF ANY KIND,
-EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
-ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE SOFTWARE IS WITH YOU.
-SHOULD THE SOFTWARE PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
-NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE SOFTWARE AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
-DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL
-DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE SOFTWARE (INCLUDING
-BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
-LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE SOFTWARE TO
-OPERATE WITH ANY OTHER SOFTWARES), EVEN IF SUCH HOLDER OR OTHER PARTY HAS
-BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
-END OF TERMS AND CONDITIONS
+/********************************************************************************
+* QLOGIC LINUX SOFTWARE
+*
+* QLogic ISP1280 (Ultra2) /12160 (Ultra3) SCSI driver
+* Copyright (C) 2000 Qlogic Corporation
+* (www.qlogic.com)
+*
+* This program is free software; you can redistribute it and/or modify it
+* under the terms of the GNU General Public License as published by the
+* Free Software Foundation; either version 2, or (at your option) any
+* later version.
+*
+* This program is distributed in the hope that it will be useful, but
+* WITHOUT ANY WARRANTY; without even the implied warranty of
+* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+* General Public License for more details.
+**
+******************************************************************************/
-*************************************************************************************/
-
-
#ifndef _IO_HBA_QLA1280_H /* wrapper symbol for kernel use */
#define _IO_HBA_QLA1280_H /* subject to change without notice */
+
+#ifndef LINUX_VERSION_CODE
+#include <linux/version.h>
+#endif /* LINUX_VERSION_CODE not defined */
+
#if defined(__cplusplus)
extern "C" {
#endif
-#include <linux/version.h>
-
+#ifndef HOSTS_C /* included in hosts.c */
/*
* Enable define statement to ignore Data Underrun Errors,
* remove define statement to enable detection.
@@ -173,15 +39,18 @@
/*
* Driver debug definitions.
*/
-/* #define QL_DEBUG_LEVEL_1 */ /* Output register accesses to COM2. */
-/* #define QL_DEBUG_LEVEL_2 */ /* Output error msgs to COM2. */
-/* #define QL_DEBUG_LEVEL_3 */ /* Output function trace msgs to COM2. */
-/* #define QL_DEBUG_LEVEL_4 */ /* Output NVRAM trace msgs to COM2. */
-/* #define QL_DEBUG_LEVEL_5 */ /* Output ring trace msgs to COM2. */
-/* #define QL_DEBUG_LEVEL_6 */ /* Output WATCHDOG timer trace to COM2. */
-/* #define QL_DEBUG_LEVEL_7 */ /* Output RISC load trace msgs to COM2. */
+/* #define QL_DEBUG_LEVEL_1 */ /* Output register accesses to COM1 */
+/* #define QL_DEBUG_LEVEL_2 */ /* Output error msgs to COM1 */
+/* #define QL_DEBUG_LEVEL_3 */ /* Output function trace msgs to COM1 */
+/* #define QL_DEBUG_LEVEL_4 */ /* Output NVRAM trace msgs to COM1 */
+/* #define QL_DEBUG_LEVEL_5 */ /* Output ring trace msgs to COM1 */
+/* #define QL_DEBUG_LEVEL_6 */ /* Output WATCHDOG timer trace to COM1 */
+/* #define QL_DEBUG_LEVEL_7 */ /* Output RISC load trace msgs to COM1 */
+
+ #define QL_DEBUG_CONSOLE /* Output to console instead of COM1 */
+ /* comment this #define to get output of qla1280_print to COM1 */
+ /* if COM1 is not connected to a host system, the driver hangs the system! */
-#define QL_DEBUG_CONSOLE /* Output to console instead of COM2. */
#ifndef TRUE
# define TRUE 1
@@ -206,7 +75,11 @@
* Locking
*/
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,0)
-# include <linux/spinlock.h>
+# if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
+# include <asm/spinlock.h>
+# else
+# include <linux/spinlock.h>
+# endif
# include <linux/smp.h>
# define cpuid smp_processor_id()
# if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,95)
@@ -314,12 +187,12 @@
#define WRT_REG_DWORD(addr, data) qla1280_putdword((uint32_t *)addr, data)
#else /* QL_DEBUG_LEVEL_1 */
#ifdef MEMORY_MAPPED_IO
- #define RD_REG_BYTE(addr) readb((unsigned long) (addr)
- #define RD_REG_WORD(addr) readw((unsigned long) (addr)
- #define RD_REG_DWORD(addr) readl((unsigned long) (addr)
- #define WRT_REG_BYTE(addr, data) writeb((data), (unsigned long) (addr))
- #define WRT_REG_WORD(addr, data) writew((data), (unsigned long) (addr))
- #define WRT_REG_DWORD(addr, data) writel((data), (unsigned long) (addr))
+ #define RD_REG_BYTE(addr) (*((volatile uint8_t *)addr))
+ #define RD_REG_WORD(addr) (*((volatile uint16_t *)addr))
+ #define RD_REG_DWORD(addr) (*((volatile uint32_t *)addr))
+ #define WRT_REG_BYTE(addr, data) (*((volatile uint8_t *)addr) = data)
+ #define WRT_REG_WORD(addr, data) (*((volatile uint16_t *)addr) = data)
+ #define WRT_REG_DWORD(addr, data) (*((volatile uint32_t *)addr) = data)
#else /* MEMORY_MAPPED_IO */
#define RD_REG_BYTE(addr) (inb((unsigned long)addr))
#define RD_REG_WORD(addr) (inw((unsigned long)addr))
@@ -374,7 +247,8 @@
typedef struct timer_list timer_t; /* timer */
/*
- * SCSI Request Block structure
+ * SCSI Request Block structure (sp) that is placed
+ * on cmd->SCp location of every I/O
*/
typedef struct srb
{
@@ -383,10 +257,11 @@
struct srb *s_prev; /* (4) Previous block on LU queue */
uint8_t flags; /* (1) Status flags. */
uint8_t dir; /* direction of transfer */
- uint8_t unused[2];
- u_long r_start; /* jiffies at start of request */
- u_long u_start; /* jiffies when sent to F/W */
-}srb_t;
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
+ dma_addr_t saved_dma_handle; /* for unmap of single transfers */
+#endif
+
+} srb_t;
/*
* SRB flag definitions
@@ -1564,13 +1439,13 @@
request_t req[REQUEST_ENTRY_CNT+1];
response_t res[RESPONSE_ENTRY_CNT+1];
- unsigned long request_dma; /* Physical address. */
+ uint32_t request_dma; /* Physical address. */
request_t *request_ring; /* Base virtual address */
request_t *request_ring_ptr; /* Current address. */
uint16_t req_ring_index; /* Current index. */
uint16_t req_q_cnt; /* Number of available entries. */
- unsigned long response_dma; /* Physical address. */
+ uint32_t response_dma; /* Physical address. */
response_t *response_ring; /* Base virtual address */
response_t *response_ring_ptr; /* Current address. */
uint16_t rsp_ring_index; /* Current index. */
@@ -1616,8 +1491,13 @@
uint32_t dpc :1; /* 15 */
uint32_t dpc_sched :1; /* 16 */
uint32_t interrupts_on :1; /* 17 */
+ uint32_t bios_enabled :1; /* 18 */
}flags;
+ /* needed holders for PCI ordered list of hosts */
+ unsigned long io_port;
+ uint32_t irq;
+
}scsi_qla_host_t;
/*
@@ -1646,6 +1526,8 @@
#define QLA1280_RING_LOCK(ha)
#define QLA1280_RING_UNLOCK(ha)
+#endif /* HOSTS_C */
+
#if defined(__cplusplus)
}
#endif
@@ -1663,49 +1545,20 @@
int qla1280_biosparam(Disk *, kdev_t, int[]);
void qla1280_intr_handler(int, void *, struct pt_regs *);
void qla1280_setup(char *s, int *dummy);
-#if defined(__386__)
+
# define QLA1280_BIOSPARAM qla1280_biosparam
-#else
-# define QLA1280_BIOSPARAM NULL
-#endif
/*
* Scsi_Host_template (see hosts.h)
* Device driver Interfaces to mid-level SCSI driver.
*/
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,95)
-/* This interface is now obsolete !!! */
-#define QLA1280_LINUX_TEMPLATE { \
- next: NULL, \
- usage_count: NULL, \
- proc_dir: NULL, \
- proc_info: NULL, \
- name: "Qlogic ISP 1280", \
- detect: qla1280_detect, \
- release: qla1280_release, \
- info: qla1280_info, \
- command: NULL, \
- queuecommand: qla1280_queuecommand, \
- abort: qla1280_abort, \
- reset: qla1280_reset, \
- slave_attach: NULL, \
- bios_param: QLA1280_BIOSPARAM, \
- can_queue: 255, /* MAX_OUTSTANDING_COMMANDS */ \
- this_id: -1, /* scsi id of host adapter */ \
- sg_tablesize: SG_ALL, \
- cmd_per_lun: 3, /* max commands per lun */ \
- present: 0, /* number of 1280s present */ \
- unchecked_isa_dma: 0, /* no memeory DMA restrictions */ \
- use_clustering: ENABLE_CLUSTERING \
-}
-#else
-#define QLA1280_LINUX_TEMPLATE { \
+#define QLA1280_LINUX_TEMPLATE { \
next: NULL, \
module: NULL, \
proc_dir: NULL, \
proc_info: qla1280_proc_info, \
- name: "Qlogic ISP 1280\1080", \
+ name: "Qlogic ISP 1280\12160", \
detect: qla1280_detect, \
release: qla1280_release, \
info: qla1280_info, \
@@ -1725,13 +1578,14 @@
this_id: -1, /* scsi id of host adapter */\
sg_tablesize: SG_ALL, /* max scatter-gather cmds */\
cmd_per_lun: 3, /* cmds per lun (linked cmds) */\
- present: 0, /* number of 7xxx's present */\
+ present: 0, /* number of 1280's present */\
unchecked_isa_dma: 0, /* no memory DMA restrictions */\
use_clustering: ENABLE_CLUSTERING, \
use_new_eh_code: 0, \
emulated: 0 \
}
-#endif
+
#endif /* _IO_HBA_QLA1280_H */
+
diff -urN linux-davidm/drivers/scsi/sd.c lia64/drivers/scsi/sd.c
--- linux-davidm/drivers/scsi/sd.c Fri Sep 8 14:34:56 2000
+++ lia64/drivers/scsi/sd.c Fri Sep 8 18:28:54 2000
@@ -1335,6 +1335,8 @@
return;
}
+#ifdef MODULE
+
int init_sd(void)
{
sd_template.module = THIS_MODULE;
@@ -1390,3 +1392,5 @@
module_init(init_sd);
module_exit(exit_sd);
+
+#endif /* MODULE */
diff -urN linux-davidm/include/asm-ia64/ia32.h lia64/include/asm-ia64/ia32.h
--- linux-davidm/include/asm-ia64/ia32.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-ia64/ia32.h Fri Sep 8 17:14:47 2000
@@ -351,6 +351,8 @@
(granularity << IA32_SEG_G) | \
(((base >> 24) & 0xFF) << IA32_SEG_HIGH_BASE))
+#define IA32_IOBASE 0x2000000000000000 /* Virtual address for I/O space */
+
#define IA32_CR0 0x80000001 /* Enable PG and PE bits */
#define IA32_CR4 0 /* No architectural extensions */
diff -urN linux-davidm/include/asm-ia64/io.h lia64/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-ia64/io.h Fri Sep 8 17:14:30 2000
@@ -66,10 +66,9 @@
extern inline const unsigned long
__ia64_get_io_port_base (void)
{
- unsigned long addr;
+ extern unsigned long ia64_iobase;
- __asm__ ("mov %0=ar.k0;;" : "=r"(addr));
- return __IA64_UNCACHED_OFFSET | addr;
+ return ia64_iobase;
}
extern inline void*
diff -urN linux-davidm/include/asm-ia64/offsets.h lia64/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Fri Sep 8 22:36:14 2000
+++ lia64/include/asm-ia64/offsets.h Fri Sep 8 16:34:23 2000
@@ -11,7 +11,7 @@
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 2896 /* 0xb50 */
+#define IA64_TASK_SIZE 2928 /* 0xb70 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -21,9 +21,9 @@
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 896 /* 0x380 */
-#define IA64_TASK_THREAD_KSP_OFFSET 896 /* 0x380 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 2744 /* 0xab8 */
+#define IA64_TASK_THREAD_OFFSET 928 /* 0x3a0 */
+#define IA64_TASK_THREAD_KSP_OFFSET 928 /* 0x3a0 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 2784 /* 0xae0 */
#define IA64_TASK_PID_OFFSET 188 /* 0xbc */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/pal.h lia64/include/asm-ia64/pal.h
--- linux-davidm/include/asm-ia64/pal.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-ia64/pal.h Fri Sep 8 16:34:36 2000
@@ -66,6 +66,7 @@
#define PAL_CACHE_PROT_INFO 38 /* get i/d cache protection info */
#define PAL_REGISTER_INFO 39 /* return AR and CR register information*/
#define PAL_SHUTDOWN 40 /* enter processor shutdown state */
+#define PAL_PREFETCH_VISIBILITY 41
#define PAL_COPY_PAL 256 /* relocate PAL procedures and PAL PMI */
#define PAL_HALT_INFO 257 /* return the low power capabilities of processor */
@@ -644,15 +645,16 @@
* (generally 0) MUST be passed. Reserved parameters are not optional
* parameters.
*/
-extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64);
-extern struct ia64_pal_retval ia64_pal_call_stacked (u64, u64, u64, u64);
-extern struct ia64_pal_retval ia64_pal_call_phys_static (u64, u64, u64, u64);
-extern struct ia64_pal_retval ia64_pal_call_phys_stacked (u64, u64, u64, u64);
-
-#define PAL_CALL(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0, a1, a2, a3)
-#define PAL_CALL_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_stacked(a0, a1, a2, a3)
-#define PAL_CALL_PHYS(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_static(a0, a1, a2, a3)
-#define PAL_CALL_PHYS_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_stacked(a0, a1, a2, a3)
+extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_stacked (u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_phys_static (u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_phys_stacked (u64, u64, u64, u64);
+
+#define PAL_CALL(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0, a1, a2, a3, 0)
+#define PAL_CALL_IC_OFF(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0, a1, a2, a3, 1)
+#define PAL_CALL_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_stacked(a0, a1, a2, a3)
+#define PAL_CALL_PHYS(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_static(a0, a1, a2, a3)
+#define PAL_CALL_PHYS_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_stacked(a0, a1, a2, a3)
typedef int (*ia64_pal_handler) (u64, ...);
extern ia64_pal_handler ia64_pal;
@@ -777,7 +779,7 @@
ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress)
{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
+ PAL_CALL_IC_OFF(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
*progress = iprv.v1;
return iprv.status;
}
@@ -1385,6 +1387,14 @@
if (tr_valid)
tr_valid->piv_val = iprv.v0;
return iprv.status;
+}
+
+extern inline s64
+ia64_pal_prefetch_visibility (void)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PREFETCH_VISIBILITY, 0, 0, 0);
+ return iprv.status;
}
#endif /* __ASSEMBLY__ */
diff -urN linux-davidm/include/asm-ia64/processor.h lia64/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-ia64/processor.h Fri Sep 8 17:14:30 2000
@@ -305,10 +305,11 @@
__u64 csd; /* IA32 code selector descriptor */
__u64 ssd; /* IA32 stack selector descriptor */
__u64 tssd; /* IA32 TSS descriptor */
+ __u64 old_iob; /* old IOBase value */
union {
__u64 sigmask; /* aligned mask for sigsuspend scall */
} un;
-# define INIT_THREAD_IA32 , 0, 0, 0x17800000037fULL, 0, 0, 0, 0, 0, {0}
+# define INIT_THREAD_IA32 , 0, 0, 0x17800000037fULL, 0, 0, 0, 0, 0, 0, {0}
#else
# define INIT_THREAD_IA32
#endif /* CONFIG_IA32_SUPPORT */
@@ -334,6 +335,8 @@
#define start_thread(regs,new_ip,new_sp) do { \
set_fs(USER_DS); \
+ ia64_psr(regs)->dfh = 1; /* disable fph */ \
+ ia64_psr(regs)->mfh = 0; /* clear mfh */ \
ia64_psr(regs)->cpl = 3; /* set user mode */ \
ia64_psr(regs)->ri = 0; /* clear return slot number */ \
ia64_psr(regs)->is = 0; /* IA-64 instruction set */ \
@@ -390,6 +393,8 @@
/* Return stack pointer of blocked task TSK. */
#define KSTK_ESP(tsk) ((tsk)->thread.ksp)
+#ifndef CONFIG_SMP
+
static inline struct task_struct *
ia64_get_fpu_owner (void)
{
@@ -403,6 +408,8 @@
{
__asm__ __volatile__ ("mov ar.k5=%0" :: "r"(t));
}
+
+#endif /* !CONFIG_SMP */
extern void __ia64_init_fpu (void);
extern void __ia64_save_fpu (struct ia64_fpreg *fph);
diff -urN linux-davidm/include/asm-ia64/ptrace.h lia64/include/asm-ia64/ptrace.h
--- linux-davidm/include/asm-ia64/ptrace.h Thu Jun 22 07:09:45 2000
+++ lia64/include/asm-ia64/ptrace.h Fri Sep 8 17:14:26 2000
@@ -219,6 +219,7 @@
extern void show_regs (struct pt_regs *);
extern long ia64_peek (struct pt_regs *, struct task_struct *, unsigned long addr, long *val);
extern long ia64_poke (struct pt_regs *, struct task_struct *, unsigned long addr, long val);
+ extern void ia64_flush_fph (struct task_struct *t);
extern void ia64_sync_fph (struct task_struct *t);
#ifdef CONFIG_IA64_NEW_UNWIND
diff -urN linux-davidm/include/asm-ia64/ptrace_offsets.h lia64/include/asm-ia64/ptrace_offsets.h
--- linux-davidm/include/asm-ia64/ptrace_offsets.h Thu Jun 22 07:09:45 2000
+++ lia64/include/asm-ia64/ptrace_offsets.h Fri Sep 8 16:35:41 2000
@@ -157,6 +157,7 @@
#define PT_B4 0x07f0
#define PT_B5 0x07f8
+#define PT_AR_EC 0x0800
#define PT_AR_LC 0x0808
/* pt_regs */
diff -urN linux-davidm/include/asm-ia64/spinlock.h lia64/include/asm-ia64/spinlock.h
--- linux-davidm/include/asm-ia64/spinlock.h Fri Sep 8 22:36:14 2000
+++ lia64/include/asm-ia64/spinlock.h Fri Sep 8 17:14:30 2000
@@ -139,14 +139,24 @@
: "memory"); \
} while(0)
-#define write_lock(rw) \
-do { \
- do { \
- while ((rw)->write_lock); \
- } while (test_and_set_bit(31, (rw))); \
- while ((rw)->read_counter); \
- barrier(); \
-} while (0)
+#define write_lock(rw) \
+do { \
+ __asm__ __volatile__ ( \
+ "mov ar.ccv = r0\n" \
+ "movl r29 = 0x80000000\n" \
+ ";;\n" \
+ "1:\n" \
+ "ld4 r2 = %0\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0,r2\n" \
+ "(p7) br.cond.spnt.few 1b \n" \
+ IA64_SEMFIX"cmpxchg4.acq r2 = %0, r29, ar.ccv\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0, r2\n" \
+ "(p7) br.cond.spnt.few 1b\n" \
+ ";;\n" \
+ :: "m" __atomic_fool_gcc((rw)) : "r2", "r29", "memory"); \
+} while(0)
/*
* clear_bit() has "acq" semantics; we're really need "rel" semantics,
diff -urN linux-davidm/include/asm-ia64/system.h lia64/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Fri Sep 8 22:36:14 2000
+++ lia64/include/asm-ia64/system.h Fri Sep 8 17:14:30 2000
@@ -424,33 +424,31 @@
if (((next)->thread.flags & (IA64_THREAD_DBG_VALID|IA64_THREAD_PM_VALID)) \
|| IS_IA32_PROCESS(ia64_task_regs(next))) \
ia64_load_extra(next); \
- ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
(last) = ia64_switch_to((next)); \
} while (0)
#ifdef CONFIG_SMP
/*
* In the SMP case, we save the fph state when context-switching
- * away from a thread that owned and modified fph. This way, when
- * the thread gets scheduled on another CPU, the CPU can pick up the
- * state frm task->thread.fph, avoiding the complication of having
- * to fetch the latest fph state from another CPU. If the thread
- * happens to be rescheduled on the same CPU later on and nobody
- * else has touched the FPU in the meantime, the thread will fault
- * upon the first access to fph but since the state in fph is still
- * valid, no other overheads are incurred. In other words, CPU
- * affinity is a Good Thing.
+ * away from a thread that modified fph. This way, when the thread
+ * gets scheduled on another CPU, the CPU can pick up the state from
+ * task->thread.fph, avoiding the complication of having to fetch
+ * the latest fph state from another CPU.
*/
-# define switch_to(prev,next,last) do { \
-	if (ia64_get_fpu_owner() == (prev) && ia64_psr(ia64_task_regs(prev))->mfh) { \
- ia64_psr(ia64_task_regs(prev))->mfh = 0; \
- (prev)->thread.flags |= IA64_THREAD_FPH_VALID; \
- __ia64_save_fpu((prev)->thread.fph); \
- } \
- __switch_to(prev,next,last); \
+# define switch_to(prev,next,last) do { \
+ if (ia64_psr(ia64_task_regs(prev))->mfh) { \
+ ia64_psr(ia64_task_regs(prev))->mfh = 0; \
+ (prev)->thread.flags |= IA64_THREAD_FPH_VALID; \
+ __ia64_save_fpu((prev)->thread.fph); \
+ } \
+ ia64_psr(ia64_task_regs(prev))->dfh = 1; \
+ __switch_to(prev,next,last); \
} while (0)
#else
-# define switch_to(prev,next,last) __switch_to(prev,next,last)
+# define switch_to(prev,next,last) do { \
+ ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
+ __switch_to(prev,next,last); \
+} while (0)
#endif
#endif /* __KERNEL__ */
diff -urN linux-davidm/include/linux/skbuff.h lia64/include/linux/skbuff.h
--- linux-davidm/include/linux/skbuff.h Thu Aug 24 08:17:48 2000
+++ lia64/include/linux/skbuff.h Fri Sep 8 17:15:10 2000
@@ -896,7 +896,11 @@
{
struct sk_buff *skb;
+#ifdef CONFIG_SKB_BELOW_4GB
+ skb = alloc_skb(length+16, GFP_ATOMIC | GFP_DMA);
+#else
skb = alloc_skb(length+16, GFP_ATOMIC);
+#endif
if (skb)
skb_reserve(skb,16);
return skb;
^ permalink raw reply	[flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (7 preceding siblings ...)
2000-09-09 6:51 ` [Linux-ia64] kernel update (relative to v2.4.0-test8) David Mosberger
@ 2000-09-09 19:07 ` H . J . Lu
2000-09-09 20:49 ` David Mosberger
` (206 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: H . J . Lu @ 2000-09-09 19:07 UTC (permalink / raw)
To: linux-ia64
On Fri, Sep 08, 2000 at 11:51:50PM -0700, David Mosberger wrote:
> The latest IA-64 kernel diff is now available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>
> in file linux-2.4.0-test8-ia64-000908.diff.gz. The most important
> changes since the previous version:
>
>
> The patch below is once again a rough approximation of what changed
> since the last update. This kernel has been tested on the HP Ski
> simulator, Big Sur (UP and MP), and Lion (MP).
>
I tried it on my UP Big Sur with BIOS Build 59 and B1 stepping CPU.
It didn't work. I got
Unexpected irq vector 0x0 on CPU 0!
as soon as it booted. I am enclosing my .config here. BTW, the kernel
from the 0828 Turbo Linux booted fine. I am enclosing the boot message
here also. Any ideas?
Thanks.
H.J.
---
Linux version 2.4.0test7-000823-42 (root@borg) (gcc version 2.9-ia64-000216-final) #1 Sat Aug 26 16:31:32 PDT 2000
EFI v0.99 by INTEL: SALsystab=0x3ff25340 ACPI=0x3ffd9160 MPS=0x3ffd0000 SMBIOS=0xf0010
efi_map_pal_code: mapping PAL code [0x3ff40000-0x3ff7b000) into [0xe00000003fc00000-0xe000000040000000)
SAL v2.112: ia32bios=absent, oem=, product=SAL
sal[0] - entry: pal_proc=0x3ff48010, sal_proc=0x3fe54730
SAL: Platform features BusLock
processor implements 51 virtual and 44 physical address bits
ACPI: Intel RSDT 0.0
Acpi cfg:bind to Boot time Acpi OSD
Acpi cfg:acpi initialize pass
Acpi cfg:acpi load firmware tables pass
Acpi cfg:acpi load namespace pass
CPU 0 (0000:0000): Available.
IOSAPIC Version 2.1: address 0xfec00000 IRQs 0x0 - 0x3f
Acpi cfg:get pci vectors
Acpi cfg:_STA not found: pci bus 0 exist
Acpi cfg:_STA not found: pci bus 1 exist
Acpi cfg:_STA not found: pci bus 2 exist
Acpi cfg:_STA not found: pci bus 3 exist
1 CPUs available, 1 CPUs total
ACPI: -0550: <7>ACPI: *** Success: Entire namespace and objects deleted
Acpi cfg:acpi terminate pass
Acpi cfg:bind to Run time Acpi OSD
ia64_mca_init : begin
ia64_mca_init : registered mca rendezvous spinloop and wakeup mech.
ia64_mca_init : correctable mca vector setup done
ia64_mca_init : registered os mca handler with SAL
ia64_mca_init : os init handler at 5414b0
ia64_mca_init : registered os init handler with SAL
ia64_mca_init : platform-specific mca handling setup done
Mca related initialization done
On node 0 totalpages: 64672
zone(0): 64672 pages.
zone(1): 0 pages.
zone(2): 0 pages.
Placing software IO TLB between 0xe000000000100000 - 0xe000000000300000
Kernel command line: root=/dev/sda2 init=/bin/bash
fpswa interface at 3f197010
timer: CPU 0 base freq=133.344MHz, ITC ratio=10/2, ITC freq=666.722MHz
Console: colour VGA+ 80x25
Unexpected irq vector 0x0 on CPU 0!
Calibrating delay loop... 545.26 BogoMIPS
Memory: 1017312k/1034752k available (4064k code, 16368k reserved, 1382k data, 320k init)
Initialized perfmon vector to 40
Dentry-cache hash table entries: 65536 (order: 6, 1048576 bytes)
Buffer-cache hash table entries: 65536 (order: 5, 524288 bytes)
Page-cache hash table entries: 65536 (order: 5, 524288 bytes)
Inode-cache hash table entries: 65536 (order: 6, 1048576 bytes)
POSIX conformance testing by UNIFIX
PCI: Probing PCI hardware
PCI->APIC IRQ transform: (B0,I1,P0) -> 23
PCI->APIC IRQ transform: (B0,I3,P3) -> 2f
PCI->APIC IRQ transform: (B0,I3,P1) -> 2e
PCI->APIC IRQ transform: (B0,I4,P0) -> 2d
PCI->APIC IRQ transform: (B0,I5,P0) -> 2c
PCI->APIC IRQ transform: (B1,I1,P0) -> 17
PCI->APIC IRQ transform: (B1,I15,P0) -> 38
PCI->APIC IRQ transform: (B2,I0,P0) -> 1b
PCI->APIC IRQ transform: (B2,I15,P0) -> 39
Linux NET4.0 for Linux 2.4
Based upon Swansea University Computer Society NET3.039
NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
NET4: Linux TCP/IP 1.0 for NET4.0
IP Protocols: ICMP, UDP, TCP, IGMP
IP: routing cache hash table of 16384 buckets, 256Kbytes
TCP: Hash tables configured (established 131072 bind 65536)
Initializing RT netlink socket
PAL Information Facility v0.3
Starting kswapd v1.7
pty: 256 Unix98 ptys configured
RAMDISK driver initialized: 16 RAM disks of 8192K size 1024 blocksize
loop: registered device at major 7
loop: enabling 8 loop devices
Uniform Multi-Platform E-IDE driver Revision: 6.31
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
PIIX4: IDE controller on PCI bus 00 dev 19
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
ide1: BM-DMA at 0x1098-0x109f, BIOS settings: hdc:pio, hdd:pio
hdc: LTN483, ATAPI CDROM drive
hdd: LS-120 F200 08 UHD Floppy, ATAPI FLOPPY drive
ide1 at 0x170-0x177,0x376 on irq 65
hdc: ATAPI 48X CD-ROM drive, 120kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.11
hdd: 123264kB, 963/8/32 CHS, 533 kBps, 512 sector size, 720 rpm
md driver 0.90.0 MAX_MD_DEVS=256, MAX_REAL=12
linear personality registered
raid0 personality registered
raid1 personality registered
raid5 personality registered
raid5: measuring checksumming speed
8regs : 712.448 MB/sec
32regs : 1149.632 MB/sec
using fastest function: 32regs (1149.632 MB/sec)
md.c: sizeof(mdp_super_t) = 4096
LVM version 0.8final by Heinz Mauelshagen (15/02/2000)
lvm -- Driver successfully initialized
qla1280: detect() found an HBA
qla1280: VID=1077 DID=1280 SSVID=1077 SSDID=6
scsi(0): Determining if RISC is loaded...
scsi(0): Verifying chip...
scsi(0): Setup chip...
scsi(0): Configure NVRAM parameters...
scsi(0): Resetting SCSI BUS (0)
scsi(0): Resetting SCSI BUS (1)
scsi0 : QLogic QLA1280 PCI to SCSI Host Adapter: bus 1 device 1 irq 23
Firmware version: 8.13.08, Driver version 3.16 Beta
----
#
# Automatically generated make config: don't edit
#
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
#
# Loadable module support
#
CONFIG_MODULES=y
CONFIG_MODVERSIONS=y
CONFIG_KMOD=y
#
# General setup
#
CONFIG_IA64=y
CONFIG_SWIOTLB=y
# CONFIG_ISA is not set
# CONFIG_SBUS is not set
# CONFIG_IA64_GENERIC is not set
CONFIG_IA64_DIG=y
# CONFIG_IA64_HP_SIM is not set
# CONFIG_IA64_SGI_SN1_SIM is not set
# CONFIG_IA64_PAGE_SIZE_4KB is not set
# CONFIG_IA64_PAGE_SIZE_8KB is not set
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_ITANIUM=y
CONFIG_IA64_BRL_EMU=y
# CONFIG_ITANIUM_ASTEP_SPECIFIC is not set
CONFIG_ITANIUM_BSTEP_SPECIFIC=y
# CONFIG_ITANIUM_B0_SPECIFIC is not set
# CONFIG_IA64_HAVE_IRQREDIR is not set
# CONFIG_ITANIUM_PTCG is not set
# CONFIG_IA64_SOFTSDV_HACKS is not set
# CONFIG_IA64_AZUSA_HACKS is not set
CONFIG_IA64_MCA=y
CONFIG_SKB_BELOW_4GB=y
CONFIG_ACPI_KERNEL_CONFIG=y
CONFIG_PM=y
CONFIG_ACPI=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_KCORE_ELF=y
# CONFIG_SMP is not set
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
CONFIG_NET=y
CONFIG_SYSVIPC=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_SYSCTL=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PCI=y
CONFIG_PCI_NAMES=y
CONFIG_HOTPLUG=y
#
# PCMCIA/CardBus support
#
# CONFIG_PCMCIA is not set
#
# Parallel port support
#
# CONFIG_PARPORT is not set
#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_NETLINK=y
# CONFIG_RTNETLINK is not set
CONFIG_NETLINK_DEV=y
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
CONFIG_FILTER=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
# CONFIG_INET_ECN is not set
CONFIG_SYN_COOKIES=y
#
# IP: Netfilter Configuration
#
CONFIG_IP_NF_CONNTRACK=m
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_LIMIT=m
CONFIG_IP_NF_MATCH_MAC=m
CONFIG_IP_NF_MATCH_MARK=m
CONFIG_IP_NF_MATCH_MULTIPORT=m
CONFIG_IP_NF_MATCH_TOS=m
CONFIG_IP_NF_MATCH_STATE=m
CONFIG_IP_NF_MATCH_UNCLEAN=m
CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_MIRROR=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_TOS=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_COMPAT_IPCHAINS=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_COMPAT_IPFWADM=m
CONFIG_IP_NF_NAT_NEEDED=y
# CONFIG_IPV6 is not set
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
#
#
#
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_DECNET is not set
# CONFIG_BRIDGE is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_LLC is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_FASTROUTE is not set
# CONFIG_NET_HW_FLOWCONTROL is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
#
# Memory Technology Devices (MTD)
#
# CONFIG_MTD is not set
#
# Plug and Play configuration
#
# CONFIG_PNP is not set
# CONFIG_ISAPNP is not set
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_XD is not set
# CONFIG_PARIDE is not set
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_LVM=y
CONFIG_LVM_PROC_FS=y
# CONFIG_BLK_DEV_MD is not set
# CONFIG_MD_LINEAR is not set
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID5 is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=8192
CONFIG_BLK_DEV_INITRD=y
#
# I2O device support
#
# CONFIG_I2O is not set
# CONFIG_I2O_PCI is not set
# CONFIG_I2O_BLOCK is not set
# CONFIG_I2O_LAN is not set
# CONFIG_I2O_SCSI is not set
# CONFIG_I2O_PROC is not set
#
# ATA/IDE/MFM/RLL support
#
CONFIG_IDE=y
#
# IDE, ATA and ATAPI Block devices
#
CONFIG_BLK_DEV_IDE=y
#
# Please see Documentation/ide.txt for help/info on IDE drives
#
# CONFIG_BLK_DEV_HD_IDE is not set
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set
# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set
# CONFIG_BLK_DEV_IDEDISK_IBM is not set
# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set
# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set
# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set
# CONFIG_BLK_DEV_IDEDISK_WD is not set
# CONFIG_BLK_DEV_COMMERIAL is not set
# CONFIG_BLK_DEV_TIVO is not set
# CONFIG_BLK_DEV_IDECS is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
CONFIG_BLK_DEV_IDEFLOPPY=y
# CONFIG_BLK_DEV_IDESCSI is not set
#
# IDE chipset support/bugfixes
#
# CONFIG_BLK_DEV_CMD640 is not set
# CONFIG_BLK_DEV_CMD640_ENHANCED is not set
# CONFIG_BLK_DEV_ISAPNP is not set
# CONFIG_BLK_DEV_RZ1000 is not set
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_IDEPCI_SHARE_IRQ=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_IDEDMA_PCI_AUTO is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_PCI_WIP is not set
# CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set
# CONFIG_BLK_DEV_AEC62XX is not set
# CONFIG_AEC62XX_TUNING is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_WDC_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD7409 is not set
# CONFIG_AMD7409_OVERRIDE is not set
# CONFIG_BLK_DEV_CMD64X is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5530 is not set
# CONFIG_BLK_DEV_HPT34X is not set
# CONFIG_HPT34X_AUTODMA is not set
# CONFIG_BLK_DEV_HPT366 is not set
CONFIG_BLK_DEV_PIIX=y
# CONFIG_PIIX_TUNING is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX is not set
# CONFIG_PDC202XX_BURST is not set
# CONFIG_BLK_DEV_SIS5513 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_IDE_CHIPSETS is not set
# CONFIG_IDEDMA_AUTO is not set
# CONFIG_IDEDMA_IVB is not set
# CONFIG_DMA_NONPCI is not set
CONFIG_BLK_DEV_IDE_MODES=y
#
# SCSI support
#
CONFIG_SCSI=y
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_SD_EXTRA_DEVS=40
CONFIG_CHR_DEV_ST=y
CONFIG_BLK_DEV_SR=y
# CONFIG_BLK_DEV_SR_VENDOR is not set
CONFIG_SR_EXTRA_DEVS=2
CONFIG_CHR_DEV_SG=y
#
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
#
CONFIG_SCSI_DEBUG_QUEUES=y
# CONFIG_SCSI_MULTI_LUN is not set
CONFIG_SCSI_CONSTANTS=y
# CONFIG_SCSI_LOGGING is not set
#
# SCSI low-level drivers
#
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_7000FASST is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AHA152X is not set
# CONFIG_SCSI_AHA1542 is not set
# CONFIG_SCSI_AHA1740 is not set
CONFIG_SCSI_AIC7XXX=y
# CONFIG_AIC7XXX_TCQ_ON_BY_DEFAULT is not set
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
# CONFIG_AIC7XXX_PROC_STATS is not set
CONFIG_AIC7XXX_RESET_DELAY=5
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_IN2000 is not set
# CONFIG_SCSI_AM53C974 is not set
CONFIG_SCSI_MEGARAID=y
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_DTC3280 is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_EATA_DMA is not set
# CONFIG_SCSI_EATA_PIO is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_GENERIC_NCR5380 is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_NCR53C406A is not set
# CONFIG_SCSI_SYM53C416 is not set
# CONFIG_SCSI_SIM710 is not set
# CONFIG_SCSI_NCR53C7xx is not set
# CONFIG_SCSI_NCR53C8XX is not set
# CONFIG_SCSI_SYM53C8XX is not set
# CONFIG_SCSI_PAS16 is not set
# CONFIG_SCSI_PCI2000 is not set
# CONFIG_SCSI_PCI2220I is not set
# CONFIG_SCSI_PSI240I is not set
# CONFIG_SCSI_QLOGIC_FAS is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
CONFIG_SCSI_QLOGIC_1280=y
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_T128 is not set
# CONFIG_SCSI_U14_34F is not set
# CONFIG_SCSI_DEBUG is not set
#
# Network device support
#
CONFIG_NETDEVICES=y
#
# ARCnet devices
#
# CONFIG_ARCNET is not set
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
# CONFIG_ETHERTAP is not set
# CONFIG_NET_SB1000 is not set
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_LANCE is not set
# CONFIG_NET_VENDOR_SMC is not set
# CONFIG_NET_VENDOR_RACAL is not set
# CONFIG_AT1700 is not set
# CONFIG_DEPCA is not set
# CONFIG_NET_ISA is not set
CONFIG_NET_PCI=y
# CONFIG_PCNET32 is not set
# CONFIG_ADAPTEC_STARFIRE is not set
# CONFIG_AC3200 is not set
# CONFIG_APRICOT is not set
# CONFIG_CS89x0 is not set
# CONFIG_DE4X5 is not set
CONFIG_TULIP=y
# CONFIG_DGRS is not set
# CONFIG_DM9102 is not set
CONFIG_EEPRO100=y
# CONFIG_EEPRO100_PM is not set
# CONFIG_LNE390 is not set
# CONFIG_NATSEMI is not set
# CONFIG_NE2K_PCI is not set
# CONFIG_NE3210 is not set
# CONFIG_RTL8129 is not set
# CONFIG_8139TOO is not set
# CONFIG_SIS900 is not set
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
# CONFIG_ES3210 is not set
# CONFIG_EPIC100 is not set
# CONFIG_NET_POCKET is not set
#
# Ethernet (1000 Mbit)
#
# CONFIG_YELLOWFIN is not set
CONFIG_ACENIC=y
# CONFIG_ACENIC_OMIT_TIGON_I is not set
# CONFIG_SK98LIN is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
#
# Wireless LAN (non-hamradio)
#
# CONFIG_NET_RADIO is not set
#
# Token Ring devices
#
# CONFIG_TR is not set
# CONFIG_NET_FC is not set
# CONFIG_RCPCI is not set
# CONFIG_SHAPER is not set
#
# Wan interfaces
#
# CONFIG_WAN is not set
#
# Amateur Radio support
#
# CONFIG_HAMRADIO is not set
#
# ISDN subsystem
#
# CONFIG_ISDN is not set
#
# CD-ROM drivers (not for SCSI or IDE/ATAPI drives)
#
# CONFIG_CD_NO_IDESCSI is not set
#
# Input core support
#
# CONFIG_INPUT is not set
#
# Character devices
#
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_SERIAL=y
CONFIG_SERIAL_CONSOLE=y
# CONFIG_SERIAL_EXTENDED is not set
# CONFIG_SERIAL_NONSTANDARD is not set
CONFIG_UNIX98_PTYS=y
CONFIG_UNIX98_PTY_COUNT=256
#
# I2C support
#
# CONFIG_I2C is not set
#
# Mice
#
# CONFIG_BUSMOUSE is not set
CONFIG_MOUSE=y
CONFIG_PSMOUSE=y
# CONFIG_82C710_MOUSE is not set
# CONFIG_PC110_PAD is not set
#
# Joysticks
#
# CONFIG_JOYSTICK is not set
#
# Input core support is needed for joysticks
#
# CONFIG_QIC02_TAPE is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_INTEL_RNG is not set
# CONFIG_NVRAM is not set
# CONFIG_RTC is not set
CONFIG_EFI_RTC=y
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_FTAPE is not set
CONFIG_AGP=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_I810=y
CONFIG_AGP_VIA=y
CONFIG_AGP_AMD=y
CONFIG_AGP_SIS=y
CONFIG_AGP_ALI=y
CONFIG_DRM=y
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_GAMMA is not set
# CONFIG_DRM_R128 is not set
# CONFIG_DRM_I810 is not set
# CONFIG_DRM_MGA is not set
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
#
# File systems
#
# CONFIG_QUOTA is not set
# CONFIG_AUTOFS_FS is not set
CONFIG_AUTOFS4_FS=m
# CONFIG_ADFS_FS is not set
# CONFIG_ADFS_FS_RW is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_BFS_FS is not set
CONFIG_FAT_FS=y
# CONFIG_MSDOS_FS is not set
# CONFIG_UMSDOS_FS is not set
CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
# CONFIG_JFFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_RAMFS is not set
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
# CONFIG_MINIX_FS is not set
# CONFIG_NTFS_FS is not set
# CONFIG_NTFS_RW is not set
# CONFIG_HPFS_FS is not set
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVFS_MOUNT is not set
# CONFIG_DEVFS_DEBUG is not set
CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX4FS_RW is not set
CONFIG_ROMFS_FS=y
CONFIG_EXT2_FS=y
# CONFIG_SYSV_FS is not set
# CONFIG_SYSV_FS_WRITE is not set
# CONFIG_UDF_FS is not set
# CONFIG_UDF_RW is not set
# CONFIG_UFS_FS is not set
# CONFIG_UFS_FS_WRITE is not set
#
# Network File Systems
#
# CONFIG_CODA_FS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
# CONFIG_ROOT_NFS is not set
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
CONFIG_SUNRPC=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
# CONFIG_SMB_FS is not set
# CONFIG_NCP_FS is not set
# CONFIG_NCPFS_PACKET_SIGNING is not set
# CONFIG_NCPFS_IOCTL_LOCKING is not set
# CONFIG_NCPFS_STRONG is not set
# CONFIG_NCPFS_NFS_NS is not set
# CONFIG_NCPFS_OS2_NS is not set
# CONFIG_NCPFS_SMALLDOS is not set
# CONFIG_NCPFS_MOUNT_SUBDIR is not set
# CONFIG_NCPFS_NDS_DOMAINS is not set
# CONFIG_NCPFS_NLS is not set
# CONFIG_NCPFS_EXTRAS is not set
#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_NLS=y
#
# Native Language Support
#
CONFIG_NLS_DEFAULT="iso8859-1"
CONFIG_NLS_CODEPAGE_437=m
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_14 is not set
CONFIG_NLS_ISO8859_15=m
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_UTF8 is not set
#
# Console drivers
#
CONFIG_VGA_CONSOLE=y
#
# Frame-buffer support
#
# CONFIG_FB is not set
#
# Sound
#
CONFIG_SOUND=y
CONFIG_SOUND_CMPCI=y
CONFIG_SOUND_CMPCI_SPDIFLOOP=y
CONFIG_SOUND_CMPCI_4CH=y
CONFIG_SOUND_CMPCI_REAR=y
CONFIG_SOUND_EMU10K1=y
# CONFIG_SOUND_FUSION is not set
CONFIG_SOUND_ES1370=y
CONFIG_SOUND_ES1371=y
CONFIG_SOUND_ESSSOLO1=y
CONFIG_SOUND_MAESTRO=y
CONFIG_SOUND_SONICVIBES=y
CONFIG_SOUND_TRIDENT=y
# CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_SOUND_MSNDPIN is not set
# CONFIG_SOUND_VIA82CXXX is not set
# CONFIG_SOUND_OSS is not set
# CONFIG_SOUND_TVMIXER is not set
#
# USB support
#
CONFIG_USB=y
CONFIG_USB_DEBUG=y
#
# Miscellaneous USB options
#
# CONFIG_USB_DEVICEFS is not set
# CONFIG_USB_BANDWIDTH is not set
#
# USB Controllers
#
CONFIG_USB_UHCI_ALT=y
# CONFIG_USB_OHCI is not set
#
# USB Devices
#
CONFIG_USB_PRINTER=y
CONFIG_USB_SCANNER=y
CONFIG_USB_MICROTEK=y
CONFIG_USB_AUDIO=y
CONFIG_USB_ACM=y
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_VISOR=y
CONFIG_USB_SERIAL_WHITEHEAT=y
CONFIG_USB_SERIAL_FTDI_SIO=y
CONFIG_USB_SERIAL_KEYSPAN_PDA=y
CONFIG_USB_SERIAL_KEYSPAN=y
# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=y
CONFIG_USB_SERIAL_OMNINET=y
CONFIG_USB_SERIAL_DEBUG=y
# CONFIG_USB_IBMCAM is not set
# CONFIG_USB_OV511 is not set
CONFIG_USB_DC2XX=y
CONFIG_USB_MDC800=y
CONFIG_USB_STORAGE=y
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_USS720 is not set
CONFIG_USB_DABUSB=y
CONFIG_USB_PLUSB=y
CONFIG_USB_PEGASUS=y
CONFIG_USB_RIO500=y
# CONFIG_USB_DSBR is not set
CONFIG_USB_BLUETOOTH=y
#
# USB Human Interface Devices (HID)
#
#
# Input core support is needed for USB HID
#
#
# Kernel hacking
#
CONFIG_IA32_SUPPORT=y
CONFIG_MATHEMU=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_IA64_EARLY_PRINTK=y
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
CONFIG_IA64_PRINT_HAZARDS=y
# CONFIG_IA64_NEW_UNWIND is not set
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (8 preceding siblings ...)
2000-09-09 19:07 ` H . J . Lu
@ 2000-09-09 20:49 ` David Mosberger
2000-09-09 21:25 ` Uros Prestor
` (205 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-09-09 20:49 UTC (permalink / raw)
To: linux-ia64
>>>>> On Sat, 9 Sep 2000 12:07:44 -0700, "H . J . Lu" <hjl@valinux.com> said:
HJ> I tried it on my UP Big Sur with BIOS Build 59 and B1 stepping
HJ> CPU. It didn't work. I got
HJ> Unexpected irq vector 0x0 on CPU 0!
We also do not have any B1 Big Sur, so it's not something I can test.
However, the "Unexpected irq" message is normal; it even appears in the boot log
you included.
HJ> as soon as it booted. I am enclosing my .config here. BTW, the
HJ> kernel from the 0828 Turbo Linux booted fine. I am enclosing the
HJ> boot message here also. Any ideas?
You didn't describe how the new kernel failed, so there isn't much I
can do.
--david
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (9 preceding siblings ...)
2000-09-09 20:49 ` David Mosberger
@ 2000-09-09 21:25 ` Uros Prestor
2000-09-09 21:33 ` H . J . Lu
` (204 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Uros Prestor @ 2000-09-09 21:25 UTC (permalink / raw)
To: linux-ia64
David Mosberger wrote:
> HJ> as soon as it booted. I am enclosing my .config here. BTW, the
> HJ> kernel from the 0828 Turbo Linux booted fine. I am enclosing the
> HJ> boot message here also. Any ideas?
>
> You didn't describe how the new kernel failed, so there isn't much I
> can do.
The kernel doesn't get to print anything. I will recompile with early printk
enabled to see if I get any more output, but basically as soon as lilo passes
control to the kernel, the system locks up. I was only booting a UP kernel even
though I am running a 2xB1 Big Sur.
I tried both BIOS 60 and BIOS 70 -- test8 locks up on both. Additionally,
I noticed that when booting a SMP test7 kernel under BIOS 70 the kernel
recognizes only one processor. I'm downgrading back to BIOS 60.
Uros
--
Uros Prestor
uros@turbolinux.com
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (10 preceding siblings ...)
2000-09-09 21:25 ` Uros Prestor
@ 2000-09-09 21:33 ` H . J . Lu
2000-09-09 21:45 ` David Mosberger
` (203 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: H . J . Lu @ 2000-09-09 21:33 UTC (permalink / raw)
To: linux-ia64
On Sat, Sep 09, 2000 at 01:49:53PM -0700, David Mosberger wrote:
> >>>>> On Sat, 9 Sep 2000 12:07:44 -0700, "H . J . Lu" <hjl@valinux.com> said:
>
> HJ> I tried it on my UP Big Sur with BIOS Build 59 and B1 stepping
> HJ> CPU. It didn't work. I got
>
> HJ> Unexpected irq vector 0x0 on CPU 0!
>
> We also do not have any B1 Big Sur, so it's not something I can test.
> However, the "Unexpected irq" is normal, it's even in the bootlog you
> included.
>
> HJ> as soon as it booted. I am enclosing my .config here. BTW, the
> HJ> kernel from the 0828 Turbo Linux booted fine. I am enclosing the
> HJ> boot message here also. Any ideas?
>
> You didn't describe how the new kernel failed, so there isn't much I
> can do.
It keeps printing
Unexpected irq vector 0x0 on CPU 0!
and nothing else. I don't think boot is finished.
--
H.J. Lu (hjl@gnu.org)
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (11 preceding siblings ...)
2000-09-09 21:33 ` H . J . Lu
@ 2000-09-09 21:45 ` David Mosberger
2000-09-09 21:49 ` H . J . Lu
` (202 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-09-09 21:45 UTC (permalink / raw)
To: linux-ia64
>>>>> On Sat, 9 Sep 2000 14:33:01 -0700, "H . J . Lu" <hjl@valinux.com> said:
HJ> It keeps printing
HJ> Unexpected irq vector 0x0 on CPU 0!
HJ> and nothing else. I don't think boot is finished.
Do you happen to have any USB devices plugged in? If so, try
unplugging them.
Also, did you make sure you ran "make dep" before building the kernel
to ensure include/asm-ia64/offsets.h is up-to-date?
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (12 preceding siblings ...)
2000-09-09 21:45 ` David Mosberger
@ 2000-09-09 21:49 ` H . J . Lu
2000-09-10 0:17 ` David Mosberger
` (201 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: H . J . Lu @ 2000-09-09 21:49 UTC (permalink / raw)
To: linux-ia64
On Sat, Sep 09, 2000 at 02:45:15PM -0700, David Mosberger wrote:
> >>>>> On Sat, 9 Sep 2000 14:33:01 -0700, "H . J . Lu" <hjl@valinux.com> said:
>
> HJ> It keeps printing
>
> HJ> Unexpected irq vector 0x0 on CPU 0!
>
> HJ> and nothing else. I don't think boot is finished.
>
> Do you happen to have any USB devices plugged in? If so, try
> unplugging them.
I don't have any USB devices.
>
> Also, did you make sure you ran "make dep" before building the kernel
> to ensure include/asm-ia64/offsets.h is up-to-date?
>
Yes, I did "make dep". What should include/asm-ia64/offsets.h look
like? That may explain many things.
--
H.J. Lu (hjl@gnu.org)
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (13 preceding siblings ...)
2000-09-09 21:49 ` H . J . Lu
@ 2000-09-10 0:17 ` David Mosberger
2000-09-10 0:24 ` Uros Prestor
` (200 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-09-10 0:17 UTC (permalink / raw)
To: linux-ia64
>>>>> On Sat, 9 Sep 2000 14:49:52 -0700, "H . J . Lu" <hjl@valinux.com> said:
HJ> Yes, I did "make dep". What should include/asm-ia64/offsets.h
HJ> look like? That may explain many things.
Well, it depends on your kernel configuration. If you did a "make
dep", you should have the right values in there.
I'm not sure why the new kernel isn't working for you. Nothing has
changed in the kernel that would explain an infinite stream of irq 0.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (14 preceding siblings ...)
2000-09-10 0:17 ` David Mosberger
@ 2000-09-10 0:24 ` Uros Prestor
2000-09-10 0:39 ` H . J . Lu
` (199 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Uros Prestor @ 2000-09-10 0:24 UTC (permalink / raw)
To: linux-ia64
David Mosberger wrote:
> >>>>> On Sat, 9 Sep 2000 14:49:52 -0700, "H . J . Lu" <hjl@valinux.com> said:
>
> HJ> Yes, I did "make dep". What should include/asm-ia64/offsets.h
> HJ> look like? That may explain many things.
>
> Well, it depends on your kernel configuration. If you did a "make
> dep", you should have the right values in there.
>
> I'm not sure why the new kernel isn't working for you. Nothing has
> changed in the kernel that would explain an infinite stream of irq 0.
Yep, that's what I am seeing now. I have disabled kdb and now the kernel boots;
I briefly see a panic dump and then an infinite stream of irq 0's. If you can
tell me a way of capturing the kernel output, I'll try to send you what I get.
Uros
--
Uros Prestor
uros@turbolinux.com
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (15 preceding siblings ...)
2000-09-10 0:24 ` Uros Prestor
@ 2000-09-10 0:39 ` H . J . Lu
2000-09-10 0:57 ` H . J . Lu
` (198 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: H . J . Lu @ 2000-09-10 0:39 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 1050 bytes --]
On Sat, Sep 09, 2000 at 05:17:12PM -0700, David Mosberger wrote:
> >>>>> On Sat, 9 Sep 2000 14:49:52 -0700, "H . J . Lu" <hjl@valinux.com> said:
>
> HJ> Yes, I did "make dep". What should include/asm-ia64/offsets.h
> HJ> look like? That may explain many things.
>
> Well, it depends on your kernel configuration. If you did a "make
> dep", you should have the right values in there.
>
> I'm not sure why the new kernel isn't working for you. Nothing has
> changed in the kernel that would explain an infinite stream of irq 0.
>
I am going through my build log. I found at least 2 problems:
1. In include/asm-ia64/processor.h, there is
# define loops_per_sec() loops_per_sec
It breaks when symbol versioning is enabled.
2. ACPI_RSDT_SIG is defined both as a string and an integer in
2 different header files. In include/linux/acpi.h
#define ACPI_RSDT_SIG 0x54445352 /* 'RSDT' */
In include/asm-ia64/acpi-ext.h
#define ACPI_RSDT_SIG "RSDT"
It is very confusing.
I am enclosing 2 patches. Please take a look.
Thanks.
H.J.
[-- Attachment #2: macro.patch --]
[-- Type: text/plain, Size: 1010 bytes --]
--- linux/arch/ia64/kernel/setup.c.macro Sat Sep 9 17:20:57 2000
+++ linux/arch/ia64/kernel/setup.c Sat Sep 9 17:21:25 2000
@@ -320,7 +320,7 @@ get_cpuinfo (char *buffer)
features,
c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000,
c->itc_freq / 1000000, c->itc_freq % 1000000,
- loops_per_sec() / 500000, (loops_per_sec() / 5000) % 100);
+ ia64_loops_per_sec() / 500000, (ia64_loops_per_sec() / 5000) % 100);
}
return p - buffer;
}
--- linux/include/asm-ia64/processor.h.macro Sat Sep 9 17:20:05 2000
+++ linux/include/asm-ia64/processor.h Sat Sep 9 17:20:27 2000
@@ -253,9 +253,9 @@ struct cpuinfo_ia64 {
#define my_cpu_data cpu_data[smp_processor_id()]
#ifdef CONFIG_SMP
-# define loops_per_sec() my_cpu_data.loops_per_sec
+# define ia64_loops_per_sec() my_cpu_data.loops_per_sec
#else
-# define loops_per_sec() loops_per_sec
+# define ia64_loops_per_sec() loops_per_sec
#endif
extern struct cpuinfo_ia64 cpu_data[NR_CPUS];
[-- Attachment #3: string.patch --]
[-- Type: text/plain, Size: 1679 bytes --]
--- linux/arch/ia64/kernel/acpi.c.string Sat Sep 9 17:30:07 2000
+++ linux/arch/ia64/kernel/acpi.c Sat Sep 9 17:31:11 2000
@@ -233,7 +233,7 @@ acpi_parse(acpi_rsdp_t *rsdp)
rsdp->rsdt = __va(rsdp->rsdt);
rsdt = rsdp->rsdt;
- if (strncmp(rsdt->header.signature, ACPI_RSDT_SIG, ACPI_RSDT_SIG_LEN)) {
+ if (strncmp(rsdt->header.signature, ACPI_RSDT_SIG_STR, ACPI_RSDT_SIG_LEN)) {
printk("Uh-oh, ACPI RDST signature incorrect!\n");
return 0;
}
@@ -250,7 +250,7 @@ acpi_parse(acpi_rsdp_t *rsdp)
hdrp = (acpi_desc_table_hdr_t *) __va(rsdt->entry_ptrs[i]);
/* Only interested int the MSAPIC table for now ... */
- if (strncmp(hdrp->signature, ACPI_SAPIC_SIG, ACPI_SAPIC_SIG_LEN) != 0)
+ if (strncmp(hdrp->signature, ACPI_SAPIC_SIG_STR, ACPI_SAPIC_SIG_LEN) != 0)
continue;
acpi_parse_msapic((acpi_sapic_t *) hdrp);
--- linux/include/asm-ia64/acpi-ext.h.string Sat Sep 9 17:27:38 2000
+++ linux/include/asm-ia64/acpi-ext.h Sat Sep 9 17:28:12 2000
@@ -12,7 +12,7 @@
#include <linux/types.h>
-#define ACPI_RSDP_SIG "RSD PTR " /* Trailing space required */
+#define ACPI_RSDP_SIG_STR "RSD PTR " /* Trailing space required */
#define ACPI_RSDP_SIG_LEN 8
typedef struct {
char signature[8];
@@ -35,14 +35,14 @@ typedef struct {
char reserved[4];
} acpi_desc_table_hdr_t;
-#define ACPI_RSDT_SIG "RSDT"
+#define ACPI_RSDT_SIG_STR "RSDT"
#define ACPI_RSDT_SIG_LEN 4
typedef struct acpi_rsdt {
acpi_desc_table_hdr_t header;
unsigned long entry_ptrs[1]; /* Not really . . . */
} acpi_rsdt_t;
-#define ACPI_SAPIC_SIG "SPIC"
+#define ACPI_SAPIC_SIG_STR "SPIC"
#define ACPI_SAPIC_SIG_LEN 4
typedef struct {
acpi_desc_table_hdr_t header;
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (16 preceding siblings ...)
2000-09-10 0:39 ` H . J . Lu
@ 2000-09-10 0:57 ` H . J . Lu
2000-09-10 15:47 ` H . J . Lu
` (197 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: H . J . Lu @ 2000-09-10 0:57 UTC (permalink / raw)
To: linux-ia64
On Sat, Sep 09, 2000 at 05:39:26PM -0700, H . J . Lu wrote:
> On Sat, Sep 09, 2000 at 05:17:12PM -0700, David Mosberger wrote:
> > >>>>> On Sat, 9 Sep 2000 14:49:52 -0700, "H . J . Lu" <hjl@valinux.com> said:
> >
> > HJ> Yes, I did "make dep". What should include/asm-ia64/offsets.h
> > HJ> look like? That may explain many things.
> >
> > Well, it depends on your kernel configuration. If you did a "make
> > dep", you should have the right values in there.
> >
> > I'm not sure why the new kernel isn't working for you. Nothing has
> > changed in the kernel that would explain an infinite stream of irq 0.
> >
>
> I am going through my build log. I found at least 2 problems:
>
> 1. In include/asm-ia64/processor.h, there is
>
> # define loops_per_sec() loops_per_sec
>
> It breaks when symbol versioning is enabled.
>
> 2. ACPI_RSDT_SIG is defined both as a string and an integer in
> 2 different header files. In include/linux/acpi.h
>
> #define ACPI_RSDT_SIG 0x54445352 /* 'RSDT' */
>
> In include/asm-ia64/acpi-ext.h
>
> #define ACPI_RSDT_SIG "RSDT"
>
> It is very confusing.
>
> I am enclosing 2 patches. Please take a look.
>
Oops. The string patch missed one hunk. Here is the new
string.patch.
H.J.
---
--- linux/arch/ia64/kernel/acpi.c.string Sat Sep 9 17:54:02 2000
+++ linux/arch/ia64/kernel/acpi.c Sat Sep 9 17:31:11 2000
@@ -226,14 +226,14 @@ acpi_parse(acpi_rsdp_t *rsdp)
return 0;
}
- if (strncmp(rsdp->signature, ACPI_RSDP_SIG, ACPI_RSDP_SIG_LEN)) {
+ if (strncmp(rsdp->signature, ACPI_RSDP_SIG_STR, ACPI_RSDP_SIG_LEN)) {
printk("Uh-oh, ACPI RSDP signature incorrect!\n");
return 0;
}
rsdp->rsdt = __va(rsdp->rsdt);
rsdt = rsdp->rsdt;
- if (strncmp(rsdt->header.signature, ACPI_RSDT_SIG, ACPI_RSDT_SIG_LEN)) {
+ if (strncmp(rsdt->header.signature, ACPI_RSDT_SIG_STR, ACPI_RSDT_SIG_LEN)) {
printk("Uh-oh, ACPI RDST signature incorrect!\n");
return 0;
}
@@ -250,7 +250,7 @@ acpi_parse(acpi_rsdp_t *rsdp)
hdrp = (acpi_desc_table_hdr_t *) __va(rsdt->entry_ptrs[i]);
/* Only interested int the MSAPIC table for now ... */
- if (strncmp(hdrp->signature, ACPI_SAPIC_SIG, ACPI_SAPIC_SIG_LEN) != 0)
+ if (strncmp(hdrp->signature, ACPI_SAPIC_SIG_STR, ACPI_SAPIC_SIG_LEN) != 0)
continue;
acpi_parse_msapic((acpi_sapic_t *) hdrp);
--- linux/include/asm-ia64/acpi-ext.h.string Sat Sep 9 17:27:38 2000
+++ linux/include/asm-ia64/acpi-ext.h Sat Sep 9 17:28:12 2000
@@ -12,7 +12,7 @@
#include <linux/types.h>
-#define ACPI_RSDP_SIG "RSD PTR " /* Trailing space required */
+#define ACPI_RSDP_SIG_STR "RSD PTR " /* Trailing space required */
#define ACPI_RSDP_SIG_LEN 8
typedef struct {
char signature[8];
@@ -35,14 +35,14 @@ typedef struct {
char reserved[4];
} acpi_desc_table_hdr_t;
-#define ACPI_RSDT_SIG "RSDT"
+#define ACPI_RSDT_SIG_STR "RSDT"
#define ACPI_RSDT_SIG_LEN 4
typedef struct acpi_rsdt {
acpi_desc_table_hdr_t header;
unsigned long entry_ptrs[1]; /* Not really . . . */
} acpi_rsdt_t;
-#define ACPI_SAPIC_SIG "SPIC"
+#define ACPI_SAPIC_SIG_STR "SPIC"
#define ACPI_SAPIC_SIG_LEN 4
typedef struct {
acpi_desc_table_hdr_t header;
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (17 preceding siblings ...)
2000-09-10 0:57 ` H . J . Lu
@ 2000-09-10 15:47 ` H . J . Lu
2000-09-14 1:50 ` David Mosberger
` (196 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: H . J . Lu @ 2000-09-10 15:47 UTC (permalink / raw)
To: linux-ia64
On Sat, Sep 09, 2000 at 05:24:29PM -0700, Uros Prestor wrote:
> David Mosberger wrote:
>
> > >>>>> On Sat, 9 Sep 2000 14:49:52 -0700, "H . J . Lu" <hjl@valinux.com> said:
> >
> > HJ> Yes, I did "make dep". What should include/asm-ia64/offsets.h
> > HJ> look like? That may explain many things.
> >
> > Well, it depends on your kernel configuration. If you did a "make
> > dep", you should have the right values in there.
> >
> > I'm not sure why the new kernel isn't working for you. Nothing has
> > changed in the kernel that would explain an infinite stream of irq 0.
>
> Yep, that's what I am seeing now. I have disabled kdb and now the kernel boots,
> I briefly see a panic dump and then an infinite stream of irq 0's. If you can
> tell me of a way of capturing the kernel output I'll try to send you what I get.
>
After applying my 2 patches to test8, the kernel boots, but the QLA1280
driver doesn't get the right IRQ. Also, there are no IOSAPIC or
PCI->APIC IRQ messages.
--
H.J. Lu (hjl@gnu.org)
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to v2.4.0-test8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (18 preceding siblings ...)
2000-09-10 15:47 ` H . J . Lu
@ 2000-09-14 1:50 ` David Mosberger
2000-10-05 19:01 ` [Linux-ia64] kernel update (relative to v2.4.0-test9) David Mosberger
` (195 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-09-14 1:50 UTC (permalink / raw)
To: linux-ia64
Since the last diff had a few serious problems, including the infamous
ACPI bug and an fph problem that caused xmms not to work on UP
machines, there is now a new diff available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.0-test8-ia64-000913.diff.
Summary of changes:
- Applied Asit's ACPI fix that prevented booting on some machines
(B1 Big Sur mostly, it seems).
- Applied HJ's patch to rename the loops_per_sec() macro to
ia64_loops_per_sec(). While looking at this code, I also realized
that the SMP code still didn't do per-CPU BogoMIPS calibration, so
I fixed that. Now, you too can have a machine with several
thousand BogoMIPS! ;-)
- Applied Mike Stephen's patch to add unwind support for modules.
This also cleans up the module interface. It touches the include
files of the other platforms, but the changes involved are trivial.
- Reapplied the kernel_thread() fix (don't know why the original
patch from SuSE got lost; my apologies).
- Fixed fph management for UP machines & cleaned up code some more.
This kernel is known to build and boot on the HP Ski simulator, Big
Sur, and SMP Lion. I tested the UP kernel extensively by listening to
xmms for hours (tough job, I know... ;-).
--david
diff -urN linux-davidm/arch/ia64/kernel/efi.c lia64/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/efi.c Wed Sep 13 14:11:45 2000
@@ -279,7 +279,7 @@
continue;
}
- printk(__FUNCTION__": CPU %d mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
+ printk("CPU %d: mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
smp_processor_id(), md->phys_addr, md->phys_addr + (md->num_pages << 12),
vaddr & mask, (vaddr & mask) + 256*1024*1024);
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c lia64/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/ia64_ksyms.c Wed Sep 13 13:39:26 2000
@@ -10,6 +10,7 @@
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL_NOVERS(memcpy);
EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(memscan);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
@@ -29,14 +30,21 @@
#include <linux/in6.h>
#include <asm/checksum.h>
+/* not coded yet?? EXPORT_SYMBOL(csum_ipv6_magic); */
EXPORT_SYMBOL(csum_partial_copy_nocheck);
EXPORT_SYMBOL(csum_tcpudp_magic);
EXPORT_SYMBOL(ip_compute_csum);
EXPORT_SYMBOL(ip_fast_csum);
+#include <asm/io.h>
+EXPORT_SYMBOL(__ia64_memcpy_fromio);
+EXPORT_SYMBOL(__ia64_memcpy_toio);
+EXPORT_SYMBOL(__ia64_memset_c_io);
+
#include <asm/irq.h>
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(disable_irq_nosync);
#include <asm/page.h>
EXPORT_SYMBOL(clear_page);
@@ -53,18 +61,27 @@
EXPORT_SYMBOL(cpu_data);
EXPORT_SYMBOL(kernel_thread);
+#include <asm/system.h>
+#ifdef CONFIG_IA64_DEBUG_IRQ
+EXPORT_SYMBOL(last_cli_ip);
+#endif
+
#ifdef CONFIG_SMP
+
+#include <asm/current.h>
#include <asm/hardirq.h>
EXPORT_SYMBOL(synchronize_irq);
#include <asm/smp.h>
EXPORT_SYMBOL(smp_call_function);
+
+#include <linux/smp.h>
EXPORT_SYMBOL(smp_num_cpus);
#include <asm/smplock.h>
EXPORT_SYMBOL(kernel_flag);
-#include <asm/system.h>
+/* #include <asm/system.h> */
EXPORT_SYMBOL(__global_sti);
EXPORT_SYMBOL(__global_cli);
EXPORT_SYMBOL(__global_save_flags);
@@ -74,6 +91,7 @@
#include <asm/uaccess.h>
EXPORT_SYMBOL(__copy_user);
+EXPORT_SYMBOL(__do_clear_user);
#include <asm/unistd.h>
EXPORT_SYMBOL(__ia64_syscall);
@@ -88,3 +106,4 @@
EXPORT_SYMBOL_NOVERS(__udivdi3);
EXPORT_SYMBOL_NOVERS(__moddi3);
EXPORT_SYMBOL_NOVERS(__umoddi3);
+
diff -urN linux-davidm/arch/ia64/kernel/process.c lia64/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/process.c Wed Sep 13 13:39:42 2000
@@ -495,14 +495,14 @@
kernel_thread (int (*fn)(void *), void *arg, unsigned long flags)
{
struct task_struct *parent = current;
- int result;
+ int result, tid;
- clone(flags | CLONE_VM, 0);
+ tid = clone(flags | CLONE_VM, 0);
if (parent != current) {
result = (*fn)(arg);
_exit(result);
}
- return 0; /* parent: just return */
+ return tid;
}
/*
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c lia64/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/ptrace.c Wed Sep 13 18:36:17 2000
@@ -543,32 +543,49 @@
child->thread.flags |= IA64_THREAD_KRBS_SYNCED;
}
-void
-ia64_flush_fph (struct task_struct *child)
+/*
+ * Write f32-f127 back to task->thread.fph if it has been modified.
+ */
+inline void
+ia64_flush_fph (struct task_struct *task)
{
- struct ia64_psr *psr = ia64_psr(ia64_task_regs(child));
+ struct ia64_psr *psr = ia64_psr(ia64_task_regs(task));
+#ifdef CONFIG_SMP
+ struct task_struct *fpu_owner = current;
+#else
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
+#endif
- if (psr->mfh) {
+ if (task == fpu_owner && psr->mfh) {
psr->mfh = 0;
-#ifndef CONFIG_SMP
- ia64_set_fpu_owner(0);
-#endif
- ia64_save_fpu(&child->thread.fph[0]);
- child->thread.flags |= IA64_THREAD_FPH_VALID;
+ ia64_save_fpu(&task->thread.fph[0]);
+ task->thread.flags |= IA64_THREAD_FPH_VALID;
}
}
/*
- * Ensure the state in child->thread.fph is up-to-date.
+ * Sync the fph state of the task so that it can be manipulated
+ * through thread.fph. If necessary, f32-f127 are written back to
+ * thread.fph or, if the fph state hasn't been used before, thread.fph
+ * is cleared to zeroes. Also, access to f32-f127 is disabled to
+ * ensure that the task picks up the state from thread.fph when it
+ * executes again.
*/
void
-ia64_sync_fph (struct task_struct *child)
+ia64_sync_fph (struct task_struct *task)
{
- ia64_flush_fph(child);
- if (!(child->thread.flags & IA64_THREAD_FPH_VALID)) {
- memset(&child->thread.fph, 0, sizeof(child->thread.fph));
- child->thread.flags |= IA64_THREAD_FPH_VALID;
+ struct ia64_psr *psr = ia64_psr(ia64_task_regs(task));
+
+ ia64_flush_fph(task);
+ if (!(task->thread.flags & IA64_THREAD_FPH_VALID)) {
+ task->thread.flags |= IA64_THREAD_FPH_VALID;
+ memset(&task->thread.fph, 0, sizeof(task->thread.fph));
}
+#ifndef CONFIG_SMP
+ if (ia64_get_fpu_owner() == task)
+ ia64_set_fpu_owner(0);
+#endif
+ psr->dfh = 1;
}
#ifdef CONFIG_IA64_NEW_UNWIND
@@ -611,7 +628,10 @@
if (addr < PT_F127 + 16) {
/* accessing fph */
- ia64_sync_fph(child);
+ if (write_access)
+ ia64_sync_fph(child);
+ else
+ ia64_flush_fph(child);
ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
} else if (addr >= PT_F10 && addr < PT_F15 + 16) {
/* scratch registers untouched by kernel (saved in switch_stack) */
@@ -808,7 +828,7 @@
static int
access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
{
- unsigned long *ptr, *rbs, *bspstore, ndirty, regnum;
+ unsigned long *ptr = NULL, *rbs, *bspstore, ndirty, regnum;
struct switch_stack *sw;
struct pt_regs *pt;
@@ -817,7 +837,10 @@
if (addr < PT_F127+16) {
/* accessing fph */
- ia64_sync_fph(child);
+ if (write_access)
+ ia64_sync_fph(child);
+ else
+ ia64_flush_fph(child);
ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
} else if (addr < PT_F9+16) {
/* accessing switch_stack or pt_regs: */
diff -urN linux-davidm/arch/ia64/kernel/setup.c lia64/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/setup.c Wed Sep 13 13:40:08 2000
@@ -320,7 +320,7 @@
features,
c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000,
c->itc_freq / 1000000, c->itc_freq % 1000000,
- loops_per_sec() / 500000, (loops_per_sec() / 5000) % 100);
+ ia64_loops_per_sec() / 500000, (ia64_loops_per_sec() / 5000) % 100);
}
return p - buffer;
}
@@ -382,8 +382,8 @@
#endif
phys_addr_size = vm1.pal_vm_info_1_s.phys_add_size;
}
- printk("processor implements %lu virtual and %lu physical address bits\n",
- impl_va_msb + 1, phys_addr_size);
+ printk("CPU %d: %lu virtual and %lu physical address bits\n",
+ smp_processor_id(), impl_va_msb + 1, phys_addr_size);
c->unimpl_va_mask = ~((7L<<61) | ((1L << (impl_va_msb + 1)) - 1));
c->unimpl_pa_mask = ~((1L<<63) | ((1L << phys_addr_size) - 1));
diff -urN linux-davidm/arch/ia64/kernel/signal.c lia64/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/signal.c Wed Sep 13 13:40:38 2000
@@ -147,12 +147,14 @@
ia64_put_nat_bits(&scr->pt, &scr->sw, nat); /* restore the original scratch NaT bits */
#endif
- if ((flags & IA64_SC_FLAG_FPH_VALID)) {
- struct ia64_psr *psr = ia64_psr(ia64_task_regs(current));
+ if ((flags & IA64_SC_FLAG_FPH_VALID) != 0) {
+ struct ia64_psr *psr = ia64_psr(&scr->pt);
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
- if (!psr->dfh)
+ if (!psr->dfh) {
+ psr->mfh = 0;
__ia64_load_fpu(current->thread.fph);
+ }
}
return err;
}
diff -urN linux-davidm/arch/ia64/kernel/smp.c lia64/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/smp.c Wed Sep 13 13:41:24 2000
@@ -6,6 +6,7 @@
*
* Lots of stuff stolen from arch/alpha/kernel/smp.c
*
+ * 00/09/11 David Mosberger <davidm@hpl.hp.com> Do loops_per_sec calibration on each CPU.
* 00/08/23 Asit Mallick <asit.k.mallick@intel.com> fixed logical processor id
* 00/03/31 Rohit Seth <rohit.seth@intel.com> Fixes for Bootstrap Processor & cpu_online_map
* now gets done here (instead of setup.c)
@@ -41,6 +42,7 @@
#include <asm/system.h>
#include <asm/unistd.h>
+extern void __init calibrate_delay(void);
extern int cpu_idle(void * unused);
extern void _start(void);
extern void machine_halt(void);
@@ -58,9 +60,13 @@
unsigned char smp_int_redirect; /* are INT and IPI redirectable by the chipset? */
volatile int __cpu_physical_id[NR_CPUS] = { -1, }; /* Logical ID -> SAPIC ID */
int smp_num_cpus = 1;
-int smp_threads_ready = 0; /* Set when the idlers are all forked */
-cycles_t cacheflush_time = 0;
+volatile int smp_threads_ready; /* Set when the idlers are all forked */
+cycles_t cacheflush_time;
unsigned long ap_wakeup_vector = -1; /* External Int to use to wakeup AP's */
+
+static volatile unsigned long cpu_callin_map;
+static volatile int smp_commenced;
+
static int max_cpus = -1; /* Command line */
static unsigned long ipi_op[NR_CPUS];
struct smp_call_struct {
@@ -335,7 +341,7 @@
smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int retry, int wait)
{
struct smp_call_struct data;
- long timeout;
+ unsigned long timeout;
int cpus = 1;
if (cpuid == smp_processor_id()) {
@@ -387,7 +393,7 @@
smp_call_function (void (*func) (void *info), void *info, int retry, int wait)
{
struct smp_call_struct data;
- long timeout;
+ unsigned long timeout;
int cpus = smp_num_cpus - 1;
if (cpus == 0)
@@ -457,23 +463,6 @@
}
}
-static inline void __init
-smp_calibrate_delay(int cpuid)
-{
- struct cpuinfo_ia64 *c = &cpu_data[cpuid];
-#if 0
- unsigned long old = loops_per_sec;
- extern void calibrate_delay(void);
-
- loops_per_sec = 0;
- calibrate_delay();
- c->loops_per_sec = loops_per_sec;
- loops_per_sec = old;
-#else
- c->loops_per_sec = loops_per_sec;
-#endif
-}
-
/*
* SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
* no RRs set, better than even chance that psr is bogus. Fix all that and
@@ -519,7 +508,7 @@
* AP's start using C here.
*/
void __init
-smp_callin(void)
+smp_callin (void)
{
extern void ia64_rid_init(void);
extern void ia64_init_itm(void);
@@ -529,8 +518,14 @@
#endif
int cpu = smp_processor_id();
+ if (test_and_set_bit(cpu, &cpu_online_map)) {
+ printk("CPU#%d already initialized!\n", cpu);
+ machine_halt();
+ }
+
efi_map_pal_code();
cpu_init();
+
smp_setup_percpu_timer(cpu);
/* setup the CPU local timer tick */
@@ -544,16 +539,16 @@
ia64_set_lrr0(0, 1);
ia64_set_lrr1(0, 1);
- if (test_and_set_bit(cpu, &cpu_online_map)) {
- printk("CPU#%d already initialized!\n", cpu);
- machine_halt();
- }
- while (!smp_threads_ready)
- mb();
-
local_irq_enable(); /* Interrupts have been off until now */
- smp_calibrate_delay(cpu);
- printk("SMP: CPU %d starting idle loop\n", cpu);
+
+ calibrate_delay();
+ my_cpu_data.loops_per_sec = loops_per_sec;
+
+ /* allow the master to continue */
+ set_bit(cpu, &cpu_callin_map);
+
+ /* finally, wait for the BP to finish initialization: */
+ while (!smp_commenced);
cpu_idle(NULL);
}
@@ -616,23 +611,15 @@
/* Kick the AP in the butt */
ipi_send(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
- /*
- * OK, wait a bit for that CPU to finish staggering about. smp_callin() will
- * call cpu_init() which will set a bit for this AP. When that bit flips, the AP
- * is waiting for smp_threads_ready to be 1 and we can move on.
- */
+ /* wait up to 10s for the AP to start */
for (timeout = 0; timeout < 100000; timeout++) {
- if (test_bit(cpu, &cpu_online_map))
- goto alive;
+ if (test_bit(cpu, &cpu_callin_map))
+ return 1;
udelay(100);
- barrier();
}
printk(KERN_ERR "SMP: Processor 0x%x is stuck.\n", cpu_phys_id);
return 0;
-
-alive:
- return 1;
}
@@ -652,10 +639,11 @@
memset(&__cpu_physical_id, -1, sizeof(__cpu_physical_id));
memset(&ipi_op, 0, sizeof(ipi_op));
- /* Setup BSP mappings */
+ /* Setup BP mappings */
__cpu_physical_id[0] = hard_smp_processor_id();
- smp_calibrate_delay(smp_processor_id());
+ calibrate_delay();
+ my_cpu_data.loops_per_sec = loops_per_sec;
#if 0
smp_tune_scheduling();
#endif
@@ -717,20 +705,12 @@
}
/*
- * Called from main.c by each AP.
+ * Called when the BP is just about to fire off init.
*/
void __init
smp_commence(void)
{
- mb();
-}
-
-/*
- * Not used; part of the i386 bringup
- */
-void __init
-initialize_secondary(void)
-{
+ smp_commenced = 1;
}
int __init
diff -urN linux-davidm/arch/ia64/kernel/time.c lia64/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/time.c Wed Sep 13 13:43:16 2000
@@ -303,7 +303,7 @@
itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den;
itm.delta = itc_freq / HZ;
- printk("timer: CPU %d base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
+ printk("CPU %d: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
smp_processor_id(),
platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000);
diff -urN linux-davidm/arch/ia64/kernel/traps.c lia64/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/traps.c Wed Sep 13 13:43:42 2000
@@ -192,54 +192,46 @@
}
/*
- * disabled_fp_fault() is called when a user-level process attempts to
- * access one of the registers f32..f127 while it doesn't own the
+ * disabled_fph_fault() is called when a user-level process attempts
+ * to access one of the registers f32..f127 when it doesn't own the
* fp-high register partition. When this happens, we save the current
* fph partition in the task_struct of the fpu-owner (if necessary)
* and then load the fp-high partition of the current task (if
- * necessary).
+ * necessary). Note that the kernel has access to fph by the time we
+ * get here, as the IVT's "Disabled FP-Register" handler takes care of
+ * clearing psr.dfh.
*/
static inline void
disabled_fph_fault (struct pt_regs *regs)
{
- /* first, clear psr.dfh and psr.mfh: */
- regs->cr_ipsr &= ~(IA64_PSR_DFH | IA64_PSR_MFH);
-#ifdef CONFIG_SMP
- if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
- __ia64_load_fpu(current->thread.fph);
- else {
- __ia64_init_fpu();
- /*
- * Set mfh because the state in thread.fph does not match
- * the state in the fph partition.
- */
- ia64_psr(regs)->mfh = 1;
- }
-#else /* !CONFIG_SMP */
+ struct ia64_psr *psr = ia64_psr(regs);
+
+ /* first, grant user-level access to fph partition: */
+ psr->dfh = 0;
+#ifndef CONFIG_SMP
{
struct task_struct *fpu_owner = ia64_get_fpu_owner();
- if (fpu_owner != current) {
- ia64_set_fpu_owner(current);
+ if (fpu_owner == current)
+ return;
- if (fpu_owner && ia64_psr(ia64_task_regs(fpu_owner))->mfh) {
- ia64_psr(ia64_task_regs(fpu_owner))->mfh = 0;
- fpu_owner->thread.flags |= IA64_THREAD_FPH_VALID;
- __ia64_save_fpu(fpu_owner->thread.fph);
- }
- if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
- __ia64_load_fpu(current->thread.fph);
- } else {
- __ia64_init_fpu();
- /*
- * Set mfh because the state in thread.fph does not match
- * the state in the fph partition.
- */
- ia64_psr(regs)->mfh = 1;
- }
- }
+ if (fpu_owner)
+ ia64_flush_fph(fpu_owner);
+
+ ia64_set_fpu_owner(current);
}
#endif /* !CONFIG_SMP */
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
+ __ia64_load_fpu(current->thread.fph);
+ psr->mfh = 0;
+ } else {
+ __ia64_init_fpu();
+ /*
+ * Set mfh because the state in thread.fph does not match the state in
+ * the fph partition.
+ */
+ psr->mfh = 1;
+ }
}
static inline int
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c lia64/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/unaligned.c Wed Sep 13 13:43:58 2000
@@ -455,16 +455,15 @@
unsigned long addr;
/*
- * From EAS-2.5: FPDisableFault has higher priority than
- * Unaligned Fault. Thus, when we get here, we know the partition is
- * enabled.
+ * From EAS-2.5: FPDisableFault has higher priority than Unaligned
+ * Fault. Thus, when we get here, we know the partition is enabled.
+ * To update f32-f127, there are three choices:
+ *
+ * (1) save f32-f127 to thread.fph and update the values there
+ * (2) use a gigantic switch statement to directly access the registers
+ * (3) generate code on the fly to update the desired register
*
- * The registers [32-127] are ususally saved in the tss. When get here,
- * they are NECESSARILY live because they are only saved explicitely.
- * We have 3 ways of updating the values: force a save of the range
- * in tss, use a gigantic switch/case statement or generate code on the
- * fly to store to the right register.
- * For now, we are using the (slow) save/restore way.
+ * For now, we are using approach (1).
*/
if (regnum >= IA64_FIRST_ROTATING_FR) {
ia64_sync_fph(current);
@@ -491,7 +490,6 @@
* let's do it for safety.
*/
regs->cr_ipsr |= IA64_PSR_MFL;
-
}
}
@@ -522,12 +520,12 @@
* Unaligned Fault. Thus, when we get here, we know the partition is
* enabled.
*
- * When regnum > 31, the register is still live and
- * we need to force a save to the tss to get access to it.
- * See discussion in setfpreg() for reasons and other ways of doing this.
+ * When regnum > 31, the register is still live and we need to force a save
+ * to current->thread.fph to get access to it. See discussion in setfpreg()
+ * for reasons and other ways of doing this.
*/
if (regnum >= IA64_FIRST_ROTATING_FR) {
- ia64_sync_fph(current);
+ ia64_flush_fph(current);
*fpval = current->thread.fph[IA64_FPH_OFFS(regnum)];
} else {
/*
@@ -1084,9 +1082,9 @@
/*
* XXX fixme
*
- * A possible optimization would be to drop fpr_final
- * and directly use the storage from the saved context i.e.,
- * the actual final destination (pt_regs, switch_stack or tss).
+ * A possible optimization would be to drop fpr_final and directly
+ * use the storage from the saved context i.e., the actual final
+ * destination (pt_regs, switch_stack or thread structure).
*/
setfpreg(ld->r1, &fpr_final[0], regs);
setfpreg(ld->imm, &fpr_final[1], regs);
@@ -1212,9 +1210,9 @@
/*
* XXX fixme
*
- * A possible optimization would be to drop fpr_final
- * and directly use the storage from the saved context i.e.,
- * the actual final destination (pt_regs, switch_stack or tss).
+ * A possible optimization would be to drop fpr_final and directly
+ * use the storage from the saved context i.e., the actual final
+ * destination (pt_regs, switch_stack or thread structure).
*/
setfpreg(ld->r1, &fpr_final, regs);
}
@@ -1223,9 +1221,7 @@
* check for updates on any loads
*/
if (ld->op == 0x7 || ld->m)
- emulate_load_updates(ld->op == 0x7 ? UPD_IMMEDIATE: UPD_REG,
- ld, regs, ifa);
-
+ emulate_load_updates(ld->op == 0x7 ? UPD_IMMEDIATE: UPD_REG, ld, regs, ifa);
/*
* invalidate ALAT entry in case of advanced floating point loads
diff -urN linux-davidm/arch/ia64/kernel/unwind.c lia64/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed Sep 13 11:41:51 2000
+++ lia64/arch/ia64/kernel/unwind.c Wed Sep 13 13:47:44 2000
@@ -395,7 +395,10 @@
} else {
struct task_struct *t = info->task;
- ia64_sync_fph(t);
+ if (write)
+ ia64_sync_fph(t);
+ else
+ ia64_flush_fph(t);
addr = t->thread.fph + (regnum - 32);
}
diff -urN linux-davidm/arch/ia64/lib/io.c lia64/arch/ia64/lib/io.c
--- linux-davidm/arch/ia64/lib/io.c Thu Jun 22 07:09:44 2000
+++ lia64/arch/ia64/lib/io.c Wed Sep 13 13:47:27 2000
@@ -1,4 +1,3 @@
-#include <linux/module.h>
#include <linux/types.h>
#include <asm/io.h>
@@ -49,6 +48,3 @@
}
}
-EXPORT_SYMBOL(__ia64_memcpy_fromio);
-EXPORT_SYMBOL(__ia64_memcpy_toio);
-EXPORT_SYMBOL(__ia64_memset_c_io);
diff -urN linux-davidm/drivers/acpi/acpiconf.c lia64/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Wed Sep 13 11:41:51 2000
+++ lia64/drivers/acpi/acpiconf.c Wed Sep 13 13:47:57 2000
@@ -311,7 +311,7 @@
pprts = (PCI_ROUTING_TABLE **)prts;
for ( i = 0; i < PCI_MAX_BUS; i++) {
- prt = prtf = *pprts++;
+ prt = *pprts++;
if (prt) {
for ( ; prt->length > 0; nvec++) {
prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
@@ -331,7 +331,7 @@
pprts = (PCI_ROUTING_TABLE **)prts;
for ( i = 0; i < PCI_MAX_BUS; i++) {
- prt = *pprts++;
+ prt = prtf = *pprts++;
if (prt) {
for ( ; prt->length > 0; pvec++) {
pvec->bus = (UINT16)i;
diff -urN linux-davidm/include/asm-alpha/module.h lia64/include/asm-alpha/module.h
--- linux-davidm/include/asm-alpha/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-alpha/module.h Wed Sep 13 13:48:34 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_ALPHA_MODULE_H
+#define _ASM_ALPHA_MODULE_H
+/*
+ * This file contains the alpha architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_ALPHA_MODULE_H */
diff -urN linux-davidm/include/asm-alpha/pgtable.h lia64/include/asm-alpha/pgtable.h
--- linux-davidm/include/asm-alpha/pgtable.h Thu Aug 10 19:56:31 2000
+++ lia64/include/asm-alpha/pgtable.h Wed Sep 13 13:48:41 2000
@@ -300,9 +300,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) (0)
#define kern_addr_valid(addr) (1)
diff -urN linux-davidm/include/asm-arm/module.h lia64/include/asm-arm/module.h
--- linux-davidm/include/asm-arm/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-arm/module.h Wed Sep 13 13:48:52 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_ARM_MODULE_H
+#define _ASM_ARM_MODULE_H
+/*
+ * This file contains the arm architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_ARM_MODULE_H */
diff -urN linux-davidm/include/asm-arm/pgtable.h lia64/include/asm-arm/pgtable.h
--- linux-davidm/include/asm-arm/pgtable.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-arm/pgtable.h Wed Sep 13 13:48:48 2000
@@ -170,9 +170,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(swp) ((pte_t) { (swp).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
#define io_remap_page_range remap_page_range
#endif /* !__ASSEMBLY__ */
diff -urN linux-davidm/include/asm-i386/module.h lia64/include/asm-i386/module.h
--- linux-davidm/include/asm-i386/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-i386/module.h Wed Sep 13 13:49:00 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_I386_MODULE_H
+#define _ASM_I386_MODULE_H
+/*
+ * This file contains the i386 architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_I386_MODULE_H */
diff -urN linux-davidm/include/asm-i386/pgtable.h lia64/include/asm-i386/pgtable.h
--- linux-davidm/include/asm-i386/pgtable.h Thu Aug 10 19:56:31 2000
+++ lia64/include/asm-i386/pgtable.h Wed Sep 13 13:49:03 2000
@@ -327,9 +327,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
#endif /* !__ASSEMBLY__ */
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
diff -urN linux-davidm/include/asm-ia64/module.h lia64/include/asm-ia64/module.h
--- linux-davidm/include/asm-ia64/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-ia64/module.h Wed Sep 13 13:50:04 2000
@@ -0,0 +1,106 @@
+#ifndef _ASM_IA64_MODULE_H
+#define _ASM_IA64_MODULE_H
+/*
+ * This file contains the ia64 architecture specific module code.
+ *
+ * Copyright (C) 2000 Intel Corporation.
+ * Copyright (C) 2000 Mike Stephens <mike.stephens@intel.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <asm/unwind.h>
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) ia64_module_unmap(x)
+#define module_arch_init(x) ia64_module_init(x)
+
+/*
+ * This must match in size and layout the data created by
+ * modutils/obj/obj-ia64.c
+ */
+struct archdata {
+ const char *unw_table;
+ const char *segment_base;
+ const char *unw_start;
+ const char *unw_end;
+ const char *gp;
+};
+
+/*
+ * functions to add/remove a modules unwind info when
+ * it is loaded or unloaded.
+ */
+static inline int
+ia64_module_init(struct module *mod)
+{
+#ifdef CONFIG_IA64_NEW_UNWIND
+ struct archdata *archdata;
+
+ if (!mod_member_present(mod, archdata_start) || !mod->archdata_start)
+ return 0;
+ archdata = (struct archdata *)(mod->archdata_start);
+
+ /*
+ * Make sure the unwind pointers are sane.
+ */
+
+ if (archdata->unw_table)
+ {
+ printk(KERN_ERR "arch_init_module: archdata->unw_table must be zero.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->gp, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->gp out of bounds.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->unw_start, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->unw_start out of bounds.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->unw_end, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->unw_end out of bounds.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->segment_base, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->segment_base out of bounds.\n");
+ return 1;
+ }
+
+ /*
+ * Pointers are reasonable, add the module unwind table
+ */
+ archdata->unw_table = unw_add_unwind_table(mod->name, archdata->segment_base,
+ archdata->gp, archdata->unw_start, archdata->unw_end);
+#endif /* CONFIG_IA64_NEW_UNWIND */
+ return 0;
+}
+
+static inline void
+ia64_module_unmap(void * addr)
+{
+#ifdef CONFIG_IA64_NEW_UNWIND
+ struct module *mod = (struct module *) addr;
+ struct archdata *archdata;
+
+ /*
+ * Before freeing the module memory remove the unwind table entry
+ */
+ if (mod_member_present(mod, archdata_start) && mod->archdata_start)
+ {
+ archdata = (struct archdata *)(mod->archdata_start);
+
+ if (archdata->unw_table != NULL)
+ unw_remove_unwind_table(archdata->unw_table);
+ }
+#endif /* CONFIG_IA64_NEW_UNWIND */
+
+ vfree(addr);
+}
+
+#endif /* _ASM_IA64_MODULE_H */
diff -urN linux-davidm/include/asm-ia64/pgtable.h lia64/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Wed Sep 13 11:41:51 2000
+++ lia64/include/asm-ia64/pgtable.h Wed Sep 13 15:44:36 2000
@@ -426,9 +426,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) (0)
diff -urN linux-davidm/include/asm-ia64/processor.h lia64/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Wed Sep 13 11:41:51 2000
+++ lia64/include/asm-ia64/processor.h Wed Sep 13 13:50:21 2000
@@ -253,9 +253,9 @@
#define my_cpu_data cpu_data[smp_processor_id()]
#ifdef CONFIG_SMP
-# define loops_per_sec() my_cpu_data.loops_per_sec
+# define ia64_loops_per_sec() my_cpu_data.loops_per_sec
#else
-# define loops_per_sec() loops_per_sec
+# define ia64_loops_per_sec() loops_per_sec
#endif
extern struct cpuinfo_ia64 cpu_data[NR_CPUS];
diff -urN linux-davidm/include/asm-m68k/module.h lia64/include/asm-m68k/module.h
--- linux-davidm/include/asm-m68k/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-m68k/module.h Wed Sep 13 13:50:35 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_M68K_MODULE_H
+#define _ASM_M68K_MODULE_H
+/*
+ * This file contains the m68k architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_M68K_MODULE_H */
diff -urN linux-davidm/include/asm-m68k/pgtable.h lia64/include/asm-m68k/pgtable.h
--- linux-davidm/include/asm-m68k/pgtable.h Thu Aug 10 19:56:31 2000
+++ lia64/include/asm-m68k/pgtable.h Wed Sep 13 13:50:37 2000
@@ -390,9 +390,6 @@
#endif /* __ASSEMBLY__ */
-#define module_map vmalloc
-#define module_unmap vfree
-
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) (0)
#define kern_addr_valid(addr) (1)
diff -urN linux-davidm/include/asm-mips/module.h lia64/include/asm-mips/module.h
--- linux-davidm/include/asm-mips/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-mips/module.h Wed Sep 13 13:50:47 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_MIPS_MODULE_H
+#define _ASM_MIPS_MODULE_H
+/*
+ * This file contains the mips architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_MIPS_MODULE_H */
diff -urN linux-davidm/include/asm-mips/pgtable.h lia64/include/asm-mips/pgtable.h
--- linux-davidm/include/asm-mips/pgtable.h Thu Aug 10 19:56:31 2000
+++ lia64/include/asm-mips/pgtable.h Wed Sep 13 13:50:44 2000
@@ -451,10 +451,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-
-#define module_map vmalloc
-#define module_unmap vfree
-
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) (0)
#define kern_addr_valid(addr) (1)
diff -urN linux-davidm/include/asm-mips64/module.h lia64/include/asm-mips64/module.h
--- linux-davidm/include/asm-mips64/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-mips64/module.h Wed Sep 13 13:50:57 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_MIPS64_MODULE_H
+#define _ASM_MIPS64_MODULE_H
+/*
+ * This file contains the mips64 architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_MIPS64_MODULE_H */
diff -urN linux-davidm/include/asm-mips64/pgtable.h lia64/include/asm-mips64/pgtable.h
--- linux-davidm/include/asm-mips64/pgtable.h Thu Aug 10 19:56:31 2000
+++ lia64/include/asm-mips64/pgtable.h Wed Sep 13 13:50:59 2000
@@ -525,9 +525,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) test_bit(PG_skip, &(page)->flags)
#ifndef CONFIG_DISCONTIGMEM
diff -urN linux-davidm/include/asm-ppc/module.h lia64/include/asm-ppc/module.h
--- linux-davidm/include/asm-ppc/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-ppc/module.h Wed Sep 13 13:51:10 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_PPC_MODULE_H
+#define _ASM_PPC_MODULE_H
+/*
+ * This file contains the PPC architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_PPC_MODULE_H */
diff -urN linux-davidm/include/asm-ppc/pgtable.h lia64/include/asm-ppc/pgtable.h
--- linux-davidm/include/asm-ppc/pgtable.h Thu Aug 10 19:56:32 2000
+++ lia64/include/asm-ppc/pgtable.h Wed Sep 13 13:51:08 2000
@@ -451,9 +451,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
/* CONFIG_APUS */
/* For virtual address to physical address conversion */
extern void cache_clear(__u32 addr, int length);
diff -urN linux-davidm/include/asm-s390/module.h lia64/include/asm-s390/module.h
--- linux-davidm/include/asm-s390/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-s390/module.h Wed Sep 13 13:51:20 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_S390_MODULE_H
+#define _ASM_S390_MODULE_H
+/*
+ * This file contains the s390 architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_S390_MODULE_H */
diff -urN linux-davidm/include/asm-s390/pgtable.h lia64/include/asm-s390/pgtable.h
--- linux-davidm/include/asm-s390/pgtable.h Thu Aug 10 19:56:32 2000
+++ lia64/include/asm-s390/pgtable.h Wed Sep 13 13:51:22 2000
@@ -405,9 +405,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
#endif /* !__ASSEMBLY__ */
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
diff -urN linux-davidm/include/asm-sh/module.h lia64/include/asm-sh/module.h
--- linux-davidm/include/asm-sh/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-sh/module.h Wed Sep 13 13:51:30 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_SH_MODULE_H
+#define _ASM_SH_MODULE_H
+/*
+ * This file contains the SH architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_SH_MODULE_H */
diff -urN linux-davidm/include/asm-sh/pgtable.h lia64/include/asm-sh/pgtable.h
--- linux-davidm/include/asm-sh/pgtable.h Thu Aug 10 19:56:32 2000
+++ lia64/include/asm-sh/pgtable.h Wed Sep 13 13:51:29 2000
@@ -250,9 +250,6 @@
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
#endif /* !__ASSEMBLY__ */
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
diff -urN linux-davidm/include/asm-sparc/module.h lia64/include/asm-sparc/module.h
--- linux-davidm/include/asm-sparc/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-sparc/module.h Wed Sep 13 13:51:41 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_SPARC_MODULE_H
+#define _ASM_SPARC_MODULE_H
+/*
+ * This file contains the sparc architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_SPARC_MODULE_H */
diff -urN linux-davidm/include/asm-sparc/pgtable.h lia64/include/asm-sparc/pgtable.h
--- linux-davidm/include/asm-sparc/pgtable.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-sparc/pgtable.h Wed Sep 13 13:51:44 2000
@@ -444,8 +444,6 @@
}
}
-#define module_map vmalloc
-#define module_unmap vfree
extern unsigned long *sparc_valid_addr_bitmap;
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
diff -urN linux-davidm/include/asm-sparc64/module.h lia64/include/asm-sparc64/module.h
--- linux-davidm/include/asm-sparc64/module.h Wed Dec 31 16:00:00 1969
+++ lia64/include/asm-sparc64/module.h Wed Sep 13 13:51:55 2000
@@ -0,0 +1,11 @@
+#ifndef _ASM_SPARC64_MODULE_H
+#define _ASM_SPARC64_MODULE_H
+/*
+ * This file contains the sparc64 architecture specific module code.
+ */
+
+extern void * module_map (unsigned long size);
+extern void module_unmap (void *addr);
+#define module_arch_init(x) (0)
+
+#endif /* _ASM_SPARC64_MODULE_H */
diff -urN linux-davidm/include/asm-sparc64/pgtable.h lia64/include/asm-sparc64/pgtable.h
--- linux-davidm/include/asm-sparc64/pgtable.h Thu Aug 24 08:17:48 2000
+++ lia64/include/asm-sparc64/pgtable.h Wed Sep 13 13:51:53 2000
@@ -284,8 +284,6 @@
return ((sun4u_get_pte (addr) & 0xf0000000) >> 28);
}
-extern void * module_map (unsigned long size);
-extern void module_unmap (void *addr);
extern unsigned long *sparc64_valid_addr_bitmap;
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
diff -urN linux-davidm/include/linux/module.h lia64/include/linux/module.h
--- linux-davidm/include/linux/module.h Thu Aug 10 19:56:32 2000
+++ lia64/include/linux/module.h Wed Sep 13 13:52:30 2000
@@ -83,6 +83,12 @@
const struct module_persist *persist_start;
const struct module_persist *persist_end;
int (*can_unload)(void);
+ int runsize; /* In modutils, not currently used */
+ const char *kallsyms_start; /* All symbols for kernel debugging */
+ const char *kallsyms_end;
+ const char *archdata_start; /* arch specific data for module */
+ const char *archdata_end;
+ const char *kernel_data; /* Reserved for kernel internal use */
};
struct module_info
@@ -122,6 +128,10 @@
#define mod_member_present(mod,member) \
((unsigned long)(&((struct module *)0L)->member + 1) \
<= (mod)->size_of_struct)
+
+/* Check if an address p with number of entries n is within the body of module m */
+#define mod_bound(p, n, m) ((unsigned long)(p) >= ((unsigned long)(m) + ((m)->size_of_struct)) && \
+ (unsigned long)((p)+(n)) <= (unsigned long)(m) + (m)->size)
/* Backwards compatibility definition. */
diff -urN linux-davidm/kernel/module.c lia64/kernel/module.c
--- linux-davidm/kernel/module.c Mon Jun 26 12:11:10 2000
+++ lia64/kernel/module.c Wed Sep 13 13:53:24 2000
@@ -1,6 +1,7 @@
#include <linux/config.h>
#include <linux/mm.h>
#include <linux/module.h>
+#include <asm/module.h>
#include <asm/uaccess.h>
#include <linux/vmalloc.h>
#include <linux/smp_lock.h>
@@ -195,7 +196,7 @@
of righteousness. */
mod_tmp = *mod;
- error = copy_from_user(mod, mod_user, sizeof(struct module));
+ error = copy_from_user(mod, mod_user, mod_user_size);
if (error) {
error = -EFAULT;
goto err2;
@@ -212,32 +213,29 @@
/* Make sure all interesting pointers are sane. */
-#define bound(p, n, m) ((unsigned long)(p) >= (unsigned long)(m+1) && \
- (unsigned long)((p)+(n)) <= (unsigned long)(m) + (m)->size)
-
- if (!bound(mod->name, namelen, mod)) {
+ if (!mod_bound(mod->name, namelen, mod)) {
printk(KERN_ERR "init_module: mod->name out of bounds.\n");
goto err2;
}
- if (mod->nsyms && !bound(mod->syms, mod->nsyms, mod)) {
+ if (mod->nsyms && !mod_bound(mod->syms, mod->nsyms, mod)) {
printk(KERN_ERR "init_module: mod->syms out of bounds.\n");
goto err2;
}
- if (mod->ndeps && !bound(mod->deps, mod->ndeps, mod)) {
+ if (mod->ndeps && !mod_bound(mod->deps, mod->ndeps, mod)) {
printk(KERN_ERR "init_module: mod->deps out of bounds.\n");
goto err2;
}
- if (mod->init && !bound(mod->init, 0, mod)) {
+ if (mod->init && !mod_bound(mod->init, 0, mod)) {
printk(KERN_ERR "init_module: mod->init out of bounds.\n");
goto err2;
}
- if (mod->cleanup && !bound(mod->cleanup, 0, mod)) {
+ if (mod->cleanup && !mod_bound(mod->cleanup, 0, mod)) {
printk(KERN_ERR "init_module: mod->cleanup out of bounds.\n");
goto err2;
}
if (mod->ex_table_start > mod->ex_table_end
|| (mod->ex_table_start &&
- !((unsigned long)mod->ex_table_start >= (unsigned long)(mod+1)
+ !((unsigned long)mod->ex_table_start >= ((unsigned long)mod + mod->size_of_struct)
&& ((unsigned long)mod->ex_table_end
< (unsigned long)mod + mod->size)))
|| (((unsigned long)mod->ex_table_start
@@ -251,24 +249,51 @@
goto err2;
}
#ifdef __alpha__
- if (!bound(mod->gp - 0x8000, 0, mod)) {
+ if (!mod_bound(mod->gp - 0x8000, 0, mod)) {
printk(KERN_ERR "init_module: mod->gp out of bounds.\n");
goto err2;
}
#endif
if (mod_member_present(mod, can_unload)
- && mod->can_unload && !bound(mod->can_unload, 0, mod)) {
+ && mod->can_unload && !mod_bound(mod->can_unload, 0, mod)) {
printk(KERN_ERR "init_module: mod->can_unload out of bounds.\n");
goto err2;
}
-
-#undef bound
+ if (mod_member_present(mod, kallsyms_end)) {
+ if (mod->kallsyms_end &&
+ (!mod_bound(mod->kallsyms_start, 0, mod) ||
+ !mod_bound(mod->kallsyms_end, 0, mod))) {
+ printk(KERN_ERR "init_module: mod->kallsyms out of bounds.\n");
+ goto err2;
+ }
+ if (mod->kallsyms_start > mod->kallsyms_end) {
+ printk(KERN_ERR "init_module: mod->kallsyms invalid.\n");
+ goto err2;
+ }
+ }
+ if (mod_member_present(mod, archdata_end)) {
+ if (mod->archdata_end &&
+ (!mod_bound(mod->archdata_start, 0, mod) ||
+ !mod_bound(mod->archdata_end, 0, mod))) {
+ printk(KERN_ERR "init_module: mod->archdata out of bounds.\n");
+ goto err2;
+ }
+ if (mod->archdata_start > mod->archdata_end) {
+ printk(KERN_ERR "init_module: mod->archdata invalid.\n");
+ goto err2;
+ }
+ }
+ if (mod_member_present(mod, kernel_data) && mod->kernel_data) {
+ printk(KERN_ERR "init_module: mod->kernel_data must be zero.\n");
+ goto err2;
+ }
/* Check that the user isn't doing something silly with the name. */
if ((n_namelen = get_mod_name(mod->name - (unsigned long)mod
+ (unsigned long)mod_user,
&n_name)) < 0) {
+ printk(KERN_ERR "init_module: get_mod_name failure.\n");
error = n_namelen;
goto err2;
}
@@ -285,6 +310,9 @@
error = -EFAULT;
goto err3;
}
+
+ if (module_arch_init(mod))
+ goto err3;
/* On some machines it is necessary to do something here
to make the I and D caches consistent. */
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to v2.4.0-test9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (19 preceding siblings ...)
2000-09-14 1:50 ` David Mosberger
@ 2000-10-05 19:01 ` David Mosberger
2000-10-05 22:08 ` Keith Owens
` (194 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-10-05 19:01 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 Linux kernel diff is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.0-test9-ia64-001004.diff. Actually, it has been
available since last night, but I ran into some problems booting the
new kernel remotely so I couldn't actually test it until this morning.
Here is a summary of what changed since the last kernel:
- Stephane's perfmon updates (warning: this is work in progress;
especially the changes to ptrace will disappear so don't write user
apps that depend on this way of accessing the PMU...)
- SGI SN1 updates (Kanoj); also stash away coherence domain info
in global variable "ia64_ptc_domain_info"
- Asit's patch to support running with VHPT disabled; this is for
kernel hacking only and should not be used for normal operation.
This patch uncovered some bad kernel references which were also
fixed.
- With SMP, do sync.i in context switch to ensure "fc"s are visible
on all CPUs (Asit)
- TLB handler fixes by Patrick and yours truly.
- qlogic SCSI driver update (BJ).
- Remove IA-64 version of ioperm syscall and fix vt.c to not
support the ioctl that used this call
- Added 32-bit division routines required by latest CVS compiler.
32-bit division can get away with one fewer iteration and also
uses few enough fp registers that we don't need to save/restore
anything.
- Fix SMP BogoMIPS calculation and printing; the BogoMIPS values
are now really per-CPU
- Various minor and not so minor updates to the kernel unwinder.
- Fixed SCSI disk driver so it works when compiled into the kernel
(yes, this is a generic test9 bug; Linus must have been in a hurry
to catch that plane to Germany... ;-).
- __atomic_fool_gcc() disappeared. It shouldn't be necessary with
the compilers available for IA-64 Linux.
- Replace "extern inline" with "static inline".
- Change HZ for simulator to 32 Hz; the kernel time-of-day code is
more accurate if HZ is an integer power of two.
- Add parport.h needed by some kernel modules.
- Various updates to get things in sync with 2.4.0-test9
That should be it. This kernel is known to boot fine on 4-way Lion,
2-way and 1-way Big Sur, as well as the HP Ski simulator.
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/Makefile linux-2.4.0-test9-lia/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/Makefile Wed Oct 4 21:32:24 2000
@@ -46,11 +46,18 @@
$(CORE_FILES)
endif
-ifdef CONFIG_IA64_SGI_SN1_SIM
+ifdef CONFIG_IA64_SGI_SN1
+CFLAGS := $(CFLAGS) -DSN -I. -DBRINGUP -DDIRECT_L1_CONSOLE \
+ -DNUMA_BASE -DSIMULATED_KLGRAPH -DNUMA_MIGR_CONTROL \
+ -DLITTLE_ENDIAN -DREAL_HARDWARE -DLANGUAGE_C=1 \
+ -D_LANGUAGE_C=1
SUBDIRS := arch/$(ARCH)/sn/sn1 \
arch/$(ARCH)/sn \
+ arch/$(ARCH)/sn/io \
+ arch/$(ARCH)/sn/fprom \
$(SUBDIRS)
CORE_FILES := arch/$(ARCH)/sn/sn.a \
+ arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
endif
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-test9-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/config.in Wed Oct 4 21:32:56 2000
@@ -27,7 +27,7 @@
"generic CONFIG_IA64_GENERIC \
DIG-compliant CONFIG_IA64_DIG \
HP-simulator CONFIG_IA64_HP_SIM \
- SN1-simulator CONFIG_IA64_SGI_SN1_SIM" generic
+ SGI-SN1 CONFIG_IA64_SGI_SN1" generic
choice 'Kernel page size' \
"4KB CONFIG_IA64_PAGE_SIZE_4KB \
@@ -61,9 +61,20 @@
fi
fi
-if [ "$CONFIG_IA64_SGI_SN1_SIM" = "y" ]; then
- define_bool CONFIG_NUMA y
- define_bool CONFIG_IA64_SOFTSDV_HACKS y
+if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
+ bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
+ bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
+ if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
+ bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
+ fi
+ bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM n
+ bool ' Enable SGI hack for version 1.0 synergy bugs' CONFIG_IA64_SGI_SYNERGY_1_0_HACKS n
+ define_bool CONFIG_DEVFS_DEBUG y
+ define_bool CONFIG_DEVFS_FS y
+ define_bool CONFIG_IA64_BRL_EMU y
+ define_bool CONFIG_IA64_MCA y
+ define_bool CONFIG_IA64_SGI_IO y
+ define_bool CONFIG_ITANIUM y
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
@@ -237,5 +248,6 @@
bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
bool 'Enable new unwind support' CONFIG_IA64_NEW_UNWIND
+bool 'Disable VHPT' CONFIG_DISABLE_VHPT
endmenu
diff -urN linux-davidm/arch/ia64/dig/iosapic.c linux-2.4.0-test9-lia/arch/ia64/dig/iosapic.c
--- linux-davidm/arch/ia64/dig/iosapic.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/dig/iosapic.c Wed Oct 4 21:33:55 2000
@@ -386,7 +386,7 @@
unsigned int ver, v;
int l, max_pin;
- ver = iosapic_version(iosapic->address);
+ ver = iosapic_version((unsigned long) ioremap(iosapic->address, 0));
max_pin = (ver >> 16) & 0xff;
printk("IOSAPIC Version %x.%x: address 0x%lx IRQs 0x%x - 0x%x\n",
diff -urN linux-davidm/arch/ia64/kernel/Makefile linux-2.4.0-test9-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/Makefile Wed Oct 4 21:34:23 2000
@@ -16,7 +16,7 @@
obj-$(CONFIG_IA64_GENERIC) += machvec.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_PCI) += pci.o
-obj-$(CONFIG_SMP) += smp.o
+obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_IA64_MCA) += mca.o mca_asm.o
obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
diff -urN linux-davidm/arch/ia64/kernel/acpi.c linux-2.4.0-test9-lia/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/acpi.c Wed Oct 4 21:34:32 2000
@@ -136,13 +136,12 @@
break;
}
-#if 1/*def ACPI_DEBUG*/
+# ifdef ACPI_DEBUG
printk("Legacy ISA IRQ %x -> IA64 Vector %x IOSAPIC Pin %x Active %s %s Trigger\n",
legacy->isa_irq, vector, iosapic_pin(vector),
((iosapic_polarity(vector) == IO_SAPIC_POL_LOW) ? "Low" : "High"),
((iosapic_trigger(vector) == IO_SAPIC_LEVEL) ? "Level" : "Edge"));
-#endif /* ACPI_DEBUG */
-
+# endif /* ACPI_DEBUG */
#endif /* CONFIG_IA64_IRQ_ACPI */
}
@@ -279,7 +278,7 @@
#else
# if defined (CONFIG_IA64_HP_SIM)
return "hpsim";
-# elif defined (CONFIG_IA64_SGI_SN1_SIM)
+# elif defined (CONFIG_IA64_SGI_SN1)
return "sn1";
# elif defined (CONFIG_IA64_DIG)
return "dig";
diff -urN linux-davidm/arch/ia64/kernel/efi.c linux-2.4.0-test9-lia/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/efi.c Wed Oct 4 21:34:44 2000
@@ -376,6 +376,16 @@
#endif
efi_map_pal_code();
+
+#ifndef CONFIG_IA64_SOFTSDV_HACKS
+ /*
+ * (Some) SoftSDVs seem to have a problem with this call.
+ * Since it's mostly a performance optimization, just don't do
+ * it for now... --davidm 99/12/6
+ */
+ efi_enter_virtual_mode();
+#endif
+
}
void
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test9-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Fri Sep 8 14:34:53 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/entry.S Wed Oct 4 23:08:09 2000
@@ -120,6 +120,9 @@
mov r13=in0 // set "current" pointer
;;
DO_LOAD_SWITCH_STACK( )
+#ifdef CONFIG_SMP
+ sync.i // ensure "fc"s done by this CPU are visible on other CPUs
+#endif
br.ret.sptk.few rp
END(ia64_switch_to)
@@ -1088,7 +1091,7 @@
data8 sys_setpriority
data8 sys_statfs
data8 sys_fstatfs
- data8 sys_ioperm // 1105
+ data8 ia64_ni_syscall
data8 sys_semget
data8 sys_semop
data8 sys_semctl
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.0-test9-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/ia64_ksyms.c Wed Oct 4 21:35:21 2000
@@ -97,13 +97,23 @@
EXPORT_SYMBOL(__ia64_syscall);
/* from arch/ia64/lib */
+extern void __divsi3(void);
+extern void __udivsi3(void);
+extern void __modsi3(void);
+extern void __umodsi3(void);
extern void __divdi3(void);
extern void __udivdi3(void);
extern void __moddi3(void);
extern void __umoddi3(void);
+EXPORT_SYMBOL_NOVERS(__divsi3);
+EXPORT_SYMBOL_NOVERS(__udivsi3);
+EXPORT_SYMBOL_NOVERS(__modsi3);
+EXPORT_SYMBOL_NOVERS(__umodsi3);
EXPORT_SYMBOL_NOVERS(__divdi3);
EXPORT_SYMBOL_NOVERS(__udivdi3);
EXPORT_SYMBOL_NOVERS(__moddi3);
EXPORT_SYMBOL_NOVERS(__umoddi3);
+extern unsigned long ia64_iobase;
+EXPORT_SYMBOL(ia64_iobase);
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.4.0-test9-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/irq_ia64.c Wed Oct 4 21:35:32 2000
@@ -39,7 +39,8 @@
spinlock_t ivr_read_lock;
#endif
-unsigned long ipi_base_addr = IPI_DEFAULT_BASE_ADDR; /* default base addr of IPI table */
+/* default base addr of IPI table */
+unsigned long ipi_base_addr = (__IA64_UNCACHED_OFFSET | IPI_DEFAULT_BASE_ADDR);
/*
* Legacy IRQ to IA-64 vector translation table. Any vector not in
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.0-test9-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/ivt.S Wed Oct 4 21:36:12 2000
@@ -196,32 +196,32 @@
* The speculative access will fail if there is no TLB entry
* for the L3 page table page we're trying to access.
*/
- mov r16=cr.iha // get virtual address of L3 PTE
- mov r19=cr.ifa // get virtual address
+ mov r16=cr.ifa // get virtual address
+ mov r19=cr.iha // get virtual address of L3 PTE
;;
- ld8.s r17=[r16] // try to read L3 PTE
+ ld8.s r17=[r19] // try to read L3 PTE
mov r31=pr // save predicates
;;
- tnat.nz p6,p0=r16 // did read succeed?
+ tnat.nz p6,p0=r17 // did read succeed?
(p6) br.cond.spnt.many 1f
;;
itc.i r17
;;
#ifdef CONFIG_SMP
- ld8.s r18=[r16] // try to read L3 PTE again and see if same
+ ld8.s r18=[r19] // try to read L3 PTE again and see if same
mov r20=PAGE_SHIFT<<2 // setup page size for purge
;;
cmp.eq p6,p7=r17,r18
;;
-(p7) ptc.l r19,r20
+(p7) ptc.l r16,r20
#endif
-
mov pr=r31,-1
rfi
-1: mov r16=cr.ifa // get address that caused the TLB miss
- ;;
- rsm psr.dt // use physical addressing for data
+#ifdef CONFIG_DISABLE_VHPT
+itlb_fault:
+#endif
+1: rsm psr.dt // use physical addressing for data
mov r19=ar.k7 // get page table base address
shl r21=r16,3 // shift bit 60 into sign bit
shr.u r17=r16,61 // get the region number into r17
@@ -283,31 +283,32 @@
* The speculative access will fail if there is no TLB entry
* for the L3 page table page we're trying to access.
*/
- mov r16=cr.iha // get virtual address of L3 PTE
- mov r19=cr.ifa // get virtual address
+ mov r16=cr.ifa // get virtual address
+ mov r19=cr.iha // get virtual address of L3 PTE
;;
- ld8.s r17=[r16] // try to read L3 PTE
+ ld8.s r17=[r19] // try to read L3 PTE
mov r31=pr // save predicates
;;
- tnat.nz p6,p0=r16 // did read succeed?
+ tnat.nz p6,p0=r17 // did read succeed?
(p6) br.cond.spnt.many 1f
;;
itc.d r17
;;
#ifdef CONFIG_SMP
- ld8.s r18=[r16] // try to read L3 PTE again and see if same
+ ld8.s r18=[r19] // try to read L3 PTE again and see if same
mov r20=PAGE_SHIFT<<2 // setup page size for purge
;;
cmp.eq p6,p7=r17,r18
;;
-(p7) ptc.l r19,r20
+(p7) ptc.l r16,r20
#endif
mov pr=r31,-1
rfi
-1: mov r16=cr.ifa // get address that caused the TLB miss
- ;;
- rsm psr.dt // use physical addressing for data
+#ifdef CONFIG_DISABLE_VHPT
+dtlb_fault:
+#endif
+1: rsm psr.dt // use physical addressing for data
mov r19=ar.k7 // get page table base address
shl r21=r16,3 // shift bit 60 into sign bit
shr.u r17=r16,61 // get the region number into r17
@@ -360,6 +361,16 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
mov r16=cr.ifa // get address that caused the TLB miss
+#ifdef CONFIG_DISABLE_VHPT
+ mov r31=pr
+ ;;
+ shr.u r21=r16,61 // get the region number into r21
+ ;;
+ cmp.gt p6,p0=6,r21 // user mode
+(p6) br.cond.dptk.many itlb_fault
+ ;;
+ mov pr=r31,-1
+#endif
movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX
;;
shr.u r18=r16,57 // move address bit 61 to bit 4
@@ -380,8 +391,14 @@
movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RW
mov r20=cr.isr
mov r21=cr.ipsr
- mov r19=pr
+ mov r31=pr
;;
+#ifdef CONFIG_DISABLE_VHPT
+ shr.u r22=r16,61 // get the region number into r22
+ ;;
+ cmp.gt p8,p0=6,r22 // user mode
+(p8) br.cond.dptk.many dtlb_fault
+#endif
tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on?
shr.u r18=r16,57 // move address bit 61 to bit 4
dep r16=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits
@@ -394,7 +411,7 @@
(p6) mov cr.ipsr=r21
;;
(p7) itc.d r16 // insert the TLB entry
- mov pr=r19,-1
+ mov pr=r31,-1
rfi
;;
diff -urN linux-davidm/arch/ia64/kernel/mca.c linux-2.4.0-test9-lia/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/mca.c Wed Oct 4 21:36:32 2000
@@ -255,8 +255,11 @@
IA64_MCA_DEBUG("ia64_mca_init : correctable mca vector setup done\n");
ia64_mc_info.imi_mca_handler = __pa(ia64_os_mca_dispatch);
- ia64_mc_info.imi_mca_handler_size =
- __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
+ /*
+ * XXX - disable SAL checksum by setting size to 0; should be
+ * __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
+ */
+ ia64_mc_info.imi_mca_handler_size = 0;
/* Register the os mca handler with SAL */
if (ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
ia64_mc_info.imi_mca_handler,
@@ -268,10 +271,14 @@
IA64_MCA_DEBUG("ia64_mca_init : registered os mca handler with SAL\n");
+ /*
+ * XXX - disable SAL checksum by setting size to 0, should be
+ * IA64_INIT_HANDLER_SIZE
+ */
ia64_mc_info.imi_monarch_init_handler = __pa(mon_init_ptr->fp);
- ia64_mc_info.imi_monarch_init_handler_size = IA64_INIT_HANDLER_SIZE;
+ ia64_mc_info.imi_monarch_init_handler_size = 0;
ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp);
- ia64_mc_info.imi_slave_init_handler_size = IA64_INIT_HANDLER_SIZE;
+ ia64_mc_info.imi_slave_init_handler_size = 0;
IA64_MCA_DEBUG("ia64_mca_init : os init handler at %lx\n",ia64_mc_info.imi_monarch_init_handler);
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.0-test9-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Thu Aug 24 08:17:30 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/perfmon.c Wed Oct 4 21:36:39 2000
@@ -10,15 +10,19 @@
#include <linux/config.h>
#include <linux/kernel.h>
+#include <linux/init.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/smp_lock.h>
+#include <linux/proc_fs.h>
+#include <linux/ptrace.h>
#include <asm/errno.h>
#include <asm/hw_irq.h>
#include <asm/processor.h>
#include <asm/system.h>
#include <asm/uaccess.h>
+#include <asm/pal.h>
/* Long blurb on how this works:
* We set dcr.pp, psr.pp, and the appropriate pmc control values with
@@ -52,68 +56,107 @@
#ifdef CONFIG_PERFMON
#define MAX_PERF_COUNTER 4 /* true for Itanium, at least */
+#define PMU_FIRST_COUNTER 4 /* first generic counter */
+
#define WRITE_PMCS_AND_START 0xa0
#define WRITE_PMCS 0xa1
#define READ_PMDS 0xa2
#define STOP_PMCS 0xa3
-#define IA64_COUNTER_MASK 0xffffffffffffff6fL
-#define PERF_OVFL_VAL 0xffffffffL
-volatile int used_by_system;
-struct perfmon_counter {
- unsigned long data;
- unsigned long counter_num;
-};
+/*
+ * this structure needs to be enhanced
+ */
+typedef struct {
+ unsigned long pmu_reg_data; /* generic PMD register */
+ unsigned long pmu_reg_num; /* which register number */
+} perfmon_reg_t;
+
+/*
+ * This structure is initialize at boot time and contains
+ * a description of the PMU main characteristic as indicated
+ * by PAL
+ */
+typedef struct {
+ unsigned long perf_ovfl_val; /* overflow value for generic counters */
+ unsigned long max_pmc; /* highest PMC */
+ unsigned long max_pmd; /* highest PMD */
+ unsigned long max_counters; /* number of generic counter pairs (PMC/PMD) */
+} pmu_config_t;
+/* XXX will go static when ptrace() is cleaned */
+unsigned long perf_ovfl_val; /* overflow value for generic counters */
+
+static pmu_config_t pmu_conf;
+
+/*
+ * could optimize to avoid cache conflicts in SMP
+ */
unsigned long pmds[NR_CPUS][MAX_PERF_COUNTER];
asmlinkage unsigned long
-sys_perfmonctl (int cmd1, int cmd2, void *ptr)
+sys_perfmonctl (int cmd, int count, void *ptr, long arg4, long arg5, long arg6, long arg7, long arg8, long stack)
{
- struct perfmon_counter tmp, *cptr = ptr;
- unsigned long cnum, dcr, flags;
- struct perf_counter;
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ perfmon_reg_t tmp, *cptr = ptr;
+ unsigned long cnum;
int i;
- switch (cmd1) {
+ switch (cmd) {
case WRITE_PMCS: /* Writes to PMC's and clears PMDs */
case WRITE_PMCS_AND_START: /* Also starts counting */
- if (cmd2 <= 0 || cmd2 > MAX_PERF_COUNTER - used_by_system)
- return -EINVAL;
-
- if (!access_ok(VERIFY_READ, cptr, sizeof(struct perf_counter)*cmd2))
+ if (!access_ok(VERIFY_READ, cptr, sizeof(struct perfmon_reg_t)*count))
return -EFAULT;
- current->thread.flags |= IA64_THREAD_PM_VALID;
+ for (i = 0; i < count; i++, cptr++) {
- for (i = 0; i < cmd2; i++, cptr++) {
copy_from_user(&tmp, cptr, sizeof(tmp));
- /* XXX need to check validity of counter_num and perhaps data!! */
- if (tmp.counter_num < 4
- || tmp.counter_num >= 4 + MAX_PERF_COUNTER - used_by_system)
- return -EFAULT;
-
- ia64_set_pmc(tmp.counter_num, tmp.data);
- ia64_set_pmd(tmp.counter_num, 0);
- pmds[smp_processor_id()][tmp.counter_num - 4] = 0;
+
+ /* XXX need to check validity of pmu_reg_num and perhaps data!! */
+
+ if (tmp.pmu_reg_num > pmu_conf.max_pmc || tmp.pmu_reg_num == 0) return -EFAULT;
+
+ ia64_set_pmc(tmp.pmu_reg_num, tmp.pmu_reg_data);
+
+ /* to go away */
+ if (tmp.pmu_reg_num >= PMU_FIRST_COUNTER && tmp.pmu_reg_num < PMU_FIRST_COUNTER+pmu_conf.max_counters) {
+ ia64_set_pmd(tmp.pmu_reg_num, 0);
+ pmds[smp_processor_id()][tmp.pmu_reg_num - PMU_FIRST_COUNTER] = 0;
+
+ printk(__FUNCTION__" setting PMC/PMD[%ld] es=0x%lx pmd[%ld]=%lx\n", tmp.pmu_reg_num, (tmp.pmu_reg_data>>8) & 0x7f, tmp.pmu_reg_num, ia64_get_pmd(tmp.pmu_reg_num));
+ } else
+ printk(__FUNCTION__" setting PMC[%ld]=0x%lx\n", tmp.pmu_reg_num, tmp.pmu_reg_data);
}
- if (cmd1 == WRITE_PMCS_AND_START) {
+ if (cmd == WRITE_PMCS_AND_START) {
+#if 0
+/* irrelevant with user monitors */
local_irq_save(flags);
+
dcr = ia64_get_dcr();
dcr |= IA64_DCR_PP;
ia64_set_dcr(dcr);
+
local_irq_restore(flags);
+#endif
+
ia64_set_pmc(0, 0);
+
+ /* will start monitoring right after rfi */
+ ia64_psr(regs)->up = 1;
}
+ /*
+ * mark the state as valid.
+ * this will trigger save/restore at context switch
+ */
+ current->thread.flags |= IA64_THREAD_PM_VALID;
break;
case READ_PMDS:
- if (cmd2 <= 0 || cmd2 > MAX_PERF_COUNTER - used_by_system)
+ if (count <= 0 || count > MAX_PERF_COUNTER)
return -EINVAL;
- if (!access_ok(VERIFY_WRITE, cptr, sizeof(struct perf_counter)*cmd2))
+ if (!access_ok(VERIFY_WRITE, cptr, sizeof(struct perfmon_reg_t)*count))
return -EFAULT;
/* This looks shady, but IMHO this will work fine. This is
@@ -121,14 +164,15 @@
* with the interrupt handler. See explanation in the
* following comment.
*/
-
+#if 0
+/* irrelevant with user monitors */
local_irq_save(flags);
__asm__ __volatile__("rsm psr.pp\n");
dcr = ia64_get_dcr();
dcr &= ~IA64_DCR_PP;
ia64_set_dcr(dcr);
local_irq_restore(flags);
-
+#endif
/*
* We cannot write to pmc[0] to stop counting here, as
* that particular instruction might cause an overflow
@@ -142,36 +186,47 @@
* when we re-enabled interrupts. When I muck with dcr,
* is the irq_save/restore needed?
*/
- for (i = 0, cnum = 4;i < cmd2; i++, cnum++, cptr++) {
- tmp.data = (pmds[smp_processor_id()][i]
- + (ia64_get_pmd(cnum) & PERF_OVFL_VAL));
- tmp.counter_num = cnum;
- if (copy_to_user(cptr, &tmp, sizeof(tmp)))
- return -EFAULT;
- //put_user(pmd, &cptr->data);
+
+
+ /* XXX: This needs to change to read more than just the counters */
+ for (i = 0, cnum = PMU_FIRST_COUNTER;i < count; i++, cnum++, cptr++) {
+
+ tmp.pmu_reg_data = (pmds[smp_processor_id()][i]
+ + (ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+
+ tmp.pmu_reg_num = cnum;
+
+ if (copy_to_user(cptr, &tmp, sizeof(tmp))) return -EFAULT;
}
+#if 0
+/* irrelevant with user monitors */
local_irq_save(flags);
__asm__ __volatile__("ssm psr.pp");
dcr = ia64_get_dcr();
dcr |= IA64_DCR_PP;
ia64_set_dcr(dcr);
local_irq_restore(flags);
+#endif
break;
case STOP_PMCS:
ia64_set_pmc(0, 1);
ia64_srlz_d();
- for (i = 0; i < MAX_PERF_COUNTER - used_by_system; ++i)
+ for (i = 0; i < MAX_PERF_COUNTER; ++i)
ia64_set_pmc(4+i, 0);
- if (!used_by_system) {
- local_irq_save(flags);
- dcr = ia64_get_dcr();
- dcr &= ~IA64_DCR_PP;
- ia64_set_dcr(dcr);
- local_irq_restore(flags);
- }
+#if 0
+/* irrelevant with user monitors */
+ local_irq_save(flags);
+ dcr = ia64_get_dcr();
+ dcr &= ~IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
+ ia64_psr(regs)->up = 0;
+#endif
+
current->thread.flags &= ~(IA64_THREAD_PM_VALID);
+
break;
default:
@@ -187,13 +242,21 @@
unsigned long mask, i, cnum, val;
mask = ia64_get_pmc(0) >> 4;
- for (i = 0, cnum = 4; i < MAX_PERF_COUNTER - used_by_system; cnum++, i++, mask >>= 1) {
- val = 0;
+ for (i = 0, cnum = PMU_FIRST_COUNTER ; i < pmu_conf.max_counters; cnum++, i++, mask >>= 1) {
+
+
+ val = mask & 0x1 ? pmu_conf.perf_ovfl_val + 1 : 0;
+
if (mask & 0x1)
- val += PERF_OVFL_VAL + 1;
+ printk(__FUNCTION__ " PMD%ld overflowed pmd=%lx pmod=%lx\n", cnum, ia64_get_pmd(cnum), pmds[smp_processor_id()][i]);
+
/* since we got an interrupt, might as well clear every pmd. */
- val += ia64_get_pmd(cnum) & PERF_OVFL_VAL;
+ val += ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val;
+
+ printk(__FUNCTION__ " adding val=%lx to pmod[%ld]=%lx \n", val, i, pmds[smp_processor_id()][i]);
+
pmds[smp_processor_id()][i] += val;
+
ia64_set_pmd(cnum, 0);
}
}
@@ -212,16 +275,69 @@
name: "perfmon"
};
-void
+static int
+perfmon_proc_info(char *page)
+{
+ char *p = page;
+ u64 pmc0 = ia64_get_pmc(0);
+
+ p += sprintf(p, "PMC[0]=%lx\n", pmc0);
+
+ return p - page;
+}
+
+static int
+perfmon_read_entry(char *page, char **start, off_t off, int count, int *eof, void *data)
+{
+ int len = perfmon_proc_info(page);
+
+ if (len <= off+count) *eof = 1;
+
+ *start = page + off;
+ len -= off;
+
+ if (len>count) len = count;
+ if (len<0) len = 0;
+
+ return len;
+}
+
+static struct proc_dir_entry *perfmon_dir;
+
+void __init
perfmon_init (void)
{
+ pal_perf_mon_info_u_t pm_info;
+ u64 pm_buffer[16];
+ s64 status;
+
irq_desc[PERFMON_IRQ].status |= IRQ_PER_CPU;
irq_desc[PERFMON_IRQ].handler = &irq_type_ia64_sapic;
setup_irq(PERFMON_IRQ, &perfmon_irqaction);
ia64_set_pmv(PERFMON_IRQ);
ia64_srlz_d();
- printk("Initialized perfmon vector to %u\n",PERFMON_IRQ);
+
+ printk("perfmon: Initialized vector to %u\n",PERFMON_IRQ);
+
+ if ((status=ia64_pal_perf_mon_info(pm_buffer, &pm_info)) != 0) {
+ printk(__FUNCTION__ " pal call failed (%ld)\n", status);
+ return;
+ }
+ pmu_conf.perf_ovfl_val = perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
+
+ /* XXX need to use PAL instead */
+ pmu_conf.max_pmc = 13;
+ pmu_conf.max_pmd = 17;
+ pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
+
+ printk("perfmon: Counters are %d bits\n", pm_info.pal_perf_mon_info_s.width);
+ printk("perfmon: Maximum counter value 0x%lx\n", pmu_conf.perf_ovfl_val);
+
+ /*
+ * for now here for debug purposes
+ */
+ perfmon_dir = create_proc_read_entry ("perfmon", 0, 0, perfmon_read_entry, NULL);
}
void
@@ -238,10 +354,13 @@
ia64_set_pmc(0, 1);
ia64_srlz_d();
- for (i=0; i< IA64_NUM_PM_REGS - used_by_system ; i++) {
- t->pmd[i] = ia64_get_pmd(4+i);
+ /*
+ * XXX: this will need to be extended beyond just counters
+ */
+ for (i=0; i< IA64_NUM_PM_REGS; i++) {
+ t->pmd[i] = ia64_get_pmd(4+i);
t->pmod[i] = pmds[smp_processor_id()][i];
- t->pmc[i] = ia64_get_pmc(4+i);
+ t->pmc[i] = ia64_get_pmc(4+i);
}
}
@@ -250,7 +369,10 @@
{
int i;
- for (i=0; i< IA64_NUM_PM_REGS - used_by_system ; i++) {
+ /*
+ * XXX: this will need to be extended beyond just counters
+ */
+ for (i=0; i< IA64_NUM_PM_REGS ; i++) {
ia64_set_pmd(4+i, t->pmd[i]);
pmds[smp_processor_id()][i] = t->pmod[i];
ia64_set_pmc(4+i, t->pmc[i]);
@@ -262,7 +384,7 @@
#else /* !CONFIG_PERFMON */
asmlinkage unsigned long
-sys_perfmonctl (int cmd1, int cmd2, void *ptr)
+sys_perfmonctl (int cmd, int count, void *ptr)
{
return -ENOSYS;
}
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.0-test9-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/process.c Wed Oct 4 21:37:10 2000
@@ -294,7 +294,8 @@
* call behavior where scratch registers are preserved across
* system calls (unless used by the system call itself).
*/
-# define THREAD_FLAGS_TO_CLEAR (IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID)
+# define THREAD_FLAGS_TO_CLEAR (IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID \
+ | IA64_THREAD_PM_VALID)
# define THREAD_FLAGS_TO_SET 0
p->thread.flags = ((current->thread.flags & ~THREAD_FLAGS_TO_CLEAR)
| THREAD_FLAGS_TO_SET);
@@ -333,6 +334,17 @@
if (ia64_peek(pt, current, addr, &val) == 0)
access_process_vm(current, addr, &val, sizeof(val), 1);
+ /*
+ * coredump format:
+ * r0-r31
+ * NaT bits (for r0-r31; bit N = 1 iff rN is a NaT)
+ * predicate registers (p0-p63)
+ * b0-b7
+ * ip cfm user-mask
+ * ar.rsc ar.bsp ar.bspstore ar.rnat
+ * ar.ccv ar.unat ar.fpsr ar.pfs ar.lc ar.ec
+ */
+
/* r0 is zero */
for (i = 1, mask = (1UL << i); i < 32; ++i) {
unw_get_gr(info, i, &dst[i], &nat);
@@ -530,6 +542,24 @@
#ifndef CONFIG_SMP
if (ia64_get_fpu_owner() == current)
ia64_set_fpu_owner(0);
+#endif
+#ifdef CONFIG_PERFMON
+ /* stop monitoring */
+ if ((current->thread.flags & IA64_THREAD_PM_VALID) != 0) {
+ /*
+ * we cannot rely on switch_to() to save the PMU
+ * context for the last time. There is a possible race
+ * condition in SMP mode between the child and the
+ * parent. by explicitly saving the PMU context here
+ * we guarantee no race. this call will also stop
+ * monitoring
+ */
+ ia64_save_pm_regs(&current->thread);
+ /*
+ * make sure that switch_to() will not save context again
+ */
+ current->thread.flags &= ~IA64_THREAD_PM_VALID;
+ }
#endif
}
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.0-test9-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/ptrace.c Wed Oct 4 21:37:31 2000
@@ -617,6 +617,7 @@
struct switch_stack *sw;
struct unw_frame_info info;
struct pt_regs *pt;
+ unsigned long pmd_tmp;
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
@@ -793,7 +794,11 @@
addr);
return -1;
}
- } else {
+ } else
+#ifdef CONFIG_PERFMON
+ if (addr < PT_PMD)
+#endif
+ {
/* access debug registers */
if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
@@ -816,6 +821,32 @@
ptr += regnum;
}
+#ifdef CONFIG_PERFMON
+ else {
+ /*
+ * XXX: will eventually move back to perfmonctl()
+ */
+ unsigned long pmd = (addr - PT_PMD) >> 3;
+ extern unsigned long perf_ovfl_val;
+
+ /* we just use ptrace to read */
+ if (write_access) return -1;
+
+ if (pmd > 3) {
+ printk("ptrace: rejecting access to PMD[%ld] address 0x%lx\n", pmd, addr);
+ return -1;
+ }
+
+ /*
+ * We always need to mask upper 32bits of pmd because value is random
+ */
+ pmd_tmp = child->thread.pmod[pmd]+(child->thread.pmd[pmd]& perf_ovfl_val);
+
+ /*printk(__FUNCTION__" child=%d reading pmd[%ld]=%lx\n", child->pid, pmd, pmd_tmp);*/
+
+ ptr = &pmd_tmp;
+ }
+#endif
if (write_access)
*ptr = *data;
else
@@ -945,7 +976,12 @@
/* disallow accessing anything else... */
return -1;
}
- } else {
+ } else
+#ifdef CONFIG_PERFMON
+ if (addr < PT_PMD)
+#endif
+ {
+
/* access debug registers */
if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
@@ -966,6 +1002,33 @@
ptr += regnum;
}
+#ifdef CONFIG_PERFMON
+ else {
+ /*
+ * XXX: will eventually move back to perfmonctl()
+ */
+ unsigned long pmd = (addr - PT_PMD) >> 3;
+ extern unsigned long perf_ovfl_val;
+
+ /* we just use ptrace to read */
+ if (write_access) return -1;
+
+ if (pmd > 3) {
+ printk("ptrace: rejecting access to PMD[%ld] address 0x%lx\n", pmd, addr);
+ return -1;
+ }
+
+ /*
+ * We always need to mask upper 32bits of pmd because value is random
+ */
+ pmd_tmp = child->thread.pmod[pmd]+(child->thread.pmd[pmd]& perf_ovfl_val);
+
+ /*printk(__FUNCTION__" child=%d reading pmd[%ld]=%lx\n", child->pid, pmd, pmd_tmp);*/
+
+ ptr = &pmd_tmp;
+ }
+#endif
+
if (write_access)
*ptr = *data;
else
@@ -1041,10 +1104,12 @@
ret = -ESRCH;
if (!(child->ptrace & PT_PTRACED))
goto out_tsk;
+
if (child->state != TASK_STOPPED) {
- if (request != PTRACE_KILL)
+ if (request != PTRACE_KILL && request != PTRACE_PEEKUSR)
goto out_tsk;
}
+
if (child->p_pptr != current)
goto out_tsk;
diff -urN linux-davidm/arch/ia64/kernel/sal.c linux-2.4.0-test9-lia/arch/ia64/kernel/sal.c
--- linux-davidm/arch/ia64/kernel/sal.c Thu Aug 24 08:17:30 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/sal.c Wed Oct 4 21:37:41 2000
@@ -34,6 +34,7 @@
}
ia64_sal_handler ia64_sal = (ia64_sal_handler) default_handler;
+ia64_sal_desc_ptc_t *ia64_ptc_domain_info;
const char *
ia64_sal_strerror (long status)
@@ -125,6 +126,10 @@
#endif
ia64_pal_handler_init(__va(ep->pal_proc));
ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp));
+ break;
+
+ case SAL_DESC_PTC:
+ ia64_ptc_domain_info = (ia64_sal_desc_ptc_t *)p;
break;
case SAL_DESC_AP_WAKEUP:
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-test9-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/setup.c Wed Oct 4 21:37:50 2000
@@ -270,6 +270,11 @@
int
get_cpuinfo (char *buffer)
{
+#ifdef CONFIG_SMP
+# define lps c->loops_per_sec
+#else
+# define lps loops_per_sec
+#endif
char family[32], model[32], features[128], *cp, *p = buffer;
struct cpuinfo_ia64 *c;
unsigned long mask;
@@ -320,7 +325,7 @@
features,
c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000,
c->itc_freq / 1000000, c->itc_freq % 1000000,
- ia64_loops_per_sec() / 500000, (ia64_loops_per_sec() / 5000) % 100);
+ lps / 500000, (lps / 5000) % 100);
}
return p - buffer;
}
@@ -416,8 +421,9 @@
* do NOT defer TLB misses, page-not-present, access bit, or
* debug faults but kernel code should not rely on any
* particular setting of these bits.
- */
ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_PP);
+ */
+ ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX );
#ifndef CONFIG_SMP
ia64_set_fpu_owner(0); /* initialize ar.k5 */
#endif
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-test9-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/smp.c Wed Oct 4 21:38:41 2000
@@ -44,8 +44,8 @@
extern void __init calibrate_delay(void);
extern int cpu_idle(void * unused);
-extern void _start(void);
extern void machine_halt(void);
+extern void start_ap(void);
extern int cpu_now_booting; /* Used by head.S to find idle task */
extern volatile unsigned long cpu_online_map; /* Bitmap of available cpu's */
@@ -463,46 +463,6 @@
}
}
-/*
- * SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
- * no RRs set, better than even chance that psr is bogus. Fix all that and
- * call _start. In effect, pretend to be lilo.
- *
- * Stolen from lilo_start.c. Thanks David!
- */
-void
-start_ap(void)
-{
- unsigned long flags;
-
- /*
- * Install a translation register that identity maps the
- * kernel's 256MB page(s).
- */
- ia64_clear_ic(flags);
- ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
- ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
- ia64_srlz_d();
- ia64_itr(0x3, 1, PAGE_OFFSET,
- pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
- _PAGE_SIZE_256M);
- ia64_srlz_i();
-
- flags = (IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_DFH |
- IA64_PSR_BN);
-
- asm volatile ("movl r8 = 1f\n"
- ";;\n"
- "mov cr.ipsr=%0\n"
- "mov cr.iip=r8\n"
- "mov cr.ifs=r0\n"
- ";;\n"
- "rfi;;"
- "1:\n"
- "movl r1 = __gp" :: "r"(flags) : "r8");
- _start();
-}
-
/*
* AP's start using C here.
@@ -642,7 +602,7 @@
/* Setup BP mappings */
__cpu_physical_id[0] = hard_smp_processor_id();
- calibrate_delay();
+ /* on the BP, the kernel already called calibrate_delay_loop() in init/main.c */
my_cpu_data.loops_per_sec = loops_per_sec;
#if 0
smp_tune_scheduling();
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c linux-2.4.0-test9-lia/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/smpboot.c Wed Oct 4 21:38:56 2000
@@ -0,0 +1,76 @@
+/*
+ * SMP Support
+ *
+ * Application processor startup code, moved from smp.c to better support kernel profiling
+ */
+
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/kernel_stat.h>
+#include <linux/mm.h>
+#include <linux/delay.h>
+
+#include <asm/atomic.h>
+#include <asm/bitops.h>
+#include <asm/current.h>
+#include <asm/delay.h>
+#include <asm/efi.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+#include <asm/unistd.h>
+
+/*
+ * SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
+ * no RRs set, better than even chance that psr is bogus. Fix all that and
+ * call _start. In effect, pretend to be lilo.
+ *
+ * Stolen from lilo_start.c. Thanks David!
+ */
+void
+start_ap(void)
+{
+ extern void _start (void);
+ unsigned long flags;
+
+ /*
+ * Install a translation register that identity maps the
+ * kernel's 256MB page(s).
+ */
+ ia64_clear_ic(flags);
+ ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
+ ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
+ ia64_srlz_d();
+ ia64_itr(0x3, 1, PAGE_OFFSET,
+ pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
+ _PAGE_SIZE_256M);
+ ia64_srlz_i();
+
+ flags = (IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_DFH |
+ IA64_PSR_BN);
+
+ asm volatile ("movl r8 = 1f\n"
+ ";;\n"
+ "mov cr.ipsr=%0\n"
+ "mov cr.iip=r8\n"
+ "mov cr.ifs=r0\n"
+ ";;\n"
+ "rfi;;"
+ "1:\n"
+ "movl r1 = __gp" :: "r"(flags) : "r8");
+ _start();
+}
+
+
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.0-test9-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/traps.c Wed Oct 4 21:39:28 2000
@@ -254,10 +254,11 @@
* kernel, so set those bits in the mask and set the low volatile
* pointer to point to these registers.
*/
- fp_state.bitmask_low64 = 0xffc0; /* bit6..bit15 */
#ifndef FPSWA_BUG
- fp_state.fp_state_low_volatile = &regs->f6;
+ fp_state.bitmask_low64 = 0x3c0; /* bit 6..9 */
+ fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;
#else
+ fp_state.bitmask_low64 = 0xffc0; /* bit6..bit15 */
f6_15[0] = regs->f6;
f6_15[1] = regs->f7;
f6_15[2] = regs->f8;
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-test9-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/unwind.c Wed Oct 4 21:39:58 2000
@@ -66,7 +66,7 @@
#define UNW_STATS 0 /* WARNING: this disabled interrupts for long time-spans!! */
#if UNW_DEBUG
- static long unw_debug_level = 1;
+ static long unw_debug_level = 255;
# define debug(level,format...) if (unw_debug_level > level) printk(format)
# define dprintk(format...) printk(format)
# define inline
@@ -111,7 +111,7 @@
struct unw_table kernel_table;
/* hash table that maps instruction pointer to script index: */
- unw_hash_index_t hash[UNW_HASH_SIZE];
+ unsigned short hash[UNW_HASH_SIZE];
/* script cache: */
struct unw_script cache[UNW_CACHE_SIZE];
@@ -152,47 +152,47 @@
UNW_REG_UNAT, UNW_REG_LC, UNW_REG_FPSR, UNW_REG_PRI_UNAT_GR
},
preg_index: {
- struct_offset(struct unw_frame_info, pri_unat)/8, /* PRI_UNAT_GR */
- struct_offset(struct unw_frame_info, pri_unat)/8, /* PRI_UNAT_MEM */
- struct_offset(struct unw_frame_info, pbsp)/8,
- struct_offset(struct unw_frame_info, bspstore)/8,
- struct_offset(struct unw_frame_info, pfs)/8,
- struct_offset(struct unw_frame_info, rnat)/8,
+ struct_offset(struct unw_frame_info, pri_unat_loc)/8, /* PRI_UNAT_GR */
+ struct_offset(struct unw_frame_info, pri_unat_loc)/8, /* PRI_UNAT_MEM */
+ struct_offset(struct unw_frame_info, bsp_loc)/8,
+ struct_offset(struct unw_frame_info, bspstore_loc)/8,
+ struct_offset(struct unw_frame_info, pfs_loc)/8,
+ struct_offset(struct unw_frame_info, rnat_loc)/8,
struct_offset(struct unw_frame_info, psp)/8,
- struct_offset(struct unw_frame_info, rp)/8,
+ struct_offset(struct unw_frame_info, rp_loc)/8,
struct_offset(struct unw_frame_info, r4)/8,
struct_offset(struct unw_frame_info, r5)/8,
struct_offset(struct unw_frame_info, r6)/8,
struct_offset(struct unw_frame_info, r7)/8,
- struct_offset(struct unw_frame_info, unat)/8,
- struct_offset(struct unw_frame_info, pr)/8,
- struct_offset(struct unw_frame_info, lc)/8,
- struct_offset(struct unw_frame_info, fpsr)/8,
- struct_offset(struct unw_frame_info, b1)/8,
- struct_offset(struct unw_frame_info, b2)/8,
- struct_offset(struct unw_frame_info, b3)/8,
- struct_offset(struct unw_frame_info, b4)/8,
- struct_offset(struct unw_frame_info, b5)/8,
- struct_offset(struct unw_frame_info, f2)/8,
- struct_offset(struct unw_frame_info, f3)/8,
- struct_offset(struct unw_frame_info, f4)/8,
- struct_offset(struct unw_frame_info, f5)/8,
- struct_offset(struct unw_frame_info, fr[16 - 16])/8,
- struct_offset(struct unw_frame_info, fr[17 - 16])/8,
- struct_offset(struct unw_frame_info, fr[18 - 16])/8,
- struct_offset(struct unw_frame_info, fr[19 - 16])/8,
- struct_offset(struct unw_frame_info, fr[20 - 16])/8,
- struct_offset(struct unw_frame_info, fr[21 - 16])/8,
- struct_offset(struct unw_frame_info, fr[22 - 16])/8,
- struct_offset(struct unw_frame_info, fr[23 - 16])/8,
- struct_offset(struct unw_frame_info, fr[24 - 16])/8,
- struct_offset(struct unw_frame_info, fr[25 - 16])/8,
- struct_offset(struct unw_frame_info, fr[26 - 16])/8,
- struct_offset(struct unw_frame_info, fr[27 - 16])/8,
- struct_offset(struct unw_frame_info, fr[28 - 16])/8,
- struct_offset(struct unw_frame_info, fr[29 - 16])/8,
- struct_offset(struct unw_frame_info, fr[30 - 16])/8,
- struct_offset(struct unw_frame_info, fr[31 - 16])/8,
+ struct_offset(struct unw_frame_info, unat_loc)/8,
+ struct_offset(struct unw_frame_info, pr_loc)/8,
+ struct_offset(struct unw_frame_info, lc_loc)/8,
+ struct_offset(struct unw_frame_info, fpsr_loc)/8,
+ struct_offset(struct unw_frame_info, b1_loc)/8,
+ struct_offset(struct unw_frame_info, b2_loc)/8,
+ struct_offset(struct unw_frame_info, b3_loc)/8,
+ struct_offset(struct unw_frame_info, b4_loc)/8,
+ struct_offset(struct unw_frame_info, b5_loc)/8,
+ struct_offset(struct unw_frame_info, f2_loc)/8,
+ struct_offset(struct unw_frame_info, f3_loc)/8,
+ struct_offset(struct unw_frame_info, f4_loc)/8,
+ struct_offset(struct unw_frame_info, f5_loc)/8,
+ struct_offset(struct unw_frame_info, fr_loc[16 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[17 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[18 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[19 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[20 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[21 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[22 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[23 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[24 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[25 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[26 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[27 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[28 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[29 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[30 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[31 - 16])/8,
},
hash : { [0 ... UNW_HASH_SIZE - 1] = -1 },
#if UNW_DEBUG
@@ -211,6 +211,27 @@
\f
/* Unwind accessors. */
+/*
+ * Returns offset of rREG in struct pt_regs.
+ */
+static inline unsigned long
+pt_regs_off (unsigned long reg)
+{
+ unsigned long off = 0;
+
+ if (reg >= 1 && reg <= 3)
+ off = struct_offset(struct pt_regs, r1) + 8*(reg - 1);
+ else if (reg <= 11)
+ off = struct_offset(struct pt_regs, r8) + 8*(reg - 8);
+ else if (reg <= 15)
+ off = struct_offset(struct pt_regs, r12) + 8*(reg - 12);
+ else if (reg <= 31)
+ off = struct_offset(struct pt_regs, r16) + 8*(reg - 16);
+ else
+ dprintk("unwind: bad scratch reg r%lu\n", reg);
+ return off;
+}
+
int
unw_access_gr (struct unw_frame_info *info, int regnum, unsigned long *val, char *nat, int write)
{
@@ -251,26 +272,22 @@
}
/* fall through */
case UNW_NAT_NONE:
+ dummy_nat = 0;
nat_addr = &dummy_nat;
break;
- case UNW_NAT_SCRATCH:
- if (info->pri_unat)
- nat_addr = info->pri_unat;
- else
- nat_addr = &info->sw->caller_unat;
- case UNW_NAT_PRI_UNAT:
+ case UNW_NAT_MEMSTK:
nat_mask = (1UL << ((long) addr & 0x1f8)/8);
break;
- case UNW_NAT_STACKED:
+ case UNW_NAT_REGSTK:
nat_addr = ia64_rse_rnat_addr(addr);
if ((unsigned long) addr < info->regstk.limit
|| (unsigned long) addr >= info->regstk.top)
{
dprintk("unwind: %p outside of regstk "
- "[0x%lx-0x%lx)\n", addr,
- (void *) info->regstk.limit,
+ "[0x%lx-0x%lx)\n", (void *) addr,
+ info->regstk.limit,
info->regstk.top);
return -1;
}
@@ -290,18 +307,11 @@
pt = (struct pt_regs *) info->psp - 1;
else
pt = (struct pt_regs *) info->sp - 1;
- if (regnum <= 3)
- addr = &pt->r1 + (regnum - 1);
- else if (regnum <= 11)
- addr = &pt->r8 + (regnum - 8);
- else if (regnum <= 15)
- addr = &pt->r12 + (regnum - 12);
- else
- addr = &pt->r16 + (regnum - 16);
- if (info->pri_unat)
- nat_addr = info->pri_unat;
+ addr = (unsigned long *) ((long) pt + pt_regs_off(regnum));
+ if (info->pri_unat_loc)
+ nat_addr = info->pri_unat_loc;
else
- nat_addr = &info->sw->caller_unat;
+ nat_addr = &info->sw->ar_unat;
nat_mask = (1UL << ((long) addr & 0x1f8)/8);
}
} else {
@@ -321,7 +331,10 @@
if (write) {
*addr = *val;
- *nat_addr = (*nat_addr & ~nat_mask) | nat_mask;
+ if (*nat)
+ *nat_addr |= nat_mask;
+ else
+ *nat_addr &= ~nat_mask;
} else {
*val = *addr;
*nat = (*nat_addr & nat_mask) != 0;
@@ -347,7 +360,7 @@
/* preserved: */
case 1: case 2: case 3: case 4: case 5:
- addr = *(&info->b1 + (regnum - 1));
+ addr = *(&info->b1_loc + (regnum - 1));
if (!addr)
addr = &info->sw->b1 + (regnum - 1);
break;
@@ -380,7 +393,7 @@
pt = (struct pt_regs *) info->sp - 1;
if (regnum <= 5) {
- addr = *(&info->f2 + (regnum - 2));
+ addr = *(&info->f2_loc + (regnum - 2));
if (!addr)
addr = &info->sw->f2 + (regnum - 2);
} else if (regnum <= 15) {
@@ -389,7 +402,7 @@
else
addr = &info->sw->f10 + (regnum - 10);
} else if (regnum <= 31) {
- addr = info->fr[regnum - 16];
+ addr = info->fr_loc[regnum - 16];
if (!addr)
addr = &info->sw->f16 + (regnum - 16);
} else {
@@ -422,52 +435,53 @@
switch (regnum) {
case UNW_AR_BSP:
- addr = info->pbsp;
+ addr = info->bsp_loc;
if (!addr)
addr = &info->sw->ar_bspstore;
break;
case UNW_AR_BSPSTORE:
- addr = info->bspstore;
+ addr = info->bspstore_loc;
if (!addr)
addr = &info->sw->ar_bspstore;
break;
case UNW_AR_PFS:
- addr = info->pfs;
+ addr = info->pfs_loc;
if (!addr)
addr = &info->sw->ar_pfs;
break;
case UNW_AR_RNAT:
- addr = info->rnat;
+ addr = info->rnat_loc;
if (!addr)
addr = &info->sw->ar_rnat;
break;
case UNW_AR_UNAT:
- addr = info->unat;
+ addr = info->unat_loc;
if (!addr)
addr = &info->sw->ar_unat;
break;
case UNW_AR_LC:
- addr = info->lc;
+ addr = info->lc_loc;
if (!addr)
addr = &info->sw->ar_lc;
break;
case UNW_AR_EC:
- if (!info->cfm)
+ if (!info->cfm_loc)
return -1;
if (write)
- *info->cfm = (*info->cfm & ~(0x3fUL << 52)) | ((*val & 0x3f) << 52);
+ *info->cfm_loc = (*info->cfm_loc & ~(0x3fUL << 52)) | ((*val & 0x3f) << 52);
else
- *val = (*info->cfm >> 52) & 0x3f;
+ *val = (*info->cfm_loc >> 52) & 0x3f;
return 0;
case UNW_AR_FPSR:
- addr = info->fpsr;
+ addr = info->fpsr_loc;
if (!addr)
addr = &info->sw->ar_fpsr;
break;
@@ -497,7 +511,7 @@
{
unsigned long *addr;
- addr = info->pr;
+ addr = info->pr_loc;
if (!addr)
addr = &info->sw->pr;
@@ -609,9 +623,8 @@
int i;
/*
- * First, resolve implicit register save locations
- * (see Section "11.4.2.3 Rules for Using Unwind
- * Descriptors", rule 3):
+ * First, resolve implicit register save locations (see Section "11.4.2.3 Rules
+ * for Using Unwind Descriptors", rule 3):
*/
for (i = 0; i < (int) sizeof(unw.save_order)/sizeof(unw.save_order[0]); ++i) {
reg = sr->curr.reg + unw.save_order[i];
@@ -1049,16 +1062,16 @@
static inline unw_hash_index_t
hash (unsigned long ip)
{
-# define magic 0x9e3779b97f4a7c16 /* (sqrt(5)/2-1)*2^64 */
+# define magic 0x9e3779b97f4a7c16 /* based on (sqrt(5)/2-1)*2^64 */
return (ip >> 4)*magic >> (64 - UNW_LOG_HASH_SIZE);
}
static inline long
-cache_match (struct unw_script *script, unsigned long ip, unsigned long pr_val)
+cache_match (struct unw_script *script, unsigned long ip, unsigned long pr)
{
read_lock(&script->lock);
- if ((ip) == (script)->ip && (((pr_val) ^ (script)->pr_val) & (script)->pr_mask) == 0)
+ if (ip == script->ip && ((pr ^ script->pr_val) & script->pr_mask) == 0)
/* keep the read lock... */
return 1;
read_unlock(&script->lock);
@@ -1069,21 +1082,26 @@
script_lookup (struct unw_frame_info *info)
{
struct unw_script *script = unw.cache + info->hint;
- unsigned long ip, pr_val;
+ unsigned short index;
+ unsigned long ip, pr;
STAT(++unw.stat.cache.lookups);
ip = info->ip;
- pr_val = info->pr_val;
+ pr = info->pr;
- if (cache_match(script, ip, pr_val)) {
+ if (cache_match(script, ip, pr)) {
STAT(++unw.stat.cache.hinted_hits);
return script;
}
- script = unw.cache + unw.hash[hash(ip)];
+ index = unw.hash[hash(ip)];
+ if (index >= UNW_CACHE_SIZE)
+ return 0;
+
+ script = unw.cache + index;
while (1) {
- if (cache_match(script, ip, pr_val)) {
+ if (cache_match(script, ip, pr)) {
/* update hint; no locking required as single-word writes are atomic */
STAT(++unw.stat.cache.normal_hits);
unw.cache[info->prev_script].hint = script - unw.cache;
@@ -1103,8 +1121,8 @@
script_new (unsigned long ip)
{
struct unw_script *script, *prev, *tmp;
+ unw_hash_index_t index;
unsigned long flags;
- unsigned char index;
unsigned short head;
STAT(++unw.stat.script.news);
@@ -1137,22 +1155,24 @@
unw.lru_tail = head;
/* remove the old script from the hash table (if it's there): */
- index = hash(script->ip);
- tmp = unw.cache + unw.hash[index];
- prev = 0;
- while (1) {
- if (tmp == script) {
- if (prev)
- prev->coll_chain = tmp->coll_chain;
- else
- unw.hash[index] = tmp->coll_chain;
- break;
- } else
- prev = tmp;
- if (tmp->coll_chain >= UNW_CACHE_SIZE)
+ if (script->ip) {
+ index = hash(script->ip);
+ tmp = unw.cache + unw.hash[index];
+ prev = 0;
+ while (1) {
+ if (tmp == script) {
+ if (prev)
+ prev->coll_chain = tmp->coll_chain;
+ else
+ unw.hash[index] = tmp->coll_chain;
+ break;
+ } else
+ prev = tmp;
+ if (tmp->coll_chain >= UNW_CACHE_SIZE)
/* old script wasn't in the hash-table */
- break;
- tmp = unw.cache + tmp->coll_chain;
+ break;
+ tmp = unw.cache + tmp->coll_chain;
+ }
}
/* enter new script in the hash table */
@@ -1202,19 +1222,17 @@
struct unw_reg_info *r = sr->curr.reg + i;
enum unw_insn_opcode opc;
struct unw_insn insn;
- unsigned long val;
+ unsigned long val = 0;
switch (r->where) {
case UNW_WHERE_GR:
if (r->val >= 32) {
/* register got spilled to a stacked register */
opc = UNW_INSN_SETNAT_TYPE;
- val = UNW_NAT_STACKED;
- } else {
+ val = UNW_NAT_REGSTK;
+ } else
/* register got spilled to a scratch register */
- opc = UNW_INSN_SETNAT_TYPE;
- val = UNW_NAT_SCRATCH;
- }
+ opc = UNW_INSN_SETNAT_MEMSTK;
break;
case UNW_WHERE_FR:
@@ -1229,8 +1247,7 @@
case UNW_WHERE_PSPREL:
case UNW_WHERE_SPREL:
- opc = UNW_INSN_SETNAT_PRI_UNAT;
- val = 0;
+ opc = UNW_INSN_SETNAT_MEMSTK;
break;
default:
@@ -1271,18 +1288,8 @@
}
val = unw.preg_index[UNW_REG_R4 + (rval - 4)];
} else {
- opc = UNW_INSN_LOAD_SPREL;
- val = -sizeof(struct pt_regs);
- if (rval >= 1 && rval <= 3)
- val += struct_offset(struct pt_regs, r1) + 8*(rval - 1);
- else if (rval <= 11)
- val += struct_offset(struct pt_regs, r8) + 8*(rval - 8);
- else if (rval <= 15)
- val += struct_offset(struct pt_regs, r12) + 8*(rval - 12);
- else if (rval <= 31)
- val += struct_offset(struct pt_regs, r16) + 8*(rval - 16);
- else
- dprintk("unwind: bad scratch reg r%lu\n", rval);
+ opc = UNW_INSN_ADD_SP;
+ val = -sizeof(struct pt_regs) + pt_regs_off(rval);
}
break;
@@ -1292,7 +1299,7 @@
else if (rval >= 16 && rval <= 31)
val = unw.preg_index[UNW_REG_F16 + (rval - 16)];
else {
- opc = UNW_INSN_LOAD_SPREL;
+ opc = UNW_INSN_ADD_SP;
val = -sizeof(struct pt_regs);
if (rval <= 9)
val += struct_offset(struct pt_regs, f6) + 16*(rval - 6);
@@ -1305,7 +1312,7 @@
if (rval >= 1 && rval <= 5)
val = unw.preg_index[UNW_REG_B1 + (rval - 1)];
else {
- opc = UNW_INSN_LOAD_SPREL;
+ opc = UNW_INSN_ADD_SP;
val = -sizeof(struct pt_regs);
if (rval == 0)
val += struct_offset(struct pt_regs, b0);
@@ -1317,11 +1324,11 @@
break;
case UNW_WHERE_SPREL:
- opc = UNW_INSN_LOAD_SPREL;
+ opc = UNW_INSN_ADD_SP;
break;
case UNW_WHERE_PSPREL:
- opc = UNW_INSN_LOAD_PSPREL;
+ opc = UNW_INSN_ADD_PSP;
break;
default:
@@ -1334,6 +1341,18 @@
script_emit(script, insn);
if (need_nat_info)
emit_nat_info(sr, i, script);
+
+ if (i == UNW_REG_PSP) {
+ /*
+ * info->psp must contain the _value_ of the previous
+ * sp, not it's save location. We get this by
+ * dereferencing the value we just stored in
+ * info->psp:
+ */
+ insn.opc = UNW_INSN_LOAD;
+ insn.dst = insn.val = unw.preg_index[UNW_REG_PSP];
+ script_emit(script, insn);
+ }
}
static inline struct unw_table_entry *
@@ -1382,7 +1401,7 @@
memset(&sr, 0, sizeof(sr));
for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r)
r->when = UNW_WHEN_NEVER;
- sr.pr_val = info->pr_val;
+ sr.pr_val = info->pr;
script = script_new(ip);
if (!script) {
@@ -1451,8 +1470,8 @@
}
#if UNW_DEBUG
- printk ("unwind: state record for func 0x%lx, t=%u:\n",
- table->segment_base + e->start_offset, sr.when_target);
+ printk("unwind: state record for func 0x%lx, t=%u:\n",
+ table->segment_base + e->start_offset, sr.when_target);
for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r) {
if (r->where != UNW_WHERE_NONE || r->when != UNW_WHEN_NEVER) {
printk(" %s <- ", unw.preg_name[r - sr.curr.reg]);
@@ -1467,7 +1486,7 @@
break;
default: printk("BADWHERE(%d)", r->where); break;
}
- printk ("\t\t%d\n", r->when);
+ printk("\t\t%d\n", r->when);
}
}
#endif
@@ -1476,13 +1495,17 @@
/* translate state record into unwinder instructions: */
- if (sr.curr.reg[UNW_REG_PSP].where == UNW_WHERE_NONE
- && sr.when_target > sr.curr.reg[UNW_REG_PSP].when && sr.curr.reg[UNW_REG_PSP].val != 0)
- {
+ /*
+ * First, set psp if we're dealing with a fixed-size frame;
+ * subsequent instructions may depend on this value.
+ */
+ if (sr.when_target > sr.curr.reg[UNW_REG_PSP].when
+ && (sr.curr.reg[UNW_REG_PSP].where == UNW_WHERE_NONE)
+ && sr.curr.reg[UNW_REG_PSP].val != 0) {
/* new psp is sp plus frame size */
insn.opc = UNW_INSN_ADD;
- insn.dst = unw.preg_index[UNW_REG_PSP];
- insn.val = sr.curr.reg[UNW_REG_PSP].val;
+ insn.dst = struct_offset(struct unw_frame_info, psp)/8;
+ insn.val = sr.curr.reg[UNW_REG_PSP].val; /* frame size */
script_emit(script, insn);
}
@@ -1566,23 +1589,34 @@
val);
break;
- case UNW_INSN_LOAD_PSPREL:
+ case UNW_INSN_ADD_PSP:
s[dst] = state->psp + val;
break;
- case UNW_INSN_LOAD_SPREL:
+ case UNW_INSN_ADD_SP:
s[dst] = state->sp + val;
break;
- case UNW_INSN_SETNAT_PRI_UNAT:
- if (!state->pri_unat)
- state->pri_unat = &state->sw->caller_unat;
- s[dst+1] = ((*state->pri_unat - s[dst]) << 32) | UNW_NAT_PRI_UNAT;
+ case UNW_INSN_SETNAT_MEMSTK:
+ if (!state->pri_unat_loc)
+ state->pri_unat_loc = &state->sw->ar_unat;
+ /* register off. is a multiple of 8, so the least 3 bits (type) are 0 */
+ s[dst+1] = (*state->pri_unat_loc - s[dst]) | UNW_NAT_MEMSTK;
break;
case UNW_INSN_SETNAT_TYPE:
s[dst+1] = val;
break;
+
+ case UNW_INSN_LOAD:
+#if UNW_DEBUG
+ if ((s[val] & (my_cpu_data.unimpl_va_mask | 0x7)) || s[val] < TASK_SIZE) {
+ debug(1, "unwind: rejecting bad psp=0x%lx\n", s[val]);
+ break;
+ }
+#endif
+ s[dst] = *(unsigned long *) s[val];
+ break;
}
}
STAT(unw.stat.script.run_time += ia64_get_itc() - start);
@@ -1591,13 +1625,14 @@
lazy_init:
off = unw.sw_off[val];
s[val] = (unsigned long) state->sw + off;
- if (off >= struct_offset (struct unw_frame_info, r4)
- && off <= struct_offset (struct unw_frame_info, r7))
+ if (off >= struct_offset(struct switch_stack, r4)
+ && off <= struct_offset(struct switch_stack, r7))
/*
- * We're initializing a general register: init NaT info, too. Note that we
- * rely on the fact that call_unat is the first field in struct switch_stack:
+ * We're initializing a general register: init NaT info, too. Note that
+ * the offset is a multiple of 8 which gives us the 3 bits needed for
+ * the type field.
*/
- s[val+1] = (-off << 32) | UNW_NAT_PRI_UNAT;
+ s[val+1] = (struct_offset(struct switch_stack, ar_unat) - off) | UNW_NAT_MEMSTK;
goto redo;
}
@@ -1610,7 +1645,7 @@
if ((info->ip & (my_cpu_data.unimpl_va_mask | 0xf)) || info->ip < TASK_SIZE) {
/* don't let obviously bad addresses pollute the cache */
debug(1, "unwind: rejecting bad ip=0x%lx\n", info->ip);
- info->rp = 0;
+ info->rp_loc = 0;
return -1;
}
@@ -1651,12 +1686,12 @@
prev_bsp = info->bsp;
/* restore the ip */
- if (!info->rp) {
+ if (!info->rp_loc) {
debug(1, "unwind: failed to locate return link (ip=0x%lx)!\n", info->ip);
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
- ip = info->ip = *info->rp;
+ ip = info->ip = *info->rp_loc;
if (ip < GATE_ADDR + PAGE_SIZE) {
/*
* We don't have unwind info for the gate page, so we consider that part
@@ -1668,23 +1703,23 @@
}
/* restore the cfm: */
- if (!info->pfs) {
+ if (!info->pfs_loc) {
dprintk("unwind: failed to locate ar.pfs!\n");
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
- info->cfm = info->pfs;
+ info->cfm_loc = info->pfs_loc;
/* restore the bsp: */
- pr = info->pr_val;
+ pr = info->pr;
num_regs = 0;
if ((info->flags & UNW_FLAG_INTERRUPT_FRAME)) {
if ((pr & (1UL << pNonSys)) != 0)
- num_regs = *info->cfm & 0x7f; /* size of frame */
- info->pfs = (unsigned long *) (info->sp + 16 + struct_offset(struct pt_regs, ar_pfs));
+ num_regs = *info->cfm_loc & 0x7f; /* size of frame */
+ info->pfs_loc = (unsigned long *) (info->sp + 16 + struct_offset(struct pt_regs, ar_pfs));
} else
- num_regs = (*info->cfm >> 7) & 0x7f; /* size of locals */
+ num_regs = (*info->cfm_loc >> 7) & 0x7f; /* size of locals */
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -num_regs);
if (info->bsp < info->regstk.limit || info->bsp > info->regstk.top) {
dprintk("unwind: bsp (0x%lx) out of range [0x%lx-0x%lx]\n",
@@ -1697,7 +1732,7 @@
info->sp = info->psp;
if (info->sp < info->memstk.top || info->sp > info->memstk.limit) {
dprintk("unwind: sp (0x%lx) out of range [0x%lx-0x%lx]\n",
- info->sp, info->regstk.top, info->regstk.limit);
+ info->sp, info->memstk.top, info->memstk.limit);
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
@@ -1708,8 +1743,11 @@
return -1;
}
+ /* as we unwind, the saved ar.unat becomes the primary unat: */
+ info->pri_unat_loc = info->unat_loc;
+
/* finally, restore the predicates: */
- unw_get_pr(info, &info->pr_val);
+ unw_get_pr(info, &info->pr);
retval = find_save_locs(info);
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
@@ -1776,11 +1814,11 @@
info->task = t;
info->sw = sw;
info->sp = info->psp = (unsigned long) (sw + 1) - 16;
- info->cfm = &sw->ar_pfs;
- sol = (*info->cfm >> 7) & 0x7f;
+ info->cfm_loc = &sw->ar_pfs;
+ sol = (*info->cfm_loc >> 7) & 0x7f;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
info->ip = sw->b0;
- info->pr_val = sw->pr;
+ info->pr = sw->pr;
find_save_locs(info);
STAT(unw.stat.api.init_time += ia64_get_itc() - start; local_irq_restore(flags));
@@ -1811,7 +1849,7 @@
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
- info->cfm = &sw->ar_pfs;
+ info->cfm_loc = &sw->ar_pfs;
info->ip = sw->b0;
#endif
}
@@ -1848,7 +1886,7 @@
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs(bsp, -sof);
- info->cfm = &regs->cr_ifs;
+ info->cfm_loc = &regs->cr_ifs;
info->ip = regs->cr_iip;
#endif
}
@@ -1884,7 +1922,7 @@
int
unw_unwind (struct unw_frame_info *info)
{
- unsigned long sol, cfm = *info->cfm;
+ unsigned long sol, cfm = *info->cfm_loc;
int is_nat;
sol = (cfm >> 7) & 0x7f; /* size of locals */
@@ -1906,7 +1944,7 @@
/* reject let obviously bad addresses */
return -1;
- info->cfm = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
+ info->cfm_loc = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
cfm = read_reg(info, sol - 1, &is_nat);
if (is_nat)
return -1;
@@ -2073,9 +2111,9 @@
for (i = UNW_REG_F16, off = SW(F16); i <= UNW_REG_F31; ++i, off += 16)
unw.sw_off[unw.preg_index[i]] = off;
- unw.cache[0].coll_chain = -1;
- for (i = 1; i < UNW_CACHE_SIZE; ++i) {
- unw.cache[i].lru_chain = (i - 1);
+ for (i = 0; i < UNW_CACHE_SIZE; ++i) {
+ if (i > 0)
+ unw.cache[i].lru_chain = (i - 1);
unw.cache[i].coll_chain = -1;
unw.cache[i].lock = RW_LOCK_UNLOCKED;
}
diff -urN linux-davidm/arch/ia64/kernel/unwind_i.h linux-2.4.0-test9-lia/arch/ia64/kernel/unwind_i.h
--- linux-davidm/arch/ia64/kernel/unwind_i.h Thu Jun 22 07:09:44 2000
+++ linux-2.4.0-test9-lia/arch/ia64/kernel/unwind_i.h Wed Oct 4 21:40:28 2000
@@ -115,21 +115,21 @@
enum unw_nat_type {
UNW_NAT_NONE, /* NaT not represented */
UNW_NAT_VAL, /* NaT represented by NaT value (fp reg) */
- UNW_NAT_PRI_UNAT, /* NaT value is in unat word at offset OFF */
- UNW_NAT_SCRATCH, /* NaT value is in scratch.pri_unat */
- UNW_NAT_STACKED /* NaT is in rnat */
+ UNW_NAT_MEMSTK, /* NaT value is in unat word at offset OFF */
+ UNW_NAT_REGSTK /* NaT is in rnat */
};
enum unw_insn_opcode {
UNW_INSN_ADD, /* s[dst] += val */
+ UNW_INSN_ADD_PSP, /* s[dst] = (s.psp + val) */
+ UNW_INSN_ADD_SP, /* s[dst] = (s.sp + val) */
UNW_INSN_MOVE, /* s[dst] = s[val] */
UNW_INSN_MOVE2, /* s[dst] = s[val]; s[dst+1] = s[val+1] */
UNW_INSN_MOVE_STACKED, /* s[dst] = ia64_rse_skip(*s.bsp, val) */
- UNW_INSN_LOAD_PSPREL, /* s[dst] = *(*s.psp + 8*val) */
- UNW_INSN_LOAD_SPREL, /* s[dst] = *(*s.sp + 8*val) */
- UNW_INSN_SETNAT_PRI_UNAT, /* s[dst+1].nat.type = PRI_UNAT;
+ UNW_INSN_SETNAT_MEMSTK, /* s[dst+1].nat.type = MEMSTK;
s[dst+1].nat.off = *s.pri_unat - s[dst] */
- UNW_INSN_SETNAT_TYPE /* s[dst+1].nat.type = val */
+ UNW_INSN_SETNAT_TYPE, /* s[dst+1].nat.type = val */
+ UNW_INSN_LOAD /* s[dst] = *s[val] */
};
struct unw_insn {
diff -urN linux-davidm/arch/ia64/lib/Makefile linux-2.4.0-test9-lia/arch/ia64/lib/Makefile
--- linux-davidm/arch/ia64/lib/Makefile Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/arch/ia64/lib/Makefile Wed Oct 4 21:40:41 2000
@@ -7,7 +7,8 @@
L_TARGET = lib.a
-L_OBJS = __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o \
+L_OBJS = __divsi3.o __udivsi3.o __modsi3.o __umodsi3.o \
+ __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o \
checksum.o clear_page.o csum_partial_copy.o copy_page.o \
copy_user.o clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o \
flush.o do_csum.o
@@ -18,20 +19,33 @@
LX_OBJS = io.o
-IGNORE_FLAGS_OBJS = __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o
+IGNORE_FLAGS_OBJS = __divsi3.o __udivsi3.o __modsi3.o __umodsi3.o \
+ __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o
$(L_TARGET):
-__divdi3.o: idiv.S
+__divdi3.o: idiv64.S
$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $@ $<
-__udivdi3.o: idiv.S
+__udivdi3.o: idiv64.S
$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -DUNSIGNED -c -o $@ $<
-__moddi3.o: idiv.S
+__moddi3.o: idiv64.S
$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -DMODULO -c -o $@ $<
-__umoddi3.o: idiv.S
+__umoddi3.o: idiv64.S
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -DMODULO -DUNSIGNED -c -o $@ $<
+
+__divsi3.o: idiv32.S
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $@ $<
+
+__udivsi3.o: idiv32.S
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -DUNSIGNED -c -o $@ $<
+
+__modsi3.o: idiv32.S
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -DMODULO -c -o $@ $<
+
+__umodsi3.o: idiv32.S
$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -DMODULO -DUNSIGNED -c -o $@ $<
include $(TOPDIR)/Rules.make
diff -urN linux-davidm/arch/ia64/lib/idiv.S linux-2.4.0-test9-lia/arch/ia64/lib/idiv.S
--- linux-davidm/arch/ia64/lib/idiv.S Wed Aug 2 18:54:02 2000
+++ linux-2.4.0-test9-lia/arch/ia64/lib/idiv.S Wed Dec 31 16:00:00 1969
@@ -1,98 +0,0 @@
-/*
- * Integer division routine.
- *
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- */
-
-#include <asm/asmmacro.h>
-
-/*
- * Compute a 64-bit unsigned integer quotient.
- *
- * Use reciprocal approximation and Newton-Raphson iteration to compute the
- * quotient. frcpa gives 8.6 significant bits, so we need 3 iterations
- * to get more than the 64 bits of precision that we need for DImode.
- *
- * Must use max precision for the reciprocal computations to get 64 bits of
- * precision.
- *
- * r32 holds the dividend. r33 holds the divisor.
- */
-
-#ifdef MODULO
-# define OP mod
-#else
-# define OP div
-#endif
-
-#ifdef UNSIGNED
-# define SGN u
-# define INT_TO_FP(a,b) fcvt.xuf.s1 a=b
-# define FP_TO_INT(a,b) fcvt.fxu.trunc.s1 a=b
-#else
-# define SGN
-# define INT_TO_FP(a,b) fcvt.xf a=b
-# define FP_TO_INT(a,b) fcvt.fx.trunc.s1 a=b
-#endif
-
-#define PASTE1(a,b) a##b
-#define PASTE(a,b) PASTE1(a,b)
-#define NAME PASTE(PASTE(__,SGN),PASTE(OP,di3))
-
-GLOBAL_ENTRY(NAME)
- UNW(.prologue)
- .regstk 2,0,0,0
- // Transfer inputs to FP registers.
- setf.sig f8 = in0
- setf.sig f9 = in1
- UNW(.fframe 16)
- UNW(.save.f 0x20)
- stf.spill [sp] = f17,-16
-
- // Convert the inputs to FP, to avoid FP software-assist faults.
- INT_TO_FP(f8, f8)
- ;;
-
- UNW(.save.f 0x10)
- stf.spill [sp] = f16
- UNW(.body)
- INT_TO_FP(f9, f9)
- ;;
- frcpa.s1 f17, p6 = f8, f9 // y = frcpa(b)
- ;;
- /*
- * This is the magic algorithm described in Section 8.6.2 of "IA-64
- * and Elementary Functions" by Peter Markstein; HP Professional Books
- * (http://www.hp.com/go/retailbooks/)
- */
-(p6) fmpy.s1 f7 = f8, f17 // q = a*y
-(p6) fnma.s1 f6 = f9, f17, f1 // e = -b*y + 1
- ;;
-(p6) fma.s1 f16 = f7, f6, f7 // q1 = q*e + q
-(p6) fmpy.s1 f7 = f6, f6 // e1 = e*e
- ;;
-(p6) fma.s1 f16 = f16, f7, f16 // q2 = q1*e1 + q1
-(p6) fma.s1 f6 = f17, f6, f17 // y1 = y*e + y
- ;;
-(p6) fma.s1 f6 = f6, f7, f6 // y2 = y1*e1 + y1
-(p6) fnma.s1 f7 = f9, f16, f8 // r = -b*q2 + a
- ;;
-(p6) fma.s1 f17 = f7, f6, f16 // q3 = r*y2 + q2
- ;;
-#ifdef MODULO
- FP_TO_INT(f17, f17) // round quotient to an unsigned integer
- ;;
- INT_TO_FP(f17, f17) // renormalize
- ;;
- fnma.s1 f17 = f17, f9, f8 // compute remainder
- ;;
-#endif
- UNW(.restore sp)
- ldf.fill f16 = [sp], 16
- FP_TO_INT(f8, f17) // round result to an (unsigned) integer
- ;;
- ldf.fill f17 = [sp]
- getf.sig r8 = f8 // transfer result to result register
- br.ret.sptk rp
-END(NAME)
diff -urN linux-davidm/arch/ia64/lib/idiv32.S linux-2.4.0-test9-lia/arch/ia64/lib/idiv32.S
--- linux-davidm/arch/ia64/lib/idiv32.S Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test9-lia/arch/ia64/lib/idiv32.S Wed Oct 4 21:41:02 2000
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 32-bit integer division.
+ *
+ * This code is based on the application note entitled "Divide, Square Root
+ * and Remainder Algorithms for the IA-64 Architecture". This document
+ * is available as Intel document number 248725-002 or via the web at
+ * http://developer.intel.com/software/opensource/numerics/
+ *
+ * For more details on the theory behind these algorithms, see "IA-64
+ * and Elementary Functions" by Peter Markstein; HP Professional Books
+ * (http://www.hp.com/go/retailbooks/)
+ */
+
+#include <asm/asmmacro.h>
+
+#ifdef MODULO
+# define OP mod
+#else
+# define OP div
+#endif
+
+#ifdef UNSIGNED
+# define SGN u
+# define EXTEND zxt4
+# define INT_TO_FP(a,b) fcvt.xuf.s1 a=b
+# define FP_TO_INT(a,b) fcvt.fxu.trunc.s1 a=b
+#else
+# define SGN
+# define EXTEND sxt4
+# define INT_TO_FP(a,b) fcvt.xf a=b
+# define FP_TO_INT(a,b) fcvt.fx.trunc.s1 a=b
+#endif
+
+#define PASTE1(a,b) a##b
+#define PASTE(a,b) PASTE1(a,b)
+#define NAME PASTE(PASTE(__,SGN),PASTE(OP,si3))
+
+GLOBAL_ENTRY(NAME)
+ .regstk 2,0,0,0
+ // Transfer inputs to FP registers.
+ mov r2 = 0xffdd // r2 = -34 + 65535 (fp reg format bias)
+ EXTEND in0 = in0 // in0 = a
+ EXTEND in1 = in1 // in1 = b
+ ;;
+ setf.sig f8 = in0
+ setf.sig f9 = in1
+#ifdef MODULO
+ sub in1 = r0, in1 // in1 = -b
+#endif
+ ;;
+ // Convert the inputs to FP, to avoid FP software-assist faults.
+ INT_TO_FP(f8, f8)
+ INT_TO_FP(f9, f9)
+ ;;
+ setf.exp f7 = r2 // f7 = 2^-34
+ frcpa.s1 f6, p6 = f8, f9 // y0 = frcpa(b)
+ ;;
+(p6) fmpy.s1 f8 = f8, f6 // q0 = a*y0
+(p6) fnma.s1 f6 = f9, f6, f1 // e0 = -b*y0 + 1
+ ;;
+#ifdef MODULO
+ setf.sig f9 = in1 // f9 = -b
+#endif
+(p6) fma.s1 f8 = f6, f8, f8 // q1 = e0*q0 + q0
+(p6) fma.s1 f6 = f6, f6, f7 // e1 = e0*e0 + 2^-34
+ ;;
+#ifdef MODULO
+ setf.sig f7 = in0
+#endif
+(p6) fma.s1 f6 = f6, f8, f8 // q2 = e1*q1 + q1
+ ;;
+ FP_TO_INT(f6, f6) // q = trunc(q2)
+ ;;
+#ifdef MODULO
+ xma.l f6 = f6, f9, f7 // r = q*(-b) + a
+ ;;
+#endif
+ getf.sig r8 = f6 // transfer result to result register
+ br.ret.sptk rp
+END(NAME)
diff -urN linux-davidm/arch/ia64/lib/idiv64.S linux-2.4.0-test9-lia/arch/ia64/lib/idiv64.S
--- linux-davidm/arch/ia64/lib/idiv64.S Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test9-lia/arch/ia64/lib/idiv64.S Wed Oct 4 21:41:04 2000
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 64-bit integer division.
+ *
+ * This code is based on the application note entitled "Divide, Square Root
+ * and Remainder Algorithms for the IA-64 Architecture". This document
+ * is available as Intel document number 248725-002 or via the web at
+ * http://developer.intel.com/software/opensource/numerics/
+ *
+ * For more details on the theory behind these algorithms, see "IA-64
+ * and Elementary Functions" by Peter Markstein; HP Professional Books
+ * (http://www.hp.com/go/retailbooks/)
+ */
+
+#include <asm/asmmacro.h>
+
+#ifdef MODULO
+# define OP mod
+#else
+# define OP div
+#endif
+
+#ifdef UNSIGNED
+# define SGN u
+# define INT_TO_FP(a,b) fcvt.xuf.s1 a=b
+# define FP_TO_INT(a,b) fcvt.fxu.trunc.s1 a=b
+#else
+# define SGN
+# define INT_TO_FP(a,b) fcvt.xf a=b
+# define FP_TO_INT(a,b) fcvt.fx.trunc.s1 a=b
+#endif
+
+#define PASTE1(a,b) a##b
+#define PASTE(a,b) PASTE1(a,b)
+#define NAME PASTE(PASTE(__,SGN),PASTE(OP,di3))
+
+GLOBAL_ENTRY(NAME)
+ UNW(.prologue)
+ .regstk 2,0,0,0
+ // Transfer inputs to FP registers.
+ setf.sig f8 = in0
+ setf.sig f9 = in1
+ UNW(.fframe 16)
+ UNW(.save.f 0x20)
+ stf.spill [sp] = f17,-16
+
+ // Convert the inputs to FP, to avoid FP software-assist faults.
+ INT_TO_FP(f8, f8)
+ ;;
+
+ UNW(.save.f 0x10)
+ stf.spill [sp] = f16
+ UNW(.body)
+ INT_TO_FP(f9, f9)
+ ;;
+ frcpa.s1 f17, p6 = f8, f9 // y0 = frcpa(b)
+ ;;
+(p6) fmpy.s1 f7 = f8, f17 // q0 = a*y0
+(p6) fnma.s1 f6 = f9, f17, f1 // e0 = -b*y0 + 1
+ ;;
+(p6) fma.s1 f16 = f7, f6, f7 // q1 = q0*e0 + q0
+(p6) fmpy.s1 f7 = f6, f6 // e1 = e0*e0
+ ;;
+#ifdef MODULO
+ sub in1 = r0, in1 // in1 = -b
+#endif
+(p6) fma.s1 f16 = f16, f7, f16 // q2 = q1*e1 + q1
+(p6) fma.s1 f6 = f17, f6, f17 // y1 = y0*e0 + y0
+ ;;
+(p6) fma.s1 f6 = f6, f7, f6 // y2 = y1*e1 + y1
+(p6) fnma.s1 f7 = f9, f16, f8 // r = -b*q2 + a
+ ;;
+#ifdef MODULO
+ setf.sig f8 = in0 // f8 = a
+ setf.sig f9 = in1 // f9 = -b
+#endif
+(p6) fma.s1 f17 = f7, f6, f16 // q3 = r*y2 + q2
+ ;;
+ UNW(.restore sp)
+ ldf.fill f16 = [sp], 16
+ FP_TO_INT(f17, f17) // q = trunc(q3)
+ ;;
+#ifdef MODULO
+ xma.l f17 = f17, f9, f8 // r = q*(-b) + a
+ ;;
+#endif
+ getf.sig r8 = f17 // transfer result to result register
+ ldf.fill f17 = [sp]
+ br.ret.sptk rp
+END(NAME)
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-test9-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Thu Aug 24 08:17:30 2000
+++ linux-2.4.0-test9-lia/arch/ia64/mm/init.c Wed Oct 4 23:03:06 2000
@@ -357,6 +357,7 @@
panic("mm/init: overlap between virtually mapped linear page table and "
"mapped kernel space!");
pta = POW2(61) - POW2(impl_va_msb);
+#ifndef CONFIG_DISABLE_VHPT
/*
* Set the (virtually mapped linear) page table address. Bit
* 8 selects between the short and long format, bits 2-7 the
@@ -364,6 +365,9 @@
* enabled.
*/
ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 1);
+#else
+ ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 0);
+#endif
}
/*
@@ -444,15 +448,6 @@
/* install the gate page in the global page table: */
put_gate_page(virt_to_page(__start_gate_section), GATE_ADDR);
-
-#ifndef CONFIG_IA64_SOFTSDV_HACKS
- /*
- * (Some) SoftSDVs seem to have a problem with this call.
- * Since it's mostly a performance optimization, just don't do
- * it for now... --davidm 99/12/6
- */
- efi_enter_virtual_mode();
-#endif
#ifdef CONFIG_IA32_SUPPORT
ia32_gdt_init();
diff -urN linux-davidm/drivers/char/vt.c linux-2.4.0-test9-lia/drivers/char/vt.c
--- linux-davidm/drivers/char/vt.c Wed Aug 2 18:54:18 2000
+++ linux-2.4.0-test9-lia/drivers/char/vt.c Wed Oct 4 21:43:13 2000
@@ -62,7 +62,7 @@
*/
unsigned char keyboard_type = KB_101;
-#if !defined(__alpha__) && !defined(__mips__) && !defined(__arm__) && !defined(__sh__)
+#if !defined(__alpha__) && !defined(__ia64__) && !defined(__mips__) && !defined(__arm__) && !defined(__sh__)
asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int on);
#endif
@@ -472,7 +472,7 @@
ucval = keyboard_type;
goto setchar;
-#if !defined(__alpha__) && !defined(__mips__) && !defined(__arm__) && !defined(__sh__)
+#if !defined(__alpha__) && !defined(__ia64__) && !defined(__mips__) && !defined(__arm__) && !defined(__sh__)
/*
* These cannot be implemented on any machine that implements
* ioperm() in user level (such as Alpha PCs).
diff -urN linux-davidm/drivers/scsi/Makefile linux-2.4.0-test9-lia/drivers/scsi/Makefile
--- linux-davidm/drivers/scsi/Makefile Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/drivers/scsi/Makefile Wed Oct 4 21:30:21 2000
@@ -123,7 +123,7 @@
scsicam.o scsi_proc.o scsi_error.o \
scsi_obsolete.o scsi_queue.o scsi_lib.o \
scsi_merge.o scsi_dma.o scsi_scan.o \
-
+
sr_mod-objs := sr.o sr_ioctl.o sr_vendor.o
initio-objs := ini9100u.o i91uscsi.o
a100u2w-objs := inia100.o i60uscsi.o
diff -urN linux-davidm/drivers/scsi/qla1280.c linux-2.4.0-test9-lia/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/drivers/scsi/qla1280.c Wed Oct 4 21:43:30 2000
@@ -19,6 +19,10 @@
/****************************************************************************
Revision History:
+ Rev 3.17 Beta September 18, 2000 BN Qlogic
+ - Removed warnings for 32 bit 2.4.x compiles
+ - Corrected declared size for request and response
+ DMA addresses that are kept in each ha
Rev. 3.16 Beta August 25, 2000 BN Qlogic
- Corrected 64 bit addressing issue on IA-64
where the upper 32 bits were not properly
@@ -98,7 +102,7 @@
#include <linux/module.h>
#endif
-#define QLA1280_VERSION "3.16 Beta"
+#define QLA1280_VERSION "3.17 Beta"
#include <stdarg.h>
#include <asm/io.h>
@@ -175,8 +179,13 @@
#define QLA1280_DELAY(sec) mdelay(sec * 1000)
/* 3.16 */
+#if BITS_PER_LONG > 32
#define pci_dma_lo32(a) (a & 0xffffffff)
#define pci_dma_hi32(a) ((a >> 32) & 0xffffffff)
+#else
+#define pci_dma_lo32(a) (a & 0xffffffff)
+#define pci_dma_hi32(a) 0
+#endif
#define VIRT_TO_BUS(a) virt_to_bus(((void *)a))
@@ -2789,7 +2798,7 @@
uint8_t *sp;
uint8_t *tbuf;
#if BITS_PER_LONG > 32
- u_long p_tbuf;
+ dma_addr_t p_tbuf;
#else
uint32_t p_tbuf;
#endif
@@ -4170,12 +4179,12 @@
#endif
}
}
-#ifdef QL_DEBUG_LEVEL_5
- else /* No data transfer */
+ else /* No data transfer */
{
*dword_ptr++ = (uint32_t) 0;
*dword_ptr++ = (uint32_t) 0;
*dword_ptr = (uint32_t) 0;
+#ifdef QL_DEBUG_LEVEL_5
qla1280_print(
"qla1280_64bit_start_scsi: No data, command packet data - c");
qla1280_print(" b ");
@@ -4186,8 +4195,8 @@
qla1280_output_number((uint32_t)SCSI_LUN_32(cmd), 10);
qla1280_print("\n\r");
qla1280_dump_buffer((caddr_t)pkt, REQUEST_ENTRY_SIZE);
- }
#endif
+ }
/* Adjust ring index. */
ha->req_ring_index++;
if (ha->req_ring_index == REQUEST_ENTRY_CNT)
diff -urN linux-davidm/drivers/scsi/qla1280.h linux-2.4.0-test9-lia/drivers/scsi/qla1280.h
--- linux-davidm/drivers/scsi/qla1280.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/drivers/scsi/qla1280.h Wed Oct 4 21:43:38 2000
@@ -1439,22 +1439,35 @@
request_t req[REQUEST_ENTRY_CNT+1];
response_t res[RESPONSE_ENTRY_CNT+1];
+#if BITS_PER_LONG > 32
+ dma_addr_t request_dma; /* Physical Address */
+#else
uint32_t request_dma; /* Physical address. */
+#endif
request_t *request_ring; /* Base virtual address */
request_t *request_ring_ptr; /* Current address. */
uint16_t req_ring_index; /* Current index. */
uint16_t req_q_cnt; /* Number of available entries. */
+#if BITS_PER_LONG > 32
+ dma_addr_t response_dma; /* Physical address. */
+#else
uint32_t response_dma; /* Physical address. */
+#endif
response_t *response_ring; /* Base virtual address */
response_t *response_ring_ptr; /* Current address. */
uint16_t rsp_ring_index; /* Current index. */
#if QL1280_TARGET_MODE_SUPPORT
/* Target buffer and sense data. */
+#if BITS_PER_LONG > 32
+ dma_addr_t tbuf_dma; /* Physical address. */
+ dma_addr_t tsense_dma; /* Physical address. */
+#else
uint32_t tbuf_dma; /* Physical address. */
- tgt_t *tbuf;
uint32_t tsense_dma; /* Physical address. */
+#endif
+ tgt_t *tbuf;
uint8_t *tsense;
#endif
diff -urN linux-davidm/drivers/scsi/simscsi.c linux-2.4.0-test9-lia/drivers/scsi/simscsi.c
--- linux-davidm/drivers/scsi/simscsi.c Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/drivers/scsi/simscsi.c Wed Oct 4 21:43:52 2000
@@ -357,3 +357,8 @@
}
return 0;
}
+
+
+static Scsi_Host_Template driver_template = SIMSCSI;
+
+#include "scsi_module.c"
diff -urN linux-davidm/include/asm-ia64/acpikcfg.h linux-2.4.0-test9-lia/include/asm-ia64/acpikcfg.h
--- linux-davidm/include/asm-ia64/acpikcfg.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/acpikcfg.h Wed Oct 4 21:46:27 2000
@@ -7,12 +7,10 @@
*/
-typedef u32 ACPI_STATUS; /* from actypes.h */
+u32 __init acpi_cf_init (void * rsdp);
+u32 __init acpi_cf_terminate (void );
-ACPI_STATUS __init acpi_cf_init (void * rsdp);
-ACPI_STATUS __init acpi_cf_terminate (void );
-
-ACPI_STATUS __init
+u32 __init
acpi_cf_get_pci_vectors (
struct pci_vector_struct **vectors,
int *num_pci_vectors
diff -urN linux-davidm/include/asm-ia64/atomic.h linux-2.4.0-test9-lia/include/asm-ia64/atomic.h
--- linux-davidm/include/asm-ia64/atomic.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/atomic.h Wed Oct 4 21:46:40 2000
@@ -17,13 +17,6 @@
#include <asm/system.h>
/*
- * Make sure gcc doesn't try to be clever and move things around
- * on us. We need to use _exactly_ the address the user gave us,
- * not some alias that contains the same information.
- */
-#define __atomic_fool_gcc(x) (*(volatile struct { int a[100]; } *)x)
-
-/*
* On IA-64, counter must always be volatile to ensure that that the
* memory accesses are ordered.
*/
diff -urN linux-davidm/include/asm-ia64/bitops.h linux-2.4.0-test9-lia/include/asm-ia64/bitops.h
--- linux-davidm/include/asm-ia64/bitops.h Wed Jul 5 22:15:26 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/bitops.h Wed Oct 4 21:46:53 2000
@@ -20,7 +20,7 @@
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*/
-extern __inline__ void
+static __inline__ void
set_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
@@ -36,7 +36,12 @@
} while (cmpxchg_acq(m, old, new) != old);
}
-extern __inline__ void
+/*
+ * clear_bit() doesn't provide any barrier for the compiler.
+ */
+#define smp_mb__before_clear_bit() smp_mb()
+#define smp_mb__after_clear_bit() smp_mb()
+static __inline__ void
clear_bit (int nr, volatile void *addr)
{
__u32 mask, old, new;
@@ -52,7 +57,7 @@
} while (cmpxchg_acq(m, old, new) != old);
}
-extern __inline__ void
+static __inline__ void
change_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
@@ -68,7 +73,7 @@
} while (cmpxchg_acq(m, old, new) != old);
}
-extern __inline__ int
+static __inline__ int
test_and_set_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
@@ -85,7 +90,7 @@
return (old & bit) != 0;
}
-extern __inline__ int
+static __inline__ int
test_and_clear_bit (int nr, volatile void *addr)
{
__u32 mask, old, new;
@@ -102,7 +107,7 @@
return (old & ~mask) != 0;
}
-extern __inline__ int
+static __inline__ int
test_and_change_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
@@ -119,7 +124,7 @@
return (old & bit) != 0;
}
-extern __inline__ int
+static __inline__ int
test_bit (int nr, volatile void *addr)
{
return 1 & (((const volatile __u32 *) addr)[nr >> 5] >> (nr & 31));
@@ -129,7 +134,7 @@
* ffz = Find First Zero in word. Undefined if no zero exists,
* so code should check against ~0UL first..
*/
-extern inline unsigned long
+static inline unsigned long
ffz (unsigned long x)
{
unsigned long result;
@@ -164,7 +169,7 @@
* hweightN: returns the hamming weight (i.e. the number
* of bits set) of a N-bit word
*/
-extern __inline__ unsigned long
+static __inline__ unsigned long
hweight64 (unsigned long x)
{
unsigned long result;
@@ -181,7 +186,7 @@
/*
* Find next zero bit in a bitmap reasonably efficiently..
*/
-extern inline int
+static inline int
find_next_zero_bit (void *addr, unsigned long size, unsigned long offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 6);
diff -urN linux-davidm/include/asm-ia64/delay.h linux-2.4.0-test9-lia/include/asm-ia64/delay.h
--- linux-davidm/include/asm-ia64/delay.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/delay.h Wed Oct 4 21:46:59 2000
@@ -18,13 +18,13 @@
#include <asm/processor.h>
-extern __inline__ void
+static __inline__ void
ia64_set_itm (unsigned long val)
{
__asm__ __volatile__("mov cr.itm=%0;; srlz.d;;" :: "r"(val) : "memory");
}
-extern __inline__ unsigned long
+static __inline__ unsigned long
ia64_get_itm (void)
{
unsigned long result;
@@ -33,7 +33,7 @@
return result;
}
-extern __inline__ void
+static __inline__ void
ia64_set_itv (unsigned char vector, unsigned char masked)
{
if (masked > 1)
@@ -43,13 +43,13 @@
:: "r"((masked << 16) | vector) : "memory");
}
-extern __inline__ void
+static __inline__ void
ia64_set_itc (unsigned long val)
{
__asm__ __volatile__("mov ar.itc=%0;; srlz.d;;" :: "r"(val) : "memory");
}
-extern __inline__ unsigned long
+static __inline__ unsigned long
ia64_get_itc (void)
{
unsigned long result;
@@ -58,7 +58,7 @@
return result;
}
-extern __inline__ void
+static __inline__ void
__delay (unsigned long loops)
{
unsigned long saved_ar_lc;
@@ -72,7 +72,7 @@
__asm__ __volatile__("mov ar.lc=%0" :: "r"(saved_ar_lc));
}
-extern __inline__ void
+static __inline__ void
udelay (unsigned long usecs)
{
#ifdef CONFIG_IA64_SOFTSDV_HACKS
diff -urN linux-davidm/include/asm-ia64/efi.h linux-2.4.0-test9-lia/include/asm-ia64/efi.h
--- linux-davidm/include/asm-ia64/efi.h Thu Aug 24 08:17:47 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/efi.h Wed Oct 4 21:47:05 2000
@@ -219,7 +219,7 @@
efi_reset_system_t *reset_system;
} efi;
-extern inline int
+static inline int
efi_guidcmp (efi_guid_t left, efi_guid_t right)
{
return memcmp(&left, &right, sizeof (efi_guid_t));
diff -urN linux-davidm/include/asm-ia64/io.h linux-2.4.0-test9-lia/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/io.h Wed Oct 4 21:47:47 2000
@@ -63,7 +63,7 @@
*/
#define __ia64_mf_a() __asm__ __volatile__ ("mf.a" ::: "memory")
-extern inline const unsigned long
+static inline const unsigned long
__ia64_get_io_port_base (void)
{
extern unsigned long ia64_iobase;
@@ -71,7 +71,7 @@
return ia64_iobase;
}
-extern inline void*
+static inline void*
__ia64_mk_io_addr (unsigned long port)
{
const unsigned long io_base = __ia64_get_io_port_base();
@@ -99,7 +99,7 @@
* order. --davidm 99/12/07
*/
-extern inline unsigned int
+static inline unsigned int
__inb (unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
@@ -110,7 +110,7 @@
return ret;
}
-extern inline unsigned int
+static inline unsigned int
__inw (unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
@@ -121,7 +121,7 @@
return ret;
}
-extern inline unsigned int
+static inline unsigned int
__inl (unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
@@ -132,7 +132,7 @@
return ret;
}
-extern inline void
+static inline void
__insb (unsigned long port, void *dst, unsigned long count)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
@@ -146,7 +146,7 @@
return;
}
-extern inline void
+static inline void
__insw (unsigned long port, void *dst, unsigned long count)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
@@ -160,7 +160,7 @@
return;
}
-extern inline void
+static inline void
__insl (unsigned long port, void *dst, unsigned long count)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
@@ -174,7 +174,7 @@
return;
}
-extern inline void
+static inline void
__outb (unsigned char val, unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
@@ -183,7 +183,7 @@
__ia64_mf_a();
}
-extern inline void
+static inline void
__outw (unsigned short val, unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
@@ -192,7 +192,7 @@
__ia64_mf_a();
}
-extern inline void
+static inline void
__outl (unsigned int val, unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
@@ -201,7 +201,7 @@
__ia64_mf_a();
}
-extern inline void
+static inline void
__outsb (unsigned long port, const void *src, unsigned long count)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
@@ -214,7 +214,7 @@
return;
}
-extern inline void
+static inline void
__outsw (unsigned long port, const void *src, unsigned long count)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
@@ -227,7 +227,7 @@
return;
}
-extern inline void
+static inline void
__outsl (unsigned long port, void *src, unsigned long count)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
@@ -256,49 +256,49 @@
/*
* The address passed to these functions are ioremap()ped already.
*/
-extern inline unsigned char
+static inline unsigned char
__readb (void *addr)
{
return *(volatile unsigned char *)addr;
}
-extern inline unsigned short
+static inline unsigned short
__readw (void *addr)
{
return *(volatile unsigned short *)addr;
}
-extern inline unsigned int
+static inline unsigned int
__readl (void *addr)
{
return *(volatile unsigned int *) addr;
}
-extern inline unsigned long
+static inline unsigned long
__readq (void *addr)
{
return *(volatile unsigned long *) addr;
}
-extern inline void
+static inline void
__writeb (unsigned char val, void *addr)
{
*(volatile unsigned char *) addr = val;
}
-extern inline void
+static inline void
__writew (unsigned short val, void *addr)
{
*(volatile unsigned short *) addr = val;
}
-extern inline void
+static inline void
__writel (unsigned int val, void *addr)
{
*(volatile unsigned int *) addr = val;
}
-extern inline void
+static inline void
__writeq (unsigned long val, void *addr)
{
*(volatile unsigned long *) addr = val;
diff -urN linux-davidm/include/asm-ia64/mmu_context.h linux-2.4.0-test9-lia/include/asm-ia64/mmu_context.h
--- linux-davidm/include/asm-ia64/mmu_context.h Thu Aug 24 08:17:47 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/mmu_context.h Wed Oct 4 21:48:03 2000
@@ -57,7 +57,7 @@
{
}
-extern inline unsigned long
+static inline unsigned long
ia64_rid (unsigned long context, unsigned long region_addr)
{
# ifdef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
@@ -67,7 +67,7 @@
# endif
}
-extern inline void
+static inline void
get_new_mmu_context (struct mm_struct *mm)
{
spin_lock(&ia64_ctx.lock);
@@ -80,7 +80,7 @@
}
-extern inline void
+static inline void
get_mmu_context (struct mm_struct *mm)
{
/* check if our ASN is of an older generation and thus invalid: */
@@ -88,20 +88,20 @@
get_new_mmu_context(mm);
}
-extern inline int
+static inline int
init_new_context (struct task_struct *p, struct mm_struct *mm)
{
mm->context = 0;
return 0;
}
-extern inline void
+static inline void
destroy_context (struct mm_struct *mm)
{
/* Nothing to do. */
}
-extern inline void
+static inline void
reload_context (struct mm_struct *mm)
{
unsigned long rid;
diff -urN linux-davidm/include/asm-ia64/module.h linux-2.4.0-test9-lia/include/asm-ia64/module.h
--- linux-davidm/include/asm-ia64/module.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/module.h Wed Oct 4 21:48:16 2000
@@ -76,7 +76,9 @@
* Pointers are reasonable, add the module unwind table
*/
archdata->unw_table = unw_add_unwind_table(mod->name, archdata->segment_base,
- archdata->gp, archdata->unw_start, archdata->unw_end);
+ (unsigned long) archdata->gp,
+ (unsigned long) archdata->unw_start,
+ (unsigned long) archdata->unw_end);
#endif /* CONFIG_IA64_NEW_UNWIND */
return 0;
}
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.0-test9-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/offsets.h Wed Oct 4 21:48:29 2000
@@ -11,7 +11,7 @@
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 2928 /* 0xb70 */
+#define IA64_TASK_SIZE 3328 /* 0xd00 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -21,9 +21,9 @@
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 928 /* 0x3a0 */
-#define IA64_TASK_THREAD_KSP_OFFSET 928 /* 0x3a0 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 2784 /* 0xae0 */
+#define IA64_TASK_THREAD_OFFSET 1424 /* 0x590 */
+#define IA64_TASK_THREAD_KSP_OFFSET 1424 /* 0x590 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 3184 /* 0xc70 */
#define IA64_TASK_PID_OFFSET 188 /* 0xbc */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/page.h linux-2.4.0-test9-lia/include/asm-ia64/page.h
--- linux-davidm/include/asm-ia64/page.h Thu Aug 24 08:17:47 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/page.h Wed Oct 4 21:48:41 2000
@@ -102,15 +102,13 @@
#ifdef CONFIG_IA64_GENERIC
# include <asm/machvec.h>
# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
-#elif defined (CONFIG_IA64_SN_SN1_SIM)
+#elif defined (CONFIG_IA64_SN_SN1)
# define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr))
#else
# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
#endif
#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
-# endif /* __KERNEL__ */
-
typedef union ia64_va {
struct {
unsigned long off : 61; /* intra-region offset */
@@ -138,7 +136,7 @@
#define BUG() do { printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); *(int *)0=0; } while (0)
#define PAGE_BUG(page) do { BUG(); } while (0)
-extern __inline__ int
+static __inline__ int
get_order (unsigned long size)
{
double d = size - 1;
@@ -151,6 +149,7 @@
return order;
}
+# endif /* __KERNEL__ */
#endif /* !ASSEMBLY */
#define PAGE_OFFSET 0xe000000000000000
diff -urN linux-davidm/include/asm-ia64/pal.h linux-2.4.0-test9-lia/include/asm-ia64/pal.h
--- linux-davidm/include/asm-ia64/pal.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/pal.h Wed Oct 4 21:48:58 2000
@@ -708,7 +708,7 @@
extern void pal_bus_features_print (u64);
/* Provide information about configurable processor bus features */
-extern inline s64
+static inline s64
ia64_pal_bus_get_features (pal_bus_features_u_t *features_avail,
pal_bus_features_u_t *features_status,
pal_bus_features_u_t *features_control)
@@ -725,7 +725,7 @@
}
/* Enables/disables specific processor bus features */
-extern inline s64
+static inline s64
ia64_pal_bus_set_features (pal_bus_features_u_t feature_select)
{
struct ia64_pal_retval iprv;
@@ -734,7 +734,7 @@
}
/* Get detailed cache information */
-extern inline s64
+static inline s64
ia64_pal_cache_config_info (u64 cache_level, u64 cache_type, pal_cache_config_info_t *conf)
{
struct ia64_pal_retval iprv;
@@ -752,7 +752,7 @@
}
/* Get detailed cache protection information */
-extern inline s64
+static inline s64
ia64_pal_cache_prot_info (u64 cache_level, u64 cache_type, pal_cache_protection_info_t *prot)
{
struct ia64_pal_retval iprv;
@@ -775,7 +775,7 @@
* Flush the processor instruction or data caches. *PROGRESS must be
* initialized to zero before calling this for the first time..
*/
-extern inline s64
+static inline s64
ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress)
{
struct ia64_pal_retval iprv;
@@ -786,7 +786,7 @@
/* Initialize the processor controlled caches */
-extern inline s64
+static inline s64
ia64_pal_cache_init (u64 level, u64 cache_type, u64 restrict)
{
struct ia64_pal_retval iprv;
@@ -798,7 +798,7 @@
* processor controlled cache to known values without the availability
* of backing memory.
*/
-extern inline s64
+static inline s64
ia64_pal_cache_line_init (u64 physical_addr, u64 data_value)
{
struct ia64_pal_retval iprv;
@@ -808,7 +808,7 @@
/* Read the data and tag of a processor controlled cache line for diags */
-extern inline s64
+static inline s64
ia64_pal_cache_read (pal_cache_line_id_u_t line_id, u64 physical_addr)
{
struct ia64_pal_retval iprv;
@@ -817,7 +817,7 @@
}
/* Return summary information about the hierarchy of caches controlled by the processor */
-extern inline s64
+static inline s64
ia64_pal_cache_summary (u64 *cache_levels, u64 *unique_caches)
{
struct ia64_pal_retval iprv;
@@ -830,7 +830,7 @@
}
/* Write the data and tag of a processor-controlled cache line for diags */
-extern inline s64
+static inline s64
ia64_pal_cache_write (pal_cache_line_id_u_t line_id, u64 physical_addr, u64 data)
{
struct ia64_pal_retval iprv;
@@ -840,7 +840,7 @@
/* Return the parameters needed to copy relocatable PAL procedures from ROM to memory */
-extern inline s64
+static inline s64
ia64_pal_copy_info (u64 copy_type, u64 num_procs, u64 num_iopics,
u64 *buffer_size, u64 *buffer_align)
{
@@ -854,7 +854,7 @@
}
/* Copy relocatable PAL procedures from ROM to memory */
-extern inline s64
+static inline s64
ia64_pal_copy_pal (u64 target_addr, u64 alloc_size, u64 processor, u64 *pal_proc_offset)
{
struct ia64_pal_retval iprv;
@@ -865,7 +865,7 @@
}
/* Return the number of instruction and data debug register pairs */
-extern inline s64
+static inline s64
ia64_pal_debug_info (u64 *inst_regs, u64 *data_regs)
{
struct ia64_pal_retval iprv;
@@ -880,7 +880,7 @@
#ifdef TBD
/* Switch from IA64-system environment to IA-32 system environment */
-extern inline s64
+static inline s64
ia64_pal_enter_ia32_env (ia32_env1, ia32_env2, ia32_env3)
{
struct ia64_pal_retval iprv;
@@ -890,7 +890,7 @@
#endif
/* Get unique geographical address of this processor on its bus */
-extern inline s64
+static inline s64
ia64_pal_fixed_addr (u64 *global_unique_addr)
{
struct ia64_pal_retval iprv;
@@ -901,7 +901,7 @@
}
/* Get base frequency of the platform if generated by the processor */
-extern inline s64
+static inline s64
ia64_pal_freq_base (u64 *platform_base_freq)
{
struct ia64_pal_retval iprv;
@@ -915,7 +915,7 @@
* Get the ratios for processor frequency, bus frequency and interval timer to
* the base frequency of the platform
*/
-extern inline s64
+static inline s64
ia64_pal_freq_ratios (struct pal_freq_ratio *proc_ratio, struct pal_freq_ratio *bus_ratio,
struct pal_freq_ratio *itc_ratio)
{
@@ -934,7 +934,7 @@
* power states where prefetching and execution are suspended and cache and
* TLB coherency is not maintained.
*/
-extern inline s64
+static inline s64
ia64_pal_halt (u64 halt_state)
{
struct ia64_pal_retval iprv;
@@ -954,7 +954,7 @@
} pal_power_mgmt_info_u_t;
/* Return information about processor's optional power management capabilities. */
-extern inline s64
+static inline s64
ia64_pal_halt_info (pal_power_mgmt_info_u_t *power_buf)
{
struct ia64_pal_retval iprv;
@@ -965,7 +965,7 @@
/* Cause the processor to enter LIGHT HALT state, where prefetching and execution are
* suspended, but cache and TLB coherency is maintained.
*/
-extern inline s64
+static inline s64
ia64_pal_halt_light (void)
{
struct ia64_pal_retval iprv;
@@ -977,7 +977,7 @@
* the error logging registers to be written. This procedure also checks the pending
* machine check bit and pending INIT bit and reports their states.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_clear_log (u64 *pending_vector)
{
struct ia64_pal_retval iprv;
@@ -990,7 +990,7 @@
/* Ensure that all outstanding transactions in a processor are completed or that any
* MCA due to these outstanding transactions is taken.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_drain (void)
{
struct ia64_pal_retval iprv;
@@ -999,7 +999,7 @@
}
/* Return the machine check dynamic processor state */
-extern inline s64
+static inline s64
ia64_pal_mc_dynamic_state (u64 offset, u64 *size, u64 *pds)
{
struct ia64_pal_retval iprv;
@@ -1012,7 +1012,7 @@
}
/* Return processor machine check information */
-extern inline s64
+static inline s64
ia64_pal_mc_error_info (u64 info_index, u64 type_index, u64 *size, u64 *error_info)
{
struct ia64_pal_retval iprv;
@@ -1027,7 +1027,7 @@
/* Inform PALE_CHECK whether a machine check is expected so that PALE_CHECK will not
* attempt to correct any expected machine checks.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_expected (u64 expected, u64 *previous)
{
struct ia64_pal_retval iprv;
@@ -1041,7 +1041,7 @@
* minimal processor state in the event of a machine check or initialization
* event.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_register_mem (u64 physical_addr)
{
struct ia64_pal_retval iprv;
@@ -1052,7 +1052,7 @@
/* Restore minimal architectural processor state, set CMC interrupt if necessary
* and resume execution
*/
-extern inline s64
+static inline s64
ia64_pal_mc_resume (u64 set_cmci, u64 save_ptr)
{
struct ia64_pal_retval iprv;
@@ -1061,7 +1061,7 @@
}
/* Return the memory attributes implemented by the processor */
-extern inline s64
+static inline s64
ia64_pal_mem_attrib (u64 *mem_attrib)
{
struct ia64_pal_retval iprv;
@@ -1074,7 +1074,7 @@
/* Return the amount of memory needed for second phase of processor
* self-test and the required alignment of memory.
*/
-extern inline s64
+static inline s64
ia64_pal_mem_for_test (u64 *bytes_needed, u64 *alignment)
{
struct ia64_pal_retval iprv;
@@ -1100,7 +1100,7 @@
/* Return the performance monitor information about what can be counted
* and how to configure the monitors to count the desired events.
*/
-extern inline s64
+static inline s64
ia64_pal_perf_mon_info (u64 *pm_buffer, pal_perf_mon_info_u_t *pm_info)
{
struct ia64_pal_retval iprv;
@@ -1113,7 +1113,7 @@
/* Specifies the physical address of the processor interrupt block
* and I/O port space.
*/
-extern inline s64
+static inline s64
ia64_pal_platform_addr (u64 type, u64 physical_addr)
{
struct ia64_pal_retval iprv;
@@ -1122,7 +1122,7 @@
}
/* Set the SAL PMI entrypoint in memory */
-extern inline s64
+static inline s64
ia64_pal_pmi_entrypoint (u64 sal_pmi_entry_addr)
{
struct ia64_pal_retval iprv;
@@ -1132,7 +1132,7 @@
struct pal_features_s;
/* Provide information about configurable processor features */
-extern inline s64
+static inline s64
ia64_pal_proc_get_features (u64 *features_avail,
u64 *features_status,
u64 *features_control)
@@ -1148,7 +1148,7 @@
}
/* Enable/disable processor dependent features */
-extern inline s64
+static inline s64
ia64_pal_proc_set_features (u64 feature_select)
{
struct ia64_pal_retval iprv;
@@ -1169,7 +1169,7 @@
/* Return the information required for the architected loop used to purge
* (initialize) the entire TC
*/
-extern inline s64
+static inline s64
ia64_get_ptce (ia64_ptce_info_t *ptce)
{
struct ia64_pal_retval iprv;
@@ -1189,7 +1189,7 @@
}
/* Return info about implemented application and control registers. */
-extern inline s64
+static inline s64
ia64_pal_register_info (u64 info_request, u64 *reg_info_1, u64 *reg_info_2)
{
struct ia64_pal_retval iprv;
@@ -1213,7 +1213,7 @@
/* Return information about the register stack and RSE for this processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_rse_info (u64 *num_phys_stacked, pal_hints_u_t *hints)
{
struct ia64_pal_retval iprv;
@@ -1229,7 +1229,7 @@
* suspended, but cause cache and TLB coherency to be maintained.
* This is usually called in IA-32 mode.
*/
-extern inline s64
+static inline s64
ia64_pal_shutdown (void)
{
struct ia64_pal_retval iprv;
@@ -1238,7 +1238,7 @@
}
/* Perform the second phase of processor self-test. */
-extern inline s64
+static inline s64
ia64_pal_test_proc (u64 test_addr, u64 test_size, u64 attributes, u64 *self_test_state)
{
struct ia64_pal_retval iprv;
@@ -1263,7 +1263,7 @@
/* Return PAL version information */
-extern inline s64
+static inline s64
ia64_pal_version (pal_version_u_t *pal_min_version, pal_version_u_t *pal_cur_version)
{
struct ia64_pal_retval iprv;
@@ -1301,7 +1301,7 @@
/* Return information about the virtual memory characteristics of the processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_vm_info (u64 tc_level, u64 tc_type, pal_tc_info_u_t *tc_info, u64 *tc_pages)
{
struct ia64_pal_retval iprv;
@@ -1316,7 +1316,7 @@
/* Get page size information about the virtual memory characteristics of the processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_vm_page_size (u64 *tr_pages, u64 *vw_pages)
{
struct ia64_pal_retval iprv;
@@ -1355,7 +1355,7 @@
/* Get summary information about the virtual memory characteristics of the processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_vm_summary (pal_vm_info_1_u_t *vm_info_1, pal_vm_info_2_u_t *vm_info_2)
{
struct ia64_pal_retval iprv;
@@ -1379,7 +1379,7 @@
} pal_tr_valid_u_t;
/* Read a translation register */
-extern inline s64
+static inline s64
ia64_pal_tr_read (u64 reg_num, u64 tr_type, u64 *tr_buffer, pal_tr_valid_u_t *tr_valid)
{
struct ia64_pal_retval iprv;
@@ -1389,7 +1389,7 @@
return iprv.status;
}
-extern inline s64
+static inline s64
ia64_pal_prefetch_visibility (void)
{
struct ia64_pal_retval iprv;
diff -urN linux-davidm/include/asm-ia64/param.h linux-2.4.0-test9-lia/include/asm-ia64/param.h
--- linux-davidm/include/asm-ia64/param.h Thu Aug 24 08:17:47 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/param.h Wed Oct 4 21:49:06 2000
@@ -15,7 +15,7 @@
* Yeah, simulating stuff is slow, so let us catch some breath between
* timer interrupts...
*/
-# define HZ 20
+# define HZ 32
#else
# define HZ 1024
#endif
diff -urN linux-davidm/include/asm-ia64/parport.h linux-2.4.0-test9-lia/include/asm-ia64/parport.h
--- linux-davidm/include/asm-ia64/parport.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test9-lia/include/asm-ia64/parport.h Wed Oct 4 21:49:17 2000
@@ -0,0 +1,20 @@
+/*
+ * parport.h: platform-specific PC-style parport initialisation
+ *
+ * Copyright (C) 1999, 2000 Tim Waugh <tim@cyberelk.demon.co.uk>
+ *
+ * This file should only be included by drivers/parport/parport_pc.c.
+ */
+
+#ifndef _ASM_IA64_PARPORT_H
+#define _ASM_IA64_PARPORT_H 1
+
+static int __devinit parport_pc_find_isa_ports (int autoirq, int autodma);
+
+static int __devinit
+parport_pc_find_nonpci_ports (int autoirq, int autodma)
+{
+ return parport_pc_find_isa_ports(autoirq, autodma);
+}
+
+#endif /* _ASM_IA64_PARPORT_H */
diff -urN linux-davidm/include/asm-ia64/pci.h linux-2.4.0-test9-lia/include/asm-ia64/pci.h
--- linux-davidm/include/asm-ia64/pci.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/pci.h Wed Oct 4 21:49:25 2000
@@ -22,12 +22,12 @@
struct pci_dev;
-extern inline void pcibios_set_master(struct pci_dev *dev)
+static inline void pcibios_set_master(struct pci_dev *dev)
{
/* No special bus mastering setup handling */
}
-extern inline void pcibios_penalize_isa_irq(int irq)
+static inline void pcibios_penalize_isa_irq(int irq)
{
/* We don't do dynamic PCI IRQ allocation */
}
@@ -128,7 +128,7 @@
* only drive the low 24-bits during PCI bus mastering, then
* you would pass 0x00ffffff as the mask to this function.
*/
-extern inline int
+static inline int
pci_dma_supported(struct pci_dev *hwdev, dma_addr_t mask)
{
return 1;
diff -urN linux-davidm/include/asm-ia64/pgalloc.h linux-2.4.0-test9-lia/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/pgalloc.h Wed Oct 4 21:49:31 2000
@@ -32,7 +32,7 @@
#define pte_quicklist (my_cpu_data.pte_quick)
#define pgtable_cache_size (my_cpu_data.pgtable_cache_sz)
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
get_pgd_slow (void)
{
pgd_t *ret = (pgd_t *)__get_free_page(GFP_KERNEL);
@@ -41,7 +41,7 @@
return ret;
}
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
get_pgd_fast (void)
{
unsigned long *ret = pgd_quicklist;
@@ -54,7 +54,7 @@
return (pgd_t *)ret;
}
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
pgd_alloc (void)
{
pgd_t *pgd;
@@ -65,7 +65,7 @@
return pgd;
}
-extern __inline__ void
+static __inline__ void
free_pgd_fast (pgd_t *pgd)
{
*(unsigned long *)pgd = (unsigned long) pgd_quicklist;
@@ -73,7 +73,7 @@
++pgtable_cache_size;
}
-extern __inline__ pmd_t *
+static __inline__ pmd_t *
get_pmd_slow (void)
{
pmd_t *pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
@@ -83,7 +83,7 @@
return pmd;
}
-extern __inline__ pmd_t *
+static __inline__ pmd_t *
get_pmd_fast (void)
{
unsigned long *ret = (unsigned long *)pmd_quicklist;
@@ -96,7 +96,7 @@
return (pmd_t *)ret;
}
-extern __inline__ void
+static __inline__ void
free_pmd_fast (pmd_t *pmd)
{
*(unsigned long *)pmd = (unsigned long) pmd_quicklist;
@@ -104,7 +104,7 @@
++pgtable_cache_size;
}
-extern __inline__ void
+static __inline__ void
free_pmd_slow (pmd_t *pmd)
{
free_page((unsigned long)pmd);
@@ -112,7 +112,7 @@
extern pte_t *get_pte_slow (pmd_t *pmd, unsigned long address_preadjusted);
-extern __inline__ pte_t *
+static __inline__ pte_t *
get_pte_fast (void)
{
unsigned long *ret = (unsigned long *)pte_quicklist;
@@ -125,7 +125,7 @@
return (pte_t *)ret;
}
-extern __inline__ void
+static __inline__ void
free_pte_fast (pte_t *pte)
{
*(unsigned long *)pte = (unsigned long) pte_quicklist;
@@ -142,7 +142,7 @@
extern void __handle_bad_pgd (pgd_t *pgd);
extern void __handle_bad_pmd (pmd_t *pmd);
-extern __inline__ pte_t*
+static __inline__ pte_t*
pte_alloc (pmd_t *pmd, unsigned long vmaddr)
{
unsigned long offset;
@@ -163,7 +163,7 @@
return (pte_t *) pmd_page(*pmd) + offset;
}
-extern __inline__ pmd_t*
+static __inline__ pmd_t*
pmd_alloc (pgd_t *pgd, unsigned long vmaddr)
{
unsigned long offset;
@@ -228,7 +228,7 @@
/*
* Flush a specified user mapping
*/
-extern __inline__ void
+static __inline__ void
flush_tlb_mm (struct mm_struct *mm)
{
if (mm) {
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-test9-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/pgtable.h Wed Oct 4 21:49:41 2000
@@ -318,7 +318,7 @@
/*
* Return the region index for virtual address ADDRESS.
*/
-extern __inline__ unsigned long
+static __inline__ unsigned long
rgn_index (unsigned long address)
{
ia64_va a;
@@ -330,7 +330,7 @@
/*
* Return the region offset for virtual address ADDRESS.
*/
-extern __inline__ unsigned long
+static __inline__ unsigned long
rgn_offset (unsigned long address)
{
ia64_va a;
@@ -342,7 +342,7 @@
#define RGN_SIZE (1UL << 61)
#define RGN_KERNEL 7
-extern __inline__ unsigned long
+static __inline__ unsigned long
pgd_index (unsigned long address)
{
unsigned long region = address >> 61;
@@ -353,7 +353,7 @@
/* The offset in the 1-level directory is given by the 3 region bits
(61..63) and the seven level-1 bits (33-39). */
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
pgd_offset (struct mm_struct *mm, unsigned long address)
{
return mm->pgd + pgd_index(address);
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-test9-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/processor.h Wed Oct 4 21:49:46 2000
@@ -454,31 +454,31 @@
ia64_fph_disable();
}
-extern inline void
+static inline void
ia64_fc (void *addr)
{
__asm__ __volatile__ ("fc %0" :: "r"(addr) : "memory");
}
-extern inline void
+static inline void
ia64_sync_i (void)
{
__asm__ __volatile__ (";; sync.i" ::: "memory");
}
-extern inline void
+static inline void
ia64_srlz_i (void)
{
__asm__ __volatile__ (";; srlz.i ;;" ::: "memory");
}
-extern inline void
+static inline void
ia64_srlz_d (void)
{
__asm__ __volatile__ (";; srlz.d" ::: "memory");
}
-extern inline __u64
+static inline __u64
ia64_get_rr (__u64 reg_bits)
{
__u64 r;
@@ -486,13 +486,13 @@
return r;
}
-extern inline void
+static inline void
ia64_set_rr (__u64 reg_bits, __u64 rr_val)
{
__asm__ __volatile__ ("mov rr[%0]=%1" :: "r"(reg_bits), "r"(rr_val) : "memory");
}
-extern inline __u64
+static inline __u64
ia64_get_dcr (void)
{
__u64 r;
@@ -500,14 +500,14 @@
return r;
}
-extern inline void
+static inline void
ia64_set_dcr (__u64 val)
{
__asm__ __volatile__ ("mov cr.dcr=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern inline __u64
+static inline __u64
ia64_get_lid (void)
{
__u64 r;
@@ -515,7 +515,7 @@
return r;
}
-extern inline void
+static inline void
ia64_invala (void)
{
__asm__ __volatile__ ("invala" ::: "memory");
@@ -533,7 +533,7 @@
* Insert a translation into an instruction and/or data translation
* register.
*/
-extern inline void
+static inline void
ia64_itr (__u64 target_mask, __u64 tr_num,
__u64 vmaddr, __u64 pte,
__u64 log_page_size)
@@ -552,7 +552,7 @@
* Insert a translation into the instruction and/or data translation
* cache.
*/
-extern inline void
+static inline void
ia64_itc (__u64 target_mask, __u64 vmaddr, __u64 pte,
__u64 log_page_size)
{
@@ -569,7 +569,7 @@
* Purge a range of addresses from instruction and/or data translation
* register(s).
*/
-extern inline void
+static inline void
ia64_ptr (__u64 target_mask, __u64 vmaddr, __u64 log_size)
{
if (target_mask & 0x1)
@@ -579,21 +579,21 @@
}
/* Set the interrupt vector address. The address must be suitably aligned (32KB). */
-extern inline void
+static inline void
ia64_set_iva (void *ivt_addr)
{
__asm__ __volatile__ ("mov cr.iva=%0;; srlz.i;;" :: "r"(ivt_addr) : "memory");
}
/* Set the page table address and control bits. */
-extern inline void
+static inline void
ia64_set_pta (__u64 pta)
{
/* Note: srlz.i implies srlz.d */
__asm__ __volatile__ ("mov cr.pta=%0;; srlz.i;;" :: "r"(pta) : "memory");
}
-extern inline __u64
+static inline __u64
ia64_get_cpuid (__u64 regnum)
{
__u64 r;
@@ -602,13 +602,13 @@
return r;
}
-extern inline void
+static inline void
ia64_eoi (void)
{
__asm__ ("mov cr.eoi=r0;; srlz.d;;" ::: "memory");
}
-extern __inline__ void
+static inline void
ia64_set_lrr0 (__u8 vector, __u8 masked)
{
if (masked > 1)
@@ -619,7 +619,7 @@
}
-extern __inline__ void
+static inline void
ia64_set_lrr1 (__u8 vector, __u8 masked)
{
if (masked > 1)
@@ -629,13 +629,13 @@
:: "r"((masked << 16) | vector) : "memory");
}
-extern __inline__ void
+static inline void
ia64_set_pmv (__u64 val)
{
__asm__ __volatile__ ("mov cr.pmv=%0" :: "r"(val) : "memory");
}
-extern __inline__ __u64
+static inline __u64
ia64_get_pmc (__u64 regnum)
{
__u64 retval;
@@ -644,13 +644,13 @@
return retval;
}
-extern __inline__ void
+static inline void
ia64_set_pmc (__u64 regnum, __u64 value)
{
__asm__ __volatile__ ("mov pmc[%0]=%1" :: "r"(regnum), "r"(value));
}
-extern __inline__ __u64
+static inline __u64
ia64_get_pmd (__u64 regnum)
{
__u64 retval;
@@ -659,7 +659,7 @@
return retval;
}
-extern __inline__ void
+static inline void
ia64_set_pmd (__u64 regnum, __u64 value)
{
__asm__ __volatile__ ("mov pmd[%0]=%1" :: "r"(regnum), "r"(value));
@@ -669,7 +669,7 @@
* Given the address to which a spill occurred, return the unat bit
* number that corresponds to this address.
*/
-extern inline __u64
+static inline __u64
ia64_unat_pos (void *spill_addr)
{
return ((__u64) spill_addr >> 3) & 0x3f;
@@ -679,7 +679,7 @@
* Set the NaT bit of an integer register which was spilled at address
* SPILL_ADDR. UNAT is the mask to be updated.
*/
-extern inline void
+static inline void
ia64_set_unat (__u64 *unat, void *spill_addr, unsigned long nat)
{
__u64 bit = ia64_unat_pos(spill_addr);
@@ -692,7 +692,7 @@
* Return saved PC of a blocked thread.
* Note that the only way T can block is through a call to schedule() -> switch_to().
*/
-extern inline unsigned long
+static inline unsigned long
thread_saved_pc (struct thread_struct *t)
{
struct unw_frame_info info;
@@ -727,7 +727,7 @@
/*
* Set the correctable machine check vector register
*/
-extern __inline__ void
+static inline void
ia64_set_cmcv (__u64 val)
{
__asm__ __volatile__ ("mov cr.cmcv=%0" :: "r"(val) : "memory");
@@ -736,7 +736,7 @@
/*
* Read the correctable machine check vector register
*/
-extern __inline__ __u64
+static inline __u64
ia64_get_cmcv (void)
{
__u64 val;
@@ -745,7 +745,7 @@
return val;
}
-extern inline __u64
+static inline __u64
ia64_get_ivr (void)
{
__u64 r;
@@ -753,13 +753,13 @@
return r;
}
-extern inline void
+static inline void
ia64_set_tpr (__u64 val)
{
__asm__ __volatile__ ("mov cr.tpr=%0" :: "r"(val));
}
-extern inline __u64
+static inline __u64
ia64_get_tpr (void)
{
__u64 r;
@@ -767,71 +767,75 @@
return r;
}
-extern __inline__ void
+static inline void
ia64_set_irr0 (__u64 val)
{
__asm__ __volatile__("mov cr.irr0=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr0 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr0" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr0" : "=r"(val));
return val;
}
-extern __inline__ void
+static inline void
ia64_set_irr1 (__u64 val)
{
__asm__ __volatile__("mov cr.irr1=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr1 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr1" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr1" : "=r"(val));
return val;
}
-extern __inline__ void
+static inline void
ia64_set_irr2 (__u64 val)
{
__asm__ __volatile__("mov cr.irr2=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr2 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr2" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr2" : "=r"(val));
return val;
}
-extern __inline__ void
+static inline void
ia64_set_irr3 (__u64 val)
{
__asm__ __volatile__("mov cr.irr3=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr3 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr3" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr3" : "=r"(val));
return val;
}
-extern __inline__ __u64
+static inline __u64
ia64_get_gp(void)
{
__u64 val;
@@ -859,7 +863,7 @@
#define ia64_rotl(w,n) ia64_rotr((w),(64)-(n))
-extern __inline__ __u64
+static inline __u64
ia64_thash (__u64 addr)
{
__u64 result;
diff -urN linux-davidm/include/asm-ia64/ptrace_offsets.h linux-2.4.0-test9-lia/include/asm-ia64/ptrace_offsets.h
--- linux-davidm/include/asm-ia64/ptrace_offsets.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/ptrace_offsets.h Wed Oct 4 23:05:46 2000
@@ -17,6 +17,8 @@
* unsigned long dbr[8];
* unsigned long rsvd2[504];
* unsigned long ibr[8];
+ * unsigned long rsvd3[504];
+ * unsigned long pmd[4];
* }
*/
@@ -210,5 +212,6 @@
#define PT_DBR 0x2000 /* data breakpoint registers */
#define PT_IBR 0x3000 /* instruction breakpoint registers */
+#define PT_PMD 0x4000 /* performance monitoring counters */
#endif /* _ASM_IA64_PTRACE_OFFSETS_H */
diff -urN linux-davidm/include/asm-ia64/sal.h linux-2.4.0-test9-lia/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h Thu Jun 22 07:09:45 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/sal.h Wed Oct 4 21:50:21 2000
@@ -17,6 +17,7 @@
*/
#include <linux/config.h>
+#include <linux/spinlock.h>
#include <asm/pal.h>
#include <asm/system.h>
@@ -158,12 +159,22 @@
char reserved2[8];
};
-struct ia64_sal_desc_ptc {
+typedef struct ia64_sal_desc_ptc {
char type;
char reserved1[3];
unsigned int num_domains; /* # of coherence domains */
- long domain_info; /* physical address of domain info table */
-};
+ s64 domain_info; /* physical address of domain info table */
+} ia64_sal_desc_ptc_t;
+
+typedef struct ia64_sal_ptc_domain_info {
+ unsigned long proc_count; /* number of processors in domain */
+ long proc_list; /* physical address of LID array */
+} ia64_sal_ptc_domain_info_t;
+
+typedef struct ia64_sal_ptc_domain_proc_entry {
+ unsigned char id; /* id of processor */
+ unsigned char eid; /* eid of processor */
+} ia64_sal_ptc_domain_proc_entry_t;
#define IA64_SAL_AP_EXTERNAL_INT 0
@@ -175,6 +186,7 @@
};
extern ia64_sal_handler ia64_sal;
+extern struct ia64_sal_desc_ptc *ia64_ptc_domain_info;
extern const char *ia64_sal_strerror (long status);
extern void ia64_sal_init (struct ia64_sal_systab *sal_systab);
@@ -387,7 +399,7 @@
* Now define a couple of inline functions for improved type checking
* and convenience.
*/
-extern inline long
+static inline long
ia64_sal_freq_base (unsigned long which, unsigned long *ticks_per_second,
unsigned long *drift_info)
{
@@ -400,7 +412,7 @@
}
/* Flush all the processor and platform level instruction and/or data caches */
-extern inline s64
+static inline s64
ia64_sal_cache_flush (u64 cache_type)
{
struct ia64_sal_retval isrv;
@@ -411,7 +423,7 @@
/* Initialize all the processor and platform level instruction and data caches */
-extern inline s64
+static inline s64
ia64_sal_cache_init (void)
{
struct ia64_sal_retval isrv;
@@ -422,7 +434,7 @@
/* Clear the processor and platform information logged by SAL with respect to the
* machine state at the time of MCA's, INITs or CMCs
*/
-extern inline s64
+static inline s64
ia64_sal_clear_state_info (u64 sal_info_type, u64 sal_info_sub_type)
{
struct ia64_sal_retval isrv;
@@ -434,7 +446,7 @@
/* Get the processor and platform information logged by SAL with respect to the machine
* state at the time of the MCAs, INITs or CMCs.
*/
-extern inline u64
+static inline u64
ia64_sal_get_state_info (u64 sal_info_type, u64 sal_info_sub_type, u64 *sal_info)
{
struct ia64_sal_retval isrv;
@@ -446,7 +458,7 @@
/* Get the maximum size of the information logged by SAL with respect to the machine
* state at the time of MCAs, INITs or CMCs
*/
-extern inline u64
+static inline u64
ia64_sal_get_state_info_size (u64 sal_info_type, u64 sal_info_sub_type)
{
struct ia64_sal_retval isrv;
@@ -459,7 +471,7 @@
/* Causes the processor to go into a spin loop within SAL where SAL awaits a wakeup
* from the monarch processor.
*/
-extern inline s64
+static inline s64
ia64_sal_mc_rendez (void)
{
struct ia64_sal_retval isrv;
@@ -471,7 +483,7 @@
* the machine check rendezvous sequence as well as the mechanism to wake up the
* non-monarch processor at the end of machine check processing.
*/
-extern inline s64
+static inline s64
ia64_sal_mc_set_params (u64 param_type, u64 i_or_m, u64 i_or_m_val, u64 timeout)
{
struct ia64_sal_retval isrv;
@@ -480,7 +492,7 @@
}
/* Read from PCI configuration space */
-extern inline s64
+static inline s64
ia64_sal_pci_config_read (u64 pci_config_addr, u64 size, u64 *value)
{
struct ia64_sal_retval isrv;
@@ -503,7 +515,7 @@
}
/* Write to PCI configuration space */
-extern inline s64
+static inline s64
ia64_sal_pci_config_write (u64 pci_config_addr, u64 size, u64 value)
{
struct ia64_sal_retval isrv;
@@ -527,7 +539,7 @@
* Register physical addresses of locations needed by SAL when SAL
* procedures are invoked in virtual mode.
*/
-extern inline s64
+static inline s64
ia64_sal_register_physical_addr (u64 phys_entry, u64 phys_addr)
{
struct ia64_sal_retval isrv;
@@ -539,7 +551,7 @@
* or entry points where SAL will pass control for the specified event. These event
* handlers are for the boot rendezvous, MCAs and INIT scenarios.
*/
-extern inline s64
+static inline s64
ia64_sal_set_vectors (u64 vector_type,
u64 handler_addr1, u64 gp1, u64 handler_len1,
u64 handler_addr2, u64 gp2, u64 handler_len2)
@@ -552,7 +564,7 @@
return isrv.status;
}
/* Update the contents of PAL block in the non-volatile storage device */
-extern inline s64
+static inline s64
ia64_sal_update_pal (u64 param_buf, u64 scratch_buf, u64 scratch_buf_size,
u64 *error_code, u64 *scratch_buf_size_needed)
{
diff -urN linux-davidm/include/asm-ia64/semaphore.h linux-2.4.0-test9-lia/include/asm-ia64/semaphore.h
--- linux-davidm/include/asm-ia64/semaphore.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/semaphore.h Wed Oct 4 21:50:28 2000
@@ -39,7 +39,7 @@
#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name, 1)
#define DECLARE_MUTEX_LOCKED(name) __DECLARE_SEMAPHORE_GENERIC(name, 0)
-extern inline void
+static inline void
sema_init (struct semaphore *sem, int val)
{
*sem = (struct semaphore) __SEMAPHORE_INITIALIZER(*sem, val);
@@ -68,7 +68,7 @@
* Atomically decrement the semaphore's count. If it goes negative,
* block the calling thread in the TASK_UNINTERRUPTIBLE state.
*/
-extern inline void
+static inline void
down (struct semaphore *sem)
{
#if WAITQUEUE_DEBUG
@@ -82,7 +82,7 @@
* Atomically decrement the semaphore's count. If it goes negative,
* block the calling thread in the TASK_INTERRUPTIBLE state.
*/
-extern inline int
+static inline int
down_interruptible (struct semaphore * sem)
{
int ret = 0;
@@ -95,7 +95,7 @@
return ret;
}
-extern inline int
+static inline int
down_trylock (struct semaphore *sem)
{
int ret = 0;
@@ -108,7 +108,7 @@
return ret;
}
-extern inline void
+static inline void
up (struct semaphore * sem)
{
#if WAITQUEUE_DEBUG
@@ -181,7 +181,7 @@
extern void __down_write_failed (struct rw_semaphore *sem, long count);
extern void __rwsem_wake (struct rw_semaphore *sem, long count);
-extern inline void
+static inline void
init_rwsem (struct rw_semaphore *sem)
{
sem->count = RW_LOCK_BIAS;
@@ -196,7 +196,7 @@
#endif
}
-extern inline void
+static inline void
down_read (struct rw_semaphore *sem)
{
long count;
@@ -218,7 +218,7 @@
#endif
}
-extern inline void
+static inline void
down_write (struct rw_semaphore *sem)
{
long old_count, new_count;
@@ -252,7 +252,7 @@
* case is when there was a writer waiting, and we've
* bumped the count to 0: we must wake the writer up.
*/
-extern inline void
+static inline void
__up_read (struct rw_semaphore *sem)
{
long count;
@@ -271,7 +271,7 @@
* Releasing the writer is easy -- just release it and
* wake up any sleepers.
*/
-extern inline void
+static inline void
__up_write (struct rw_semaphore *sem)
{
long old_count, new_count;
@@ -290,7 +290,7 @@
__rwsem_wake(sem, new_count);
}
-extern inline void
+static inline void
up_read (struct rw_semaphore *sem)
{
#if WAITQUEUE_DEBUG
@@ -303,7 +303,7 @@
__up_read(sem);
}
-extern inline void
+static inline void
up_write (struct rw_semaphore *sem)
{
#if WAITQUEUE_DEBUG
diff -urN linux-davidm/include/asm-ia64/siginfo.h linux-2.4.0-test9-lia/include/asm-ia64/siginfo.h
--- linux-davidm/include/asm-ia64/siginfo.h Thu Aug 24 08:17:47 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/siginfo.h Wed Oct 4 21:50:51 2000
@@ -235,7 +235,8 @@
#ifdef __KERNEL__
#include <linux/string.h>
-extern inline void copy_siginfo(siginfo_t *to, siginfo_t *from)
+static inline void
+copy_siginfo (siginfo_t *to, siginfo_t *from)
{
if (from->si_code < 0)
memcpy(to, from, sizeof(siginfo_t));
diff -urN linux-davidm/include/asm-ia64/smp.h linux-2.4.0-test9-lia/include/asm-ia64/smp.h
--- linux-davidm/include/asm-ia64/smp.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/smp.h Wed Oct 4 21:51:16 2000
@@ -49,7 +49,7 @@
* Function to map hard smp processor id to logical id. Slow, so
* don't use this in performance-critical code.
*/
-extern __inline__ int
+static inline int
cpu_logical_id (int cpuid)
{
int i;
@@ -68,28 +68,28 @@
* max_xtp : never deliver interrupts to this CPU.
*/
-extern __inline__ void
+static inline void
min_xtp(void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x00, ipi_base_addr | XTP_OFFSET); /* XTP to min */
}
-extern __inline__ void
+static inline void
normal_xtp(void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x08, ipi_base_addr | XTP_OFFSET); /* XTP normal */
}
-extern __inline__ void
+static inline void
max_xtp(void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x0f, ipi_base_addr | XTP_OFFSET); /* Set XTP to max */
}
-extern __inline__ unsigned int
+static inline unsigned int
hard_smp_processor_id(void)
{
struct {
diff -urN linux-davidm/include/asm-ia64/spinlock.h linux-2.4.0-test9-lia/include/asm-ia64/spinlock.h
--- linux-davidm/include/asm-ia64/spinlock.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/spinlock.h Wed Oct 4 21:51:32 2000
@@ -63,8 +63,8 @@
})
#define spin_is_locked(x) ((x)->lock != 0)
-#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0;})
-#define spin_unlock_wait(x) ({ while ((x)->lock); })
+#define spin_unlock(x) do {((spinlock_t *) x)->lock = 0;} while (0)
+#define spin_unlock_wait(x) do {} while ((x)->lock)
#else /* !NEW_LOCK */
@@ -97,9 +97,9 @@
:: "r"(&(x)->lock) : "r2", "r29", "memory")
#define spin_is_locked(x) ((x)->lock != 0)
-#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0; barrier();})
+#define spin_unlock(x) do {((spinlock_t *) x)->lock = 0; barrier(); } while (0)
#define spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0)
-#define spin_unlock_wait(x) ({ do { barrier(); } while ((x)->lock); })
+#define spin_unlock_wait(x) do { barrier(); } while ((x)->lock)
#endif /* !NEW_LOCK */
@@ -146,16 +146,16 @@
"movl r29 = 0x80000000\n" \
";;\n" \
"1:\n" \
- "ld4 r2 = %0\n" \
+ "ld4 r2 = [%0]\n" \
";;\n" \
"cmp4.eq p0,p7 = r0,r2\n" \
"(p7) br.cond.spnt.few 1b \n" \
- IA64_SEMFIX"cmpxchg4.acq r2 = %0, r29, ar.ccv\n" \
+ IA64_SEMFIX"cmpxchg4.acq r2 = [%0], r29, ar.ccv\n" \
";;\n" \
"cmp4.eq p0,p7 = r0, r2\n" \
"(p7) br.cond.spnt.few 1b\n" \
";;\n" \
- :: "m" __atomic_fool_gcc((rw)) : "r2", "r29", "memory"); \
+ :: "r"(rw) : "r2", "r29", "memory"); \
} while(0)
/*
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.4.0-test9-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/system.h Thu Oct 5 00:20:25 2000
@@ -38,6 +38,7 @@
#ifndef __ASSEMBLY__
+#include <linux/kernel.h>
#include <linux/types.h>
struct pci_vector_struct {
@@ -67,7 +68,7 @@
__u64 initrd_size;
} ia64_boot_param;
-extern inline void
+static inline void
ia64_insn_group_barrier (void)
{
__asm__ __volatile__ (";;" ::: "memory");
@@ -98,6 +99,16 @@
#define mb() __asm__ __volatile__ ("mf" ::: "memory")
#define rmb() mb()
#define wmb() mb()
+
+#ifdef CONFIG_SMP
+# define smp_mb() mb()
+# define smp_rmb() rmb()
+# define smp_wmb() wmb()
+#else
+# define smp_mb() barrier()
+# define smp_rmb() barrier()
+# define smp_wmb() barrier()
+#endif
/*
* XXX check on these---I suspect what Linus really wants here is
diff -urN linux-davidm/include/asm-ia64/uaccess.h linux-2.4.0-test9-lia/include/asm-ia64/uaccess.h
--- linux-davidm/include/asm-ia64/uaccess.h Wed Oct 4 23:20:21 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/uaccess.h Wed Oct 4 21:51:49 2000
@@ -61,7 +61,7 @@
#define __access_ok(addr,size,segment) (((unsigned long) (addr)) <= (segment).seg)
#define access_ok(type,addr,size) __access_ok((addr),(size),get_fs())
-extern inline int
+static inline int
verify_area (int type, const void *addr, unsigned long size)
{
return access_ok(type,addr,size) ? 0 : -EFAULT;
diff -urN linux-davidm/include/asm-ia64/unaligned.h linux-2.4.0-test9-lia/include/asm-ia64/unaligned.h
--- linux-davidm/include/asm-ia64/unaligned.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/unaligned.h Wed Oct 4 21:52:08 2000
@@ -22,42 +22,42 @@
struct __una_u32 { __u32 x __attribute__((packed)); };
struct __una_u16 { __u16 x __attribute__((packed)); };
-extern inline unsigned long
+static inline unsigned long
__uldq (const unsigned long * r11)
{
const struct __una_u64 *ptr = (const struct __una_u64 *) r11;
return ptr->x;
}
-extern inline unsigned long
+static inline unsigned long
__uldl (const unsigned int * r11)
{
const struct __una_u32 *ptr = (const struct __una_u32 *) r11;
return ptr->x;
}
-extern inline unsigned long
+static inline unsigned long
__uldw (const unsigned short * r11)
{
const struct __una_u16 *ptr = (const struct __una_u16 *) r11;
return ptr->x;
}
-extern inline void
+static inline void
__ustq (unsigned long r5, unsigned long * r11)
{
struct __una_u64 *ptr = (struct __una_u64 *) r11;
ptr->x = r5;
}
-extern inline void
+static inline void
__ustl (unsigned long r5, unsigned int * r11)
{
struct __una_u32 *ptr = (struct __una_u32 *) r11;
ptr->x = r5;
}
-extern inline void
+static inline void
__ustw (unsigned long r5, unsigned short * r11)
{
struct __una_u16 *ptr = (struct __una_u16 *) r11;
diff -urN linux-davidm/include/asm-ia64/unistd.h linux-2.4.0-test9-lia/include/asm-ia64/unistd.h
--- linux-davidm/include/asm-ia64/unistd.h Wed Sep 13 18:25:40 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/unistd.h Wed Oct 4 21:52:16 2000
@@ -93,7 +93,7 @@
#define __NR_setpriority 1102
#define __NR_statfs 1103
#define __NR_fstatfs 1104
-#define __NR_ioperm 1105
+/* unused; used to be __NR_ioperm */
#define __NR_semget 1106
#define __NR_semop 1107
#define __NR_semctl 1108
diff -urN linux-davidm/include/asm-ia64/unwind.h linux-2.4.0-test9-lia/include/asm-ia64/unwind.h
--- linux-davidm/include/asm-ia64/unwind.h Thu Jun 22 07:09:45 2000
+++ linux-2.4.0-test9-lia/include/asm-ia64/unwind.h Wed Oct 4 21:52:27 2000
@@ -52,36 +52,38 @@
unsigned int flags;
short hint;
short prev_script;
- unsigned long bsp;
- unsigned long sp; /* stack pointer */
- unsigned long psp; /* previous sp */
- unsigned long ip; /* instruction pointer */
- unsigned long pr_val; /* current predicates */
- unsigned long *cfm;
+
+ /* current frame info: */
+ unsigned long bsp; /* backing store pointer value */
+ unsigned long sp; /* stack pointer value */
+ unsigned long psp; /* previous sp value */
+ unsigned long ip; /* instruction pointer value */
+ unsigned long pr; /* current predicate values */
+ unsigned long *cfm_loc; /* cfm save location (or NULL) */
struct task_struct *task;
struct switch_stack *sw;
/* preserved state: */
- unsigned long *pbsp; /* previous bsp */
- unsigned long *bspstore;
- unsigned long *pfs;
- unsigned long *rnat;
- unsigned long *rp;
- unsigned long *pri_unat;
- unsigned long *unat;
- unsigned long *pr;
- unsigned long *lc;
- unsigned long *fpsr;
+ unsigned long *bsp_loc; /* previous bsp save location */
+ unsigned long *bspstore_loc;
+ unsigned long *pfs_loc;
+ unsigned long *rnat_loc;
+ unsigned long *rp_loc;
+ unsigned long *pri_unat_loc;
+ unsigned long *unat_loc;
+ unsigned long *pr_loc;
+ unsigned long *lc_loc;
+ unsigned long *fpsr_loc;
struct unw_ireg {
unsigned long *loc;
struct unw_ireg_nat {
- int type : 3; /* enum unw_nat_type */
- signed int off; /* NaT word is at loc+nat.off */
+ long type : 3; /* enum unw_nat_type */
+ signed long off : 61; /* NaT word is at loc+nat.off */
} nat;
} r4, r5, r6, r7;
- unsigned long *b1, *b2, *b3, *b4, *b5;
- struct ia64_fpreg *f2, *f3, *f4, *f5, *fr[16];
+ unsigned long *b1_loc, *b2_loc, *b3_loc, *b4_loc, *b5_loc;
+ struct ia64_fpreg *f2_loc, *f3_loc, *f4_loc, *f5_loc, *fr_loc[16];
};
/*
@@ -140,19 +142,56 @@
*/
extern int unw_unwind_to_user (struct unw_frame_info *info);
-#define unw_get_ip(info,vp) ({*(vp) = (info)->ip; 0;})
-#define unw_get_sp(info,vp) ({*(vp) = (unsigned long) (info)->sp; 0;})
-#define unw_get_psp(info,vp) ({*(vp) = (unsigned long) (info)->psp; 0;})
-#define unw_get_bsp(info,vp) ({*(vp) = (unsigned long) (info)->bsp; 0;})
-#define unw_get_cfm(info,vp) ({*(vp) = *(info)->cfm; 0;})
-#define unw_set_cfm(info,val) ({*(info)->cfm = (val); 0;})
+#define unw_is_intr_frame(info) (((info)->flags & UNW_FLAG_INTERRUPT_FRAME) != 0)
+
+static inline unsigned long
+unw_get_ip (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->ip;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_sp (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->sp;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_psp (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->psp;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_bsp (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->bsp;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_cfm (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = *(info)->cfm_loc;
+ return 0;
+}
+
+static inline unsigned long
+unw_set_cfm (struct unw_frame_info *info, unsigned long val)
+{
+ *(info)->cfm_loc = val;
+ return 0;
+}
static inline int
unw_get_rp (struct unw_frame_info *info, unsigned long *val)
{
- if (!info->rp)
+ if (!info->rp_loc)
return -1;
- *val = *info->rp;
+ *val = *info->rp_loc;
return 0;
}
diff -urN linux-davidm/kernel/Makefile linux-2.4.0-test9-lia/kernel/Makefile
--- linux-davidm/kernel/Makefile Thu Aug 10 19:56:32 2000
+++ linux-2.4.0-test9-lia/kernel/Makefile Wed Oct 4 21:53:44 2000
@@ -30,6 +30,13 @@
OX_OBJS += pm.o
endif
+ifneq ($(CONFIG_IA64),y)
+# According to Alan Modra <alan@linuxcare.com.au>, the -fno-omit-frame-pointer is
+# needed for x86 only. Why this used to be enabled for all architectures is beyond
+# me. I suspect most platforms don't need this, but until we know that for sure
+# I turn this off for IA-64 only. Andreas Schwab says it's also needed on m68k
+# to get a correct value for the wait-channel (WCHAN in ps). --davidm
CFLAGS_sched.o := $(PROFILING) -fno-omit-frame-pointer
+endif
include $(TOPDIR)/Rules.make
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (20 preceding siblings ...)
2000-10-05 19:01 ` [Linux-ia64] kernel update (relative to v2.4.0-test9) David Mosberger
@ 2000-10-05 22:08 ` Keith Owens
2000-10-05 22:15 ` David Mosberger
` (193 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2000-10-05 22:08 UTC (permalink / raw)
To: linux-ia64
On Thu, 5 Oct 2000 12:01:10 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>The latest IA-64 Linux kernel diff is now available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>
>in file linux-2.4.0-test9-ia64-001004.diff.
The patch contains a deletion for unistd.h~ which is not in base
2.4.0-test9. It also deletes a lot of acpi files, which may cause
problems if you try to compile ix86 from a source tree with ia64-001004
applied, since i386 config.in does not have CONFIG_ACPI_KERNEL_CONFIG. Any
reason for these deletions and other acpi changes?
drivers/acpi/table.c
drivers/acpi/tables/Makefile
drivers/acpi/resources/Makefile
drivers/acpi/parser/psfind.c
drivers/acpi/parser/Makefile
drivers/acpi/namespace/nsdump.c
drivers/acpi/namespace/Makefile
drivers/acpi/interpreter/amdump.c
drivers/acpi/interpreter/Makefile
drivers/acpi/include/actbl.h
drivers/acpi/include/acresrc.h
drivers/acpi/include/acparser.h
drivers/acpi/include/acoutput.h
drivers/acpi/include/acnamesp.h
drivers/acpi/include/acmacros.h
drivers/acpi/include/aclocal.h
drivers/acpi/include/acinterp.h
drivers/acpi/include/achware.h
drivers/acpi/include/acglobal.h
drivers/acpi/include/acevents.h
drivers/acpi/include/acdispat.h
drivers/acpi/include/acdebug.h
drivers/acpi/include/acconfig.h
drivers/acpi/include/accommon.h
drivers/acpi/hardware/Makefile
drivers/acpi/events/Makefile
drivers/acpi/dispatcher/Makefile
drivers/acpi/common/cmclib.c
drivers/acpi/common/Makefile
* Re: [Linux-ia64] kernel update (relative to v2.4.0-test9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (21 preceding siblings ...)
2000-10-05 22:08 ` Keith Owens
@ 2000-10-05 22:15 ` David Mosberger
2000-10-31 8:55 ` [Linux-ia64] kernel update (relative to 2.4.0-test9) David Mosberger
` (192 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-10-05 22:15 UTC (permalink / raw)
To: linux-ia64
>>>>> On Fri, 06 Oct 2000 09:08:33 +1100, Keith Owens <kaos@ocs.com.au> said:
Keith> The patch contains a deletion for unistd.h~ which is not in
Keith> base 2.4.0-test9.
My fault, just ignore it (it was late, what can I say).
Keith> It also deletes a lot of acpi files, which may cause problems
Keith> if you try to compile ix86 from a source tree with
Keith> ia64-001004 applied, i386 config.in does not have
Keith> CONFIG_ACPI_KERNEL_CONFIG. Any reason for these deletions
Keith> and other acpi changes?
The ACPI code in the x86 tree is currently not compatible with what's
in the IA-64 tree. So the IA-64 patch restores the ACPI files from
the IA-64 tree. I don't know why or how this came to be and Intel is
looking into it. (Yeah, it's a major pain...)
--david
* [Linux-ia64] kernel update (relative to 2.4.0-test9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (22 preceding siblings ...)
2000-10-05 22:15 ` David Mosberger
@ 2000-10-31 8:55 ` David Mosberger
2000-11-02 8:50 ` [Linux-ia64] kernel update (relative to 2.4.0-test10) David Mosberger
` (191 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-10-31 8:55 UTC (permalink / raw)
To: linux-ia64
Here is a quick kernel update. I anticipate another update shortly
after Linus releases test10. Thus, this patch hasn't received as much
testing as I usually give it. The kernel is known to work fine on 2P Big
Sur and a slightly earlier version also worked fine on Lion and the
simulator. As usual, YMMV.
Summary of changes:
- Asit & Goutham: fixed IOSAPIC support to work correctly when
irq lines are shared by PCI devices
- Asit: fix to unaligned access handler
- Goutham: workaround for lost IPI problem
- Don: IA-32 fixes & updates, in particular the beginnings of
DRM support
- Takayoshi: early-printk fixes to kernel startup code
- Jun: optimized copy_user() for case where pointers are not co-aligned
- Johannes: add VM_NONCACHED and VM_WRITECOMBINE for /dev/mem
- Christophe: fixed unaligned accesses in Tulip (21140) NIC
- BJ: update qla1280 driver to v3.19
- Intel: huge ACPI update (it's not in the diff below; go get the
full patch to get this part)
- Stephane: many performance related updates & fixes; in particular,
the API is now entirely perfmonctl() based (no more ptrace() hacks
needed)
- drop platform_register_iosapic()---it's no longer needed
- add workaround for potential ITC discontiguity
- "backported" ftruncate() fix that caused kernel crash
with certain filesystem stress tests
- memcpy: tuned it some more
- removed old workaround for unsorted unwind tables---it's no longer
needed with the latest toolchain (thanks to Rich!)
- separated IOSAPIC support from DIG, added some documentation,
and restructured the code to make it (hopefully) more readable;
also removed all (hard) IOSAPIC dependencies from acpi code
- dropped BAD_ACPI_TABLE workaround; if you're still running
old Lion firmware, this might bite you; if so, please upgrade
the firmware (I don't think very many systems were shipped
with the broken firmware, so I expect few if any people to
be affected by this)
- added AT_CLKTCK support for ELF binaries
I hope I didn't miss anything. For the full patch, see:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
as usual.
--david
diff -urN linux-davidm/arch/ia64/dig/dig_irq.c lia64/arch/ia64/dig/dig_irq.c
--- linux-davidm/arch/ia64/dig/dig_irq.c Wed Dec 31 16:00:00 1969
+++ lia64/arch/ia64/dig/dig_irq.c Mon Oct 30 23:40:05 2000
@@ -0,0 +1,10 @@
+void
+dig_irq_init (void)
+{
+ /*
+ * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
+ * enabled.
+ */
+ outb(0xff, 0xA1);
+ outb(0xff, 0x21);
+}
diff -urN linux-davidm/arch/ia64/dig/iosapic.c lia64/arch/ia64/dig/iosapic.c
--- linux-davidm/arch/ia64/dig/iosapic.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/dig/iosapic.c Wed Dec 31 16:00:00 1969
@@ -1,409 +0,0 @@
-/*
- * Streamlined APIC support.
- *
- * Copyright (C) 1999 Intel Corp.
- * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
- * Copyright (C) 1999-2000 Hewlett-Packard Co.
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1999 VA Linux Systems
- * Copyright (C) 1999,2000 Walt Drummond <drummond@valinux.com>
- *
- * 00/04/19 D. Mosberger Rewritten to mirror more closely the x86 I/O APIC code.
- * In particular, we now have separate handlers for edge
- * and level triggered interrupts.
- */
-#include <linux/config.h>
-
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/pci.h>
-#include <linux/smp.h>
-#include <linux/smp_lock.h>
-#include <linux/string.h>
-#include <linux/irq.h>
-
-#include <asm/acpi-ext.h>
-#include <asm/delay.h>
-#include <asm/io.h>
-#include <asm/iosapic.h>
-#include <asm/machvec.h>
-#include <asm/processor.h>
-#include <asm/ptrace.h>
-#include <asm/system.h>
-
-#ifdef CONFIG_ACPI_KERNEL_CONFIG
-# include <asm/acpikcfg.h>
-#endif
-
-#undef DEBUG_IRQ_ROUTING
-
-static spinlock_t iosapic_lock = SPIN_LOCK_UNLOCKED;
-
-struct iosapic_vector iosapic_vector[NR_IRQS] = {
- [0 ... NR_IRQS-1] = { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }
-};
-
-/*
- * find the IRQ in the IOSAPIC map for the PCI device on bus/slot/pin
- */
-int
-iosapic_get_PCI_irq_vector (int bus, int slot, int pci_pin)
-{
- int i;
-
- for (i = 0; i < NR_IRQS; i++) {
- if ((iosapic_bustype(i) == BUS_PCI) &&
- (iosapic_bus(i) == bus) &&
- (iosapic_busdata(i) == ((slot << 16) | pci_pin))) {
- return i;
- }
- }
- return -1;
-}
-
-static void
-set_rte (unsigned long iosapic_addr, int entry, int pol, int trigger, int delivery,
- long dest, int vector)
-{
- u32 low32;
- u32 high32;
-
- low32 = ((pol << IO_SAPIC_POLARITY_SHIFT) |
- (trigger << IO_SAPIC_TRIGGER_SHIFT) |
- (delivery << IO_SAPIC_DELIVERY_SHIFT) |
- vector);
-
-#ifdef CONFIG_IA64_AZUSA_HACKS
- /* set Flush Disable bit */
- if (iosapic_addr != 0xc0000000fec00000)
- low32 |= (1 << 17);
-#endif
-
- /* dest contains both id and eid */
- high32 = (dest << IO_SAPIC_DEST_SHIFT);
-
- writel(IO_SAPIC_RTE_HIGH(entry), iosapic_addr + IO_SAPIC_REG_SELECT);
- writel(high32, iosapic_addr + IO_SAPIC_WINDOW);
- writel(IO_SAPIC_RTE_LOW(entry), iosapic_addr + IO_SAPIC_REG_SELECT);
- writel(low32, iosapic_addr + IO_SAPIC_WINDOW);
-}
-
-static void
-nop (unsigned int irq)
-{
- /* do nothing... */
-}
-
-static void
-mask_irq (unsigned int irq)
-{
- unsigned long flags, iosapic_addr = iosapic_addr(irq);
- u32 low32;
-
- spin_lock_irqsave(&iosapic_lock, flags);
- {
- writel(IO_SAPIC_RTE_LOW(iosapic_pin(irq)), iosapic_addr + IO_SAPIC_REG_SELECT);
- low32 = readl(iosapic_addr + IO_SAPIC_WINDOW);
-
- low32 |= (1 << IO_SAPIC_MASK_SHIFT); /* Zero only the mask bit */
- writel(low32, iosapic_addr + IO_SAPIC_WINDOW);
- }
- spin_unlock_irqrestore(&iosapic_lock, flags);
-}
-
-static void
-unmask_irq (unsigned int irq)
-{
- unsigned long flags, iosapic_addr = iosapic_addr(irq);
- u32 low32;
-
- spin_lock_irqsave(&iosapic_lock, flags);
- {
- writel(IO_SAPIC_RTE_LOW(iosapic_pin(irq)), iosapic_addr + IO_SAPIC_REG_SELECT);
- low32 = readl(iosapic_addr + IO_SAPIC_WINDOW);
-
- low32 &= ~(1 << IO_SAPIC_MASK_SHIFT); /* Zero only the mask bit */
- writel(low32, iosapic_addr + IO_SAPIC_WINDOW);
- }
- spin_unlock_irqrestore(&iosapic_lock, flags);
-}
-
-
-static void
-iosapic_set_affinity (unsigned int irq, unsigned long mask)
-{
- printk("iosapic_set_affinity: not implemented yet\n");
-}
-
-/*
- * Handlers for level-triggered interrupts.
- */
-
-static unsigned int
-iosapic_startup_level_irq (unsigned int irq)
-{
- unmask_irq(irq);
- return 0;
-}
-
-static void
-iosapic_end_level_irq (unsigned int irq)
-{
- writel(irq, iosapic_addr(irq) + IO_SAPIC_EOI);
-}
-
-#define iosapic_shutdown_level_irq mask_irq
-#define iosapic_enable_level_irq unmask_irq
-#define iosapic_disable_level_irq mask_irq
-#define iosapic_ack_level_irq nop
-
-struct hw_interrupt_type irq_type_iosapic_level = {
- typename: "IO-SAPIC-level",
- startup: iosapic_startup_level_irq,
- shutdown: iosapic_shutdown_level_irq,
- enable: iosapic_enable_level_irq,
- disable: iosapic_disable_level_irq,
- ack: iosapic_ack_level_irq,
- end: iosapic_end_level_irq,
- set_affinity: iosapic_set_affinity
-};
-
-/*
- * Handlers for edge-triggered interrupts.
- */
-
-static unsigned int
-iosapic_startup_edge_irq (unsigned int irq)
-{
- unmask_irq(irq);
- /*
- * IOSAPIC simply drops interrupts pended while the
- * corresponding pin was masked, so we can't know if an
- * interrupt is pending already. Let's hope not...
- */
- return 0;
-}
-
-static void
-iosapic_ack_edge_irq (unsigned int irq)
-{
- /*
- * Once we have recorded IRQ_PENDING already, we can mask the
- * interrupt for real. This prevents IRQ storms from unhandled
- * devices.
- */
- if ((irq_desc[irq].status & (IRQ_PENDING | IRQ_DISABLED)) == (IRQ_PENDING | IRQ_DISABLED))
- mask_irq(irq);
-}
-
-#define iosapic_enable_edge_irq unmask_irq
-#define iosapic_disable_edge_irq nop
-#define iosapic_end_edge_irq nop
-
-struct hw_interrupt_type irq_type_iosapic_edge = {
- typename: "IO-SAPIC-edge",
- startup: iosapic_startup_edge_irq,
- shutdown: iosapic_disable_edge_irq,
- enable: iosapic_enable_edge_irq,
- disable: iosapic_disable_edge_irq,
- ack: iosapic_ack_edge_irq,
- end: iosapic_end_edge_irq,
- set_affinity: iosapic_set_affinity
-};
-
-unsigned int
-iosapic_version (unsigned long base_addr)
-{
- /*
- * IOSAPIC Version Register return 32 bit structure like:
- * {
- * unsigned int version : 8;
- * unsigned int reserved1 : 8;
- * unsigned int pins : 8;
- * unsigned int reserved2 : 8;
- * }
- */
- writel(IO_SAPIC_VERSION, base_addr + IO_SAPIC_REG_SELECT);
- return readl(IO_SAPIC_WINDOW + base_addr);
-}
-
-void
-iosapic_init (unsigned long address, int irqbase)
-{
- struct hw_interrupt_type *irq_type;
- struct pci_vector_struct *vectors;
- int i, irq, num_pci_vectors;
-
- if (irqbase == 0)
- /*
- * Map the legacy ISA devices into the IOSAPIC data.
- * Some of these may get reprogrammed later on with
- * data from the ACPI Interrupt Source Override table.
- */
- for (i = 0; i < 16; i++) {
- irq = isa_irq_to_vector(i);
- iosapic_pin(irq) = i;
- iosapic_bus(irq) = BUS_ISA;
- iosapic_busdata(irq) = 0;
- iosapic_dmode(irq) = IO_SAPIC_LOWEST_PRIORITY;
- iosapic_trigger(irq) = IO_SAPIC_EDGE;
- iosapic_polarity(irq) = IO_SAPIC_POL_HIGH;
-#ifdef DEBUG_IRQ_ROUTING
- printk("ISA: IRQ %02x -> Vector %02x IOSAPIC Pin %d\n",
- i, irq, iosapic_pin(irq));
-#endif
- }
-
-#ifndef CONFIG_IA64_SOFTSDV_HACKS
- /*
- * Map the PCI Interrupt data into the ACPI IOSAPIC data using
- * the info that the bootstrap loader passed to us.
- */
-# ifdef CONFIG_ACPI_KERNEL_CONFIG
- acpi_cf_get_pci_vectors(&vectors, &num_pci_vectors);
-# else
- ia64_boot_param.pci_vectors = (__u64) __va(ia64_boot_param.pci_vectors);
- vectors = (struct pci_vector_struct *) ia64_boot_param.pci_vectors;
- num_pci_vectors = ia64_boot_param.num_pci_vectors;
-# endif
- for (i = 0; i < num_pci_vectors; i++) {
- irq = vectors[i].irq;
- if (irq < 16)
- irq = isa_irq_to_vector(irq);
- if (iosapic_baseirq(irq) != irqbase)
- continue;
-
- iosapic_bustype(irq) = BUS_PCI;
- iosapic_pin(irq) = irq - iosapic_baseirq(irq);
- iosapic_bus(irq) = vectors[i].bus;
- /*
- * Map the PCI slot and pin data into iosapic_busdata()
- */
- iosapic_busdata(irq) = (vectors[i].pci_id & 0xffff0000) | vectors[i].pin;
-
- /* Default settings for PCI */
- iosapic_dmode(irq) = IO_SAPIC_LOWEST_PRIORITY;
- iosapic_trigger(irq) = IO_SAPIC_LEVEL;
- iosapic_polarity(irq) = IO_SAPIC_POL_LOW;
-
-# ifdef DEBUG_IRQ_ROUTING
- printk("PCI: BUS %d Slot %x Pin %x IRQ %02x --> Vector %02x IOSAPIC Pin %d\n",
- vectors[i].bus, vectors[i].pci_id>>16, vectors[i].pin, vectors[i].irq,
- irq, iosapic_pin(irq));
-# endif
- }
-#endif /* CONFIG_IA64_SOFTSDV_HACKS */
-
- for (i = 0; i < NR_IRQS; ++i) {
- if (iosapic_baseirq(i) != irqbase)
- continue;
-
- if (iosapic_pin(i) != -1) {
- if (iosapic_trigger(i) == IO_SAPIC_LEVEL)
- irq_type = &irq_type_iosapic_level;
- else
- irq_type = &irq_type_iosapic_edge;
- if (irq_desc[i].handler != &no_irq_type)
- printk("dig_irq_init: warning: changing vector %d from %s to %s\n",
- i, irq_desc[i].handler->typename,
- irq_type->typename);
- irq_desc[i].handler = irq_type;
-
- /* program the IOSAPIC routing table: */
- set_rte(iosapic_addr(i), iosapic_pin(i), iosapic_polarity(i),
- iosapic_trigger(i), iosapic_dmode(i),
- (ia64_get_lid() >> 16) & 0xffff, i);
- }
- }
-}
-
-void
-dig_irq_init (void)
-{
- /*
- * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
- * enabled.
- */
- outb(0xff, 0xA1);
- outb(0xff, 0x21);
-}
-
-void
-dig_pci_fixup (void)
-{
- struct pci_dev *dev;
- int irq;
- unsigned char pin;
-
- pci_for_each_dev(dev) {
- pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
- if (pin) {
- pin--; /* interrupt pins are numbered starting from 1 */
- irq = iosapic_get_PCI_irq_vector(dev->bus->number, PCI_SLOT(dev->devfn),
- pin);
- if (irq < 0 && dev->bus->parent) { /* go back to the bridge */
- struct pci_dev * bridge = dev->bus->self;
-
- /* allow for multiple bridges on an adapter */
- do {
- /* do the bridge swizzle... */
- pin = (pin + PCI_SLOT(dev->devfn)) % 4;
- irq = iosapic_get_PCI_irq_vector(bridge->bus->number,
- PCI_SLOT(bridge->devfn), pin);
- } while (irq < 0 && (bridge = bridge->bus->self));
- if (irq >= 0)
- printk(KERN_WARNING
- "PCI: using PPB(B%d,I%d,P%d) to get irq %02x\n",
- bridge->bus->number, PCI_SLOT(bridge->devfn),
- pin, irq);
- else
- printk(KERN_WARNING
- "PCI: Couldn't map irq for B%d,I%d,P%d\n",
- bridge->bus->number, PCI_SLOT(bridge->devfn),
- pin);
- }
- if (irq >= 0) {
- printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> %02x\n",
- dev->bus->number, PCI_SLOT(dev->devfn), pin, irq);
- dev->irq = irq;
- }
- }
- /*
- * Nothing to fixup
- * Fix out-of-range IRQ numbers
- */
- if (dev->irq >= NR_IRQS)
- dev->irq = 15; /* Spurious interrupts */
- }
-}
-
-/*
- * Register an IOSAPIC discovered via ACPI.
- */
-void __init
-dig_register_iosapic (acpi_entry_iosapic_t *iosapic)
-{
- unsigned int ver, v;
- int l, max_pin;
-
- ver = iosapic_version((unsigned long) ioremap(iosapic->address, 0));
- max_pin = (ver >> 16) & 0xff;
-
- printk("IOSAPIC Version %x.%x: address 0x%lx IRQs 0x%x - 0x%x\n",
- (ver & 0xf0) >> 4, (ver & 0x0f), iosapic->address,
- iosapic->irq_base, iosapic->irq_base + max_pin);
-
- for (l = 0; l <= max_pin; l++) {
- v = iosapic->irq_base + l;
- if (v < 16)
- v = isa_irq_to_vector(v);
- if (v > IA64_MAX_VECTORED_IRQ) {
- printk(" !!! bad IOSAPIC interrupt vector: %u\n", v);
- continue;
- }
- /* XXX Check for IOSAPIC collisions */
- iosapic_addr(v) = (unsigned long) ioremap(iosapic->address, 0);
- iosapic_baseirq(v) = iosapic->irq_base;
- }
- iosapic_init(iosapic->address, iosapic->irq_base);
-}
diff -urN linux-davidm/arch/ia64/dig/setup.c lia64/arch/ia64/dig/setup.c
--- linux-davidm/arch/ia64/dig/setup.c Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/dig/setup.c Mon Oct 30 22:28:55 2000
@@ -84,3 +84,14 @@
screen_info.orig_video_isVGA = 1; /* XXX fake */
screen_info.orig_video_ega_bx = 3; /* XXX fake */
}
+
+void
+dig_irq_init (void)
+{
+ /*
+ * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
+ * enabled.
+ */
+ outb(0xff, 0xA1);
+ outb(0xff, 0x21);
+}
diff -urN linux-davidm/arch/ia64/ia32/binfmt_elf32.c lia64/arch/ia64/ia32/binfmt_elf32.c
--- linux-davidm/arch/ia64/ia32/binfmt_elf32.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/ia32/binfmt_elf32.c Mon Oct 30 22:29:21 2000
@@ -9,6 +9,7 @@
#include <linux/types.h>
+#include <asm/param.h>
#include <asm/signal.h>
#include <asm/ia32.h>
@@ -31,6 +32,9 @@
# define CONFIG_BINFMT_ELF_MODULE CONFIG_BINFMT_ELF32_MODULE
#endif
+#undef CLOCKS_PER_SEC
+#define CLOCKS_PER_SEC IA32_CLOCKS_PER_SEC
+
extern void ia64_elf32_init(struct pt_regs *regs);
extern void put_dirty_page(struct task_struct * tsk, struct page *page, unsigned long address);
@@ -239,6 +243,12 @@
if (eppnt->p_memsz >= (1UL<<32) || addr > (1UL<<32) - eppnt->p_memsz)
return -EINVAL;
+ /*
+ * Make sure the elf interpreter doesn't get loaded at location 0
+ * so that NULL pointers correctly cause segfaults.
+ */
+ if (addr == 0)
+ addr += PAGE_SIZE;
#if 1
set_brk(ia32_mm_addr(addr), addr + eppnt->p_memsz);
memset((char *) addr + eppnt->p_filesz, 0, eppnt->p_memsz - eppnt->p_filesz);
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S lia64/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Fri Sep 8 14:34:53 2000
+++ lia64/arch/ia64/ia32/ia32_entry.S Mon Oct 30 22:29:33 2000
@@ -291,11 +291,43 @@
data8 sys_getcwd
data8 sys_capget
data8 sys_capset /* 185 */
- data8 sys_sigaltstack
+ data8 sys32_sigaltstack
data8 sys_sendfile
data8 sys32_ni_syscall /* streams1 */
data8 sys32_ni_syscall /* streams2 */
data8 sys32_vfork /* 190 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 195 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 200 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 205 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 210 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 215 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 220 */
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
/*
* CAUTION: If any system calls are added beyond this point
* then the check in `arch/ia64/kernel/ivt.S' will have
diff -urN linux-davidm/arch/ia64/ia32/ia32_ioctl.c lia64/arch/ia64/ia32/ia32_ioctl.c
--- linux-davidm/arch/ia64/ia32/ia32_ioctl.c Wed Aug 2 18:54:01 2000
+++ lia64/arch/ia64/ia32/ia32_ioctl.c Mon Oct 30 22:29:51 2000
@@ -22,81 +22,158 @@
#include <linux/if_ppp.h>
#include <linux/ixjuser.h>
#include <linux/i2o-dev.h>
+#include <../drivers/char/drm/drm.h>
+
+#define IOCTL_NR(a) ((a) & ~(_IOC_SIZEMASK << _IOC_SIZESHIFT))
+
+#define DO_IOCTL(fd, cmd, arg) ({ \
+ int _ret; \
+ mm_segment_t _old_fs = get_fs(); \
+ \
+ set_fs(KERNEL_DS); \
+ _ret = sys_ioctl(fd, cmd, (unsigned long)arg); \
+ set_fs(_old_fs); \
+ _ret; \
+})
+
+#define P(i) ((void *)(long)(i))
+
asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg);
asmlinkage long ia32_ioctl(unsigned int fd, unsigned int cmd, unsigned int arg)
{
+ long ret;
+
+ switch (IOCTL_NR(cmd)) {
+
+ case IOCTL_NR(DRM_IOCTL_VERSION):
+ break;
+ case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
+ {
+ drm_unique_t un;
+ struct {
+ unsigned int unique_len;
+ unsigned int unique;
+ } un32;
+
+ if (copy_from_user(&un32, P(arg), sizeof(un32)))
+ return -EFAULT;
+ un.unique_len = un32.unique_len;
+ un.unique = P(un32.unique);
+ ret = DO_IOCTL(fd, cmd, &un);
+ if (ret >= 0) {
+ un32.unique_len = un.unique_len;
+ if (copy_to_user(P(arg), &un32, sizeof(un32)))
+ return -EFAULT;
+ }
+ return(ret);
+ }
+ case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
+ case IOCTL_NR(DRM_IOCTL_ADD_MAP):
+ case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
+ case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
+ case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
+ case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
+ case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
+ case IOCTL_NR(DRM_IOCTL_ADD_CTX):
+ case IOCTL_NR(DRM_IOCTL_RM_CTX):
+ case IOCTL_NR(DRM_IOCTL_MOD_CTX):
+ case IOCTL_NR(DRM_IOCTL_GET_CTX):
+ case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
+ case IOCTL_NR(DRM_IOCTL_NEW_CTX):
+ case IOCTL_NR(DRM_IOCTL_RES_CTX):
+
+ case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
+ case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
+ case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
+ case IOCTL_NR(DRM_IOCTL_AGP_INFO):
+ case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
+ case IOCTL_NR(DRM_IOCTL_AGP_FREE):
+ case IOCTL_NR(DRM_IOCTL_AGP_BIND):
+ case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
+
+ /* Mga specific ioctls */
+
+ case IOCTL_NR(DRM_IOCTL_MGA_INIT):
+
+ /* I810 specific ioctls */
+
+ case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
+ case IOCTL_NR(DRM_IOCTL_I810_COPY):
+
+ /* Rage 128 specific ioctls */
- switch (cmd) {
+ case IOCTL_NR(DRM_IOCTL_R128_PACKET):
- case VFAT_IOCTL_READDIR_BOTH:
- case VFAT_IOCTL_READDIR_SHORT:
- case MTIOCGET:
- case MTIOCPOS:
- case MTIOCGETCONFIG:
- case MTIOCSETCONFIG:
- case PPPIOCSCOMPRESS:
- case PPPIOCGIDLE:
- case NCP_IOC_GET_FS_INFO_V2:
- case NCP_IOC_GETOBJECTNAME:
- case NCP_IOC_SETOBJECTNAME:
- case NCP_IOC_GETPRIVATEDATA:
- case NCP_IOC_SETPRIVATEDATA:
- case NCP_IOC_GETMOUNTUID2:
- case CAPI_MANUFACTURER_CMD:
- case VIDIOCGTUNER:
- case VIDIOCSTUNER:
- case VIDIOCGWIN:
- case VIDIOCSWIN:
- case VIDIOCGFBUF:
- case VIDIOCSFBUF:
- case MGSL_IOCSPARAMS:
- case MGSL_IOCGPARAMS:
- case ATM_GETNAMES:
- case ATM_GETLINKRATE:
- case ATM_GETTYPE:
- case ATM_GETESI:
- case ATM_GETADDR:
- case ATM_RSTADDR:
- case ATM_ADDADDR:
- case ATM_DELADDR:
- case ATM_GETCIRANGE:
- case ATM_SETCIRANGE:
- case ATM_SETESI:
- case ATM_SETESIF:
- case ATM_GETSTAT:
- case ATM_GETSTATZ:
- case ATM_GETLOOP:
- case ATM_SETLOOP:
- case ATM_QUERYLOOP:
- case ENI_SETMULT:
- case NS_GETPSTAT:
- /* case NS_SETBUFLEV: This is a duplicate case with ZATM_GETPOOLZ */
- case ZATM_GETPOOLZ:
- case ZATM_GETPOOL:
- case ZATM_SETPOOL:
- case ZATM_GETTHIST:
- case IDT77105_GETSTAT:
- case IDT77105_GETSTATZ:
- case IXJCTL_TONE_CADENCE:
- case IXJCTL_FRAMES_READ:
- case IXJCTL_FRAMES_WRITTEN:
- case IXJCTL_READ_WAIT:
- case IXJCTL_WRITE_WAIT:
- case IXJCTL_DRYBUFFER_READ:
- case I2OHRTGET:
- case I2OLCTGET:
- case I2OPARMSET:
- case I2OPARMGET:
- case I2OSWDL:
- case I2OSWUL:
- case I2OSWDEL:
- case I2OHTML:
- printk("%x:unimplemented IA32 ioctl system call\n", cmd);
- return(-EINVAL);
+ case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH):
+ case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
+ case IOCTL_NR(MTIOCGET):
+ case IOCTL_NR(MTIOCPOS):
+ case IOCTL_NR(MTIOCGETCONFIG):
+ case IOCTL_NR(MTIOCSETCONFIG):
+ case IOCTL_NR(PPPIOCSCOMPRESS):
+ case IOCTL_NR(PPPIOCGIDLE):
+ case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
+ case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
+ case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
+ case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
+ case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
+ case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
+ case IOCTL_NR(CAPI_MANUFACTURER_CMD):
+ case IOCTL_NR(VIDIOCGTUNER):
+ case IOCTL_NR(VIDIOCSTUNER):
+ case IOCTL_NR(VIDIOCGWIN):
+ case IOCTL_NR(VIDIOCSWIN):
+ case IOCTL_NR(VIDIOCGFBUF):
+ case IOCTL_NR(VIDIOCSFBUF):
+ case IOCTL_NR(MGSL_IOCSPARAMS):
+ case IOCTL_NR(MGSL_IOCGPARAMS):
+ case IOCTL_NR(ATM_GETNAMES):
+ case IOCTL_NR(ATM_GETLINKRATE):
+ case IOCTL_NR(ATM_GETTYPE):
+ case IOCTL_NR(ATM_GETESI):
+ case IOCTL_NR(ATM_GETADDR):
+ case IOCTL_NR(ATM_RSTADDR):
+ case IOCTL_NR(ATM_ADDADDR):
+ case IOCTL_NR(ATM_DELADDR):
+ case IOCTL_NR(ATM_GETCIRANGE):
+ case IOCTL_NR(ATM_SETCIRANGE):
+ case IOCTL_NR(ATM_SETESI):
+ case IOCTL_NR(ATM_SETESIF):
+ case IOCTL_NR(ATM_GETSTAT):
+ case IOCTL_NR(ATM_GETSTATZ):
+ case IOCTL_NR(ATM_GETLOOP):
+ case IOCTL_NR(ATM_SETLOOP):
+ case IOCTL_NR(ATM_QUERYLOOP):
+ case IOCTL_NR(ENI_SETMULT):
+ case IOCTL_NR(NS_GETPSTAT):
+ /* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
+ case IOCTL_NR(ZATM_GETPOOLZ):
+ case IOCTL_NR(ZATM_GETPOOL):
+ case IOCTL_NR(ZATM_SETPOOL):
+ case IOCTL_NR(ZATM_GETTHIST):
+ case IOCTL_NR(IDT77105_GETSTAT):
+ case IOCTL_NR(IDT77105_GETSTATZ):
+ case IOCTL_NR(IXJCTL_TONE_CADENCE):
+ case IOCTL_NR(IXJCTL_FRAMES_READ):
+ case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
+ case IOCTL_NR(IXJCTL_READ_WAIT):
+ case IOCTL_NR(IXJCTL_WRITE_WAIT):
+ case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
+ case IOCTL_NR(I2OHRTGET):
+ case IOCTL_NR(I2OLCTGET):
+ case IOCTL_NR(I2OPARMSET):
+ case IOCTL_NR(I2OPARMGET):
+ case IOCTL_NR(I2OSWDL):
+ case IOCTL_NR(I2OSWUL):
+ case IOCTL_NR(I2OSWDEL):
+ case IOCTL_NR(I2OHTML):
+ break;
default:
return(sys_ioctl(fd, cmd, (unsigned long)arg));
}
+ printk("%x:unimplemented IA32 ioctl system call\n", cmd);
+ return(-EINVAL);
}
diff -urN linux-davidm/arch/ia64/ia32/ia32_traps.c lia64/arch/ia64/ia32/ia32_traps.c
--- linux-davidm/arch/ia64/ia32/ia32_traps.c Thu Jun 22 07:09:44 2000
+++ lia64/arch/ia64/ia32/ia32_traps.c Mon Oct 30 22:30:04 2000
@@ -119,6 +119,6 @@
default:
return -1;
}
- force_sig_info(SIGTRAP, &siginfo, current);
+ force_sig_info(siginfo.si_signo, &siginfo, current);
return 0;
}
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c lia64/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/ia32/sys_ia32.c Mon Oct 30 22:32:34 2000
@@ -236,8 +236,6 @@
if (OFFSET4K(addr) || OFFSET4K(off))
return -EINVAL;
- if (prot & PROT_WRITE)
- prot |= PROT_EXEC;
prot |= PROT_WRITE;
front = NULL;
back = NULL;
@@ -287,23 +285,20 @@
unsigned int poff;
flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ prot |= PROT_EXEC;
if ((flags & MAP_FIXED) && ((addr & ~PAGE_MASK) || (offset & ~PAGE_MASK)))
error = do_mmap_fake(file, addr, len, prot, flags, (loff_t)offset);
- else if (!addr && (offset & ~PAGE_MASK)) {
+ else {
poff = offset & PAGE_MASK;
len += offset - poff;
down(&current->mm->mmap_sem);
- error = do_mmap(file, addr, len, prot, flags, poff);
+ error = do_mmap_pgoff(file, addr, len, prot, flags, poff >> PAGE_SHIFT);
up(&current->mm->mmap_sem);
if (!IS_ERR((void *) error))
error += offset - poff;
- } else {
- down(&current->mm->mmap_sem);
- error = do_mmap(file, addr, len, prot, flags, offset);
- up(&current->mm->mmap_sem);
}
return error;
}
@@ -2032,14 +2027,14 @@
ret = sys_times(tbuf ? &t : NULL);
set_fs (old_fs);
if (tbuf) {
- err = put_user (t.tms_utime, &tbuf->tms_utime);
- err |= __put_user (t.tms_stime, &tbuf->tms_stime);
- err |= __put_user (t.tms_cutime, &tbuf->tms_cutime);
- err |= __put_user (t.tms_cstime, &tbuf->tms_cstime);
+ err = put_user (IA32_TICK(t.tms_utime), &tbuf->tms_utime);
+ err |= __put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
+ err |= __put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
+ err |= __put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
if (err)
ret = -EFAULT;
}
- return ret;
+ return IA32_TICK(ret);
}
unsigned int
@@ -2617,6 +2612,45 @@
* manipulating the page protections...
*/
return(sys_iopl(3, 0, 0, 0));
+}
+
+typedef struct {
+ unsigned int ss_sp;
+ unsigned int ss_flags;
+ unsigned int ss_size;
+} ia32_stack_t;
+
+asmlinkage long
+sys32_sigaltstack (const ia32_stack_t *uss32, ia32_stack_t *uoss32,
+long arg2, long arg3, long arg4,
+long arg5, long arg6, long arg7,
+long stack)
+{
+ struct pt_regs *pt = (struct pt_regs *) &stack;
+ stack_t uss, uoss;
+ ia32_stack_t buf32;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ if (uss32)
+ if (copy_from_user(&buf32, (void *)A(uss32), sizeof(ia32_stack_t)))
+ return(-EFAULT);
+ uss.ss_sp = buf32.ss_sp;
+ uss.ss_flags = buf32.ss_flags;
+ uss.ss_size = buf32.ss_size;
+ set_fs(KERNEL_DS);
+ ret = do_sigaltstack(uss32 ? &uss : NULL, &uoss, pt->r12);
+ set_fs(old_fs);
+ if (ret < 0)
+ return(ret);
+ if (uoss32) {
+ buf32.ss_sp = uoss.ss_sp;
+ buf32.ss_flags = uoss.ss_flags;
+ buf32.ss_size = uoss.ss_size;
+ if (copy_to_user((void*)A(uoss32), &buf32, sizeof(ia32_stack_t)))
+ return(-EFAULT);
+ }
+ return(ret);
}
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
diff -urN linux-davidm/arch/ia64/kernel/Makefile lia64/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/Makefile Mon Oct 30 22:33:15 2000
@@ -13,7 +13,8 @@
machvec.o pal.o pci-dma.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
-obj-$(CONFIG_IA64_GENERIC) += machvec.o
+obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
+obj-$(CONFIG_IA64_DIG) += iosapic.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_PCI) += pci.o
obj-$(CONFIG_SMP) += smp.o smpboot.o
@@ -21,7 +22,7 @@
obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
O_TARGET := kernel.o
-O_OBJS := $(obj-y)
+O_OBJS := $(obj-y)
OX_OBJS := ia64_ksyms.o
clean::
diff -urN linux-davidm/arch/ia64/kernel/acpi.c lia64/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/acpi.c Mon Oct 30 22:33:27 2000
@@ -6,6 +6,8 @@
*
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999,2000 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 2000 Hewlett-Packard Co.
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
@@ -36,11 +38,14 @@
void (*pm_idle)(void);
+asm (".weak iosapic_register_legacy_irq");
+asm (".weak iosapic_init");
+
/*
* Identify usable CPU's and remember them for SMP bringup later.
*/
static void __init
-acpi_lsapic(char *p)
+acpi_lsapic (char *p)
{
int add = 1;
@@ -58,7 +63,7 @@
printk("Performance Restricted; ignoring.\n");
add = 0;
}
-
+
#ifdef CONFIG_SMP
smp_boot_data.cpu_phys_id[total_cpus] = -1;
#endif
@@ -73,83 +78,40 @@
}
/*
- * Configure legacy IRQ information in iosapic_vector
+ * Configure legacy IRQ information.
*/
static void __init
-acpi_legacy_irq(char *p)
+acpi_legacy_irq (char *p)
{
- /*
- * This is not good. ACPI is not necessarily limited to CONFIG_IA64_DIG, yet
- * ACPI does not necessarily imply IOSAPIC either. Perhaps there should be
- * a means for platform_setup() to register ACPI handlers?
- */
-#ifdef CONFIG_IA64_IRQ_ACPI
acpi_entry_int_override_t *legacy = (acpi_entry_int_override_t *) p;
- unsigned char vector;
- int i;
-
- vector = isa_irq_to_vector(legacy->isa_irq);
+ unsigned long polarity = 0, edge_triggered = 0;
/*
- * Clobber any old pin mapping. It may be that it gets replaced later on
+ * If the platform we're running doesn't define
+ * iosapic_register_legacy_irq(), we ignore this info...
*/
- for (i = 0; i < IA64_MAX_VECTORED_IRQ; i++) {
- if (i == vector)
- continue;
- if (iosapic_pin(i) == iosapic_pin(vector))
- iosapic_pin(i) = 0xff;
- }
-
- iosapic_pin(vector) = legacy->pin;
- iosapic_bus(vector) = BUS_ISA; /* This table only overrides the ISA devices */
- iosapic_busdata(vector) = 0;
-
- /*
- * External timer tick is special...
- */
- if (vector != TIMER_IRQ)
- iosapic_dmode(vector) = IO_SAPIC_LOWEST_PRIORITY;
- else
- iosapic_dmode(vector) = IO_SAPIC_FIXED;
-
+ if (!iosapic_register_legacy_irq)
+ return;
+
/* See MPS 1.4 section 4.3.4 */
switch (legacy->flags) {
- case 0x5:
- iosapic_polarity(vector) = IO_SAPIC_POL_HIGH;
- iosapic_trigger(vector) = IO_SAPIC_EDGE;
- break;
- case 0x8:
- iosapic_polarity(vector) = IO_SAPIC_POL_LOW;
- iosapic_trigger(vector) = IO_SAPIC_EDGE;
- break;
- case 0xd:
- iosapic_polarity(vector) = IO_SAPIC_POL_HIGH;
- iosapic_trigger(vector) = IO_SAPIC_LEVEL;
- break;
- case 0xf:
- iosapic_polarity(vector) = IO_SAPIC_POL_LOW;
- iosapic_trigger(vector) = IO_SAPIC_LEVEL;
- break;
- default:
+ case 0x5: polarity = 1; edge_triggered = 1; break;
+ case 0x8: polarity = 0; edge_triggered = 1; break;
+ case 0xd: polarity = 1; edge_triggered = 0; break;
+ case 0xf: polarity = 0; edge_triggered = 0; break;
+ default:
printk(" ACPI Legacy IRQ 0x%02x: Unknown flags 0x%x\n", legacy->isa_irq,
legacy->flags);
break;
}
-
-# ifdef ACPI_DEBUG
- printk("Legacy ISA IRQ %x -> IA64 Vector %x IOSAPIC Pin %x Active %s %s Trigger\n",
- legacy->isa_irq, vector, iosapic_pin(vector),
- ((iosapic_polarity(vector) == IO_SAPIC_POL_LOW) ? "Low" : "High"),
- ((iosapic_trigger(vector) == IO_SAPIC_LEVEL) ? "Level" : "Edge"));
-# endif /* ACPI_DEBUG */
-#endif /* CONFIG_IA64_IRQ_ACPI */
+ iosapic_register_legacy_irq(legacy->isa_irq, legacy->pin, polarity, edge_triggered);
}
/*
* Info on platform interrupt sources: NMI. PMI, INIT, etc.
*/
static void __init
-acpi_platform(char *p)
+acpi_platform (char *p)
{
acpi_entry_platform_src_t *plat = (acpi_entry_platform_src_t *) p;
@@ -161,8 +123,9 @@
* Parse the ACPI Multiple SAPIC Table
*/
static void __init
-acpi_parse_msapic(acpi_sapic_t *msapic)
+acpi_parse_msapic (acpi_sapic_t *msapic)
{
+ acpi_entry_iosapic_t *iosapic;
char *p, *end;
/* Base address of IPI Message Block */
@@ -172,41 +135,31 @@
end = p + (msapic->header.length - sizeof(acpi_sapic_t));
while (p < end) {
-
switch (*p) {
- case ACPI_ENTRY_LOCAL_SAPIC:
+ case ACPI_ENTRY_LOCAL_SAPIC:
acpi_lsapic(p);
break;
- case ACPI_ENTRY_IO_SAPIC:
- platform_register_iosapic((acpi_entry_iosapic_t *) p);
+ case ACPI_ENTRY_IO_SAPIC:
+ iosapic = (acpi_entry_iosapic_t *) p;
+ if (iosapic_init)
+ iosapic_init(iosapic->address, iosapic->irq_base);
break;
- case ACPI_ENTRY_INT_SRC_OVERRIDE:
+ case ACPI_ENTRY_INT_SRC_OVERRIDE:
acpi_legacy_irq(p);
break;
-
- case ACPI_ENTRY_PLATFORM_INT_SOURCE:
+
+ case ACPI_ENTRY_PLATFORM_INT_SOURCE:
acpi_platform(p);
break;
-
- default:
+
+ default:
break;
}
/* Move to next table entry. */
-#define BAD_ACPI_TABLE
-#ifdef BAD_ACPI_TABLE
- /*
- * Some prototype Lion's have a bad ACPI table
- * requiring this fix. Without this fix, those
- * machines crash during bootup.
- */
- if (p[1] == 0)
- p = end;
- else
-#endif
- p += p[1];
+ p += p[1];
}
/* Make bootup pretty */
@@ -214,7 +167,7 @@
}
int __init
-acpi_parse(acpi_rsdp_t *rsdp)
+acpi_parse (acpi_rsdp_t *rsdp)
{
acpi_rsdt_t *rsdt;
acpi_desc_table_hdr_t *hdrp;
@@ -256,7 +209,7 @@
}
#ifdef CONFIG_ACPI_KERNEL_CONFIG
- acpi_cf_terminate();
+ acpi_cf_terminate();
#endif
#ifdef CONFIG_SMP
diff -urN linux-davidm/arch/ia64/kernel/efi.c lia64/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/efi.c Mon Oct 30 22:33:51 2000
@@ -363,7 +363,7 @@
#if EFI_DEBUG
/* print EFI memory map: */
{
- efi_memory_desc_t *md = p;
+ efi_memory_desc_t *md;
void *p;
for (i = 0, p = efi_map_start; p < efi_map_end; ++i, p += efi_desc_size) {
diff -urN linux-davidm/arch/ia64/kernel/head.S lia64/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/head.S Mon Oct 30 22:34:49 2000
@@ -74,8 +74,8 @@
;;
#ifdef CONFIG_IA64_EARLY_PRINTK
- mov r2=6
- mov r3=(8<<8) | (28<<2)
+ mov r3=(6<<8) | (28<<2)
+ movl r2=6<<61
;;
mov rr[r2]=r3
;;
@@ -181,7 +181,8 @@
GLOBAL_ENTRY(ia64_load_debug_regs)
alloc r16=ar.pfs,1,0,0,0
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
lfetch.nta [in0]
#endif
mov r20=ar.lc // preserve ar.lc
@@ -754,7 +755,7 @@
mov tmp=ar.itc
(p15) br.cond.sptk .wait
;;
- ld1 tmp=[r31]
+ ld4 tmp=[r31]
;;
cmp.ne p15,p0=tmp,r0
mov tmp=ar.itc
@@ -764,7 +765,7 @@
mov tmp=1
;;
IA64_SEMFIX_INSN
- cmpxchg1.acq tmp=[r31],tmp,ar.ccv
+ cmpxchg4.acq tmp=[r31],tmp,ar.ccv
;;
cmp.eq p15,p0=tmp,r0
diff -urN linux-davidm/arch/ia64/kernel/iosapic.c lia64/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Wed Dec 31 16:00:00 1969
+++ lia64/arch/ia64/kernel/iosapic.c Mon Oct 30 22:28:42 2000
@@ -0,0 +1,495 @@
+/*
+ * I/O SAPIC support.
+ *
+ * Copyright (C) 1999 Intel Corp.
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co.
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999,2000 Walt Drummond <drummond@valinux.com>
+ *
+ * 00/04/19 D. Mosberger Rewritten to mirror more closely the x86 I/O APIC code.
+ * In particular, we now have separate handlers for edge
+ * and level triggered interrupts.
+ * 00/10/27 Asit Mallick, Goutham Rao <goutham.rao@intel.com> IRQ vector allocation
+ * PCI to vector mapping, shared PCI interrupts.
+ * 00/10/27 D. Mosberger Document things a bit more to make them more understandable.
+ * Clean up much of the old IOSAPIC cruft.
+ */
+/*
+ * Here is what the interrupt logic between a PCI device and the CPU looks like:
+ *
+ * (1) A PCI device raises one of the four interrupt pins (INTA, INTB, INTC, INTD). The
+ * device is uniquely identified by its bus-, device-, and slot-number (the function
+ * number does not matter here because all functions share the same interrupt
+ * lines).
+ *
+ * (2) The motherboard routes the interrupt line to a pin on a IOSAPIC controller.
+ * Multiple interrupt lines may have to share the same IOSAPIC pin (if they're level
+ * triggered and use the same polarity). Each interrupt line has a unique IOSAPIC
+ * irq number which can be calculated as the sum of the controller's base irq number
+ * and the IOSAPIC pin number to which the line connects.
+ *
+ * (3) The IOSAPIC uses an internal table to map the IOSAPIC pin into the IA-64 interrupt
+ * vector. This interrupt vector is then sent to the CPU.
+ *
+ * In other words, there are two levels of indirections involved:
+ *
+ * pci pin -> iosapic irq -> IA-64 vector
+ *
+ * Note: outside this module, IA-64 vectors are called "irqs". This is because that's
+ * the traditional name Linux uses for interrupt vectors.
+ */
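As a standalone sketch of the two lookups described in the comment above (the table contents here are invented for illustration; the real data comes from ACPI and the vector allocator, not these literals):

```c
#include <assert.h>

/* Hypothetical routing data -- same shape as the patch's tables, made-up values. */
struct route { int bus, slot, pin, iosapic_irq; };

static const struct route routes[] = {
	{ 0, 3, 0, 0x12 },	/* bus 0, slot 3, INTA -> IOSAPIC irq 0x12 */
	{ 0, 5, 1, 0x13 },	/* bus 0, slot 5, INTB -> IOSAPIC irq 0x13 */
};
#define NROUTES ((int) (sizeof(routes)/sizeof(routes[0])))

#define NVEC 8
/* vector -> IOSAPIC irq; -1 marks an unused vector (cf. iosapic_irq[].pin == -1) */
static const int vector_to_irq[NVEC] = { -1, -1, -1, 0x12, -1, 0x13, -1, -1 };

/* First indirection: IOSAPIC irq -> IA-64 vector, by linear scan
 * (cf. iosapic_irq_to_vector() below). */
static int irq_to_vector(int irq)
{
	for (int v = 0; v < NVEC; ++v)
		if (vector_to_irq[v] == irq)
			return v;
	return -1;
}

/* Second indirection: PCI (bus, slot, pin) -> IA-64 vector, via the
 * route table (cf. pci_pin_to_vector() below). */
static int pin_to_vector(int bus, int slot, int pin)
{
	for (int i = 0; i < NROUTES; ++i)
		if (routes[i].bus == bus && routes[i].slot == slot &&
		    routes[i].pin == pin)
			return irq_to_vector(routes[i].iosapic_irq);
	return -1;
}
```

A missing route or an unallocated vector surfaces as -1 at either level, which is exactly the case the fixup code below has to handle by walking up to the bridge.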
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/string.h>
+#include <linux/irq.h>
+
+#include <asm/acpi-ext.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/iosapic.h>
+#include <asm/machvec.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+# include <asm/acpikcfg.h>
+#endif
+
+#undef DEBUG_IRQ_ROUTING
+
+static spinlock_t iosapic_lock = SPIN_LOCK_UNLOCKED;
+
+/* PCI pin to IOSAPIC irq routing information. This info typically comes from ACPI. */
+
+static struct {
+ int num_routes;
+ struct pci_vector_struct *route;
+} pci_irq;
+
+/* This table maps IA-64 vectors to the IOSAPIC pin that generates this vector. */
+
+static struct iosapic_irq {
+ char *addr; /* base address of IOSAPIC */
+ unsigned char base_irq; /* first irq assigned to this IOSAPIC */
+ char pin; /* IOSAPIC pin (-1 => not an IOSAPIC irq) */
+ unsigned char dmode : 3; /* delivery mode (see iosapic.h) */
+ unsigned char polarity : 1; /* interrupt polarity (see iosapic.h) */
+ unsigned char trigger : 1; /* trigger mode (see iosapic.h) */
+} iosapic_irq[NR_IRQS];
+
+/*
+ * Translate IOSAPIC irq number to the corresponding IA-64 interrupt vector. If no
+ * entry exists, return -1.
+ */
+static int
+iosapic_irq_to_vector (int irq)
+{
+ int vector;
+
+ for (vector = 0; vector < NR_IRQS; ++vector)
+ if (iosapic_irq[vector].base_irq + iosapic_irq[vector].pin == irq)
+ return vector;
+ return -1;
+}
+
+/*
+ * Map PCI pin to the corresponding IA-64 interrupt vector. If no such mapping exists,
+ * return -1.
+ */
+static int
+pci_pin_to_vector (int bus, int slot, int pci_pin)
+{
+ struct pci_vector_struct *r;
+
+ for (r = pci_irq.route; r < pci_irq.route + pci_irq.num_routes; ++r)
+ if (r->bus == bus && (r->pci_id >> 16) == slot && r->pin == pci_pin)
+ return iosapic_irq_to_vector(r->irq);
+ return -1;
+}
+
+static void
+set_rte (unsigned int vector, unsigned long dest)
+{
+ unsigned long pol, trigger, dmode;
+ u32 low32, high32;
+ char *addr;
+ int pin;
+
+ pin = iosapic_irq[vector].pin;
+ if (pin < 0)
+ return; /* not an IOSAPIC interrupt */
+
+ addr = iosapic_irq[vector].addr;
+ pol = iosapic_irq[vector].polarity;
+ trigger = iosapic_irq[vector].trigger;
+ dmode = iosapic_irq[vector].dmode;
+
+ low32 = ((pol << IOSAPIC_POLARITY_SHIFT) |
+ (trigger << IOSAPIC_TRIGGER_SHIFT) |
+ (dmode << IOSAPIC_DELIVERY_SHIFT) |
+ vector);
+
+#ifdef CONFIG_IA64_AZUSA_HACKS
+ /* set Flush Disable bit */
+ if (addr != (char *) 0xc0000000fec00000)
+ low32 |= (1 << 17);
+#endif
+
+ /* dest contains both id and eid */
+ high32 = (dest << IOSAPIC_DEST_SHIFT);
+
+ writel(IOSAPIC_RTE_HIGH(pin), addr + IOSAPIC_REG_SELECT);
+ writel(high32, addr + IOSAPIC_WINDOW);
+ writel(IOSAPIC_RTE_LOW(pin), addr + IOSAPIC_REG_SELECT);
+ writel(low32, addr + IOSAPIC_WINDOW);
+}
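The paired writel() calls in set_rte() use the classic index/data idiom: one MMIO word (REG_SELECT) picks an internal register, a second (WINDOW) then reads or writes it. A sketch of that pattern against an in-memory mock (the mock device and the mask-bit position are illustrative assumptions, not the real hardware interface):

```c
#include <assert.h>
#include <stdint.h>

#define MOCK_REG_SELECT 0x00
#define MOCK_WINDOW     0x10
#define MOCK_MASK_SHIFT 16	/* assumed position of the RTE mask bit */

/* Stand-in for the memory-mapped IOSAPIC: a select register plus a
 * small indexed register file reached through the window. */
struct mock_iosapic { uint32_t select; uint32_t regs[64]; };

static void mock_writel(struct mock_iosapic *m, uint32_t val, int off)
{
	if (off == MOCK_REG_SELECT)
		m->select = val;
	else			/* off == MOCK_WINDOW */
		m->regs[m->select % 64] = val;
}

static uint32_t mock_readl(struct mock_iosapic *m, int off)
{
	return off == MOCK_WINDOW ? m->regs[m->select % 64] : m->select;
}

/* Read-modify-write of one RTE word, mirroring mask_irq(): select the
 * register, read it, set only the mask bit, write it back. */
static void mask_rte(struct mock_iosapic *m, uint32_t rte_index)
{
	mock_writel(m, rte_index, MOCK_REG_SELECT);
	uint32_t low = mock_readl(m, MOCK_WINDOW);
	low |= 1u << MOCK_MASK_SHIFT;
	mock_writel(m, low, MOCK_WINDOW);
}

/* Helper: mask one entry on a fresh mock and return its final value. */
static uint32_t demo_mask_pin(uint32_t idx)
{
	struct mock_iosapic m = { 0, { 0 } };
	mask_rte(&m, idx);
	return m.regs[idx % 64];
}
```

Because the select register is shared state between the two accesses, the sequence is not atomic; that is why mask_irq()/unmask_irq() in the patch bracket it with iosapic_lock.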
+
+static void
+nop (unsigned int vector)
+{
+ /* do nothing... */
+}
+
+static void
+mask_irq (unsigned int vector)
+{
+ unsigned long flags;
+ char *addr;
+ u32 low32;
+ int pin;
+
+ addr = iosapic_irq[vector].addr;
+ pin = iosapic_irq[vector].pin;
+
+ if (pin < 0)
+ return; /* not an IOSAPIC interrupt! */
+
+ spin_lock_irqsave(&iosapic_lock, flags);
+ {
+ writel(IOSAPIC_RTE_LOW(pin), addr + IOSAPIC_REG_SELECT);
+ low32 = readl(addr + IOSAPIC_WINDOW);
+
+ low32 |= (1 << IOSAPIC_MASK_SHIFT); /* set only the mask bit */
+ writel(low32, addr + IOSAPIC_WINDOW);
+ }
+ spin_unlock_irqrestore(&iosapic_lock, flags);
+}
+
+static void
+unmask_irq (unsigned int vector)
+{
+ unsigned long flags;
+ char *addr;
+ u32 low32;
+ int pin;
+
+ addr = iosapic_irq[vector].addr;
+ pin = iosapic_irq[vector].pin;
+ if (pin < 0)
+ return; /* not an IOSAPIC interrupt! */
+
+ spin_lock_irqsave(&iosapic_lock, flags);
+ {
+ writel(IOSAPIC_RTE_LOW(pin), addr + IOSAPIC_REG_SELECT);
+ low32 = readl(addr + IOSAPIC_WINDOW);
+
+ low32 &= ~(1 << IOSAPIC_MASK_SHIFT); /* clear only the mask bit */
+ writel(low32, addr + IOSAPIC_WINDOW);
+ }
+ spin_unlock_irqrestore(&iosapic_lock, flags);
+}
+
+
+static void
+iosapic_set_affinity (unsigned int vector, unsigned long mask)
+{
+ printk("iosapic_set_affinity: not implemented yet\n");
+}
+
+/*
+ * Handlers for level-triggered interrupts.
+ */
+
+static unsigned int
+iosapic_startup_level_irq (unsigned int vector)
+{
+ unmask_irq(vector);
+ return 0;
+}
+
+static void
+iosapic_end_level_irq (unsigned int vector)
+{
+ writel(vector, iosapic_irq[vector].addr + IOSAPIC_EOI);
+}
+
+#define iosapic_shutdown_level_irq mask_irq
+#define iosapic_enable_level_irq unmask_irq
+#define iosapic_disable_level_irq mask_irq
+#define iosapic_ack_level_irq nop
+
+struct hw_interrupt_type irq_type_iosapic_level = {
+ typename: "IO-SAPIC-level",
+ startup: iosapic_startup_level_irq,
+ shutdown: iosapic_shutdown_level_irq,
+ enable: iosapic_enable_level_irq,
+ disable: iosapic_disable_level_irq,
+ ack: iosapic_ack_level_irq,
+ end: iosapic_end_level_irq,
+ set_affinity: iosapic_set_affinity
+};
+
+/*
+ * Handlers for edge-triggered interrupts.
+ */
+
+static unsigned int
+iosapic_startup_edge_irq (unsigned int vector)
+{
+ unmask_irq(vector);
+ /*
+ * IOSAPIC simply drops interrupts pended while the
+ * corresponding pin was masked, so we can't know if an
+ * interrupt is pending already. Let's hope not...
+ */
+ return 0;
+}
+
+static void
+iosapic_ack_edge_irq (unsigned int vector)
+{
+ /*
+ * Once we have recorded IRQ_PENDING already, we can mask the
+ * interrupt for real. This prevents IRQ storms from unhandled
+ * devices.
+ */
+ if ((irq_desc[vector].status & (IRQ_PENDING|IRQ_DISABLED)) == (IRQ_PENDING|IRQ_DISABLED))
+ mask_irq(vector);
+}
+
+#define iosapic_enable_edge_irq unmask_irq
+#define iosapic_disable_edge_irq nop
+#define iosapic_end_edge_irq nop
+
+struct hw_interrupt_type irq_type_iosapic_edge = {
+ typename: "IO-SAPIC-edge",
+ startup: iosapic_startup_edge_irq,
+ shutdown: iosapic_disable_edge_irq,
+ enable: iosapic_enable_edge_irq,
+ disable: iosapic_disable_edge_irq,
+ ack: iosapic_ack_edge_irq,
+ end: iosapic_end_edge_irq,
+ set_affinity: iosapic_set_affinity
+};
+
+static unsigned int
+iosapic_version (char *addr)
+{
+ /*
+ * IOSAPIC Version Register return 32 bit structure like:
+ * {
+ * unsigned int version : 8;
+ * unsigned int reserved1 : 8;
+ * unsigned int pins : 8;
+ * unsigned int reserved2 : 8;
+ * }
+ */
+ writel(IOSAPIC_VERSION, addr + IOSAPIC_REG_SELECT);
+ return readl(IOSAPIC_WINDOW + addr);
+}
+
+/*
+ * ACPI calls this when it finds an entry for a legacy ISA interrupt. Note that the
+ * irq_base and IOSAPIC address must be set in iosapic_init().
+ */
+void
+iosapic_register_legacy_irq (unsigned long irq,
+ unsigned long pin, unsigned long polarity,
+ unsigned long edge_triggered)
+{
+ unsigned int vector = isa_irq_to_vector(irq);
+
+#ifdef DEBUG_IRQ_ROUTING
+ printk("ISA: IRQ %u -> IOSAPIC irq 0x%02x (%s, %s) -> vector %02x\n",
+ (unsigned) irq, (unsigned) pin,
+ polarity ? "high" : "low", edge_triggered ? "edge" : "level",
+ vector);
+#endif
+
+ iosapic_irq[vector].pin = pin;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+ iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW;
+ iosapic_irq[vector].trigger = edge_triggered ? IOSAPIC_EDGE : IOSAPIC_LEVEL;
+}
+
+void __init
+iosapic_init (unsigned long phys_addr, unsigned int base_irq)
+{
+ struct hw_interrupt_type *irq_type;
+ int i, irq, max_pin, vector;
+ unsigned int ver;
+ char *addr;
+ static int first_time = 1;
+
+ if (first_time) {
+ first_time = 0;
+
+ for (vector = 0; vector < NR_IRQS; ++vector)
+ iosapic_irq[vector].pin = -1; /* mark as unused */
+
+ /*
+ * Fetch the PCI interrupt routing table:
+ */
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+ acpi_cf_get_pci_vectors(&pci_irq.route, &pci_irq.num_routes);
+#else
+ pci_irq.route = (struct pci_vector_struct *) __va(ia64_boot_param.pci_vectors);
+ pci_irq.num_routes = ia64_boot_param.num_pci_vectors;
+#endif
+ }
+
+ addr = ioremap(phys_addr, 0);
+
+ ver = iosapic_version(addr);
+ max_pin = (ver >> 16) & 0xff;
+
+ printk("IOSAPIC: version %x.%x, address 0x%lx, IRQs 0x%02x-0x%02x\n",
+ (ver & 0xf0) >> 4, (ver & 0x0f), phys_addr, base_irq, base_irq + max_pin);
+
+ if (base_irq == 0)
+ /*
+ * Map the legacy ISA devices into the IOSAPIC data. Some of these may
+ * get reprogrammed later on with data from the ACPI Interrupt Source
+ * Override table.
+ */
+ for (irq = 0; irq < 16; ++irq) {
+ vector = isa_irq_to_vector(irq);
+ iosapic_irq[vector].addr = addr;
+ iosapic_irq[vector].base_irq = 0;
+ if (iosapic_irq[vector].pin == -1)
+ iosapic_irq[vector].pin = irq;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+ iosapic_irq[vector].trigger = IOSAPIC_EDGE;
+ iosapic_irq[vector].polarity = IOSAPIC_POL_HIGH;
+#ifdef DEBUG_IRQ_ROUTING
+ printk("ISA: IRQ %u -> IOSAPIC irq 0x%02x (high, edge) -> vector 0x%02x\n",
+ irq, iosapic_irq[vector].base_irq + iosapic_irq[vector].pin,
+ vector);
+#endif
+ irq_type = &irq_type_iosapic_edge;
+ if (irq_desc[vector].handler != irq_type) {
+ if (irq_desc[vector].handler != &no_irq_type)
+ printk("iosapic_init: changing vector 0x%02x from %s to "
+ "%s\n", irq, irq_desc[vector].handler->typename,
+ irq_type->typename);
+ irq_desc[vector].handler = irq_type;
+ }
+
+ /* program the IOSAPIC routing table: */
+ set_rte(vector, (ia64_get_lid() >> 16) & 0xffff);
+ }
+
+#ifndef CONFIG_IA64_SOFTSDV_HACKS
+ for (i = 0; i < pci_irq.num_routes; i++) {
+ irq = pci_irq.route[i].irq;
+
+ if ((unsigned) (irq - base_irq) > max_pin)
+ /* the interrupt route is for another controller... */
+ continue;
+
+ if (irq < 16)
+ vector = isa_irq_to_vector(irq);
+ else {
+ vector = iosapic_irq_to_vector(irq);
+ if (vector < 0)
+ /* new iosapic irq: allocate a vector for it */
+ vector = ia64_alloc_irq();
+ }
+
+ iosapic_irq[vector].addr = addr;
+ iosapic_irq[vector].base_irq = base_irq;
+ iosapic_irq[vector].pin = (irq - base_irq);
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+ iosapic_irq[vector].trigger = IOSAPIC_LEVEL;
+ iosapic_irq[vector].polarity = IOSAPIC_POL_LOW;
+
+# ifdef DEBUG_IRQ_ROUTING
+ printk("PCI: (B%d,I%d,P%d) -> IOSAPIC irq 0x%02x -> vector 0x%02x\n",
+ pci_irq.route[i].bus, pci_irq.route[i].pci_id>>16, pci_irq.route[i].pin,
+ iosapic_irq[vector].base_irq + iosapic_irq[vector].pin, vector);
+# endif
+ irq_type = &irq_type_iosapic_level;
+ if (irq_desc[vector].handler != irq_type){
+ if (irq_desc[vector].handler != &no_irq_type)
+ printk("iosapic_init: changing vector 0x%02x from %s to %s\n",
+ vector, irq_desc[vector].handler->typename,
+ irq_type->typename);
+ irq_desc[vector].handler = irq_type;
+ }
+
+ /* program the IOSAPIC routing table: */
+ set_rte(vector, (ia64_get_lid() >> 16) & 0xffff);
+ }
+#endif /* !CONFIG_IA64_SOFTSDV_HACKS */
+}
+
+void
+iosapic_pci_fixup (void)
+{
+ struct pci_dev *dev;
+ unsigned char pin;
+ int vector;
+
+ pci_for_each_dev(dev) {
+ pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
+ if (pin) {
+ pin--; /* interrupt pins are numbered starting from 1 */
+ vector = pci_pin_to_vector(dev->bus->number, PCI_SLOT(dev->devfn), pin);
+ if (vector < 0 && dev->bus->parent) {
+ /* go back to the bridge */
+ struct pci_dev *bridge = dev->bus->self;
+
+ if (bridge) {
+ /* allow for multiple bridges on an adapter */
+ do {
+ /* do the bridge swizzle... */
+ pin = (pin + PCI_SLOT(dev->devfn)) % 4;
+ vector = pci_pin_to_vector(bridge->bus->number,
+ PCI_SLOT(bridge->devfn),
+ pin);
+ } while (vector < 0 && (bridge = bridge->bus->self));
+ }
+ if (vector >= 0)
+ printk(KERN_WARNING
+ "PCI: using PPB(B%d,I%d,P%d) to get vector %02x\n",
+ bridge->bus->number, PCI_SLOT(bridge->devfn),
+ pin, vector);
+ else
+ printk(KERN_WARNING
+ "PCI: Couldn't map irq for (B%d,I%d,P%d)o\n",
+ bridge->bus->number, PCI_SLOT(bridge->devfn),
+ pin);
+ }
+ if (vector >= 0) {
+ printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> 0x%02x\n",
+ dev->bus->number, PCI_SLOT(dev->devfn), pin, vector);
+ dev->irq = vector;
+ }
+ }
+ /*
+ * Nothing to fixup
+ * Fix out-of-range IRQ numbers
+ */
+ if (dev->irq >= NR_IRQS)
+ dev->irq = 15; /* Spurious interrupts */
+ }
+}
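The do/while loop in iosapic_pci_fixup() above applies the standard PCI-to-PCI bridge swizzle: each bridge crossing rotates the device's interrupt pin by the device's slot number. A self-contained sketch of that rotation (the slot paths are hypothetical; pins are 0-based, INTA == 0, matching the `pin--` in the code):

```c
#include <assert.h>

/* Walk from a device up toward the root across nlevels bridges,
 * applying (pin + slot) % 4 at each level, as the fixup loop does. */
static int swizzle_pin(int pin, const int *slots, int nlevels)
{
	for (int i = 0; i < nlevels; ++i)
		pin = (pin + slots[i]) % 4;
	return pin;
}
```

The point of the rotation is that four devices behind one bridge, all wired to INTA, end up spread across the bridge's four upstream pins instead of piling onto one.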
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c lia64/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/irq_ia64.c Mon Oct 30 22:35:56 2000
@@ -7,6 +7,9 @@
*
* 6/10/99: Updated to bring in sync with x86 version to facilitate
* support for SMP and different interrupt controllers.
+ *
+ * 09/15/00 Goutham Rao <goutham.rao@intel.com> Implemented pci_irq_to_vector
+ * PCI to vector allocation routine.
*/
#include <linux/config.h>
@@ -43,15 +46,25 @@
unsigned long ipi_base_addr = (__IA64_UNCACHED_OFFSET | IPI_DEFAULT_BASE_ADDR);
/*
- * Legacy IRQ to IA-64 vector translation table. Any vector not in
- * this table maps to itself (ie: irq 0x30 => IA64 vector 0x30)
+ * Legacy IRQ to IA-64 vector translation table.
*/
__u8 isa_irq_to_vector_map[16] = {
/* 8259 IRQ translation, first 16 entries */
- 0x60, 0x50, 0x10, 0x51, 0x52, 0x53, 0x43, 0x54,
- 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x40, 0x41
+ 0x2f, 0x20, 0x2e, 0x2d, 0x2c, 0x2b, 0x2a, 0x29,
+ 0x28, 0x27, 0x26, 0x25, 0x24, 0x23, 0x22, 0x21
};
+int
+ia64_alloc_irq (void)
+{
+ static int next_irq = FIRST_DEVICE_IRQ;
+
+ if (next_irq > LAST_DEVICE_IRQ)
+ /* XXX could look for sharable vectors instead of panic'ing... */
+ panic("ia64_alloc_irq: out of interrupt vectors!");
+ return next_irq++;
+}
+
#ifdef CONFIG_ITANIUM_A1_SPECIFIC
int usbfix;
@@ -217,7 +230,7 @@
}
void
-ipi_send (int cpu, int vector, int delivery_mode, int redirect)
+ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect)
{
unsigned long ipi_addr;
unsigned long ipi_data;
diff -urN linux-davidm/arch/ia64/kernel/ivt.S lia64/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/ivt.S Tue Oct 31 00:02:02 2000
@@ -924,7 +924,7 @@
alloc r15=ar.pfs,0,0,6,0 // must first in an insn group
;;
ld4 r8=[r14],8 // r8 = EAX (syscall number)
- mov r15=190 // sys_vfork - last implemented system call
+ mov r15=222 // sys_vfork - last implemented system call
;;
cmp.leu.unc p6,p7=r8,r15
ld4 out1=[r14],8 // r9 = ecx
diff -urN linux-davidm/arch/ia64/kernel/mca.c lia64/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/mca.c Mon Oct 30 23:56:26 2000
@@ -19,6 +19,7 @@
#include <linux/irq.h>
#include <linux/smp_lock.h>
+#include <asm/machvec.h>
#include <asm/page.h>
#include <asm/ptrace.h>
#include <asm/system.h>
@@ -365,7 +366,7 @@
void
ia64_mca_wakeup(int cpu)
{
- ipi_send(cpu, IA64_MCA_WAKEUP_INT_VECTOR, IA64_IPI_DM_INT, 0);
+ ia64_send_ipi(cpu, IA64_MCA_WAKEUP_INT_VECTOR, IA64_IPI_DM_INT, 0);
ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
}
diff -urN linux-davidm/arch/ia64/kernel/pci.c lia64/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Thu Jun 22 07:09:44 2000
+++ lia64/arch/ia64/kernel/pci.c Mon Oct 30 22:36:54 2000
@@ -56,7 +56,8 @@
/* Macro to build a PCI configuration address to be passed as a parameter to SAL. */
-#define PCI_CONFIG_ADDRESS(dev, where) (((u64) dev->bus->number << 16) | ((u64) (dev->devfn & 0xff) << 8) | (where & 0xff))
+#define PCI_CONFIG_ADDRESS(dev, where) \
+ (((u64) dev->bus->number << 16) | ((u64) (dev->devfn & 0xff) << 8) | (where & 0xff))
static int
pci_conf_read_config_byte(struct pci_dev *dev, int where, u8 *value)
@@ -109,7 +110,6 @@
return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS(dev, where), 4, value);
}
-
static struct pci_ops pci_conf = {
pci_conf_read_config_byte,
pci_conf_read_config_word,
@@ -120,35 +120,18 @@
};
/*
- * Try to find PCI BIOS. This will always work for IA64.
- */
-
-static struct pci_ops * __init
-pci_find_bios(void)
-{
- return &pci_conf;
-}
-
-/*
* Initialization. Uses the SAL interface
*/
-
-#define PCI_BUSES_TO_SCAN 255
-
void __init
pcibios_init(void)
{
+# define PCI_BUSES_TO_SCAN 255
struct pci_ops *ops = NULL;
int i;
- if ((ops = pci_find_bios()) == NULL) {
- printk("PCI: No PCI bus detected\n");
- return;
- }
-
printk("PCI: Probing PCI hardware\n");
for (i = 0; i < PCI_BUSES_TO_SCAN; i++)
- pci_scan_bus(i, ops, NULL);
+ pci_scan_bus(i, &pci_conf, NULL);
platform_pci_fixup();
return;
}
@@ -157,7 +140,6 @@
* Called after each bus is probed, but before its children
* are examined.
*/
-
void __init
pcibios_fixup_bus(struct pci_bus *b)
{
@@ -207,7 +189,6 @@
/*
* PCI BIOS setup, always defaults to SAL interface
*/
-
char * __init
pcibios_setup(char *str)
{
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c lia64/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/perfmon.c Mon Oct 30 22:37:23 2000
@@ -4,18 +4,20 @@
*
* Originaly Written by Ganesh Venkitachalam, IBM Corp.
* Modifications by David Mosberger-Tang, Hewlett-Packard Co.
+ * Modifications by Stephane Eranian, Hewlett-Packard Co.
* Copyright (C) 1999 Ganesh Venkitachalam <venkitac@us.ibm.com>
* Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
*/
#include <linux/config.h>
+
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/smp_lock.h>
#include <linux/proc_fs.h>
-#include <linux/ptrace.h>
#include <asm/errno.h>
#include <asm/hw_irq.h>
@@ -58,19 +60,51 @@
#define MAX_PERF_COUNTER 4 /* true for Itanium, at least */
#define PMU_FIRST_COUNTER 4 /* first generic counter */
-#define WRITE_PMCS_AND_START 0xa0
-#define WRITE_PMCS 0xa1
-#define READ_PMDS 0xa2
-#define STOP_PMCS 0xa3
+#define PFM_WRITE_PMCS 0xa0
+#define PFM_WRITE_PMDS 0xa1
+#define PFM_READ_PMDS 0xa2
+#define PFM_STOP 0xa3
+#define PFM_START 0xa4
+#define PFM_ENABLE 0xa5 /* unfreeze only */
+#define PFM_DISABLE 0xa6 /* freeze only */
+/*
+ * Those 2 are just meant for debugging. I considered using sysctl() for
+ * that but it is a little bit too pervasive. This solution is at least
+ * self-contained.
+ */
+#define PFM_DEBUG_ON 0xe0
+#define PFM_DEBUG_OFF 0xe1
+
+#ifdef CONFIG_SMP
+#define cpu_is_online(i) (cpu_online_map & (1UL << i))
+#else
+#define cpu_is_online(i) 1
+#endif
+#define PMC_IS_IMPL(i) (pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1))))
+#define PMD_IS_IMPL(i) (pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
+#define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
+#define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
/*
* this structure needs to be enhanced
*/
typedef struct {
+ unsigned long pfr_reg_num; /* which register */
+ unsigned long pfr_reg_value; /* configuration (PMC) or initial value (PMD) */
+ unsigned long pfr_reg_reset; /* reset value on overflow (PMD) */
+ void *pfr_smpl_buf; /* pointer to user buffer for EAR/BTB */
+ unsigned long pfr_smpl_size; /* size of user buffer for EAR/BTB */
+ pid_t pfr_notify_pid; /* process to notify */
+ int pfr_notify_sig; /* signal for notification, 0=no notification */
+} perfmon_req_t;
+
+#if 0
+typedef struct {
unsigned long pmu_reg_data; /* generic PMD register */
unsigned long pmu_reg_num; /* which register number */
} perfmon_reg_t;
+#endif
/*
* This structure is initialize at boot time and contains
@@ -78,86 +112,141 @@
* by PAL
*/
typedef struct {
- unsigned long perf_ovfl_val; /* overflow value for generic counters */
- unsigned long max_pmc; /* highest PMC */
- unsigned long max_pmd; /* highest PMD */
- unsigned long max_counters; /* number of generic counter pairs (PMC/PMD) */
+ unsigned long perf_ovfl_val; /* overflow value for generic counters */
+ unsigned long max_counters; /* upper limit on counter pair (PMC/PMD) */
+ unsigned long impl_regs[16]; /* buffer used to hold implemented PMC/PMD mask */
} pmu_config_t;
-/* XXX will go static when ptrace() is cleaned */
-unsigned long perf_ovfl_val; /* overflow value for generic counters */
-
static pmu_config_t pmu_conf;
+/* for debug only */
+static unsigned long pfm_debug=1; /* 0= nodebug, >0= debug output on */
+#define DBprintk(a) {\
+ if (pfm_debug >0) { printk a; } \
+}
+
/*
- * could optimize to avoid cache conflicts in SMP
+ * could optimize to avoid cache line conflicts in SMP
*/
-unsigned long pmds[NR_CPUS][MAX_PERF_COUNTER];
+static struct task_struct *pmu_owners[NR_CPUS];
-asmlinkage unsigned long
-sys_perfmonctl (int cmd, int count, void *ptr, long arg4, long arg5, long arg6, long arg7, long arg8, long stack)
+static int
+do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs)
{
- struct pt_regs *regs = (struct pt_regs *) &stack;
- perfmon_reg_t tmp, *cptr = ptr;
- unsigned long cnum;
+ perfmon_req_t tmp;
int i;
switch (cmd) {
- case WRITE_PMCS: /* Writes to PMC's and clears PMDs */
- case WRITE_PMCS_AND_START: /* Also starts counting */
+ case PFM_WRITE_PMCS:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
+ if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
+
+ for (i = 0; i < count; i++, req++) {
+ copy_from_user(&tmp, req, sizeof(tmp));
+
+ /* XXX needs to check validity of the data maybe */
+
+ if (!PMC_IS_IMPL(tmp.pfr_reg_num)) {
+ DBprintk((__FUNCTION__ " invalid pmc[%ld]\n", tmp.pfr_reg_num));
+ return -EINVAL;
+ }
+
+ /* XXX: for counters, need to do some checks */
+ if (PMC_IS_COUNTER(tmp.pfr_reg_num)) {
+ current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].sig = tmp.pfr_notify_sig;
+ current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].pid = tmp.pfr_notify_pid;
+
+ DBprintk((__FUNCTION__" setting PMC[%ld] send sig %d to %d\n",tmp.pfr_reg_num, tmp.pfr_notify_sig, tmp.pfr_notify_pid));
+ }
+ ia64_set_pmc(tmp.pfr_reg_num, tmp.pfr_reg_value);
+
+ DBprintk((__FUNCTION__" setting PMC[%ld]=0x%lx\n", tmp.pfr_reg_num, tmp.pfr_reg_value));
+ }
+ /*
+ * we have to set this here even though we haven't necessarily started monitoring
+ * because we may be context switched out
+ */
+ current->thread.flags |= IA64_THREAD_PM_VALID;
+ break;
+
+ case PFM_WRITE_PMDS:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
+ if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
+
+ for (i = 0; i < count; i++, req++) {
+ copy_from_user(&tmp, req, sizeof(tmp));
+
+ if (!PMD_IS_IMPL(tmp.pfr_reg_num)) return -EINVAL;
+
+ /* update virtualized (64bits) counter */
+ if (PMD_IS_COUNTER(tmp.pfr_reg_num)) {
+ current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].val = tmp.pfr_reg_value & ~pmu_conf.perf_ovfl_val;
+ current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].rval = tmp.pfr_reg_reset;
+ }
+ /* writes to the unimplemented part are ignored, so this is safe */
+ ia64_set_pmd(tmp.pfr_reg_num, tmp.pfr_reg_value);
+ /* to go away */
+ ia64_srlz_d();
+ DBprintk((__FUNCTION__" setting PMD[%ld]: pmod.val=0x%lx pmd=0x%lx rval=0x%lx\n", tmp.pfr_reg_num, current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].val, ia64_get_pmd(tmp.pfr_reg_num),current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].rval));
+ }
+ /*
+ * we have to set this here even though we haven't necessarily started monitoring
+ * because we may be context switched out
+ */
+ current->thread.flags |= IA64_THREAD_PM_VALID;
+ break;
+
+ case PFM_START:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
- if (!access_ok(VERIFY_READ, cptr, sizeof(struct perfmon_reg_t)*count))
- return -EFAULT;
+ pmu_owners[smp_processor_id()] = current;
- for (i = 0; i < count; i++, cptr++) {
+ /* will start monitoring right after rfi */
+ ia64_psr(regs)->up = 1;
- copy_from_user(&tmp, cptr, sizeof(tmp));
+ /*
+ * mark the state as valid.
+ * this will trigger save/restore at context switch
+ */
+ current->thread.flags |= IA64_THREAD_PM_VALID;
- /* XXX need to check validity of pmu_reg_num and perhaps data!! */
+ ia64_set_pmc(0, 0);
- if (tmp.pmu_reg_num > pmu_conf.max_pmc || tmp.pmu_reg_num == 0) return -EFAULT;
+ break;
- ia64_set_pmc(tmp.pmu_reg_num, tmp.pmu_reg_data);
+ case PFM_ENABLE:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
- /* to go away */
- if (tmp.pmu_reg_num >= PMU_FIRST_COUNTER && tmp.pmu_reg_num < PMU_FIRST_COUNTER+pmu_conf.max_counters) {
- ia64_set_pmd(tmp.pmu_reg_num, 0);
- pmds[smp_processor_id()][tmp.pmu_reg_num - PMU_FIRST_COUNTER] = 0;
+ pmu_owners[smp_processor_id()] = current;
- printk(__FUNCTION__" setting PMC/PMD[%ld] es=0x%lx pmd[%ld]=%lx\n", tmp.pmu_reg_num, (tmp.pmu_reg_data>>8) & 0x7f, tmp.pmu_reg_num, ia64_get_pmd(tmp.pmu_reg_num));
- } else
- printk(__FUNCTION__" setting PMC[%ld]=0x%lx\n", tmp.pmu_reg_num, tmp.pmu_reg_data);
- }
-
- if (cmd == WRITE_PMCS_AND_START) {
-#if 0
-/* irrelevant with user monitors */
- local_irq_save(flags);
-
- dcr = ia64_get_dcr();
- dcr |= IA64_DCR_PP;
- ia64_set_dcr(dcr);
-
- local_irq_restore(flags);
-#endif
+ /*
+ * mark the state as valid.
+ * this will trigger save/restore at context switch
+ */
+ current->thread.flags |= IA64_THREAD_PM_VALID;
+ /* simply unfreeze */
ia64_set_pmc(0, 0);
+ break;
- /* will start monitoring right after rfi */
- ia64_psr(regs)->up = 1;
- }
- /*
- * mark the state as valid.
- * this will trigger save/restore at context switch
- */
- current->thread.flags |= IA64_THREAD_PM_VALID;
- break;
-
- case READ_PMDS:
- if (count <= 0 || count > MAX_PERF_COUNTER)
- return -EINVAL;
- if (!access_ok(VERIFY_WRITE, cptr, sizeof(struct perfmon_reg_t)*count))
- return -EFAULT;
+ case PFM_DISABLE:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
+ /* simply freeze */
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
+ break;
+
+ case PFM_READ_PMDS:
+ if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
+ if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
/* This looks shady, but IMHO this will work fine. This is
* the sequence that I could come up with to avoid races
@@ -187,16 +276,31 @@
* is the irq_save/restore needed?
*/
+ for (i = 0; i < count; i++, req++) {
+ unsigned long val=0;
- /* XXX: This needs to change to read more than just the counters */
- for (i = 0, cnum = PMU_FIRST_COUNTER;i < count; i++, cnum++, cptr++) {
+ copy_from_user(&tmp, req, sizeof(tmp));
- tmp.pmu_reg_data = (pmds[smp_processor_id()][i]
- + (ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+ if (!PMD_IS_IMPL(tmp.pfr_reg_num)) return -EINVAL;
- tmp.pmu_reg_num = cnum;
+ if (PMD_IS_COUNTER(tmp.pfr_reg_num)) {
+ if (task == current) {
+ val = ia64_get_pmd(tmp.pfr_reg_num) & pmu_conf.perf_ovfl_val;
+ } else {
+ val = task->thread.pmd[tmp.pfr_reg_num - PMU_FIRST_COUNTER] & pmu_conf.perf_ovfl_val;
+ }
+ val += task->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].val;
+ } else {
+ /* for now */
+ if (task != current) return -EINVAL;
+
+ val = ia64_get_pmd(tmp.pfr_reg_num);
+ }
+ tmp.pfr_reg_value = val;
- if (copy_to_user(cptr, &tmp, sizeof(tmp))) return -EFAULT;
+DBprintk((__FUNCTION__" reading PMD[%ld]=0x%lx\n", tmp.pfr_reg_num, val));
+
+ if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
}
#if 0
/* irrelevant with user monitors */
@@ -209,11 +313,18 @@
#endif
break;
- case STOP_PMCS:
+ case PFM_STOP:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
ia64_set_pmc(0, 1);
ia64_srlz_d();
- for (i = 0; i < MAX_PERF_COUNTER; ++i)
- ia64_set_pmc(4+i, 0);
+
+ ia64_psr(regs)->up = 0;
+
+ current->thread.flags &= ~IA64_THREAD_PM_VALID;
+
+ pmu_owners[smp_processor_id()] = NULL;
#if 0
/* irrelevant with user monitors */
@@ -225,48 +336,140 @@
ia64_psr(regs)->up = 0;
#endif
- current->thread.flags &= ~(IA64_THREAD_PM_VALID);
-
break;
+ case PFM_DEBUG_ON:
+ printk(__FUNCTION__" debuggin on\n");
+ pfm_debug = 1;
+ break;
+
+ case PFM_DEBUG_OFF:
+ printk(__FUNCTION__" debuggin off\n");
+ pfm_debug = 0;
+ break;
+
default:
+ DBprintk((__FUNCTION__" UNknown command 0x%x\n", cmd));
return -EINVAL;
break;
}
return 0;
}
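The PFM_READ_PMDS arithmetic above (the software `.val` plus the hardware PMD masked with `perf_ovfl_val`) implements a 64-bit virtual counter on top of a narrower hardware counter, with overflow interrupts folding wraps into the software part. A hedged sketch of that bookkeeping (the 32-bit width is an assumed example; the real width and mask come from PAL):

```c
#include <assert.h>
#include <stdint.h>

#define PERF_OVFL_VAL ((1ULL << 32) - 1)	/* e.g. a 32-bit-wide hardware counter */

struct vcounter {
	uint64_t soft;		/* software part (cf. thread.pmu_counters[].val) */
	uint64_t hw_pmd;	/* stand-in for the live PMD register */
};

/* Overflow interrupt path: account one full wrap of the hardware counter. */
static void vcounter_overflow(struct vcounter *c)
{
	c->soft += PERF_OVFL_VAL + 1;
	c->hw_pmd = 0;		/* hardware counts on from zero */
}

/* Read path, as in the PFM_READ_PMDS case: soft part + masked hardware part. */
static uint64_t vcounter_read(const struct vcounter *c)
{
	return c->soft + (c->hw_pmd & PERF_OVFL_VAL);
}

/* Helper: one wrap, then some further hardware counting, then a read. */
static uint64_t demo_after_one_wrap(uint64_t hw_after_wrap)
{
	struct vcounter c = { 0, 0 };
	vcounter_overflow(&c);
	c.hw_pmd = hw_after_wrap;
	return vcounter_read(&c);
}
```

This is why the overflow handler must credit the counter of the task that owns the PMU on this CPU, not whatever `current` happens to be mid-context-switch, which is exactly what the pmu_owners[] bookkeeping below is for.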
-static inline void
-update_counters (void)
+asmlinkage int
+sys_perfmonctl (int pid, int cmd, int flags, perfmon_req_t *req, int count, long arg6, long arg7, long arg8, long stack)
{
- unsigned long mask, i, cnum, val;
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ struct task_struct *child = current;
+ int ret;
- mask = ia64_get_pmc(0) >> 4;
- for (i = 0, cnum = PMU_FIRST_COUNTER ; i < pmu_conf.max_counters; cnum++, i++, mask >>= 1) {
+ if (pid != current->pid) {
+ read_lock(&tasklist_lock);
+ {
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ }
+ if (!child) {
+ read_unlock(&tasklist_lock);
+ return -ESRCH;
+ }
+ /*
+ * XXX: need to do more checking here
+ */
+ if (child->state != TASK_ZOMBIE) {
+ DBprintk((__FUNCTION__" warning process %d not in stable state %ld\n", pid, child->state));
+ }
+ }
+ ret = do_perfmonctl(child, cmd, flags, req, count, regs);
+ if (child != current) read_unlock(&tasklist_lock);
- val = mask & 0x1 ? pmu_conf.perf_ovfl_val + 1 : 0;
+ return ret;
+}
- if (mask & 0x1)
- printk(__FUNCTION__ " PMD%ld overflowed pmd=%lx pmod=%lx\n", cnum, ia64_get_pmd(cnum), pmds[smp_processor_id()][i]);
- /* since we got an interrupt, might as well clear every pmd. */
- val += ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val;
+static inline int
+update_counters (u64 pmc0)
+{
+ unsigned long mask, i, cnum;
+ struct thread_struct *th;
+ struct task_struct *ta;
+
+ if (pmu_owners[smp_processor_id()] == NULL) {
+ DBprintk((__FUNCTION__" Spurious overflow interrupt: PMU not owned\n"));
+ return 0;
+ }
+
+ /*
+ * It is never safe to access the task for which the overflow interrupt is destined
+ * using the current variable, as the interrupt may occur in the middle of a context switch
+ * where current does not yet hold the task that is running.
+ *
+ * For monitoring, however, we do need to get access to the task which caused the overflow
+ * to account for overflow on the counters.
+ * We accomplish this by maintaining a current owner of the PMU per CPU. During context
+ * switch the ownership is changed in a way such that the reflected owner is always the
+ * valid one, i.e. the one that caused the interrupt.
+ */
+ ta = pmu_owners[smp_processor_id()];
+ th = &pmu_owners[smp_processor_id()]->thread;
- printk(__FUNCTION__ " adding val=%lx to pmod[%ld]=%lx \n", val, i, pmds[smp_processor_id()][i]);
+ /*
+ * Don't think this could happen given first test. Keep as sanity check
+ */
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
+ DBprintk((__FUNCTION__" Spurious overflow interrupt: process %d not using perfmon\n", ta->pid));
+ return 0;
+ }
+
+ /*
+ * if PMU not frozen: spurious from previous context
+ * if PMC[0] = 0x1 : frozen but no overflow reported: leftover from previous context
+ *
+ * in either case we don't touch the state upon return from handler
+ */
+ if ((pmc0 & 0x1) == 0 || pmc0 == 0x1) {
+ DBprintk((__FUNCTION__" Spurious overflow interrupt: process %d freeze=0\n",ta->pid));
+ return 0;
+ }
- pmds[smp_processor_id()][i] += val;
+ mask = pmc0 >> 4;
- ia64_set_pmd(cnum, 0);
+ for (i = 0, cnum = PMU_FIRST_COUNTER; i < pmu_conf.max_counters; cnum++, i++, mask >>= 1) {
+
+ if (mask & 0x1) {
+ DBprintk((__FUNCTION__ " PMD[%ld] overflowed pmd=0x%lx pmod.val=0x%lx\n", cnum, ia64_get_pmd(cnum), th->pmu_counters[i].val));
+
+ /*
+ * Because we sometimes (EARS/BTB) reset to a specific value, we cannot simply use
+ * val to count the number of times we overflowed. Otherwise we would lose the value
+ * currently in the PMD (which can be >0). So to make sure we don't lose
+ * the residual counts, we set val to the full 64-bit value of the counter.
+ */
+ th->pmu_counters[i].val += 1+pmu_conf.perf_ovfl_val+(ia64_get_pmd(cnum) &pmu_conf.perf_ovfl_val);
+
+ /* writes to upper part are ignored, so this is safe */
+ ia64_set_pmd(cnum, th->pmu_counters[i].rval);
+
+ DBprintk((__FUNCTION__ " pmod[%ld].val=0x%lx pmd=0x%lx\n", i, th->pmu_counters[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));
+
+ if (th->pmu_counters[i].pid != 0 && th->pmu_counters[i].sig>0) {
+ DBprintk((__FUNCTION__ " should notify process %d with signal %d\n",th->pmu_counters[i].pid, th->pmu_counters[i].sig));
+ }
+ }
}
+ return 1;
}
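The accumulation step above widens each hardware counter to 64 bits in software: on overflow the handler adds one full wrap (perf_ovfl_val + 1) plus the residual hardware count to pmu_counters[i].val. A minimal C sketch of that arithmetic (function name hypothetical, not part of the patch):

```c
#include <stdint.h>

/* Hypothetical sketch of the widening done in update_counters():
 * perf_ovfl_val is the all-ones mask of the hardware counter width
 * (e.g. 0xffffffff for 32-bit counters).  On overflow, add one full
 * wrap plus whatever the counter accumulated since wrapping. */
static uint64_t account_overflow(uint64_t soft_val, uint64_t hw_pmd,
                                 uint64_t perf_ovfl_val)
{
	return soft_val + 1 + perf_ovfl_val + (hw_pmd & perf_ovfl_val);
}
```

With 32-bit counters, an overflow with 5 residual ticks adds 2^32 + 5 to the software value, so residual counts survive even when EARS/BTB reset the PMD to a non-zero value.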
static void
perfmon_interrupt (int irq, void *arg, struct pt_regs *regs)
{
- update_counters();
- ia64_set_pmc(0, 0);
- ia64_srlz_d();
+ /* unfreeze if not spurious */
+ if ( update_counters(ia64_get_pmc(0)) ) {
+ ia64_set_pmc(0, 0);
+ ia64_srlz_d();
+ }
}
static struct irqaction perfmon_irqaction = {
@@ -280,9 +483,13 @@
{
char *p = page;
u64 pmc0 = ia64_get_pmc(0);
+ int i;
- p += sprintf(p, "PMC[0]=%lx\n", pmc0);
-
+ p += sprintf(p, "PMC[0]=%lx\nPerfmon debug: %s\n", pmc0, pfm_debug ? "On" : "Off");
+ for(i=0; i < NR_CPUS; i++) {
+ if (cpu_is_online(i))
+ p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i] ? pmu_owners[i]->pid: -1);
+ }
return p - page;
}
@@ -308,7 +515,6 @@
perfmon_init (void)
{
pal_perf_mon_info_u_t pm_info;
- u64 pm_buffer[16];
s64 status;
irq_desc[PERFMON_IRQ].status |= IRQ_PER_CPU;
@@ -320,15 +526,13 @@
printk("perfmon: Initialized vector to %u\n",PERFMON_IRQ);
- if ((status=ia64_pal_perf_mon_info(pm_buffer, &pm_info)) != 0) {
+ if ((status=ia64_pal_perf_mon_info(pmu_conf.impl_regs, &pm_info)) != 0) {
printk(__FUNCTION__ " pal call failed (%ld)\n", status);
return;
}
- pmu_conf.perf_ovfl_val = perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
+ pmu_conf.perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
/* XXX need to use PAL instead */
- pmu_conf.max_pmc = 13;
- pmu_conf.max_pmd = 17;
pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
printk("perfmon: Counters are %d bits\n", pm_info.pal_perf_mon_info_s.width);
@@ -347,36 +551,137 @@
ia64_srlz_d();
}
+/*
+ * XXX: for system wide this function MUST never be called
+ */
void
-ia64_save_pm_regs (struct thread_struct *t)
+ia64_save_pm_regs (struct task_struct *ta)
{
- int i;
+ struct thread_struct *t = &ta->thread;
+ u64 pmc0, psr;
+ int i,j;
+
+ /*
+ * We must make sure that we don't lose any potential overflow
+ * interrupt while saving PMU context. In this code, external
+ * interrupts are always enabled.
+ */
+
+ /*
+ * save current PSR: needed because we modify it
+ */
+ __asm__ __volatile__ ("mov %0=psr;;": "=r"(psr) :: "memory");
+
+ /*
+ * stop monitoring:
+ * This is the only way to stop monitoring without destroying overflow
+ * information in PMC[0..3].
+ * This is the last instruction which can cause overflow when monitoring
+ * in kernel.
+ * By now, we could still have an overflow interrupt in flight.
+ */
+ __asm__ __volatile__ ("rsm psr.up;;"::: "memory");
+
+ /*
+ * read current overflow status:
+ *
+ * We may be reading stale information at this point if we got an interrupt
+ * just before the read(pmc0), but that's all right. However, if we did
+ * not get the interrupt before, this read reflects LAST state.
+ *
+ */
+ pmc0 = ia64_get_pmc(0);
+ /*
+ * freeze PMU:
+ *
+ * This destroys the overflow information. This is required to make sure
+ * next process does not start with monitoring on if not requested
+ * (PSR.up may not be enough).
+ *
+ * We could still get an overflow interrupt by now. However the handler
+ * will not do anything if it sees PMC[0].fr=1 but no overflow bits
+ * are set. So PMU will stay in frozen state. This implies that pmc0
+ * will still be holding the correct unprocessed information.
+ *
+ */
ia64_set_pmc(0, 1);
ia64_srlz_d();
+
+ /*
+ * check for overflow bits set:
+ *
+ * If pmc0 reports PMU frozen, this means we have a pending overflow,
+ * therefore we invoke the handler. Handler is reentrant with regards
+ * to PMC[0] so it is safe to call it twice.
+ *
+ * If pmc0 reports overflow, we need to reread the current PMC[0] value
+ * in case the handler was invoked right after the first pmc0 read.
+ * If it was not invoked, then pmc0=PMC[0]; otherwise it has been invoked
+ * and the overflow information has been processed, so we don't need to call it.
+ *
+ * Test breakdown:
+ * - pmc0 & ~0x1: test if overflow happened
+ * - second part: check if current register reflects this as well.
+ *
+ * NOTE: testing for pmc0 & 0x1 is not enough, as it would trigger a call
+ * when PM_VALID and PMU.fr are set, which is common when setting up registers
+ * just before actually starting the monitors.
+ *
+ */
+ if ((pmc0 & ~0x1) && ((pmc0=ia64_get_pmc(0)) &~0x1) ) {
+ printk(__FUNCTION__" Warning: pmc[0]=0x%lx\n", pmc0);
+ update_counters(pmc0);
+ /*
+ * XXX: not sure that's enough. the next task may still get the
+ * interrupt.
+ */
+ }
+
+ /*
+ * restore PSR for context switch to save
+ */
+ __asm__ __volatile__ ("mov psr.l=%0;;"::"r"(psr): "memory");
+
/*
* XXX: this will need to be extended beyond just counters
*/
- for (i=0; i< IA64_NUM_PM_REGS; i++) {
- t->pmd[i] = ia64_get_pmd(4+i);
- t->pmod[i] = pmds[smp_processor_id()][i];
- t->pmc[i] = ia64_get_pmc(4+i);
+ for (i=0,j=4; i< IA64_NUM_PMD_COUNTERS; i++,j++) {
+ t->pmd[i] = ia64_get_pmd(j);
+ t->pmc[i] = ia64_get_pmc(j);
}
+ /*
+ * PMU is frozen, PMU context is saved: nobody owns the PMU on this CPU
+ * At this point, we should not receive any pending interrupt from the
+ * 'switched out' task
+ */
+ pmu_owners[smp_processor_id()] = NULL;
}
void
-ia64_load_pm_regs (struct thread_struct *t)
+ia64_load_pm_regs (struct task_struct *ta)
{
- int i;
+ struct thread_struct *t = &ta->thread;
+ int i,j;
+
+ /*
+ * we first restore ownership of the PMU to the 'soon to be current'
+ * context. This way, if, as soon as we unfreeze the PMU at the end
+ * of this function, we get an interrupt, we attribute it to the correct
+ * task
+ */
+ pmu_owners[smp_processor_id()] = ta;
/*
* XXX: this will need to be extended beyond just counters
*/
- for (i=0; i< IA64_NUM_PM_REGS ; i++) {
- ia64_set_pmd(4+i, t->pmd[i]);
- pmds[smp_processor_id()][i] = t->pmod[i];
- ia64_set_pmc(4+i, t->pmc[i]);
+ for (i=0,j=4; i< IA64_NUM_PMD_COUNTERS; i++,j++) {
+ ia64_set_pmd(j, t->pmd[i]);
+ ia64_set_pmc(j, t->pmc[i]);
}
+ /*
+ * unfreeze PMU
+ */
ia64_set_pmc(0, 0);
ia64_srlz_d();
}
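The save/load pair above hands PMU ownership off so that an overflow interrupt arriving mid-switch is always attributed to a task whose state is live: the owner slot is cleared only after the PMU is frozen and saved, and set again before it is unfrozen. A toy userspace model of that ordering (all names illustrative, single CPU for brevity):

```c
#include <stddef.h>

struct pfm_task { int pid; };           /* stand-in for task_struct */

static struct pfm_task *pmu_owner;      /* one slot per CPU in the patch */

static void save_pm_regs(struct pfm_task *t)
{
	(void)t;                /* ... freeze PMU, save t's counters ... */
	pmu_owner = NULL;       /* state saved: nobody owns the PMU now */
}

static void load_pm_regs(struct pfm_task *t)
{
	pmu_owner = t;          /* claim ownership *before* unfreezing */
	/* ... restore t's counters, then unfreeze the PMU ... */
}

/* returns 1 if the owner stayed consistent across a context switch */
static int demo_switch(void)
{
	struct pfm_task prev = { 1 }, next = { 2 };

	save_pm_regs(&prev);
	if (pmu_owner != NULL)
		return 0;       /* window with no owner: spurious
				 * interrupts see NULL and bail out */
	load_pm_regs(&next);
	return pmu_owner == &next;
}
```

The NULL window in the middle is exactly why update_counters() starts by checking pmu_owners[smp_processor_id()] and treating a NULL owner as a spurious interrupt.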
diff -urN linux-davidm/arch/ia64/kernel/process.c lia64/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/process.c Mon Oct 30 22:38:03 2000
@@ -164,7 +164,7 @@
ia64_save_debug_regs(&task->thread.dbr[0]);
#ifdef CONFIG_PERFMON
if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
- ia64_save_pm_regs(&task->thread);
+ ia64_save_pm_regs(task);
#endif
if (IS_IA32_PROCESS(ia64_task_regs(task)))
ia32_save_state(&task->thread);
@@ -177,7 +177,7 @@
ia64_load_debug_regs(&task->thread.dbr[0]);
#ifdef CONFIG_PERFMON
if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
- ia64_load_pm_regs(&task->thread);
+ ia64_load_pm_regs(task);
#endif
if (IS_IA32_PROCESS(ia64_task_regs(task)))
ia32_load_state(&task->thread);
@@ -299,6 +299,14 @@
# define THREAD_FLAGS_TO_SET 0
p->thread.flags = ((current->thread.flags & ~THREAD_FLAGS_TO_CLEAR)
| THREAD_FLAGS_TO_SET);
+#ifdef CONFIG_IA32_SUPPORT
+ /*
+ * If we're cloning an IA32 task then save the IA32 extra
+ * state from the current task to the new task
+ */
+ if (IS_IA32_PROCESS(ia64_task_regs(current)))
+ ia32_save_state(&p->thread);
+#endif
return 0;
}
@@ -554,7 +562,7 @@
* we guarantee no race. This call also stops
* monitoring
*/
- ia64_save_pm_regs(¤t->thread);
+ ia64_save_pm_regs(current);
/*
* make sure that switch_to() will not save context again
*/
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c lia64/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/ptrace.c Mon Oct 30 22:38:17 2000
@@ -794,11 +794,7 @@
addr);
return -1;
}
- } else
-#ifdef CONFIG_PERFMON
- if (addr < PT_PMD)
-#endif
- {
+ } else {
/* access debug registers */
if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
@@ -820,33 +816,14 @@
}
ptr += regnum;
- }
-#ifdef CONFIG_PERFMON
- else {
- /*
- * XXX: will eventually move back to perfmonctl()
- */
- unsigned long pmd = (addr - PT_PMD) >> 3;
- extern unsigned long perf_ovfl_val;
-
- /* we just use ptrace to read */
- if (write_access) return -1;
-
- if (pmd > 3) {
- printk("ptrace: rejecting access to PMD[%ld] address 0x%lx\n", pmd, addr);
- return -1;
- }
- /*
- * We always need to mask upper 32bits of pmd because value is random
- */
- pmd_tmp = child->thread.pmod[pmd]+(child->thread.pmd[pmd]& perf_ovfl_val);
-
- /*printk(__FUNCTION__" child=%d reading pmd[%ld]=%lx\n", child->pid, pmd, pmd_tmp);*/
-
- ptr = &pmd_tmp;
+ if (write_access)
+ /* don't let the user set kernel-level breakpoints... */
+ *ptr = *data & ~(7UL << 56);
+ else
+ *data = *ptr;
+ return 0;
}
-#endif
if (write_access)
*ptr = *data;
else
@@ -977,11 +954,7 @@
/* disallow accessing anything else... */
return -1;
}
- } else
-#ifdef CONFIG_PERFMON
- if (addr < PT_PMD)
-#endif
- {
+ } else {
/* access debug registers */
@@ -1002,34 +975,14 @@
return -1;
ptr += regnum;
- }
-#ifdef CONFIG_PERFMON
- else {
- /*
- * XXX: will eventually move back to perfmonctl()
- */
- unsigned long pmd = (addr - PT_PMD) >> 3;
- extern unsigned long perf_ovfl_val;
- /* we just use ptrace to read */
- if (write_access) return -1;
-
- if (pmd > 3) {
- printk("ptrace: rejecting access to PMD[%ld] address 0x%lx\n", pmd, addr);
- return -1;
- }
-
- /*
- * We always need to mask upper 32bits of pmd because value is random
- */
- pmd_tmp = child->thread.pmod[pmd]+(child->thread.pmd[pmd]& perf_ovfl_val);
-
- /*printk(__FUNCTION__" child=%d reading pmd[%ld]=%lx\n", child->pid, pmd, pmd_tmp);*/
-
- ptr = &pmd_tmp;
+ if (write_access)
+ /* don't let the user set kernel-level breakpoints... */
+ *ptr = *data & ~(7UL << 56);
+ else
+ *data = *ptr;
+ return 0;
}
-#endif
-
if (write_access)
*ptr = *data;
else
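Both ptrace hunks above apply the same filter before storing a user-supplied debug-register value: bits 56-58 are cleared so that, per the patch's comment, the user cannot arm kernel-level breakpoints. A one-line C sketch of the mask (function name hypothetical):

```c
#include <stdint.h>

/* Sketch of the ptrace filter: clear bits 56-58 of a user-supplied
 * debug register value, which (per the patch's comment) prevents the
 * user from setting kernel-level breakpoints. */
static uint64_t sanitize_dbr(uint64_t user_val)
{
	return user_val & ~((uint64_t)7 << 56);
}
```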
diff -urN linux-davidm/arch/ia64/kernel/smp.c lia64/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/smp.c Mon Oct 30 23:57:04 2000
@@ -11,6 +11,8 @@
* 00/03/31 Rohit Seth <rohit.seth@intel.com> Fixes for Bootstrap Processor & cpu_online_map
* now gets done here (instead of setup.c)
* 99/10/05 davidm Update to bring it in sync with new command-line processing scheme.
+ * 10/13/00 Goutham Rao <goutham.rao@intel.com> Updated smp_call_function and
+ * smp_call_function_single to resend IPI on timeouts
*/
#define __KERNEL_SYSCALLS__
@@ -30,6 +32,7 @@
#include <asm/current.h>
#include <asm/delay.h>
#include <asm/efi.h>
+#include <asm/machvec.h>
#include <asm/io.h>
#include <asm/irq.h>
@@ -276,7 +279,7 @@
return;
set_bit(op, &ipi_op[dest_cpu]);
- ipi_send(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
+ ia64_send_ipi(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
}
static inline void
@@ -358,6 +361,7 @@
if (pointer_lock(&smp_call_function_data, &data, retry))
return -EBUSY;
+resend:
/* Send a message to all other CPUs and wait for them to respond */
send_IPI_single(cpuid, IPI_CALL_FUNC);
@@ -366,8 +370,12 @@
while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
barrier();
if (atomic_read(&data.unstarted_count) > 0) {
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ goto resend;
+#else
smp_call_function_data = NULL;
return -ETIMEDOUT;
+#endif
}
if (wait)
while (atomic_read(&data.unfinished_count) > 0)
@@ -411,13 +419,23 @@
/* Send a message to all other CPUs and wait for them to respond */
send_IPI_allbutself(IPI_CALL_FUNC);
+retry:
/* Wait for response */
timeout = jiffies + HZ;
while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
barrier();
if (atomic_read(&data.unstarted_count) > 0) {
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ int i;
+ for (i = 0; i < smp_num_cpus; i++) {
+ if (i != smp_processor_id())
+ ia64_send_ipi(i, IPI_IRQ, IA64_IPI_DM_INT, 0);
+ }
+ goto retry;
+#else
smp_call_function_data = NULL;
return -ETIMEDOUT;
+#endif
}
if (wait)
while (atomic_read(&data.unfinished_count) > 0)
@@ -569,7 +587,7 @@
cpu_now_booting = cpu;
/* Kick the AP in the butt */
- ipi_send(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
+ ia64_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
/* wait up to 10s for the AP to start */
for (timeout = 0; timeout < 100000; timeout++) {
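The A/B-step workaround above turns the one-second IPI timeout into a resend loop: if some CPU has not responded when the deadline passes, the IPI is assumed lost and re-sent rather than failing with -ETIMEDOUT. A hedged C sketch of that control flow, where poll() and resend() are stand-ins for the unstarted_count check and ia64_send_ipi() (the patch's version retries forever on A/B-step parts; the bound here just keeps the sketch terminating):

```c
/* Resend-on-timeout: instead of returning -ETIMEDOUT on the first
 * missed deadline, re-issue the (possibly dropped) IPI and wait again. */
static int wait_with_resend(int (*poll)(void), void (*resend)(void),
			    int max_resends)
{
	int tries;

	for (tries = 0; tries <= max_resends; tries++) {
		if (poll())
			return 0;       /* all CPUs responded */
		resend();               /* assume the IPI was dropped */
	}
	return -1;                      /* give up: -ETIMEDOUT */
}

/* demo harness: the "remote CPU" responds on the third poll */
static int demo_calls, demo_sent;
static int demo_poll(void)    { return ++demo_calls >= 3; }
static void demo_resend(void) { demo_sent++; }
```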
diff -urN linux-davidm/arch/ia64/kernel/traps.c lia64/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/traps.c Mon Oct 30 22:40:14 2000
@@ -544,7 +544,7 @@
case 46:
printk("Unexpected IA-32 intercept trap (Trap 46)\n");
- printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx\n", regs->cr_iip, ifa, isr);
+ printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n", regs->cr_iip, ifa, isr, iim);
force_sig(SIGSEGV, current);
return;
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c lia64/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/unaligned.c Mon Oct 30 22:40:27 2000
@@ -572,7 +572,8 @@
*/
if (regnum == 0) {
*val = 0;
- *nat = 0;
+ if (nat)
+ *nat = 0;
return;
}
diff -urN linux-davidm/arch/ia64/kernel/unwind.c lia64/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Mon Oct 30 23:54:13 2000
+++ lia64/arch/ia64/kernel/unwind.c Mon Oct 30 22:40:45 2000
@@ -46,16 +46,6 @@
#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define p5 5
-/*
- * The unwind tables are supposed to be sorted, but the GNU toolchain
- * currently fails to produce a sorted table in the presence of
- * functions that go into sections other than .text. For example, the
- * kernel likes to put initialization code into .text.init, which
- * messes up the sort order. Hopefully, this will get fixed sometime
- * soon. --davidm 00/05/23
- */
-#define UNWIND_TABLE_SORT_BUG
-
#define UNW_LOG_CACHE_SIZE 7 /* each unw_script is ~256 bytes in size */
#define UNW_CACHE_SIZE (1 << UNW_LOG_CACHE_SIZE)
@@ -63,7 +53,7 @@
#define UNW_HASH_SIZE (1 << UNW_LOG_HASH_SIZE)
#define UNW_DEBUG 0
-#define UNW_STATS 0 /* WARNING: this disabled interrupts for long time-spans!! */
+#define UNW_STATS 1 /* WARNING: this disabled interrupts for long time-spans!! */
#if UNW_DEBUG
static long unw_debug_level = 255;
@@ -1964,23 +1954,6 @@
{
struct unw_table_entry *start = table_start, *end = table_end;
-#ifdef UNWIND_TABLE_SORT_BUG
- {
- struct unw_table_entry *e1, *e2, tmp;
-
- /* stupid bubble sort... */
-
- for (e1 = start; e1 < end; ++e1) {
- for (e2 = e1 + 1; e2 < end; ++e2) {
- if (e2->start_offset < e1->start_offset) {
- tmp = *e1;
- *e1 = *e2;
- *e2 = tmp;
- }
- }
- }
- }
-#endif
table->name = name;
table->segment_base = segment_base;
table->gp = gp;
diff -urN linux-davidm/arch/ia64/lib/copy_user.S lia64/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Wed Aug 2 18:54:02 2000
+++ lia64/arch/ia64/lib/copy_user.S Mon Oct 30 23:45:16 2000
@@ -65,6 +65,12 @@
//
// local registers
//
+#define t1 r2 // rshift in bytes
+#define t2 r3 // lshift in bytes
+#define rshift r14 // right shift in bits
+#define lshift r15 // left shift in bits
+#define word1 r16
+#define word2 r17
#define cnt r18
#define len2 r19
#define saved_lc r20
@@ -134,6 +140,190 @@
br.ret.sptk.few rp // end of short memcpy
//
+ // Not 8-byte aligned
+ //
+diff_align_copy_user:
+ // At this point we know we have more than 16 bytes to copy
+ // and also that src and dest do _not_ have the same alignment.
+ and src2=0x7,src1 // src offset
+ and dst2=0x7,dst1 // dst offset
+ ;;
+ // The basic idea is that we copy byte-by-byte at the head so
+ // that we can reach 8-byte alignment for both src1 and dst1.
+ // Then copy the body using software pipelined 8-byte copy,
+ // shifting the two back-to-back words right and left, then copy
+ // the tail by copying byte-by-byte.
+ //
+ // Fault handling. If the byte-by-byte at the head fails on the
+ // load, then restart and finish the pipeline by copying zeros
+ // to the dst1. Then copy zeros for the rest of dst1.
+ // If 8-byte software pipeline fails on the load, do the same as
+ // failure_in3 does. If the byte-by-byte at the tail fails, it is
+ // handled simply by failure_in_pipe1.
+ //
+ // The p14 case means the source has more bytes in the first
+ // word (by the shifted part), whereas the p15 case needs to
+ // copy some bytes from the 2nd word of the source that holds the
+ // tail of the 1st word of the destination.
+ //
+
+ //
+ // Optimization. If dst1 is 8-byte aligned (not rarely), we don't need
+ // to copy the head to dst1 to start the 8-byte copy software pipeline.
+ // We know src1 is not 8-byte aligned in this case.
+ //
+ cmp.eq p14,p15=r0,dst2
+(p15) br.cond.spnt.few 1f
+ ;;
+ sub t1=8,src2
+ mov t2=src2
+ ;;
+ shl rshift=t2,3
+ sub len1=len,t1 // set len1
+ ;;
+ sub lshift=64,rshift
+ ;;
+ br.cond.spnt.few word_copy_user
+ ;;
+1:
+ cmp.leu p14,p15=src2,dst2
+ sub t1=dst2,src2
+ ;;
+ .pred.rel "mutex", p14, p15
+(p14) sub word1=8,src2 // (8 - src offset)
+(p15) sub t1=r0,t1 // absolute value
+(p15) sub word1=8,dst2 // (8 - dst offset)
+ ;;
+ // For the case p14, we don't need to copy the shifted part to
+ // the 1st word of destination.
+ sub t2=8,t1
+(p14) sub word1=word1,t1
+ ;;
+ sub len1=len,word1 // resulting len
+(p15) shl rshift=t1,3 // in bits
+(p14) shl rshift=t2,3
+ ;;
+(p14) sub len1=len1,t1
+ adds cnt=-1,word1
+ ;;
+ sub lshift=64,rshift
+ mov ar.ec=PIPE_DEPTH
+ mov pr.rot=1<<16 // p16=true all others are false
+ mov ar.lc=cnt
+ ;;
+2:
+ EX(failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
+ ;;
+ EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ br.ctop.dptk.few 2b
+ ;;
+ clrrrb
+ ;;
+word_copy_user:
+ cmp.gtu p9,p0=16,len1
+(p9) br.cond.spnt.few 4f // if (16 > len1) skip 8-byte copy
+ ;;
+ shr.u cnt=len1,3 // number of 64-bit words
+ ;;
+ adds cnt=-1,cnt
+ ;;
+ .pred.rel "mutex", p14, p15
+(p14) sub src1=src1,t2
+(p15) sub src1=src1,t1
+ //
+ // Now both src1 and dst1 point to an 8-byte aligned address. And
+ // we have more than 8 bytes to copy.
+ //
+ mov ar.lc=cnt
+ mov ar.ec=PIPE_DEPTH
+ mov pr.rot=1<<16 // p16=true all others are false
+ ;;
+3:
+ //
+ // The pipeline consists of 3 stages:
+ // 1 (p16): Load a word from src1
+ // 2 (EPI_1): Shift right pair, saving to tmp
+ // 3 (EPI): Store tmp to dst1
+ //
+ // To make it simple, use at least 2 (p16) loops to set up val1[n]
+ // because we need 2 back-to-back val1[] to get tmp.
+ // Note that this implies EPI_2 must be p18 or greater.
+ //
+
+#define EPI_1 p[PIPE_DEPTH-2]
+#define SWITCH(pred, shift) cmp.eq pred,p0=shift,rshift
+#define CASE(pred, shift) \
+ (pred) br.cond.spnt.few copy_user_bit##shift
+#define BODY(rshift) \
+copy_user_bit##rshift: \
+1: \
+ EX(failure_out,(EPI) st8 [dst1]=tmp,8); \
+(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
+ EX(failure_in2,(p16) ld8 val1[0]=[src1],8); \
+ br.ctop.dptk.few 1b; \
+ ;; \
+ br.cond.spnt.few .diff_align_do_tail
+
+ //
+ // Since the instruction 'shrp' requires a fixed 128-bit value
+ // specifying the bits to shift, we need to provide 7 cases
+ // below.
+ //
+ SWITCH(p6, 8)
+ SWITCH(p7, 16)
+ SWITCH(p8, 24)
+ SWITCH(p9, 32)
+ SWITCH(p10, 40)
+ SWITCH(p11, 48)
+ SWITCH(p12, 56)
+ ;;
+ CASE(p6, 8)
+ CASE(p7, 16)
+ CASE(p8, 24)
+ CASE(p9, 32)
+ CASE(p10, 40)
+ CASE(p11, 48)
+ CASE(p12, 56)
+ ;;
+ BODY(8)
+ BODY(16)
+ BODY(24)
+ BODY(32)
+ BODY(40)
+ BODY(48)
+ BODY(56)
+ ;;
+.diff_align_do_tail:
+ .pred.rel "mutex", p14, p15
+(p14) sub src1=src1,t1
+(p14) adds dst1=-8,dst1
+(p15) sub dst1=dst1,t1
+ ;;
+4:
+ // Tail correction.
+ //
+ // The problem with this pipelined loop is that the last word is not
+ // loaded, and thus part of the last word written is not correct.
+ // To fix that, we simply copy the tail byte by byte.
+
+ sub len1=endsrc,src1,1
+ clrrrb
+ ;;
+ mov ar.ec=PIPE_DEPTH
+ mov pr.rot=1<<16 // p16=true all others are false
+ mov ar.lc=len1
+ ;;
+5:
+ EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+
+ EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ br.ctop.dptk.few 5b
+ ;;
+ mov pr=saved_pr,0xffffffffffff0000
+ mov ar.pfs=saved_pfs
+ br.ret.dptk.few rp
+
+ //
// Beginning of long memcpy (i.e. > 16 bytes)
//
long_copy_user:
@@ -142,7 +332,7 @@
;;
cmp.eq p10,p8=r0,tmp
mov len1=len // copy because of rotation
-(p8) br.cond.dpnt.few 1b // XXX Fixme. memcpy_diff_align
+(p8) br.cond.dpnt.few diff_align_copy_user
;;
// At this point we know we have more than 16 bytes to copy
// and also that both src and dest have the same alignment
@@ -267,6 +457,21 @@
mov ar.pfs=saved_pfs
br.ret.dptk.few rp
+ //
+ // This is the case where the byte by byte copy fails on the load
+ // when we copy the head. We need to finish the pipeline and copy
+ // zeros for the rest of the destination. Since this happens
+ // at the top we still need to fill the body and tail.
+failure_in_pipe2:
+ sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
+2:
+(p16) mov val1[0]=r0
+(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
+ br.ctop.dptk.few 2b
+ ;;
+ sub len=enddst,dst1,1 // precompute len
+ br.cond.dptk.few failure_in1bis
+ ;;
//
// Here we handle the head & tail part when we check for alignment.
@@ -395,6 +600,23 @@
mov ar.pfs=saved_pfs
br.ret.dptk.few rp
+failure_in2:
+ sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
+ ;;
+3:
+(p16) mov val1[0]=r0
+(EPI) st8 [dst1]=val1[PIPE_DEPTH-1],8
+ br.ctop.dptk.few 3b
+ ;;
+ cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
+ sub len=enddst,dst1,1 // precompute len
+(p6) br.cond.dptk.few failure_in1bis
+ ;;
+ mov pr=saved_pr,0xffffffffffff0000
+ mov ar.lc=saved_lc
+ mov ar.pfs=saved_pfs
+ br.ret.dptk.few rp
+
//
// handling of failures on stores: that's the easy part
//
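The diff_align_copy_user body above assembles each aligned destination word from two back-to-back source words with shrp, which only accepts a fixed shift amount — hence the 7-case SWITCH/CASE/BODY dispatch. In plain C the per-word merge looks roughly like this (a sketch of the idea, not the exact register flow; assumes rshift in 8..56 and lshift = 64 - rshift):

```c
#include <stdint.h>

/* Per-word merge behind the shrp pipeline: the destination word takes
 * the high part of the earlier source word and the low part of the
 * next one.  rshift is in bits (8,16,...,56). */
static uint64_t merge_words(uint64_t lo, uint64_t hi, unsigned rshift)
{
	return (lo >> rshift) | (hi << (64 - rshift));
}
```

Because the shift is baked into each shrp instruction, the assembly must branch to one of seven pre-expanded loop bodies instead of shifting by a register value as C can.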
diff -urN linux-davidm/arch/ia64/lib/memcpy.S lia64/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/lib/memcpy.S Mon Oct 30 22:45:50 2000
@@ -17,17 +17,24 @@
#include <asm/asmmacro.h>
+#if defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
+# define BRP(args...) nop.b 0
+#else
+# define BRP(args...) brp.loop.imp args
+#endif
+
GLOBAL_ENTRY(bcopy)
.regstk 3,0,0,0
mov r8=in0
mov in0=in1
;;
mov in1=r8
+ ;;
END(bcopy)
// FALL THROUGH
GLOBAL_ENTRY(memcpy)
-# define MEM_LAT 2 /* latency to L1 cache */
+# define MEM_LAT 21 /* latency to memory */
# define dst r2
# define src r3
@@ -57,20 +64,17 @@
UNW(.prologue)
UNW(.save ar.pfs, saved_pfs)
alloc saved_pfs=ar.pfs,3,Nrot,0,Nrot
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
- lfetch [in1]
-#else
- nop.m 0
-#endif
+ UNW(.save ar.lc, saved_lc)
+ mov saved_lc=ar.lc
or t0=in0,in1
;;
or t0=t0,in2
- UNW(.save ar.lc, saved_lc)
- mov saved_lc=ar.lc
UNW(.save pr, saved_pr)
mov saved_pr=pr
+ UNW(.body)
+
cmp.eq p6,p0=in2,r0 // zero length?
mov retval=in0 // return dst
(p6) br.ret.spnt.many rp // zero length, return immediately
@@ -83,7 +87,6 @@
adds cnt=-1,cnt // br.ctop is repeat/until
cmp.gtu p7,p0=16,in2 // copying less than 16 bytes?
- UNW(.body)
mov ar.ec=N
;;
@@ -98,10 +101,17 @@
;;
.rotr val[N]
.rotp p[N]
-1:
+ .align 32
+1: { .mib
(p[0]) ld8 val[0]=[src],8
+ nop.i 0
+ BRP(1b, 2f)
+}
+2: { .mfb
(p[N-1])st8 [dst]=val[N-1],8
+ nop.f 0
br.ctop.dptk.few 1b
+}
;;
mov ar.lc=saved_lc
mov pr=saved_pr,-1
@@ -118,6 +128,7 @@
memcpy_short:
adds cnt=-1,in2 // br.ctop is repeat/until
mov ar.ec=MEM_LAT
+ BRP(1f, 2f)
;;
mov ar.lc=cnt
;;
@@ -125,12 +136,17 @@
* It is faster to put a stop bit in the loop here because it makes
* the pipeline shorter (and latency is what matters on short copies).
*/
-1:
+ .align 32
+1: { .mib
(p[0]) ld1 val[0]=[src],1
- ;;
+ nop.i 0
+ BRP(1b, 2f)
+} ;;
+2: { .mfb
(p[MEM_LAT-1])st1 [dst]=val[MEM_LAT-1],1
+ nop.f 0
br.ctop.dptk.few 1b
- ;;
+} ;;
mov ar.lc=saved_lc
mov pr=saved_pr,-1
mov ar.pfs=saved_pfs
@@ -251,17 +267,16 @@
.align 64
#define COPY(shift,index) \
- 1: \
- { .mfi \
+ 1: { .mib \
(p[0]) ld8 val[0]=[src2],8; \
- nop.f 0; \
(p[MEM_LAT+3]) shrp w[0]=val[MEM_LAT+3],val[MEM_LAT+4-index],shift; \
- }; \
- { .mbb \
+ BRP(1b, 2f) \
+ }; \
+ 2: { .mfb \
(p[MEM_LAT+4]) st8 [dst]=w[1],8; \
- nop.b 0; \
+ nop.f 0; \
br.ctop.dptk.few 1b; \
- }; \
+ }; \
;; \
ld8 val[N-1]=[src_end]; /* load last word (may be same as val[N]) */ \
;; \
diff -urN linux-davidm/arch/ia64/mm/tlb.c lia64/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Thu Aug 24 08:17:30 2000
+++ lia64/arch/ia64/mm/tlb.c Mon Oct 30 22:46:48 2000
@@ -6,6 +6,8 @@
*
* 08/02/00 A. Mallick <asit.k.mallick@intel.com>
* Modified RID allocation for SMP
+ * Goutham Rao <goutham.rao@intel.com>
+ * IPI based ptc implementation and A-step IPI implementation.
*/
#include <linux/config.h>
#include <linux/init.h>
@@ -17,6 +19,7 @@
#include <asm/mmu_context.h>
#include <asm/pgalloc.h>
#include <asm/pal.h>
+#include <asm/delay.h>
#define SUPPORTED_PGBITS ( \
1 << _PAGE_SIZE_256M | \
@@ -99,9 +102,22 @@
/*
* Wait for other CPUs to finish purging entries.
*/
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ {
+ unsigned long start = ia64_get_itc();
+ while (atomic_read(&flush_cpu_count) > 0) {
+ if ((ia64_get_itc() - start) > 40000UL) {
+ atomic_set(&flush_cpu_count, smp_num_cpus - 1);
+ smp_send_flush_tlb();
+ start = ia64_get_itc();
+ }
+ }
+ }
+#else
while (atomic_read(&flush_cpu_count)) {
/* Nothing */
}
+#endif
if (!(flags & IA64_PSR_I)) {
local_irq_disable();
ia64_set_tpr(saved_tpr);
diff -urN linux-davidm/drivers/char/drm/vm.c lia64/drivers/char/drm/vm.c
--- linux-davidm/drivers/char/drm/vm.c Wed Oct 4 16:53:20 2000
+++ lia64/drivers/char/drm/vm.c Mon Oct 30 22:48:30 2000
@@ -272,6 +272,7 @@
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_map_t *map = NULL;
+ unsigned long off;
int i;
DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n",
@@ -288,7 +289,16 @@
bit longer. */
for (i = 0; i < dev->map_count; i++) {
map = dev->maplist[i];
- if (map->offset == VM_OFFSET(vma)) break;
+ off = map->offset ^ VM_OFFSET(vma);
+#ifdef __ia64__
+ /*
+ * Ignore region bits, makes IA32 processes happier
+ * XXX This is a hack...
+ */
+ off &= ~0xe000000000000000;
+#endif // __ia64__
+ if (off == 0)
+ break;
}
if (i >= dev->map_count) return -EINVAL;
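The IA-64 hack above compares map offsets with the top three bits (the region bits, mask 0xe000000000000000) ignored, so an IA-32 process mapping through a different region still matches the right map. A small C sketch of the comparison (function name hypothetical):

```c
#include <stdint.h>

/* Region-bit-blind offset compare: XOR the two offsets and discard
 * bits 61-63 (the IA-64 region bits) before testing for zero. */
static int same_map_offset(uint64_t map_off, uint64_t vma_off)
{
	return ((map_off ^ vma_off) & ~0xe000000000000000ULL) == 0;
}
```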
diff -urN linux-davidm/drivers/char/mem.c lia64/drivers/char/mem.c
--- linux-davidm/drivers/char/mem.c Wed Oct 4 16:53:21 2000
+++ lia64/drivers/char/mem.c Tue Oct 31 00:13:05 2000
@@ -198,8 +198,12 @@
* through a file pointer that was marked O_SYNC will be
* done non-cached.
*/
- if (noncached_address(offset) || (file->f_flags & O_SYNC))
+ if (noncached_address(offset) || (file->f_flags & O_SYNC)
+ || vma->vm_flags & VM_NONCACHED)
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+ if (vma->vm_flags & VM_WRITECOMBINED)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
/*
* Don't dump addresses that are not real memory to a core file.
diff -urN linux-davidm/drivers/net/tulip/tulip_core.c lia64/drivers/net/tulip/tulip_core.c
--- linux-davidm/drivers/net/tulip/tulip_core.c Wed Oct 4 16:53:25 2000
+++ lia64/drivers/net/tulip/tulip_core.c Mon Oct 30 22:49:57 2000
@@ -52,7 +52,7 @@
/* Set the copy breakpoint for the copy-only-tiny-buffer Rx structure. */
#if defined(__alpha__) || defined(__arm__) || defined(__hppa__) \
- || defined(__sparc__)
+ || defined(__sparc__) || defined(__ia64__)
static int rx_copybreak = 1518;
#else
static int rx_copybreak = 100;
@@ -71,7 +71,7 @@
ToDo: Non-Intel setting could be better.
*/
-#if defined(__alpha__)
+#if defined(__alpha__) || defined(__ia64__)
static int csr0 = 0x01A00000 | 0xE000;
#elif defined(__i386__) || defined(__powerpc__) || defined(__hppa__)
static int csr0 = 0x01A00000 | 0x8000;
diff -urN linux-davidm/drivers/scsi/qla1280.c lia64/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Mon Oct 30 23:54:14 2000
+++ lia64/drivers/scsi/qla1280.c Mon Oct 30 22:50:11 2000
@@ -16,9 +16,17 @@
* General Public License for more details.
**
******************************************************************************/
-
+#define QLA1280_VERSION "3.19 Beta"
/****************************************************************************
Revision History:
+ Rev 3.19 Beta October 13, 2000 BN Qlogic
+ - Declare driver_template for new kernel
+ (2.4.0 and greater) scsi initialization scheme.
+ - Update /proc/scsi entry for 2.3.18 kernels and
+ above as qla1280
+ Rev 3.18 Beta October 10, 2000 BN Qlogic
+ - Changed scan order of adapters to map
+ the QLA12160 followed by the QLA1280.
Rev 3.17 Beta September 18, 2000 BN Qlogic
- Removed warnings for 32 bit 2.4.x compiles
- Corrected declared size for request and response
@@ -102,8 +110,6 @@
#include <linux/module.h>
#endif
-#define QLA1280_VERSION "3.17 Beta"
-
#include <stdarg.h>
#include <asm/io.h>
#include <asm/irq.h>
@@ -408,14 +414,14 @@
struct _qlaboards QL1280BoardTbl[NUM_OF_ISP_DEVICES] =
{
/* Name , Board PCI Device ID, Number of ports */
+ {"QLA12160 ", QLA12160_DEVICE_ID, 2,
+ &fw12160i_code01[0], (unsigned long *)&fw12160i_length01,&fw12160i_addr01, &fw12160i_version_str[0] },
{"QLA1080 ", QLA1080_DEVICE_ID, 1,
&fw1280ei_code01[0], (unsigned long *)&fw1280ei_length01,&fw1280ei_addr01, &fw1280ei_version_str[0] },
{"QLA1240 ", QLA1240_DEVICE_ID, 2,
&fw1280ei_code01[0], (unsigned long *)&fw1280ei_length01,&fw1280ei_addr01, &fw1280ei_version_str[0] },
{"QLA1280 ", QLA1280_DEVICE_ID, 2,
&fw1280ei_code01[0], (unsigned long *)&fw1280ei_length01,&fw1280ei_addr01, &fw1280ei_version_str[0] },
- {"QLA12160 ", QLA12160_DEVICE_ID, 2,
- &fw12160i_code01[0], (unsigned long *)&fw12160i_length01,&fw12160i_addr01, &fw12160i_version_str[0] },
{"QLA10160 ", QLA10160_DEVICE_ID, 1,
&fw12160i_code01[0], (unsigned long *)&fw12160i_length01,&fw12160i_addr01, &fw12160i_version_str[0] },
{" ", 0, 0}
@@ -739,7 +745,7 @@
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
template->proc_dir = &proc_scsi_qla1280;
#else
- template->proc_name = "qla1x80";
+ template->proc_name = "qla1280";
#endif
/* Try and find each different type of adapter we support */
for( i=0; bdp->device_id != 0 && i < NUM_OF_ISP_DEVICES; i++, bdp++ ) {
@@ -6342,13 +6348,15 @@
return(ret);
}
-
-/*
- * Declarations for load module
- */
-static Scsi_Host_Template driver_template = QLA1280_LINUX_TEMPLATE;
-
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,0)
+#ifdef MODULE
+Scsi_Host_Template driver_template = QLA1280_LINUX_TEMPLATE;
#include "scsi_module.c"
+#endif
+#else /* new kernel scsi initialization scheme */
+static Scsi_Host_Template driver_template = QLA1280_LINUX_TEMPLATE;
+#include "scsi_module.c"
+#endif
/************************************************************************
* qla1280_check_for_dead_scsi_bus *
diff -urN linux-davidm/fs/binfmt_elf.c lia64/fs/binfmt_elf.c
--- linux-davidm/fs/binfmt_elf.c Mon Oct 30 23:54:14 2000
+++ lia64/fs/binfmt_elf.c Mon Oct 30 22:50:33 2000
@@ -33,6 +33,7 @@
#include <linux/smp_lock.h>
#include <asm/uaccess.h>
+#include <asm/param.h>
#include <asm/pgalloc.h>
#define DLINFO_ITEMS 13
@@ -159,23 +160,27 @@
sp -= 2;
NEW_AUX_ENT(0, AT_PLATFORM, (elf_addr_t)(unsigned long) u_platform);
}
- sp -= 2;
+ sp -= 2*2;
NEW_AUX_ENT(0, AT_HWCAP, hwcap);
+ NEW_AUX_ENT(1, AT_PAGESZ, ELF_EXEC_PAGESIZE);
+#ifdef CLOCKS_PER_SEC
+ sp -= 2;
+ NEW_AUX_ENT(0, AT_CLKTCK, CLOCKS_PER_SEC);
+#endif
if (exec) {
- sp -= 11*2;
+ sp -= 10*2;
NEW_AUX_ENT(0, AT_PHDR, load_addr + exec->e_phoff);
NEW_AUX_ENT(1, AT_PHENT, sizeof (struct elf_phdr));
NEW_AUX_ENT(2, AT_PHNUM, exec->e_phnum);
- NEW_AUX_ENT(3, AT_PAGESZ, ELF_EXEC_PAGESIZE);
- NEW_AUX_ENT(4, AT_BASE, interp_load_addr);
- NEW_AUX_ENT(5, AT_FLAGS, 0);
- NEW_AUX_ENT(6, AT_ENTRY, load_bias + exec->e_entry);
- NEW_AUX_ENT(7, AT_UID, (elf_addr_t) current->uid);
- NEW_AUX_ENT(8, AT_EUID, (elf_addr_t) current->euid);
- NEW_AUX_ENT(9, AT_GID, (elf_addr_t) current->gid);
- NEW_AUX_ENT(10, AT_EGID, (elf_addr_t) current->egid);
+ NEW_AUX_ENT(3, AT_BASE, interp_load_addr);
+ NEW_AUX_ENT(4, AT_FLAGS, 0);
+ NEW_AUX_ENT(5, AT_ENTRY, load_bias + exec->e_entry);
+ NEW_AUX_ENT(6, AT_UID, (elf_addr_t) current->uid);
+ NEW_AUX_ENT(7, AT_EUID, (elf_addr_t) current->euid);
+ NEW_AUX_ENT(8, AT_GID, (elf_addr_t) current->gid);
+ NEW_AUX_ENT(9, AT_EGID, (elf_addr_t) current->egid);
}
#undef NEW_AUX_ENT
diff -urN linux-davidm/fs/open.c lia64/fs/open.c
--- linux-davidm/fs/open.c Wed Oct 4 16:53:39 2000
+++ lia64/fs/open.c Mon Oct 30 22:50:49 2000
@@ -103,7 +103,7 @@
inode = nd.dentry->d_inode;
error = -EACCES;
- if (S_ISDIR(inode->i_mode))
+ if (!S_ISREG(inode->i_mode))
goto dput_and_out;
error = permission(inode,MAY_WRITE);
@@ -164,7 +164,7 @@
dentry = file->f_dentry;
inode = dentry->d_inode;
error = -EACCES;
- if (S_ISDIR(inode->i_mode) || !(file->f_mode & FMODE_WRITE))
+ if (!S_ISREG(inode->i_mode) || !(file->f_mode & FMODE_WRITE))
goto out_putf;
error = -EPERM;
if (IS_IMMUTABLE(inode) || IS_APPEND(inode))
diff -urN linux-davidm/fs/proc/array.c lia64/fs/proc/array.c
--- linux-davidm/fs/proc/array.c Fri Sep 8 14:34:59 2000
+++ lia64/fs/proc/array.c Mon Oct 30 22:51:05 2000
@@ -575,7 +575,7 @@
goto getlen_out;
/* Check whether the mmaps could change if we sleep */
- volatile_task = (task != current || atomic_read(&mm->mm_users) > 1);
+ volatile_task = (task != current || atomic_read(&mm->mm_users) > 2);
/* decode f_pos */
lineno = *ppos >> MAPS_LINE_SHIFT;
diff -urN linux-davidm/include/asm-ia64/delay.h lia64/include/asm-ia64/delay.h
--- linux-davidm/include/asm-ia64/delay.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/delay.h Mon Oct 30 22:51:21 2000
@@ -55,6 +55,10 @@
unsigned long result;
__asm__ __volatile__("mov %0=ar.itc" : "=r"(result) :: "memory");
+#ifdef CONFIG_ITANIUM
+ while (__builtin_expect ((__s32) result == -1, 0))
+ __asm__ __volatile__("mov %0=ar.itc" : "=r"(result) :: "memory");
+#endif
return result;
}
diff -urN linux-davidm/include/asm-ia64/hw_irq.h lia64/include/asm-ia64/hw_irq.h
--- linux-davidm/include/asm-ia64/hw_irq.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/hw_irq.h Tue Oct 31 00:15:17 2000
@@ -31,13 +31,22 @@
#define IA64_SPURIOUS_INT 0x0f
-#define IA64_MIN_VECTORED_IRQ 16
-#define IA64_MAX_VECTORED_IRQ 255
+/*
+ * Vectors 0x10-0x1f are used for low priority interrupts, e.g. CMCI.
+ */
+#define PCE_IRQ 0x1e /* platform corrected error interrupt vector */
+#define CMC_IRQ 0x1f /* correctable machine-check interrupt vector */
+/*
+ * Vectors 0x20-0x2f are reserved for legacy ISA IRQs.
+ */
+#define FIRST_DEVICE_IRQ 0x30
+#define LAST_DEVICE_IRQ 0xe7
-#define PERFMON_IRQ 0x28 /* performanc monitor interrupt vector */
+#define MCA_RENDEZ_IRQ 0xe8 /* MCA rendez interrupt */
+#define PERFMON_IRQ 0xee /* performanc monitor interrupt vector */
#define TIMER_IRQ 0xef /* use highest-prio group 15 interrupt for timer */
+#define MCA_WAKEUP_IRQ 0xf0 /* MCA wakeup interrupt (must be higher than MCA_RENDEZ_IRQ) */
#define IPI_IRQ 0xfe /* inter-processor interrupt vector */
-#define CMC_IRQ 0xff /* correctable machine-check interrupt vector */
/* IA64 inter-cpu interrupt related definitions */
@@ -62,12 +71,13 @@
extern struct hw_interrupt_type irq_type_ia64_sapic; /* CPU-internal interrupt controller */
-extern void ipi_send (int cpu, int vector, int delivery_mode, int redirect);
+extern int ia64_alloc_irq (void); /* allocate a free irq */
+extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
static inline void
hw_resend_irq (struct hw_interrupt_type *h, unsigned int vector)
{
- ipi_send(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
+ ia64_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
}
#endif /* _ASM_IA64_HW_IRQ_H */
diff -urN linux-davidm/include/asm-ia64/ia32.h lia64/include/asm-ia64/ia32.h
--- linux-davidm/include/asm-ia64/ia32.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/ia32.h Mon Oct 30 22:53:22 2000
@@ -5,6 +5,8 @@
#ifdef CONFIG_IA32_SUPPORT
+#include <linux/param.h>
+
/*
* 32 bit structures for IA32 support.
*/
@@ -32,6 +34,8 @@
#define IA32_PAGE_SHIFT 12 /* 4KB pages */
#define IA32_PAGE_SIZE (1ULL << IA32_PAGE_SHIFT)
+#define IA32_CLOCKS_PER_SEC 100 /* Cast in stone for IA32 Linux */
+#define IA32_TICK(tick) ((unsigned long long)(tick) * IA32_CLOCKS_PER_SEC / CLOCKS_PER_SEC)
/* fcntl.h */
struct flock32 {
diff -urN linux-davidm/include/asm-ia64/iosapic.h lia64/include/asm-ia64/iosapic.h
--- linux-davidm/include/asm-ia64/iosapic.h Thu Jun 22 07:09:45 2000
+++ lia64/include/asm-ia64/iosapic.h Mon Oct 30 22:53:32 2000
@@ -3,121 +3,60 @@
#include <linux/config.h>
-#define IO_SAPIC_DEFAULT_ADDR 0xFEC00000
+#define IOSAPIC_DEFAULT_ADDR 0xFEC00000
-#define IO_SAPIC_REG_SELECT 0x0
-#define IO_SAPIC_WINDOW 0x10
-#define IO_SAPIC_EOI 0x40
+#define IOSAPIC_REG_SELECT 0x0
+#define IOSAPIC_WINDOW 0x10
+#define IOSAPIC_EOI 0x40
-#define IO_SAPIC_VERSION 0x1
+#define IOSAPIC_VERSION 0x1
/*
* Redirection table entry
*/
+#define IOSAPIC_RTE_LOW(i) (0x10+i*2)
+#define IOSAPIC_RTE_HIGH(i) (0x11+i*2)
-#define IO_SAPIC_RTE_LOW(i) (0x10+i*2)
-#define IO_SAPIC_RTE_HIGH(i) (0x11+i*2)
-
-
-#define IO_SAPIC_DEST_SHIFT 16
+#define IOSAPIC_DEST_SHIFT 16
/*
* Delivery mode
*/
-
-#define IO_SAPIC_DELIVERY_SHIFT 8
-#define IO_SAPIC_FIXED 0x0
-#define IO_SAPIC_LOWEST_PRIORITY 0x1
-#define IO_SAPIC_PMI 0x2
-#define IO_SAPIC_NMI 0x4
-#define IO_SAPIC_INIT 0x5
-#define IO_SAPIC_EXTINT 0x7
+#define IOSAPIC_DELIVERY_SHIFT 8
+#define IOSAPIC_FIXED 0x0
+#define IOSAPIC_LOWEST_PRIORITY 0x1
+#define IOSAPIC_PMI 0x2
+#define IOSAPIC_NMI 0x4
+#define IOSAPIC_INIT 0x5
+#define IOSAPIC_EXTINT 0x7
/*
* Interrupt polarity
*/
-
-#define IO_SAPIC_POLARITY_SHIFT 13
-#define IO_SAPIC_POL_HIGH 0
-#define IO_SAPIC_POL_LOW 1
+#define IOSAPIC_POLARITY_SHIFT 13
+#define IOSAPIC_POL_HIGH 0
+#define IOSAPIC_POL_LOW 1
/*
* Trigger mode
*/
-
-#define IO_SAPIC_TRIGGER_SHIFT 15
-#define IO_SAPIC_EDGE 0
-#define IO_SAPIC_LEVEL 1
+#define IOSAPIC_TRIGGER_SHIFT 15
+#define IOSAPIC_EDGE 0
+#define IOSAPIC_LEVEL 1
/*
* Mask bit
*/
-
-#define IO_SAPIC_MASK_SHIFT 16
-#define IO_SAPIC_UNMASK 0
-#define IO_SAPIC_MSAK 1
-
-/*
- * Bus types
- */
-#define BUS_ISA 0 /* ISA Bus */
-#define BUS_PCI 1 /* PCI Bus */
-
-#ifndef CONFIG_IA64_PCI_FIRMWARE_IRQ
-struct intr_routing_entry {
- unsigned char srcbus;
- unsigned char srcbusno;
- unsigned char srcbusirq;
- unsigned char iosapic_pin;
- unsigned char dstiosapic;
- unsigned char mode;
- unsigned char trigger;
- unsigned char polarity;
-};
-
-extern struct intr_routing_entry intr_routing[];
-#endif
+#define IOSAPIC_MASK_SHIFT 16
+#define IOSAPIC_UNMASK 0
+#define IOSAPIC_MSAK 1
#ifndef __ASSEMBLY__
-#include <asm/irq.h>
-
-/*
- * IOSAPIC Version Register return 32 bit structure like:
- * {
- * unsigned int version : 8;
- * unsigned int reserved1 : 8;
- * unsigned int pins : 8;
- * unsigned int reserved2 : 8;
- * }
- */
-extern unsigned int iosapic_version(unsigned long);
-extern void iosapic_init(unsigned long, int);
-
-struct iosapic_vector {
- unsigned long iosapic_base; /* IOSAPIC Base address */
- char pin; /* IOSAPIC pin (-1 = No data) */
- unsigned char bus; /* Bus number */
- unsigned char baseirq; /* Base IRQ handled by this IOSAPIC */
- unsigned char bustype; /* Bus type (ISA, PCI, etc) */
- unsigned int busdata; /* Bus specific ID */
- /* These bitfields use the values defined above */
- unsigned char dmode : 3;
- unsigned char polarity : 1;
- unsigned char trigger : 1;
- unsigned char UNUSED : 3;
-};
-extern struct iosapic_vector iosapic_vector[NR_IRQS];
-
-#define iosapic_addr(v) iosapic_vector[v].iosapic_base
-#define iosapic_pin(v) iosapic_vector[v].pin
-#define iosapic_bus(v) iosapic_vector[v].bus
-#define iosapic_baseirq(v) iosapic_vector[v].baseirq
-#define iosapic_bustype(v) iosapic_vector[v].bustype
-#define iosapic_busdata(v) iosapic_vector[v].busdata
-#define iosapic_dmode(v) iosapic_vector[v].dmode
-#define iosapic_trigger(v) iosapic_vector[v].trigger
-#define iosapic_polarity(v) iosapic_vector[v].polarity
+extern void __init iosapic_init (unsigned long address, unsigned int base_irq);
+extern void iosapic_register_legacy_irq (unsigned long irq, unsigned long pin,
+ unsigned long polarity, unsigned long trigger);
+extern void iosapic_pci_fixup (void);
# endif /* !__ASSEMBLY__ */
#endif /* __ASM_IA64_IOSAPIC_H */
diff -urN linux-davidm/include/asm-ia64/machvec.h lia64/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-ia64/machvec.h Mon Oct 30 23:37:09 2000
@@ -31,7 +31,6 @@
typedef void ia64_mv_mca_handler_t (void);
typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
typedef void ia64_mv_log_print_t (void);
-typedef void ia64_mv_register_iosapic_t (struct acpi_entry_iosapic *);
extern void machvec_noop (void);
@@ -39,7 +38,7 @@
# include <asm/machvec_hpsim.h>
# elif defined (CONFIG_IA64_DIG)
# include <asm/machvec_dig.h>
-# elif defined (CONFIG_IA64_SGI_SN1_SIM)
+# elif defined (CONFIG_IA64_SGI_SN1)
# include <asm/machvec_sn1.h>
# elif defined (CONFIG_IA64_GENERIC)
@@ -55,7 +54,6 @@
# define platform_cmci_handler ia64_mv.cmci_handler
# define platform_log_print ia64_mv.log_print
# define platform_pci_fixup ia64_mv.pci_fixup
-# define platform_register_iosapic ia64_mv.register_iosapic
# endif
struct ia64_machine_vector {
@@ -68,7 +66,6 @@
ia64_mv_mca_handler_t *mca_handler;
ia64_mv_cmci_handler_t *cmci_handler;
ia64_mv_log_print_t *log_print;
- ia64_mv_register_iosapic_t *register_iosapic;
};
#define MACHVEC_INIT(name) \
@@ -81,8 +78,7 @@
platform_mca_init, \
platform_mca_handler, \
platform_cmci_handler, \
- platform_log_print, \
- platform_register_iosapic \
+ platform_log_print \
}
extern struct ia64_machine_vector ia64_mv;
@@ -116,9 +112,6 @@
#endif
#ifndef platform_pci_fixup
# define platform_pci_fixup ((ia64_mv_pci_fixup_t *) machvec_noop)
-#endif
-#ifndef platform_register_iosapic
-# define platform_register_iosapic ((ia64_mv_register_iosapic_t *) machvec_noop)
#endif
#endif /* _ASM_IA64_MACHVEC_H */
diff -urN linux-davidm/include/asm-ia64/machvec_dig.h lia64/include/asm-ia64/machvec_dig.h
--- linux-davidm/include/asm-ia64/machvec_dig.h Thu Aug 24 08:17:47 2000
+++ lia64/include/asm-ia64/machvec_dig.h Mon Oct 30 22:54:21 2000
@@ -5,7 +5,6 @@
extern ia64_mv_irq_init_t dig_irq_init;
extern ia64_mv_pci_fixup_t dig_pci_fixup;
extern ia64_mv_map_nr_t map_nr_dense;
-extern ia64_mv_register_iosapic_t dig_register_iosapic;
/*
* This stuff has dual use!
@@ -17,8 +16,7 @@
#define platform_name "dig"
#define platform_setup dig_setup
#define platform_irq_init dig_irq_init
-#define platform_pci_fixup dig_pci_fixup
+#define platform_pci_fixup iosapic_pci_fixup
#define platform_map_nr map_nr_dense
-#define platform_register_iosapic dig_register_iosapic
#endif /* _ASM_IA64_MACHVEC_DIG_h */
diff -urN linux-davidm/include/asm-ia64/mca.h lia64/include/asm-ia64/mca.h
--- linux-davidm/include/asm-ia64/mca.h Fri Apr 21 15:21:24 2000
+++ lia64/include/asm-ia64/mca.h Mon Oct 30 22:54:41 2000
@@ -18,6 +18,7 @@
#include <asm/param.h>
#include <asm/sal.h>
#include <asm/processor.h>
+#include <asm/hw_irq.h>
/* These are the return codes from all the IA64_MCA specific interfaces */
typedef int ia64_mca_return_code_t;
@@ -30,9 +31,9 @@
#define IA64_MCA_RENDEZ_TIMEOUT (100 * HZ) /* 1000 milliseconds */
/* Interrupt vectors reserved for MC handling. */
-#define IA64_MCA_RENDEZ_INT_VECTOR 0xF3 /* Rendez interrupt */
-#define IA64_MCA_WAKEUP_INT_VECTOR 0x12 /* Wakeup interrupt */
-#define IA64_MCA_CMC_INT_VECTOR 0xF2 /* Correctable machine check interrupt */
+#define IA64_MCA_RENDEZ_INT_VECTOR MCA_RENDEZ_IRQ /* Rendez interrupt */
+#define IA64_MCA_WAKEUP_INT_VECTOR MCA_WAKEUP_IRQ /* Wakeup interrupt */
+#define IA64_MCA_CMC_INT_VECTOR CMC_IRQ /* Correctable machine check interrupt */
#define IA64_CMC_INT_DISABLE 0
#define IA64_CMC_INT_ENABLE 1
diff -urN linux-davidm/include/asm-ia64/mman.h lia64/include/asm-ia64/mman.h
--- linux-davidm/include/asm-ia64/mman.h Fri Apr 21 15:21:24 2000
+++ lia64/include/asm-ia64/mman.h Mon Oct 30 22:54:48 2000
@@ -23,6 +23,8 @@
#define MAP_EXECUTABLE 0x1000 /* mark it as an executable */
#define MAP_LOCKED 0x2000 /* pages are locked */
#define MAP_NORESERVE 0x4000 /* don't check for reservations */
+#define MAP_WRITECOMBINED 0x10000 /* write-combine the area */
+#define MAP_NONCACHED 0x20000 /* don't cache the memory */
#define MS_ASYNC 1 /* sync memory asynchronously */
#define MS_INVALIDATE 2 /* invalidate the caches */
diff -urN linux-davidm/include/asm-ia64/param.h lia64/include/asm-ia64/param.h
--- linux-davidm/include/asm-ia64/param.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/param.h Mon Oct 30 22:55:07 2000
@@ -32,4 +32,8 @@
#define MAXHOSTNAMELEN 64 /* max length of hostname */
+#ifdef __KERNEL__
+# define CLOCKS_PER_SEC HZ /* frequency at which times() counts */
+#endif
+
#endif /* _ASM_IA64_PARAM_H */
diff -urN linux-davidm/include/asm-ia64/processor.h lia64/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/processor.h Mon Oct 30 22:56:56 2000
@@ -4,7 +4,7 @@
/*
* Copyright (C) 1998-2000 Hewlett-Packard Co
* Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1998-2000 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*
@@ -19,7 +19,13 @@
#include <asm/types.h>
#define IA64_NUM_DBG_REGS 8
-#define IA64_NUM_PM_REGS 4
+/*
+ * Limits for PMC and PMD are set to less than maximum architected values
+ * but should be sufficient for a while
+ */
+#define IA64_NUM_PMC_REGS 32
+#define IA64_NUM_PMD_REGS 32
+#define IA64_NUM_PMD_COUNTERS 4
/*
* TASK_SIZE really is a mis-named. It really is the maximum user
@@ -288,10 +294,15 @@
__u64 dbr[IA64_NUM_DBG_REGS];
__u64 ibr[IA64_NUM_DBG_REGS];
#ifdef CONFIG_PERFMON
- __u64 pmc[IA64_NUM_PM_REGS];
- __u64 pmd[IA64_NUM_PM_REGS];
- __u64 pmod[IA64_NUM_PM_REGS];
-# define INIT_THREAD_PM {0, }, {0, }, {0, },
+ __u64 pmc[IA64_NUM_PMC_REGS];
+ __u64 pmd[IA64_NUM_PMD_REGS];
+ struct {
+ __u64 val; /* virtual 64bit counter */
+ __u64 rval; /* reset value on overflow */
+ int sig; /* signal used to notify */
+ int pid; /* process to notify */
+ } pmu_counters[IA64_NUM_PMD_COUNTERS];
+# define INIT_THREAD_PM {0, }, {0, }, {{ 0, 0, 0, 0}, },
#else
# define INIT_THREAD_PM
#endif
@@ -423,8 +434,8 @@
#endif
#ifdef CONFIG_PERFMON
-extern void ia64_save_pm_regs (struct thread_struct *thread);
-extern void ia64_load_pm_regs (struct thread_struct *thread);
+extern void ia64_save_pm_regs (struct task_struct *task);
+extern void ia64_load_pm_regs (struct task_struct *task);
#endif
#define ia64_fph_enable() __asm__ __volatile__ (";; rsm psr.dfh;; srlz.d;;" ::: "memory");
diff -urN linux-davidm/include/asm-ia64/spinlock.h lia64/include/asm-ia64/spinlock.h
--- linux-davidm/include/asm-ia64/spinlock.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/spinlock.h Mon Oct 30 22:57:29 2000
@@ -18,8 +18,9 @@
#undef NEW_LOCK
#ifdef NEW_LOCK
+
typedef struct {
- volatile unsigned char lock;
+ volatile unsigned int lock;
} spinlock_t;
#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 }
@@ -38,7 +39,7 @@
"mov r30=1\n" \
"mov ar.ccv=r0\n" \
";;\n" \
- IA64_SEMFIX"cmpxchg1.acq r30=[%0],r30,ar.ccv\n" \
+ IA64_SEMFIX"cmpxchg4.acq r30=[%0],r30,ar.ccv\n" \
";;\n" \
"cmp.ne p15,p0=r30,r0\n" \
"(p15) br.call.spnt.few b7=ia64_spinlock_contention\n" \
@@ -48,18 +49,16 @@
: "ar.ccv", "ar.pfs", "b7", "p15", "r28", "r29", "r30", "memory"); \
}
-#define spin_trylock(x) \
-({ \
- register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
- register long result; \
- \
- __asm__ __volatile__ ( \
- "mov r30=1\n" \
- "mov ar.ccv=r0\n" \
- ";;\n" \
- IA64_SEMFIX"cmpxchg1.acq %0=[%1],r30,ar.ccv\n" \
- : "=r"(result) : "r"(addr) : "ar.ccv", "r30", "memory"); \
- (result == 0); \
+#define spin_trylock(x) \
+({ \
+ register long result; \
+ \
+ __asm__ __volatile__ ( \
+ "mov ar.ccv=r0\n" \
+ ";;\n" \
+ IA64_SEMFIX"cmpxchg4.acq %0=[%2],%1,ar.ccv\n" \
+ : "=r"(result) : "r"(1), "r"(&(x)->lock) : "ar.ccv", "memory"); \
+ (result == 0); \
})
#define spin_is_locked(x) ((x)->lock != 0)
diff -urN linux-davidm/include/asm-ia64/system.h lia64/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/system.h Mon Oct 30 22:57:47 2000
@@ -27,7 +27,8 @@
#define GATE_ADDR (0xa000000000000000 + PAGE_SIZE)
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
/* Workaround for Errata 97. */
# define IA64_SEMFIX_INSN mf;
# define IA64_SEMFIX "mf;"
diff -urN linux-davidm/include/asm-ia64/uaccess.h lia64/include/asm-ia64/uaccess.h
--- linux-davidm/include/asm-ia64/uaccess.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/uaccess.h Mon Oct 30 22:58:00 2000
@@ -125,46 +125,28 @@
struct __large_struct { unsigned long buf[100]; };
#define __m(x) (*(struct __large_struct *)(x))
-#define __get_user_64(addr) \
+/* We need to declare the __ex_table section before we can use it in .xdata. */
+__asm__ (".section \"__ex_table\", \"a\"\n\t.previous");
+
+#define __get_user_64(addr) \
__asm__ ("\n1:\tld8 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 (2b-1b)|1\n" \
- "\t.previous" \
- : "=r"(__gu_val), "=r"(__gu_err) \
- : "m"(__m(addr)), "1"(__gu_err));
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n" \
+ : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
-#define __get_user_32(addr) \
+#define __get_user_32(addr) \
__asm__ ("\n1:\tld4 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 (2b-1b)|1\n" \
- "\t.previous" \
- : "=r"(__gu_val), "=r"(__gu_err) \
- : "m"(__m(addr)), "1"(__gu_err));
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n" \
+ : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
-#define __get_user_16(addr) \
+#define __get_user_16(addr) \
__asm__ ("\n1:\tld2 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 (2b-1b)|1\n" \
- "\t.previous" \
- : "=r"(__gu_val), "=r"(__gu_err) \
- : "m"(__m(addr)), "1"(__gu_err));
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n" \
+ : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
-#define __get_user_8(addr) \
+#define __get_user_8(addr) \
__asm__ ("\n1:\tld1 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 (2b-1b)|1\n" \
- "\t.previous" \
- : "=r"(__gu_val), "=r"(__gu_err) \
- : "m"(__m(addr)), "1"(__gu_err));
-
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n" \
+ : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
extern void __put_user_unknown (void);
@@ -206,46 +188,26 @@
#define __put_user_64(x,addr) \
__asm__ __volatile__ ( \
"\n1:\tst8 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 2b-1b\n" \
- "\t.previous" \
- : "=r"(__pu_err) \
- : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n" \
+ : "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_32(x,addr) \
__asm__ __volatile__ ( \
"\n1:\tst4 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 2b-1b\n" \
- "\t.previous" \
- : "=r"(__pu_err) \
- : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n" \
+ : "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_16(x,addr) \
__asm__ __volatile__ ( \
"\n1:\tst2 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 2b-1b\n" \
- "\t.previous" \
- : "=r"(__pu_err) \
- : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n" \
+ : "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_8(x,addr) \
__asm__ __volatile__ ( \
"\n1:\tst1 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "2:\n" \
- "\t.section __ex_table,\"a\"\n" \
- "\t\tdata4 @gprel(1b)\n" \
- "\t\tdata4 2b-1b\n" \
- "\t.previous" \
- : "=r"(__pu_err) \
- : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+ "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n" \
+ : "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
/*
* Complex access routines
diff -urN linux-davidm/include/asm-ia64/unistd.h lia64/include/asm-ia64/unistd.h
--- linux-davidm/include/asm-ia64/unistd.h Mon Oct 30 23:54:14 2000
+++ lia64/include/asm-ia64/unistd.h Tue Oct 31 00:18:03 2000
@@ -160,7 +160,7 @@
#define __NR_nanosleep 1168
#define __NR_nfsservctl 1169
#define __NR_prctl 1170
-#define __NR_getpagesize 1171
+/* 1171 is reserved for backwards compatibility with old __NR_getpagesize */
#define __NR_mmap2 1172
#define __NR_pciconfig_read 1173
#define __NR_pciconfig_write 1174
diff -urN linux-davidm/include/linux/elf.h lia64/include/linux/elf.h
--- linux-davidm/include/linux/elf.h Wed Oct 4 16:53:43 2000
+++ lia64/include/linux/elf.h Mon Oct 30 22:58:27 2000
@@ -165,6 +165,7 @@
#define AT_EGID 14 /* effective gid */
#define AT_PLATFORM 15 /* string identifying CPU for optimizations */
#define AT_HWCAP 16 /* arch dependent hints at CPU capabilities */
+#define AT_CLKTCK 17 /* frequency at which times() increments */
typedef struct dynamic{
Elf32_Sword d_tag;
diff -urN linux-davidm/include/linux/mm.h lia64/include/linux/mm.h
--- linux-davidm/include/linux/mm.h Wed Oct 4 16:53:43 2000
+++ lia64/include/linux/mm.h Mon Oct 30 22:58:40 2000
@@ -95,6 +95,8 @@
#define VM_DONTCOPY 0x00020000 /* Do not copy this vma on fork */
#define VM_DONTEXPAND 0x00040000 /* Cannot expand with mremap() */
+#define VM_WRITECOMBINED 0x00100000 /* Write-combined */
+#define VM_NONCACHED 0x00200000 /* Noncached access */
#define VM_STACK_FLAGS 0x00000177
diff -urN linux-davidm/mm/mmap.c lia64/mm/mmap.c
--- linux-davidm/mm/mmap.c Fri Sep 8 14:35:08 2000
+++ lia64/mm/mmap.c Mon Oct 30 23:02:32 2000
@@ -151,6 +151,12 @@
_trans(prot, PROT_WRITE, VM_WRITE) |
_trans(prot, PROT_EXEC, VM_EXEC);
flag_bits =
+#ifdef MAP_WRITECOMBINED
+ _trans(flags, MAP_WRITECOMBINED, VM_WRITECOMBINED) |
+#endif
+#ifdef MAP_NONCACHED
+ _trans(flags, MAP_NONCACHED, VM_NONCACHED) |
+#endif
_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN) |
_trans(flags, MAP_DENYWRITE, VM_DENYWRITE) |
_trans(flags, MAP_EXECUTABLE, VM_EXECUTABLE);
* [Linux-ia64] kernel update (relative to 2.4.0-test10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (23 preceding siblings ...)
2000-10-31 8:55 ` [Linux-ia64] kernel update (relative to 2.4.0-test9) David Mosberger
@ 2000-11-02 8:50 ` David Mosberger
2000-11-02 10:39 ` Pimenov, Sergei
` (190 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-11-02 8:50 UTC (permalink / raw)
To: linux-ia64
The patch at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/linux-2.4.0-test10-ia64-001101.diff.gz
contains the latest IA-64 kernel diff relative to Linus' 2.4.0-test10. There are
only a few new things:
- bring IA-64 support in sync with 2.4.0-test10
- added EFI partition support by Matt (my apologies to Matt for
missing his patch the last time round...)
- small fix to the PCI DMA support by Asit
- drop evil CONFIG_SKB_BELOW_4GB
- make simserial.c and simeth.c use ia64_alloc_irq() to obtain an
available interrupt vector
That should be it. This kernel is known to build and work on Big Sur,
Lion, and the HP Ski simulator.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.0-test10-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/Documentation/Configure.help Wed Nov 1 23:05:34 2000
@@ -11365,6 +11365,12 @@
Say Y here if you would like to be able to read the hard disk
partition table format used by SGI machines.
+Intel EFI GUID partition support
+CONFIG_EFI_PARTITION
+ Say Y here if you would like to use hard disks under Linux which
+ were partitioned using EFI GPT. Presently only useful on the
+ IA-64 platform.
+
ADFS file system support (EXPERIMENTAL)
CONFIG_ADFS_FS
The Acorn Disc Filing System is the standard file system of the
diff -urN linux-davidm/Makefile linux-2.4.0-test10-lia/Makefile
--- linux-davidm/Makefile Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/Makefile Thu Nov 2 00:23:09 2000
@@ -206,7 +206,7 @@
$(LIBS) \
--end-group \
-o vmlinux
- $(NM) vmlinux | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aU] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
+ $(NM) vmlinux | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
symlinks:
rm -f include/asm
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-test10-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/config.in Thu Nov 2 00:24:09 2000
@@ -45,14 +45,14 @@
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
+ bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIFIC
+ bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIFIC
fi
bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
- bool ' Force socket buffers below 4GB?' CONFIG_SKB_BELOW_4GB
-
bool ' ACPI kernel configuration manager (EXPERIMENTAL)' CONFIG_ACPI_KERNEL_CONFIG
if [ "$CONFIG_ACPI_KERNEL_CONFIG" = "y" ]; then
define_bool CONFIG_PM y
diff -urN linux-davidm/arch/ia64/dig/dig_irq.c linux-2.4.0-test10-lia/arch/ia64/dig/dig_irq.c
--- linux-davidm/arch/ia64/dig/dig_irq.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/dig/dig_irq.c Wed Dec 31 16:00:00 1969
@@ -1,10 +0,0 @@
-void
-dig_irq_init (void)
-{
- /*
- * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
- * enabled.
- */
- outb(0xff, 0xA1);
- outb(0xff, 0x21);
-}
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test10-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/entry.S Wed Nov 1 23:12:22 2000
@@ -503,7 +503,7 @@
;;
ld4 r2=[r2]
;;
- shl r2=r2,SMP_LOG_CACHE_BYTES // can't use shladd here...
+ shl r2=r2,SMP_CACHE_SHIFT // can't use shladd here...
;;
add r3=r2,r3
#else
diff -urN linux-davidm/arch/ia64/kernel/pci-dma.c linux-2.4.0-test10-lia/arch/ia64/kernel/pci-dma.c
--- linux-davidm/arch/ia64/kernel/pci-dma.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/pci-dma.c Wed Nov 1 23:13:06 2000
@@ -125,12 +125,16 @@
BUG();
/*
- * Find suitable number of IO TLB entries size that will fit this request and allocate a buffer
- * from that IO TLB pool.
+ * Find suitable number of IO TLB entries size that will fit this request and
+ * allocate a buffer from that IO TLB pool.
*/
spin_lock_irqsave(&io_tlb_lock, flags);
{
wrap = index = ALIGN(io_tlb_index, stride);
+
+ if (index >= io_tlb_nslabs)
+ index = 0;
+
do {
/*
* If we find a slot that indicates we have 'nslots' number of
diff -urN linux-davidm/arch/ia64/kernel/pci.c linux-2.4.0-test10-lia/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/pci.c Wed Nov 1 23:55:29 2000
@@ -123,10 +123,9 @@
* Initialization. Uses the SAL interface
*/
void __init
-pcibios_init(void)
+pcibios_init (void)
{
# define PCI_BUSES_TO_SCAN 255
- struct pci_ops *ops = NULL;
int i;
printk("PCI: Probing PCI hardware\n");
@@ -141,14 +140,14 @@
* are examined.
*/
void __init
-pcibios_fixup_bus(struct pci_bus *b)
+pcibios_fixup_bus (struct pci_bus *b)
{
return;
}
void __init
-pcibios_update_resource(struct pci_dev *dev, struct resource *root,
- struct resource *res, int resource)
+pcibios_update_resource (struct pci_dev *dev, struct resource *root,
+ struct resource *res, int resource)
{
unsigned long where, size;
u32 reg;
@@ -163,7 +162,7 @@
}
void __init
-pcibios_update_irq(struct pci_dev *dev, int irq)
+pcibios_update_irq (struct pci_dev *dev, int irq)
{
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
@@ -190,9 +189,9 @@
* PCI BIOS setup, always defaults to SAL interface
*/
char * __init
-pcibios_setup(char *str)
+pcibios_setup (char *str)
{
- pci_probe = PCI_NO_CHECKS;
+ pci_probe = PCI_NO_CHECKS;
return NULL;
}
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.0-test10-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/ptrace.c Wed Nov 1 23:13:25 2000
@@ -617,7 +617,6 @@
struct switch_stack *sw;
struct unw_frame_info info;
struct pt_regs *pt;
- unsigned long pmd_tmp;
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
diff -urN linux-davidm/arch/ia64/lib/memcpy.S linux-2.4.0-test10-lia/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/lib/memcpy.S Wed Nov 1 23:14:24 2000
@@ -29,7 +29,14 @@
mov in0=in1
;;
mov in1=r8
- ;;
+ // gas doesn't handle control flow across procedures, so it doesn't
+ // realize that a stop bit is needed before the "alloc" instruction
+ // below
+{
+ nop.m 0
+ nop.f 0
+ nop.i 0
+} ;;
END(bcopy)
// FALL THROUGH
GLOBAL_ENTRY(memcpy)
diff -urN linux-davidm/drivers/char/simserial.c linux-2.4.0-test10-lia/drivers/char/simserial.c
--- linux-davidm/drivers/char/simserial.c Thu Nov 2 00:16:40 2000
+++ linux-2.4.0-test10-lia/drivers/char/simserial.c Wed Nov 1 23:18:55 2000
@@ -36,7 +36,6 @@
#undef SIMSERIAL_DEBUG /* define this to get some debug information */
#define KEYBOARD_INTR 3 /* must match with simulator! */
-#define SIMSERIAL_IRQ 0xee
#define NR_PORTS 1 /* only one port for now */
#define SERIAL_INLINE 1
@@ -78,7 +77,7 @@
*/
static struct serial_state rs_table[NR_PORTS]={
/* UART CLK PORT IRQ FLAGS */
- { 0, BASE_BAUD, 0x3F8, SIMSERIAL_IRQ, STD_COM_FLAGS,0,PORT_16550 } /* ttyS0 */
+ { 0, BASE_BAUD, 0x3F8, 0, STD_COM_FLAGS,0,PORT_16550 } /* ttyS0 */
};
/*
@@ -1013,10 +1012,7 @@
struct serial_state *state;
show_serial_version();
-
- /* connect the platform's keyboard interrupt to SIMSERIAL_IRQ */
- ia64_ssc_connect_irq(KEYBOARD_INTR, SIMSERIAL_IRQ);
-
+
/* Initialize the tty_driver structure */
memset(&serial_driver, 0, sizeof(struct tty_driver));
@@ -1063,6 +1059,11 @@
for (i = 0, state = rs_table; i < NR_PORTS; i++,state++) {
if (state->type == PORT_UNKNOWN) continue;
+
+ if (!state->irq) {
+ state->irq = ia64_alloc_irq();
+ ia64_ssc_connect_irq(KEYBOARD_INTR, state->irq);
+ }
printk(KERN_INFO "ttyS%02d at 0x%04lx (irq = %d) is a %s\n",
state->line,
diff -urN linux-davidm/drivers/net/simeth.c linux-2.4.0-test10-lia/drivers/net/simeth.c
--- linux-davidm/drivers/net/simeth.c Thu Nov 2 00:16:40 2000
+++ linux-2.4.0-test10-lia/drivers/net/simeth.c Wed Nov 1 23:19:29 2000
@@ -27,7 +27,6 @@
#include <asm/irq.h>
-#define SIMETH_IRQ 0xed
#define SIMETH_RECV_MAX 10
/*
@@ -213,11 +212,8 @@
return -ENOMEM;
memcpy(dev->dev_addr, mac_addr, sizeof(mac_addr));
- /*
- * XXX Fix me
- * does not support more than one card !
- */
- dev->irq = SIMETH_IRQ;
+
+ dev->irq = ia64_alloc_irq();
/*
* attach the interrupt in the simulator, this does enable interrupts
diff -urN linux-davidm/fs/partitions/Config.in linux-2.4.0-test10-lia/fs/partitions/Config.in
--- linux-davidm/fs/partitions/Config.in Sun Jul 9 22:21:41 2000
+++ linux-2.4.0-test10-lia/fs/partitions/Config.in Wed Nov 1 23:20:37 2000
@@ -23,6 +23,7 @@
bool ' BSD disklabel (FreeBSD partition tables) support' CONFIG_BSD_DISKLABEL
bool ' Solaris (x86) partition table support' CONFIG_SOLARIS_X86_PARTITION
bool ' Unixware slices support' CONFIG_UNIXWARE_DISKLABEL
+ bool ' EFI GUID Partition support' CONFIG_EFI_PARTITION
fi
bool ' SGI partition support' CONFIG_SGI_PARTITION
bool ' Ultrix partition table support' CONFIG_ULTRIX_PARTITION
diff -urN linux-davidm/fs/partitions/Makefile linux-2.4.0-test10-lia/fs/partitions/Makefile
--- linux-davidm/fs/partitions/Makefile Tue Jul 18 22:49:47 2000
+++ linux-2.4.0-test10-lia/fs/partitions/Makefile Wed Nov 1 23:20:52 2000
@@ -20,6 +20,7 @@
obj-$(CONFIG_SUN_PARTITION) += sun.o
obj-$(CONFIG_ULTRIX_PARTITION) += ultrix.o
obj-$(CONFIG_IBM_PARTITION) += ibm.o
+obj-$(CONFIG_EFI_PARTITION) += efi.o
O_OBJS += $(obj-y)
M_OBJS += $(obj-m)
diff -urN linux-davidm/fs/partitions/check.c linux-2.4.0-test10-lia/fs/partitions/check.c
--- linux-davidm/fs/partitions/check.c Mon Oct 16 12:58:51 2000
+++ linux-2.4.0-test10-lia/fs/partitions/check.c Wed Nov 1 23:21:07 2000
@@ -32,6 +32,7 @@
#include "sun.h"
#include "ibm.h"
#include "ultrix.h"
+#include "efi.h"
extern void device_init(void);
extern int *blk_size[];
@@ -71,6 +72,9 @@
#endif
#ifdef CONFIG_IBM_PARTITION
ibm_partition,
+#endif
+#ifdef CONFIG_EFI_PARTITION
+ efi_partition,
#endif
NULL
};
diff -urN linux-davidm/fs/partitions/efi.c linux-2.4.0-test10-lia/fs/partitions/efi.c
--- linux-davidm/fs/partitions/efi.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test10-lia/fs/partitions/efi.c Wed Nov 1 23:21:21 2000
@@ -0,0 +1,646 @@
+/************************************************************
+ * EFI GUID Partition Table handling
+ * Per Intel EFI Specification v0.99
+ * http://developer.intel.com/technology/efi/efi.htm
+ * efi.[ch] by Matt Domsch <Matt_Domsch@dell.com>
+ * Copyright 2000 Dell Computer Corporation
+ * CRC routines taken from the EFI Sample Implementation,
+ * 1999.12.31, lib/crc.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * TODO:
+ *
+ * Changelog:
+ * Wed Oct 25 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Fixed the LastLBA() call to return the proper last block
+ *
+ * Thu Oct 12 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Thanks to Andries Brouwer for his debugging assistance.
+ * - Code works, detects all the partitions.
+ *
+ ************************************************************/
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/blk.h>
+#include <linux/malloc.h>
+#include <linux/smp_lock.h>
+#include <asm/system.h>
+#include <asm/efi.h>
+
+#include "check.h"
+#include "efi.h"
+
+#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+extern void md_autodetect_dev(kdev_t dev);
+#endif
+
+
+#undef EFI_DEBUG
+#ifdef EFI_DEBUG
+#define efi_printk_level KERN_DEBUG	/* must be a macro so it concatenates with format strings */
+#define debug_printk printk
+#else
+#define debug_printk(...)
+#endif
+
+/* CRC routines taken from the EFI Sample Implementation,
+ * 1999.12.31, lib/crc.c
+ *
+ * Note, the EFI Specification, v0.99, has a reference to
+ * Dr. Dobb's Journal, May 1994 (actually it's in May 1992)
+ * but that isn't the CRC function being used by EFI.
+ */
+
+static u32 CRCTable[256] = {
+ 0x00000000, 0x77073096, 0xEE0E612C, 0x990951BA, 0x076DC419, 0x706AF48F,
+ 0xE963A535, 0x9E6495A3, 0x0EDB8832, 0x79DCB8A4, 0xE0D5E91E, 0x97D2D988,
+ 0x09B64C2B, 0x7EB17CBD, 0xE7B82D07, 0x90BF1D91, 0x1DB71064, 0x6AB020F2,
+ 0xF3B97148, 0x84BE41DE, 0x1ADAD47D, 0x6DDDE4EB, 0xF4D4B551, 0x83D385C7,
+ 0x136C9856, 0x646BA8C0, 0xFD62F97A, 0x8A65C9EC, 0x14015C4F, 0x63066CD9,
+ 0xFA0F3D63, 0x8D080DF5, 0x3B6E20C8, 0x4C69105E, 0xD56041E4, 0xA2677172,
+ 0x3C03E4D1, 0x4B04D447, 0xD20D85FD, 0xA50AB56B, 0x35B5A8FA, 0x42B2986C,
+ 0xDBBBC9D6, 0xACBCF940, 0x32D86CE3, 0x45DF5C75, 0xDCD60DCF, 0xABD13D59,
+ 0x26D930AC, 0x51DE003A, 0xC8D75180, 0xBFD06116, 0x21B4F4B5, 0x56B3C423,
+ 0xCFBA9599, 0xB8BDA50F, 0x2802B89E, 0x5F058808, 0xC60CD9B2, 0xB10BE924,
+ 0x2F6F7C87, 0x58684C11, 0xC1611DAB, 0xB6662D3D, 0x76DC4190, 0x01DB7106,
+ 0x98D220BC, 0xEFD5102A, 0x71B18589, 0x06B6B51F, 0x9FBFE4A5, 0xE8B8D433,
+ 0x7807C9A2, 0x0F00F934, 0x9609A88E, 0xE10E9818, 0x7F6A0DBB, 0x086D3D2D,
+ 0x91646C97, 0xE6635C01, 0x6B6B51F4, 0x1C6C6162, 0x856530D8, 0xF262004E,
+ 0x6C0695ED, 0x1B01A57B, 0x8208F4C1, 0xF50FC457, 0x65B0D9C6, 0x12B7E950,
+ 0x8BBEB8EA, 0xFCB9887C, 0x62DD1DDF, 0x15DA2D49, 0x8CD37CF3, 0xFBD44C65,
+ 0x4DB26158, 0x3AB551CE, 0xA3BC0074, 0xD4BB30E2, 0x4ADFA541, 0x3DD895D7,
+ 0xA4D1C46D, 0xD3D6F4FB, 0x4369E96A, 0x346ED9FC, 0xAD678846, 0xDA60B8D0,
+ 0x44042D73, 0x33031DE5, 0xAA0A4C5F, 0xDD0D7CC9, 0x5005713C, 0x270241AA,
+ 0xBE0B1010, 0xC90C2086, 0x5768B525, 0x206F85B3, 0xB966D409, 0xCE61E49F,
+ 0x5EDEF90E, 0x29D9C998, 0xB0D09822, 0xC7D7A8B4, 0x59B33D17, 0x2EB40D81,
+ 0xB7BD5C3B, 0xC0BA6CAD, 0xEDB88320, 0x9ABFB3B6, 0x03B6E20C, 0x74B1D29A,
+ 0xEAD54739, 0x9DD277AF, 0x04DB2615, 0x73DC1683, 0xE3630B12, 0x94643B84,
+ 0x0D6D6A3E, 0x7A6A5AA8, 0xE40ECF0B, 0x9309FF9D, 0x0A00AE27, 0x7D079EB1,
+ 0xF00F9344, 0x8708A3D2, 0x1E01F268, 0x6906C2FE, 0xF762575D, 0x806567CB,
+ 0x196C3671, 0x6E6B06E7, 0xFED41B76, 0x89D32BE0, 0x10DA7A5A, 0x67DD4ACC,
+ 0xF9B9DF6F, 0x8EBEEFF9, 0x17B7BE43, 0x60B08ED5, 0xD6D6A3E8, 0xA1D1937E,
+ 0x38D8C2C4, 0x4FDFF252, 0xD1BB67F1, 0xA6BC5767, 0x3FB506DD, 0x48B2364B,
+ 0xD80D2BDA, 0xAF0A1B4C, 0x36034AF6, 0x41047A60, 0xDF60EFC3, 0xA867DF55,
+ 0x316E8EEF, 0x4669BE79, 0xCB61B38C, 0xBC66831A, 0x256FD2A0, 0x5268E236,
+ 0xCC0C7795, 0xBB0B4703, 0x220216B9, 0x5505262F, 0xC5BA3BBE, 0xB2BD0B28,
+ 0x2BB45A92, 0x5CB36A04, 0xC2D7FFA7, 0xB5D0CF31, 0x2CD99E8B, 0x5BDEAE1D,
+ 0x9B64C2B0, 0xEC63F226, 0x756AA39C, 0x026D930A, 0x9C0906A9, 0xEB0E363F,
+ 0x72076785, 0x05005713, 0x95BF4A82, 0xE2B87A14, 0x7BB12BAE, 0x0CB61B38,
+ 0x92D28E9B, 0xE5D5BE0D, 0x7CDCEFB7, 0x0BDBDF21, 0x86D3D2D4, 0xF1D4E242,
+ 0x68DDB3F8, 0x1FDA836E, 0x81BE16CD, 0xF6B9265B, 0x6FB077E1, 0x18B74777,
+ 0x88085AE6, 0xFF0F6A70, 0x66063BCA, 0x11010B5C, 0x8F659EFF, 0xF862AE69,
+ 0x616BFFD3, 0x166CCF45, 0xA00AE278, 0xD70DD2EE, 0x4E048354, 0x3903B3C2,
+ 0xA7672661, 0xD06016F7, 0x4969474D, 0x3E6E77DB, 0xAED16A4A, 0xD9D65ADC,
+ 0x40DF0B66, 0x37D83BF0, 0xA9BCAE53, 0xDEBB9EC5, 0x47B2CF7F, 0x30B5FFE9,
+ 0xBDBDF21C, 0xCABAC28A, 0x53B39330, 0x24B4A3A6, 0xBAD03605, 0xCDD70693,
+ 0x54DE5729, 0x23D967BF, 0xB3667A2E, 0xC4614AB8, 0x5D681B02, 0x2A6F2B94,
+ 0xB40BBE37, 0xC30C8EA1, 0x5A05DF1B, 0x2D02EF8D
+};
+
+static u32
+CalculateCrc (void *_pt, u32 Size)
+{
+ u8 *pt = (u8 *)_pt;
+ register u32 Crc;
+
+ /* compute crc */
+ Crc = 0xffffffff;
+ while (Size) {
+ Crc = (Crc >> 8) ^ CRCTable[(__u8) Crc ^ *pt];
+ pt += 1;
+ Size -= 1;
+ }
+ Crc = Crc ^ 0xffffffff;
+ return Crc;
+}
+
+
+
+/************************************************************
+ * IsLegacyMBRValid()
+ * Requires:
+ * - mbr is a pointer to a legacy mbr structure
+ * Modifies: nothing
+ * Returns:
+ * 1 on true
+ * 0 on false
+ ************************************************************/
+static inline int
+IsLegacyMBRValid(LegacyMBR_t *mbr)
+{
+ return (mbr ? (mbr->Signature == MSDOS_MBR_SIGNATURE) : 0);
+}
+
+
+
+/************************************************************
+ * LastLBA()
+ * Requires:
+ * - struct gendisk hd
+ * - kdev_t dev
+ * Modifies: nothing
+ * Returns:
+ * Last LBA value on success. This is stored (by sd and
+ * ide-geometry) in
+ * the part[0] entry for this disk, and is the number of
+ * physical sectors available on the disk.
+ * 0 on error
+ ************************************************************/
+u64
+LastLBA(struct gendisk *hd, kdev_t dev)
+{
+ if (!hd || !hd->part) return 0;
+ return hd->part[MINOR(dev)].nr_sects - 1;
+}
+
+
+/************************************************************
+ * ReadLBA()
+ * Requires:
+ * - hd is our disk device.
+ * - dev is our device major number
+ * - lba is the logical block address desired (disk hardsector number)
+ * - buffer is a buffer of size size into which data copied
+ * - size_t count is size of the read (in bytes)
+ * Modifies:
+ * - buffer
+ * Returns:
+ * - count of bytes read
+ * - 0 on error
+ * Bugs:
+ * - bread() takes second argument as a signed int, not a u64.
+ * This is because getblk() takes the block number as a signed int.
+ * This overflow is known on l-k. We overflow at about 1TB.
+ *
+ ************************************************************/
+
+static size_t
+ReadLBA(struct gendisk *hd, kdev_t dev, u64 _lba, u8 *buffer, size_t count)
+{
+ struct buffer_head *bh;
+ size_t totalreadcount = 0, bytesread;
+ int lba = (_lba & 0x7FFFFFFF), i, blockstoread, blocksize;
+ debug_printk(efi_printk_level "ReadLBA(%p,%s,%x,%p,%x)\n",
+ hd, kdevname(dev), lba, buffer, count);
+
+ if (!hd || !buffer || !count) return 0;
+
+
+ blocksize = get_hardblocksize(dev);
+ if (!blocksize) blocksize = 512;
+ blockstoread = count / blocksize;
+ if (count % blocksize) blockstoread += 1;
+ debug_printk(efi_printk_level "about to read %d blocks\n",
+ blockstoread);
+
+
+ for (i=0; i<blockstoread; i++) {
+ bh = bread(dev, lba+i, blocksize);
+ if (!bh) {
+ /* We hit the end of the disk */
+ debug_printk(efi_printk_level
+ "bread returned NULL.\n");
+ return totalreadcount;
+ }
+
+ bytesread = (count > bh->b_size ? bh->b_size : count);
+ memcpy(buffer, bh->b_data, bytesread);
+
+ buffer += bytesread; /* Advance the buffer pointer */
+ totalreadcount += bytesread; /* Advance the total read count */
+ count -= bytesread; /* Subtract bytesread from count */
+
+ brelse(bh);
+ }
+
+ return totalreadcount;
+}
+
+void
+PrintGuidPartitionTableHeader(GuidPartitionTableHeader_t *gpt)
+{
+ if (!gpt) return;
+ debug_printk(efi_printk_level "GUID Partition Table Header\n");
+ debug_printk(efi_printk_level "Signature : %lx\n",
+ gpt->Signature);
+ debug_printk(efi_printk_level "Revision : %x\n",
+ gpt->Revision);
+ debug_printk(efi_printk_level "HeaderSize : %x\n",
+ gpt->HeaderSize);
+ debug_printk(efi_printk_level "HeaderCRC32 : %x\n",
+ gpt->HeaderCRC32);
+ debug_printk(efi_printk_level "MyLBA : %lx\n",
+ gpt->MyLBA);
+ debug_printk(efi_printk_level "AlternateLBA : %lx\n",
+ gpt->AlternateLBA);
+ debug_printk(efi_printk_level "FirstUsableLBA : %lx\n",
+ gpt->FirstUsableLBA);
+ debug_printk(efi_printk_level "LastUsableLBA : %lx\n",
+ gpt->LastUsableLBA);
+
+ debug_printk(efi_printk_level "PartitionEntryLBA : %lx\n",
+ gpt->PartitionEntryLBA);
+ debug_printk(efi_printk_level "NumberOfPartitionEntries : %x\n",
+ gpt->NumberOfPartitionEntries);
+ debug_printk(efi_printk_level "SizeOfPartitionEntry : %x\n",
+ gpt->SizeOfPartitionEntry);
+ debug_printk(efi_printk_level "PartitionEntryArrayCRC32 : %x\n",
+ gpt->PartitionEntryArrayCRC32);
+
+ return;
+}
+
+
+
+/************************************************************
+ * ReadGuidPartitionEntries()
+ * Requires:
+ * - hd is our struct gendisk
+ * - dev is our device
+ * - gpt is a valid GuidPartitionTableHeader_t
+ * Modifies: nothing
+ * Returns:
+ * pte on success
+ * NULL on error
+ * Notes: remember to free pte when you're done!
+ ************************************************************/
+GuidPartitionEntry_t *
+ReadGuidPartitionEntries(struct gendisk *hd, kdev_t dev,
+ GuidPartitionTableHeader_t *gpt)
+{
+ size_t count;
+ GuidPartitionEntry_t *pte;
+ if (!hd || !gpt) return NULL;
+
+ count = gpt->NumberOfPartitionEntries * gpt->SizeOfPartitionEntry;
+ debug_printk(efi_printk_level "ReadGPTEs() kmallocing %x bytes\n",
+ count);
+ if (!count) return NULL;
+ pte = kmalloc(count, GFP_KERNEL);
+ if (!pte) return NULL;
+ memset(pte, 0, count);
+
+ if (ReadLBA(hd, dev, gpt->PartitionEntryLBA, (u8 *)pte,
+ count) < count) {
+ kfree(pte);
+ return NULL;
+ }
+ return pte;
+}
+
+
+
+/************************************************************
+ * ReadGuidPartitionTableHeader()
+ * Requires:
+ * - hd is our struct gendisk
+ * - dev is our device major number
+ * - lba is the Logical Block Address of the partition table
+ * Modifies: nothing
+ * Returns:
+ *	pointer to the GPT header on success (caller must kfree it)
+ *	NULL on error
+ ************************************************************/
+
+GuidPartitionTableHeader_t *
+ReadGuidPartitionTableHeader(struct gendisk *hd, kdev_t dev, u64 lba)
+
+{
+ GuidPartitionTableHeader_t *gpt;
+ if (!hd) return NULL;
+
+ gpt = kmalloc(sizeof(GuidPartitionTableHeader_t), GFP_KERNEL);
+ if (!gpt) return NULL;
+ memset(gpt, 0, sizeof(GuidPartitionTableHeader_t));
+
+ debug_printk(efi_printk_level "GPTH() calling ReadLBA().\n");
+ if (ReadLBA(hd, dev, lba, (u8 *)gpt,
+ sizeof(GuidPartitionTableHeader_t)) <
+ sizeof(GuidPartitionTableHeader_t)) {
+ debug_printk(efi_printk_level "ReadGPTH(%lx) read failed.\n",
+ lba);
+ kfree(gpt);
+ return NULL;
+ }
+ PrintGuidPartitionTableHeader(gpt);
+
+ return gpt;
+}
+
+
+
+/************************************************************
+ * IsGuidPartitionTableValid()
+ * Requires:
+ * - gd points to our struct gendisk
+ * - dev is our device major number
+ * - lba is the logical block address of the GPTH to test
+ * - gpt is a GPTH if it's valid
+ * - ptes is a PTEs if it's valid
+ * Modifies:
+ * - gpt and ptes
+ * Returns:
+ * 1 if valid
+ * 0 on error
+ ************************************************************/
+static int
+IsGuidPartitionTableValid(struct gendisk *hd, kdev_t dev, u64 lba,
+ GuidPartitionTableHeader_t **gpt,
+ GuidPartitionEntry_t **ptes)
+{
+ u32 crc, origcrc;
+
+ if (!hd || !gpt || !ptes) return 0;
+ if (!(*gpt = ReadGuidPartitionTableHeader(hd, dev, lba))) return 0;
+
+ /* Check the GUID Partition Table Signature */
+ if ((*gpt)->Signature != GUID_PT_HEADER_SIGNATURE) {
+ debug_printk(efi_printk_level "GUID Partition Table Header Signature is wrong: %lx != %lx\n", (*gpt)->Signature, GUID_PT_HEADER_SIGNATURE);
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+
+ /* Check the GUID Partition Table CRC */
+ origcrc = (*gpt)->HeaderCRC32;
+ (*gpt)->HeaderCRC32 = 0;
+ crc = CalculateCrc(*gpt, (*gpt)->HeaderSize);
+
+
+ if (crc != origcrc) {
+ debug_printk(efi_printk_level "GUID Partition Table Header CRC is wrong: %x != %x\n", (*gpt)->HeaderCRC32, origcrc);
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+ (*gpt)->HeaderCRC32 = origcrc;
+
+ /* Check that the MyLBA entry points to the LBA that contains
+ * the GUID Partition Table */
+ if ((*gpt)->MyLBA != lba) {
+ debug_printk(efi_printk_level "GPT MyLBA incorrect: %lx != %lx\n", (*gpt)->MyLBA, lba);
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+
+ if (!(*ptes = ReadGuidPartitionEntries(hd, dev, *gpt))) {
+ debug_printk(efi_printk_level "read PTEs failed.\n");
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+
+ /* Check the GUID Partition Entry Array CRC */
+ crc = CalculateCrc(*ptes, (*gpt)->NumberOfPartitionEntries *
+ (*gpt)->SizeOfPartitionEntry);
+
+ if (crc != (*gpt)->PartitionEntryArrayCRC32) {
+ debug_printk(efi_printk_level "GUID Partition Entry Array CRC check failed.\n");
+ kfree(*gpt);
+ *gpt = NULL;
+ kfree(*ptes);
+ *ptes = NULL;
+ return 0;
+ }
+
+
+ /* We're done, all's well */
+ return 1;
+}
+
+
+
+/************************************************************
+ * FindValidGPT()
+ * Requires:
+ * - gd points to our struct gendisk
+ * - dev is our device major number
+ * - gpt is a GPTH if it's valid
+ * - ptes is a PTE
+ * Modifies:
+ * - gpt & ptes
+ * Returns:
+ * 1 if valid
+ * 0 on error
+ ************************************************************/
+static int
+FindValidGPT(struct gendisk *hd, kdev_t dev,
+ GuidPartitionTableHeader_t **gpt,
+ GuidPartitionEntry_t **ptes)
+{
+ int rc = 0;
+ GuidPartitionTableHeader_t *pgpt = NULL, *agpt = NULL;
+ GuidPartitionEntry_t *pptes = NULL, *aptes = NULL;
+ u64 lastlba;
+ if (!hd || !gpt || !ptes) return 0;
+
+ lastlba = LastLBA(hd, dev);
+ /* Check the Primary GPT */
+ rc = IsGuidPartitionTableValid(hd, dev, 1, &pgpt, &pptes);
+ if (rc) {
+ /* Primary GPT is OK, check the alternate and warn if bad */
+ rc = IsGuidPartitionTableValid(hd, dev, pgpt->AlternateLBA,
+ &agpt, &aptes);
+ if (!rc){
+ printk(KERN_WARNING "Alternate GPT is invalid, using primary GPT.\n");
+ }
+
+ *gpt = pgpt;
+ *ptes = pptes;
+ if (agpt) kfree(agpt);
+ if (aptes) kfree(aptes);
+ return 1;
+ } /* if primary is valid */
+ else {
+ /* Primary GPT is bad, check the Alternate GPT */
+ rc = IsGuidPartitionTableValid(hd, dev, lastlba,
+ &agpt, &aptes);
+ if (rc) {
+ /* Primary is bad, alternate is good.
+ Return values from the alternate and warn.
+ */
+ printk(KERN_WARNING "Primary GPT is invalid, using alternate GPT.\n");
+ *gpt = agpt;
+ *ptes = aptes;
+ return 1;
+ }
+ else {
+ /* Primary is bad, alternate is bad, try "other"
+ * alternate. This is necessary because if we
+ * have an odd-sized disk, user-space might
+ * have put the alternate in block lastlba-1.
+ */
+ if (!(lastlba & 1)) {
+ lastlba--;
+ rc = IsGuidPartitionTableValid(hd, dev,
+ lastlba,
+ &agpt, &aptes);
+ if (rc) {
+ /* Primary is bad, alternate is good.
+ * Return values from the alternate
+ * and warn.
+ */
+ printk(KERN_WARNING "Primary GPT is invalid, using alternate GPT.\n");
+ *gpt = agpt;
+ *ptes = aptes;
+ return 1;
+ }
+ }
+ }
+ }
+ /* Both primary and alternate GPTs are bad.
+ * This isn't our disk, return 0.
+ */
+ return 0;
+}
+
+
+
+/*
+ * Create devices for each entry in the GUID Partition Table Entries.
+ * The first block of the disk is a Protective (legacy) MBR.
+ *
+ * We do not create a Linux partition for GPT, but
+ * only for the actual data partitions.
+ * Returns:
+ * -1 if unable to read the partition table
+ * 0 if this isn't our partition table
+ * 1 if successful
+ *
+ */
+
+static int
+add_gpt_partitions(struct gendisk *hd, kdev_t dev, int nextminor)
+{
+ GuidPartitionTableHeader_t *gpt = NULL;
+ GuidPartitionEntry_t *ptes = NULL;
+ u32 i, nummade=0;
+
+ efi_guid_t unusedGuid = UNUSED_ENTRY_GUID;
+#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+ efi_guid_t raidGuid = PARTITION_LINUX_RAID_GUID;
+#endif
+
+ if (!hd) return -1;
+
+ if (!FindValidGPT(hd, dev, &gpt, &ptes) ||
+ !gpt || !ptes) {
+ if (gpt) kfree(gpt);
+ if (ptes) kfree(ptes);
+ return 0;
+ }
+
+ debug_printk(efi_printk_level "GUID Partition Table is valid! Yea!\n");
+ for (i = 0; i < gpt->NumberOfPartitionEntries &&
+ nummade < (hd->max_p - 1); i++) {
+ if (!efi_guidcmp(unusedGuid, ptes[i].PartitionTypeGuid))
+ continue;
+
+ add_gd_partition(hd, nextminor, ptes[i].StartingLBA,
+ (ptes[i].EndingLBA-ptes[i].StartingLBA + 1));
+
+ /* If this is a RAID volume, tell md */
+#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+ if (!efi_guidcmp(raidGuid, ptes[i].PartitionTypeGuid)) {
+ md_autodetect_dev(MKDEV(MAJOR(dev),nextminor));
+ }
+#endif
+ nummade++;
+ nextminor++;
+
+ }
+ kfree(ptes);
+ kfree(gpt);
+ printk("\n");
+ return 1;
+
+}
+
+
+/*
+ * efi_partition()
+ *
+ * If the first block on the disk is a legacy MBR,
+ * it got handled already by msdos_partition().
+ * If it's a Protective MBR, we'll handle it here.
+ *
+ * Returns:
+ * -1 if unable to read the partition table
+ * 0 if this isn't our partition table
+ * 1 if successful
+ *
+ */
+
+int
+efi_partition(struct gendisk *hd, kdev_t dev,
+ unsigned long first_sector, int first_part_minor) {
+ int hardblocksize = get_hardblocksize(dev);
+ int orig_blksize_size = BLOCK_SIZE;
+ int rc = 0;
+
+ /* not good, but choose something! */
+ if (!hardblocksize) hardblocksize = 512;
+
+ /* Need to change the block size that the block layer uses */
+ if (blksize_size[MAJOR(dev)]){
+ orig_blksize_size = blksize_size[MAJOR(dev)][MINOR(dev)];
+ }
+
+ if (orig_blksize_size != hardblocksize)
+ set_blocksize(dev, hardblocksize);
+
+ rc = add_gpt_partitions(hd, dev, first_part_minor);
+
+ /* change back */
+ if (orig_blksize_size != hardblocksize)
+ set_blocksize(dev, orig_blksize_size);
+
+ return rc;
+}
+
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
+
+
+
diff -urN linux-davidm/fs/partitions/efi.h linux-2.4.0-test10-lia/fs/partitions/efi.h
--- linux-davidm/fs/partitions/efi.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test10-lia/fs/partitions/efi.h Wed Nov 1 23:35:07 2000
@@ -0,0 +1,154 @@
+/************************************************************
+ * EFI GUID Partition Table
+ * Per Intel EFI Specification v0.99
+ * http://developer.intel.com/technology/efi/efi.htm
+ *
+ * By Matt Domsch <Matt_Domsch@dell.com> Fri Sep 22 22:15:56 CDT 2000
+ * Copyright 2000 Dell Computer Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ ************************************************************/
+
+#ifndef FS_PART_EFI_H_INCLUDED
+#define FS_PART_EFI_H_INCLUDED
+
+#include <linux/types.h>
+#include <asm/efi.h>
+
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/blk.h>
+
+#define MSDOS_MBR_SIGNATURE 0xaa55
+#define EFI_PMBR_OSTYPE_EFI 0xEF
+#define EFI_PMBR_OSTYPE_EFI_GPT 0xEE
+
+#define GUID_PT_BLOCK_SIZE 512
+
+#define GUID_PT_HEADER_SIGNATURE 0x5452415020494645L
+#define GUID_PT_HEADER_REVISION_V1 0x00010000
+#define GUID_PT_HEADER_REVISION_V0_99 0x00000099
+#define UNUSED_ENTRY_GUID \
+ ((efi_guid_t) { 0x00000000, 0x0000, 0x0000, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }})
+#define PARTITION_SYSTEM_GUID \
+((efi_guid_t) { 0xC12A7328, 0xF81F, 0x11d2, { 0xBA, 0x4B, 0x00, 0xA0, 0xC9, 0x3E, 0xC9, 0x3B }})
+#define LEGACY_MBR_PARTITION_GUID \
+ ((efi_guid_t) { 0x024DEE41, 0x33E7, 0x11d3, { 0x9D, 0x69, 0x00, 0x08, 0xC7, 0x81, 0xF3, 0x9F }})
+#define PARTITION_MSFT_RESERVED_GUID \
+ ((efi_guid_t) { 0xE3C9E316, 0x0B5C, 0x4DB8, { 0x81, 0x7D, 0xF9, 0x2D, 0xF0, 0x02, 0x15, 0xAE }})
+#define PARTITION_BASIC_DATA_GUID \
+ ((efi_guid_t) { 0xEBD0A0A2, 0xB9E5, 0x4433, { 0x87, 0xC0, 0x68, 0xB6, 0xB7, 0x26, 0x99, 0xC7 }})
+#define PARTITION_LINUX_RAID_GUID \
+ ((efi_guid_t) { 0xa19d880f, 0x05fc, 0x4d3b, { 0xa0, 0x06, 0x74, 0x3f, 0x0f, 0x84, 0x91, 0x1e }})
+#define PARTITION_LINUX_SWAP_GUID \
+ ((efi_guid_t) { 0x0657fd6d, 0xa4ab, 0x43c4, { 0x84, 0xe5, 0x09, 0x33, 0xc8, 0x4b, 0x4f, 0x4f }})
+
+typedef struct _GuidPartitionTableHeader_t {
+ u64 Signature;
+ u32 Revision;
+ u32 HeaderSize;
+ u32 HeaderCRC32;
+ u32 Reserved1;
+ u64 MyLBA;
+ u64 AlternateLBA;
+ u64 FirstUsableLBA;
+ u64 LastUsableLBA;
+ efi_guid_t DiskGUID;
+ u64 PartitionEntryLBA;
+ u32 NumberOfPartitionEntries;
+ u32 SizeOfPartitionEntry;
+ u32 PartitionEntryArrayCRC32;
+ u8 Reserved2[GUID_PT_BLOCK_SIZE - 92];
+} GuidPartitionTableHeader_t;
+
+typedef struct _GuidPartitionEntryAttributes_t {
+ __u64 RequiredToFunction:1;
+ __u64 Reserved:63;
+} GuidPartitionEntryAttributes_t;
+
+typedef struct _GuidPartitionEntry_t {
+ efi_guid_t PartitionTypeGuid;
+ efi_guid_t UniquePartitionGuid;
+ u64 StartingLBA;
+ u64 EndingLBA;
+ GuidPartitionEntryAttributes_t Attributes;
+ efi_char16_t PartitionName[72/sizeof(efi_char16_t)];
+} GuidPartitionEntry_t;
+
+
+
+typedef struct _PartitionRecord_t {
+ u8 BootIndicator; /* Not used by EFI firmware. Set to 0x80 to indicate that this
+ is the bootable legacy partition. */
+ u8 StartHead; /* Start of partition in CHS address, not used by EFI firmware. */
+ u8 StartSector; /* Start of partition in CHS address, not used by EFI firmware. */
+ u8 StartTrack; /* Start of partition in CHS address, not used by EFI firmware. */
+ u8 OSType; /* OS type. A value of 0xEF defines an EFI system partition.
+ Other values are reserved for legacy operating systems, and
+ allocated independently of the EFI specification. */
+ u8 EndHead; /* End of partition in CHS address, not used by EFI firmware. */
+ u8 EndSector; /* End of partition in CHS address, not used by EFI firmware. */
+ u8 EndTrack; /* End of partition in CHS address, not used by EFI firmware. */
+ u32 StartingLBA; /* Starting LBA address of the partition on the disk. Used by
+ EFI firmware to define the start of the partition. */
+ u32 SizeInLBA; /* Size of partition in LBA. Used by EFI firmware to determine
+ the size of the partition. */
+} PartitionRecord_t;
+
+typedef struct _LegacyMBR_t {
+ u8 BootCode[440];
+ u32 UniqueMBRSignature;
+ u16 Unknown;
+ PartitionRecord_t PartitionRecord[4];
+ u16 Signature;
+} __attribute__ ((packed)) LegacyMBR_t;
+
+
+
+#define EFI_GPT_PRIMARY_PARTITION_TABLE_LBA 1
+
+/* Functions */
+extern int
+efi_partition(struct gendisk *hd, kdev_t dev,
+ unsigned long first_sector, int first_part_minor);
+
+
+
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * --------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
diff -urN linux-davidm/fs/partitions/msdos.c linux-2.4.0-test10-lia/fs/partitions/msdos.c
--- linux-davidm/fs/partitions/msdos.c Tue Jul 18 23:29:16 2000
+++ linux-2.4.0-test10-lia/fs/partitions/msdos.c Thu Nov 2 00:10:33 2000
@@ -36,6 +36,10 @@
#include "check.h"
#include "msdos.h"
+#ifdef CONFIG_EFI_PARTITION
+#include "efi.h"
+#endif
+
#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
extern void md_autodetect_dev(kdev_t dev);
#endif
@@ -378,6 +382,16 @@
bforget(bh);
return 0;
}
+#ifdef CONFIG_EFI_PARTITION
+ p = (struct partition *) (0x1be + data);
+ for (i=1 ; i<=4 ; i++,p++) {
+ /* If this is an EFI GPT disk, msdos should ignore it. */
+ if (SYS_IND(p) == EFI_PMBR_OSTYPE_EFI_GPT) {
+ bforget(bh);
+ return 0;
+ }
+ }
+#endif
p = (struct partition *) (0x1be + data);
#ifdef CONFIG_BLK_DEV_IDE
diff -urN linux-davidm/include/asm-ia64/cache.h linux-2.4.0-test10-lia/include/asm-ia64/cache.h
--- linux-davidm/include/asm-ia64/cache.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/cache.h Wed Nov 1 23:32:05 2000
@@ -9,11 +9,11 @@
*/
/* Bytes per L1 (data) cache line. */
-#define LOG_L1_CACHE_BYTES 6
-#define L1_CACHE_BYTES (1 << LOG_L1_CACHE_BYTES)
+#define L1_CACHE_SHIFT 6
+#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#ifdef CONFIG_SMP
-# define SMP_LOG_CACHE_BYTES LOG_L1_CACHE_BYTES
+# define SMP_CACHE_SHIFT L1_CACHE_SHIFT
# define SMP_CACHE_BYTES L1_CACHE_BYTES
#else
/*
@@ -21,7 +21,7 @@
* safe and provides an easy way to avoid wasting space on a
* uni-processor:
*/
-# define SMP_LOG_CACHE_BYTES 3
+# define SMP_CACHE_SHIFT 3
# define SMP_CACHE_BYTES (1 << 3)
#endif
diff -urN linux-davidm/include/asm-ia64/machvec_dig.h linux-2.4.0-test10-lia/include/asm-ia64/machvec_dig.h
--- linux-davidm/include/asm-ia64/machvec_dig.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec_dig.h Wed Nov 1 23:22:49 2000
@@ -5,6 +5,7 @@
extern ia64_mv_irq_init_t dig_irq_init;
extern ia64_mv_pci_fixup_t dig_pci_fixup;
extern ia64_mv_map_nr_t map_nr_dense;
+extern ia64_mv_pci_fixup_t iosapic_pci_fixup;
/*
* This stuff has dual use!
diff -urN linux-davidm/include/asm-ia64/module.h linux-2.4.0-test10-lia/include/asm-ia64/module.h
--- linux-davidm/include/asm-ia64/module.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/module.h Wed Nov 1 23:53:20 2000
@@ -75,10 +75,10 @@
/*
* Pointers are reasonable, add the module unwind table
*/
- archdata->unw_table = unw_add_unwind_table(mod->name, archdata->segment_base,
+ archdata->unw_table = unw_add_unwind_table(mod->name,
+ (unsigned long) archdata->segment_base,
(unsigned long) archdata->gp,
- (unsigned long) archdata->unw_start,
- (unsigned long) archdata->unw_end);
+ archdata->unw_start, archdata->unw_end);
#endif /* CONFIG_IA64_NEW_UNWIND */
return 0;
}
@@ -98,7 +98,7 @@
archdata = (struct archdata *)(mod->archdata_start);
if (archdata->unw_table != NULL)
- unw_remove_unwind_table(archdata->unw_table);
+ unw_remove_unwind_table((void *) archdata->unw_table);
}
#endif /* CONFIG_IA64_NEW_UNWIND */
diff -urN linux-davidm/include/asm-ia64/pgalloc.h linux-2.4.0-test10-lia/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/pgalloc.h Wed Nov 1 23:32:10 2000
@@ -196,13 +196,6 @@
extern int do_check_pgt_cache (int, int);
/*
- * This establishes kernel virtual mappings (e.g., as a result of a
- * vmalloc call). Since ia-64 uses a separate kernel page table,
- * there is nothing to do here... :)
- */
-#define set_pgdir(vmaddr, entry) do { } while(0)
-
-/*
* Now for some TLB flushing routines. This is the kind of stuff that
* can be very expensive, so try to avoid them whenever possible.
*/
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-test10-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/pgtable.h Wed Nov 1 23:32:30 2000
@@ -24,6 +24,9 @@
* matches the VHPT short format, the first doubleword of the VHPD long
* format, and the first doubleword of the TLB insertion format.
*/
+#define _PAGE_A_BIT 5
+#define _PAGE_D_BIT 6
+
#define _PAGE_P (1 << 0) /* page present bit */
#define _PAGE_MA_WB (0x0 << 2) /* write back memory attribute */
#define _PAGE_MA_UC (0x4 << 2) /* uncacheable memory attribute */
@@ -46,8 +49,8 @@
#define _PAGE_AR_X_RX (7 << 9) /* exec & promote / read & exec */
#define _PAGE_AR_MASK (7 << 9)
#define _PAGE_AR_SHIFT 9
-#define _PAGE_A (1 << 5) /* page accessed bit */
-#define _PAGE_D (1 << 6) /* page dirty bit */
+#define _PAGE_A (1 << _PAGE_A_BIT) /* page accessed bit */
+#define _PAGE_D (1 << _PAGE_D_BIT) /* page dirty bit */
#define _PAGE_PPN_MASK (((__IA64_UL(1) << IA64_MAX_PHYS_BITS) - 1) & ~0xfffUL)
#define _PAGE_ED (__IA64_UL(1) << 52) /* exception deferral */
#define _PAGE_PROTNONE (__IA64_UL(1) << 63)
@@ -186,34 +189,12 @@
} while (0)
/* Quick test to see if ADDR is a (potentially) valid physical address. */
-static __inline__ long
+static inline long
ia64_phys_addr_valid (unsigned long addr)
{
return (addr & (my_cpu_data.unimpl_pa_mask)) == 0;
}
-/* Quick test to see if ADDR is a (potentially) valid physical address. */
-static __inline__ long
-ia64_phys_addr_valid (unsigned long addr)
-{
- return (addr & (my_cpu_data.unimpl_pa_mask)) == 0;
-}
-
-/*
- * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
- * memory. For the return value to be meaningful, ADDR must be >=
- * PAGE_OFFSET. This operation can be relatively expensive (e.g.,
- * require a hash-, or multi-level tree-lookup or something of that
- * sort) but it guarantees to return TRUE only if accessing the page
- * at that address does not cause an error. Note that there may be
- * addresses for which kern_addr_valid() returns FALSE even though an
- * access would not cause an error (e.g., this is typically true for
- * memory mapped I/O regions).
- *
- * XXX Need to implement this for IA-64.
- */
-#define kern_addr_valid(addr) (1)
-
/*
* kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
* memory. For the return value to be meaningful, ADDR must be >=
@@ -340,7 +321,7 @@
/*
* Return the region index for virtual address ADDRESS.
*/
-static __inline__ unsigned long
+static inline unsigned long
rgn_index (unsigned long address)
{
ia64_va a;
@@ -352,7 +333,7 @@
/*
* Return the region offset for virtual address ADDRESS.
*/
-static __inline__ unsigned long
+static inline unsigned long
rgn_offset (unsigned long address)
{
ia64_va a;
@@ -364,7 +345,7 @@
#define RGN_SIZE (1UL << 61)
#define RGN_KERNEL 7
-static __inline__ unsigned long
+static inline unsigned long
pgd_index (unsigned long address)
{
unsigned long region = address >> 61;
@@ -375,7 +356,7 @@
/* The offset in the 1-level directory is given by the 3 region bits
(61..63) and the seven level-1 bits (33-39). */
-static __inline__ pgd_t*
+static inline pgd_t*
pgd_offset (struct mm_struct *mm, unsigned long address)
{
return mm->pgd + pgd_index(address);
@@ -394,6 +375,49 @@
#define pte_offset(dir,addr) \
((pte_t *) pmd_page(*(dir)) + (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
+/* atomic versions of the some PTE manipulations: */
+
+static inline int
+ptep_test_and_clear_young (pte_t *ptep)
+{
+ return test_and_clear_bit(_PAGE_A_BIT, ptep);
+}
+
+static inline int
+ptep_test_and_clear_dirty (pte_t *ptep)
+{
+ return test_and_clear_bit(_PAGE_D_BIT, ptep);
+}
+
+static inline pte_t
+ptep_get_and_clear (pte_t *ptep)
+{
+ return __pte(xchg((long *) ptep, 0));
+}
+
+/* XXX this should be called ptep_set_wrprotect!!! */
+static inline void
+ptep_clear_wrprotect (pte_t *ptep)
+{
+ unsigned long new, old;
+
+ do {
+ old = pte_val(*ptep);
+ new = pte_val(pte_wrprotect(__pte (old)));
+ } while (cmpxchg((unsigned long *) ptep, old, new) != old);
+}
+
+static inline void
+ptep_mkdirty (pte_t *ptep)
+{
+ set_bit(_PAGE_D_BIT, ptep);
+}
+
+static inline int
+pte_same (pte_t a, pte_t b)
+{
+ return pte_val(a) == pte_val(b);
+}
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern void paging_init (void);
@@ -459,8 +483,6 @@
*/
extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
-#include <asm-generic/pgtable.h>
# endif /* !__ASSEMBLY__ */
diff -urN linux-davidm/include/linux/skbuff.h linux-2.4.0-test10-lia/include/linux/skbuff.h
--- linux-davidm/include/linux/skbuff.h Thu Nov 2 00:16:42 2000
+++ linux-2.4.0-test10-lia/include/linux/skbuff.h Wed Nov 1 23:33:21 2000
@@ -896,11 +896,7 @@
{
struct sk_buff *skb;
-#ifdef CONFIG_SKB_BELOW_4GB
- skb = alloc_skb(length+16, GFP_ATOMIC | GFP_DMA);
-#else
skb = alloc_skb(length+16, GFP_ATOMIC);
-#endif
if (skb)
skb_reserve(skb,16);
return skb;
^ permalink raw reply [flat|nested] 217+ messages in thread
* RE: [Linux-ia64] kernel update (relative to 2.4.0-test10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (24 preceding siblings ...)
2000-11-02 8:50 ` [Linux-ia64] kernel update (relative to 2.4.0-test10) David Mosberger
@ 2000-11-02 10:39 ` Pimenov, Sergei
2000-11-16 7:59 ` David Mosberger
` (189 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Pimenov, Sergei @ 2000-11-02 10:39 UTC (permalink / raw)
To: linux-ia64
The URL is broken, correct one is
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/linux-2.4.0-test10-ia64-001101.diff.gz
p.s.
-----Original Message-----
From: David Mosberger [mailto:davidm@hpl.hp.com]
Sent: Thursday, November 02, 2000 11:51 AM
To: linux-ia64@linuxia64.org
Subject: [Linux-ia64] kernel update (relative to 2.4.0-test10)
The patch at:
ftp://ftp.kernel.org/pub/linux/kernel/port/pub/linux/kernel/ports/ia64/linux-2.4.0-test10-ia64-001101.diff.gz
contains the latest IA-64 kernel diff relative to Linus' 2.4.0-test10. There are
only a few new things:
- bring IA-64 support in sync with 2.4.0-test10
- added EFI partition support by Matt (my apologies to Matt for
missing his patch the last time round...)
- small fix to the PCI DMA support by Asit
- drop evil CONFIG_SKB_BELOW_4GB
- make simserial.c and simeth.c use ia64_alloc_irq() to obtain an
available interrupt vector
That should be it. This kernel is known to build and work on Big Sur,
Lion, and the HP Ski simulator.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.0-test10-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/Documentation/Configure.help Wed Nov 1 23:05:34 2000
@@ -11365,6 +11365,12 @@
Say Y here if you would like to be able to read the hard disk
partition table format used by SGI machines.
+Intel EFI GUID partition support
+CONFIG_EFI_PARTITION
+ Say Y here if you would like to use hard disks under Linux which
+ were partitioned using EFI GPT. Presently only useful on the
+ IA-64 platform.
+
ADFS file system support (EXPERIMENTAL)
CONFIG_ADFS_FS
The Acorn Disc Filing System is the standard file system of the
diff -urN linux-davidm/Makefile linux-2.4.0-test10-lia/Makefile
--- linux-davidm/Makefile Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/Makefile Thu Nov 2 00:23:09 2000
@@ -206,7 +206,7 @@
$(LIBS) \
--end-group \
-o vmlinux
- $(NM) vmlinux | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aU] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
+ $(NM) vmlinux | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
symlinks:
rm -f include/asm
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-test10-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/config.in Thu Nov 2 00:24:09 2000
@@ -45,14 +45,14 @@
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
+ bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIFIC
+ bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIFIC
fi
bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
- bool ' Force socket buffers below 4GB?' CONFIG_SKB_BELOW_4GB
-
bool ' ACPI kernel configuration manager (EXPERIMENTAL)' CONFIG_ACPI_KERNEL_CONFIG
if [ "$CONFIG_ACPI_KERNEL_CONFIG" = "y" ]; then
define_bool CONFIG_PM y
diff -urN linux-davidm/arch/ia64/dig/dig_irq.c linux-2.4.0-test10-lia/arch/ia64/dig/dig_irq.c
--- linux-davidm/arch/ia64/dig/dig_irq.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/dig/dig_irq.c Wed Dec 31 16:00:00 1969
@@ -1,10 +0,0 @@
-void
-dig_irq_init (void)
-{
- /*
- * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
- * enabled.
- */
- outb(0xff, 0xA1);
- outb(0xff, 0x21);
-}
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test10-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/entry.S Wed Nov 1 23:12:22 2000
@@ -503,7 +503,7 @@
;;
ld4 r2=[r2]
;;
- shl r2=r2,SMP_LOG_CACHE_BYTES // can't use shladd here...
+ shl r2=r2,SMP_CACHE_SHIFT // can't use shladd here...
;;
add r3=r2,r3
#else
diff -urN linux-davidm/arch/ia64/kernel/pci-dma.c linux-2.4.0-test10-lia/arch/ia64/kernel/pci-dma.c
--- linux-davidm/arch/ia64/kernel/pci-dma.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/pci-dma.c Wed Nov 1 23:13:06 2000
@@ -125,12 +125,16 @@
BUG();
/*
- * Find suitable number of IO TLB entries size that will fit this request and allocate a buffer
- * from that IO TLB pool.
+ * Find suitable number of IO TLB entries size that will fit this request and
+ * allocate a buffer from that IO TLB pool.
*/
spin_lock_irqsave(&io_tlb_lock, flags);
{
wrap = index = ALIGN(io_tlb_index, stride);
+
+ if (index >= io_tlb_nslabs)
+ index = 0;
+
do {
/*
* If we find a slot that indicates we have 'nslots' number of
diff -urN linux-davidm/arch/ia64/kernel/pci.c linux-2.4.0-test10-lia/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/pci.c Wed Nov 1 23:55:29 2000
@@ -123,10 +123,9 @@
* Initialization. Uses the SAL interface
*/
void __init
-pcibios_init(void)
+pcibios_init (void)
{
# define PCI_BUSES_TO_SCAN 255
- struct pci_ops *ops = NULL;
int i;
printk("PCI: Probing PCI hardware\n");
@@ -141,14 +140,14 @@
* are examined.
*/
void __init
-pcibios_fixup_bus(struct pci_bus *b)
+pcibios_fixup_bus (struct pci_bus *b)
{
return;
}
void __init
-pcibios_update_resource(struct pci_dev *dev, struct resource *root,
- struct resource *res, int resource)
+pcibios_update_resource (struct pci_dev *dev, struct resource *root,
+ struct resource *res, int resource)
{
unsigned long where, size;
u32 reg;
@@ -163,7 +162,7 @@
}
void __init
-pcibios_update_irq(struct pci_dev *dev, int irq)
+pcibios_update_irq (struct pci_dev *dev, int irq)
{
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
@@ -190,9 +189,9 @@
* PCI BIOS setup, always defaults to SAL interface
*/
char * __init
-pcibios_setup(char *str)
+pcibios_setup (char *str)
{
- pci_probe = PCI_NO_CHECKS;
+ pci_probe = PCI_NO_CHECKS;
return NULL;
}
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.0-test10-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/ptrace.c Wed Nov 1 23:13:25 2000
@@ -617,7 +617,6 @@
struct switch_stack *sw;
struct unw_frame_info info;
struct pt_regs *pt;
- unsigned long pmd_tmp;
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
diff -urN linux-davidm/arch/ia64/lib/memcpy.S linux-2.4.0-test10-lia/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Thu Nov 2 00:16:39 2000
+++ linux-2.4.0-test10-lia/arch/ia64/lib/memcpy.S Wed Nov 1 23:14:24 2000
@@ -29,7 +29,14 @@
mov in0=in1
;;
mov in1=r8
- ;;
+ // gas doesn't handle control flow across procedures, so it doesn't
+ // realize that a stop bit is needed before the "alloc" instruction
+ // below
+{
+ nop.m 0
+ nop.f 0
+ nop.i 0
+} ;;
END(bcopy)
// FALL THROUGH
GLOBAL_ENTRY(memcpy)
diff -urN linux-davidm/drivers/char/simserial.c linux-2.4.0-test10-lia/drivers/char/simserial.c
--- linux-davidm/drivers/char/simserial.c Thu Nov 2 00:16:40 2000
+++ linux-2.4.0-test10-lia/drivers/char/simserial.c Wed Nov 1 23:18:55 2000
@@ -36,7 +36,6 @@
#undef SIMSERIAL_DEBUG /* define this to get some debug information */
#define KEYBOARD_INTR 3 /* must match with simulator! */
-#define SIMSERIAL_IRQ 0xee
#define NR_PORTS 1 /* only one port for now */
#define SERIAL_INLINE 1
@@ -78,7 +77,7 @@
*/
static struct serial_state rs_table[NR_PORTS]={
/* UART CLK PORT IRQ FLAGS */
- { 0, BASE_BAUD, 0x3F8, SIMSERIAL_IRQ, STD_COM_FLAGS,0,PORT_16550 } /* ttyS0 */
+ { 0, BASE_BAUD, 0x3F8, 0, STD_COM_FLAGS,0,PORT_16550 } /* ttyS0 */
};
/*
@@ -1013,10 +1012,7 @@
struct serial_state *state;
show_serial_version();
-
- /* connect the platform's keyboard interrupt to SIMSERIAL_IRQ */
- ia64_ssc_connect_irq(KEYBOARD_INTR, SIMSERIAL_IRQ);
-
+
/* Initialize the tty_driver structure */
memset(&serial_driver, 0, sizeof(struct tty_driver));
@@ -1063,6 +1059,11 @@
for (i = 0, state = rs_table; i < NR_PORTS; i++,state++) {
if (state->type == PORT_UNKNOWN) continue;
+
+ if (!state->irq) {
+ state->irq = ia64_alloc_irq();
+ ia64_ssc_connect_irq(KEYBOARD_INTR, state->irq);
+ }
printk(KERN_INFO "ttyS%02d at 0x%04lx (irq = %d) is a %s\n",
state->line,
diff -urN linux-davidm/drivers/net/simeth.c linux-2.4.0-test10-lia/drivers/net/simeth.c
--- linux-davidm/drivers/net/simeth.c Thu Nov 2 00:16:40 2000
+++ linux-2.4.0-test10-lia/drivers/net/simeth.c Wed Nov 1 23:19:29 2000
@@ -27,7 +27,6 @@
#include <asm/irq.h>
-#define SIMETH_IRQ 0xed
#define SIMETH_RECV_MAX 10
/*
@@ -213,11 +212,8 @@
return -ENOMEM;
memcpy(dev->dev_addr, mac_addr, sizeof(mac_addr));
- /*
- * XXX Fix me
- * does not support more than one card !
- */
- dev->irq = SIMETH_IRQ;
+
+ dev->irq = ia64_alloc_irq();
/*
* attach the interrupt in the simulator, this does enable interrupts
diff -urN linux-davidm/fs/partitions/Config.in linux-2.4.0-test10-lia/fs/partitions/Config.in
--- linux-davidm/fs/partitions/Config.in Sun Jul 9 22:21:41 2000
+++ linux-2.4.0-test10-lia/fs/partitions/Config.in Wed Nov 1 23:20:37 2000
@@ -23,6 +23,7 @@
bool ' BSD disklabel (FreeBSD partition tables) support' CONFIG_BSD_DISKLABEL
bool ' Solaris (x86) partition table support' CONFIG_SOLARIS_X86_PARTITION
bool ' Unixware slices support' CONFIG_UNIXWARE_DISKLABEL
+ bool ' EFI GUID Partition support' CONFIG_EFI_PARTITION
fi
bool ' SGI partition support' CONFIG_SGI_PARTITION
bool ' Ultrix partition table support' CONFIG_ULTRIX_PARTITION
diff -urN linux-davidm/fs/partitions/Makefile linux-2.4.0-test10-lia/fs/partitions/Makefile
--- linux-davidm/fs/partitions/Makefile Tue Jul 18 22:49:47 2000
+++ linux-2.4.0-test10-lia/fs/partitions/Makefile Wed Nov 1 23:20:52 2000
@@ -20,6 +20,7 @@
obj-$(CONFIG_SUN_PARTITION) += sun.o
obj-$(CONFIG_ULTRIX_PARTITION) += ultrix.o
obj-$(CONFIG_IBM_PARTITION) += ibm.o
+obj-$(CONFIG_EFI_PARTITION) += efi.o
O_OBJS += $(obj-y)
M_OBJS += $(obj-m)
diff -urN linux-davidm/fs/partitions/check.c linux-2.4.0-test10-lia/fs/partitions/check.c
--- linux-davidm/fs/partitions/check.c Mon Oct 16 12:58:51 2000
+++ linux-2.4.0-test10-lia/fs/partitions/check.c Wed Nov 1 23:21:07 2000
@@ -32,6 +32,7 @@
#include "sun.h"
#include "ibm.h"
#include "ultrix.h"
+#include "efi.h"
extern void device_init(void);
extern int *blk_size[];
@@ -71,6 +72,9 @@
#endif
#ifdef CONFIG_IBM_PARTITION
ibm_partition,
+#endif
+#ifdef CONFIG_EFI_PARTITION
+ efi_partition,
#endif
NULL
};
diff -urN linux-davidm/fs/partitions/efi.c linux-2.4.0-test10-lia/fs/partitions/efi.c
--- linux-davidm/fs/partitions/efi.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test10-lia/fs/partitions/efi.c Wed Nov 1 23:21:21 2000
@@ -0,0 +1,646 @@
+/************************************************************
+ * EFI GUID Partition Table handling
+ * Per Intel EFI Specification v0.99
+ * http://developer.intel.com/technology/efi/efi.htm
+ * efi.[ch] by Matt Domsch <Matt_Domsch@dell.com>
+ * Copyright 2000 Dell Computer Corporation
+ * CRC routines taken from the EFI Sample Implementation,
+ * 1999.12.31, lib/crc.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * TODO:
+ *
+ * Changelog:
+ * Wed Oct 25 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Fixed the LastLBA() call to return the proper last block
+ *
+ * Thu Oct 12 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Thanks to Andries Brouwer for his debugging assistance.
+ * - Code works, detects all the partitions.
+ *
+ ************************************************************/
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/blk.h>
+#include <linux/malloc.h>
+#include <linux/smp_lock.h>
+#include <asm/system.h>
+#include <asm/efi.h>
+
+#include "check.h"
+#include "efi.h"
+
+#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+extern void md_autodetect_dev(kdev_t dev);
+#endif
+
+
+#undef EFI_DEBUG
+#ifdef EFI_DEBUG
+static char *efi_printk_level = KERN_DEBUG;
+#define debug_printk printk
+#else
+#define debug_printk(...)
+#endif
+
+/* CRC routines taken from the EFI Sample Implementation,
+ * 1999.12.31, lib/crc.c
+ *
+ * Note, the EFI Specification, v0.99, has a reference to
+ * Dr. Dobbs Journal, May 1994 (actually it's in May 1992)
+ * but that isn't the CRC function being used by EFI.
+ */
+
+static u32 CRCTable[256] = {
+ 0x00000000, 0x77073096, 0xEE0E612C, 0x990951BA, 0x076DC419, 0x706AF48F,
+ 0xE963A535, 0x9E6495A3, 0x0EDB8832, 0x79DCB8A4, 0xE0D5E91E, 0x97D2D988,
+ 0x09B64C2B, 0x7EB17CBD, 0xE7B82D07, 0x90BF1D91, 0x1DB71064, 0x6AB020F2,
+ 0xF3B97148, 0x84BE41DE, 0x1ADAD47D, 0x6DDDE4EB, 0xF4D4B551, 0x83D385C7,
+ 0x136C9856, 0x646BA8C0, 0xFD62F97A, 0x8A65C9EC, 0x14015C4F, 0x63066CD9,
+ 0xFA0F3D63, 0x8D080DF5, 0x3B6E20C8, 0x4C69105E, 0xD56041E4, 0xA2677172,
+ 0x3C03E4D1, 0x4B04D447, 0xD20D85FD, 0xA50AB56B, 0x35B5A8FA, 0x42B2986C,
+ 0xDBBBC9D6, 0xACBCF940, 0x32D86CE3, 0x45DF5C75, 0xDCD60DCF, 0xABD13D59,
+ 0x26D930AC, 0x51DE003A, 0xC8D75180, 0xBFD06116, 0x21B4F4B5, 0x56B3C423,
+ 0xCFBA9599, 0xB8BDA50F, 0x2802B89E, 0x5F058808, 0xC60CD9B2, 0xB10BE924,
+ 0x2F6F7C87, 0x58684C11, 0xC1611DAB, 0xB6662D3D, 0x76DC4190, 0x01DB7106,
+ 0x98D220BC, 0xEFD5102A, 0x71B18589, 0x06B6B51F, 0x9FBFE4A5, 0xE8B8D433,
+ 0x7807C9A2, 0x0F00F934, 0x9609A88E, 0xE10E9818, 0x7F6A0DBB, 0x086D3D2D,
+ 0x91646C97, 0xE6635C01, 0x6B6B51F4, 0x1C6C6162, 0x856530D8, 0xF262004E,
+ 0x6C0695ED, 0x1B01A57B, 0x8208F4C1, 0xF50FC457, 0x65B0D9C6, 0x12B7E950,
+ 0x8BBEB8EA, 0xFCB9887C, 0x62DD1DDF, 0x15DA2D49, 0x8CD37CF3, 0xFBD44C65,
+ 0x4DB26158, 0x3AB551CE, 0xA3BC0074, 0xD4BB30E2, 0x4ADFA541, 0x3DD895D7,
+ 0xA4D1C46D, 0xD3D6F4FB, 0x4369E96A, 0x346ED9FC, 0xAD678846, 0xDA60B8D0,
+ 0x44042D73, 0x33031DE5, 0xAA0A4C5F, 0xDD0D7CC9, 0x5005713C, 0x270241AA,
+ 0xBE0B1010, 0xC90C2086, 0x5768B525, 0x206F85B3, 0xB966D409, 0xCE61E49F,
+ 0x5EDEF90E, 0x29D9C998, 0xB0D09822, 0xC7D7A8B4, 0x59B33D17, 0x2EB40D81,
+ 0xB7BD5C3B, 0xC0BA6CAD, 0xEDB88320, 0x9ABFB3B6, 0x03B6E20C, 0x74B1D29A,
+ 0xEAD54739, 0x9DD277AF, 0x04DB2615, 0x73DC1683, 0xE3630B12, 0x94643B84,
+ 0x0D6D6A3E, 0x7A6A5AA8, 0xE40ECF0B, 0x9309FF9D, 0x0A00AE27, 0x7D079EB1,
+ 0xF00F9344, 0x8708A3D2, 0x1E01F268, 0x6906C2FE, 0xF762575D, 0x806567CB,
+ 0x196C3671, 0x6E6B06E7, 0xFED41B76, 0x89D32BE0, 0x10DA7A5A, 0x67DD4ACC,
+ 0xF9B9DF6F, 0x8EBEEFF9, 0x17B7BE43, 0x60B08ED5, 0xD6D6A3E8, 0xA1D1937E,
+ 0x38D8C2C4, 0x4FDFF252, 0xD1BB67F1, 0xA6BC5767, 0x3FB506DD, 0x48B2364B,
+ 0xD80D2BDA, 0xAF0A1B4C, 0x36034AF6, 0x41047A60, 0xDF60EFC3, 0xA867DF55,
+ 0x316E8EEF, 0x4669BE79, 0xCB61B38C, 0xBC66831A, 0x256FD2A0, 0x5268E236,
+ 0xCC0C7795, 0xBB0B4703, 0x220216B9, 0x5505262F, 0xC5BA3BBE, 0xB2BD0B28,
+ 0x2BB45A92, 0x5CB36A04, 0xC2D7FFA7, 0xB5D0CF31, 0x2CD99E8B, 0x5BDEAE1D,
+ 0x9B64C2B0, 0xEC63F226, 0x756AA39C, 0x026D930A, 0x9C0906A9, 0xEB0E363F,
+ 0x72076785, 0x05005713, 0x95BF4A82, 0xE2B87A14, 0x7BB12BAE, 0x0CB61B38,
+ 0x92D28E9B, 0xE5D5BE0D, 0x7CDCEFB7, 0x0BDBDF21, 0x86D3D2D4, 0xF1D4E242,
+ 0x68DDB3F8, 0x1FDA836E, 0x81BE16CD, 0xF6B9265B, 0x6FB077E1, 0x18B74777,
+ 0x88085AE6, 0xFF0F6A70, 0x66063BCA, 0x11010B5C, 0x8F659EFF, 0xF862AE69,
+ 0x616BFFD3, 0x166CCF45, 0xA00AE278, 0xD70DD2EE, 0x4E048354, 0x3903B3C2,
+ 0xA7672661, 0xD06016F7, 0x4969474D, 0x3E6E77DB, 0xAED16A4A, 0xD9D65ADC,
+ 0x40DF0B66, 0x37D83BF0, 0xA9BCAE53, 0xDEBB9EC5, 0x47B2CF7F, 0x30B5FFE9,
+ 0xBDBDF21C, 0xCABAC28A, 0x53B39330, 0x24B4A3A6, 0xBAD03605, 0xCDD70693,
+ 0x54DE5729, 0x23D967BF, 0xB3667A2E, 0xC4614AB8, 0x5D681B02, 0x2A6F2B94,
+ 0xB40BBE37, 0xC30C8EA1, 0x5A05DF1B, 0x2D02EF8D
+};
+
+static u32
+CalculateCrc (void *_pt, u32 Size)
+{
+ u8 *pt = (u8 *)_pt;
+ register u32 Crc;
+
+ /* compute crc */
+ Crc = 0xffffffff;
+ while (Size) {
+ Crc = (Crc >> 8) ^ CRCTable[(__u8) Crc ^ *pt];
+ pt += 1;
+ Size -= 1;
+ }
+ Crc = Crc ^ 0xffffffff;
+ return Crc;
+}
+
+
+
+/************************************************************
+ * IsLegacyMBRValid()
+ * Requires:
+ * - mbr is a pointer to a legacy mbr structure
+ * Modifies: nothing
+ * Returns:
+ * 1 on true
+ * 0 on false
+ ************************************************************/
+static inline int
+IsLegacyMBRValid(LegacyMBR_t *mbr)
+{
+ return (mbr ? (mbr->Signature == MSDOS_MBR_SIGNATURE) : 0);
+}
+
+
+
+/************************************************************
+ * LastLBA()
+ * Requires:
+ * - struct gendisk hd
+ * - kdev_t dev
+ * Modifies: nothing
+ * Returns:
+ * Last LBA value on success. This is stored (by sd and
+ * ide-geometry) in
+ * the part[0] entry for this disk, and is the number of
+ * physical sectors available on the disk.
+ * 0 on error
+ ************************************************************/
+u64
+LastLBA(struct gendisk *hd, kdev_t dev)
+{
+ if (!hd || !hd->part) return 0;
+ return hd->part[MINOR(dev)].nr_sects - 1;
+}
+
+
+/************************************************************
+ * ReadLBA()
+ * Requires:
+ * - hd is our disk device.
+ * - dev is our device major number
+ * - lba is the logical block address desired (disk hardsector number)
+ * - buffer is a buffer of size size into which data copied
+ * - size_t count is size of the read (in bytes)
+ * Modifies:
+ * - buffer
+ * Returns:
+ * - count of bytes read
+ * - 0 on error
+ * Bugs:
+ * - bread() takes second argument as a signed int, not a u64.
+ * This is because getblk() takes the block number as a signed int.
+ * This overflow is known on l-k. We overflow at about 1TB.
+ *
+ ************************************************************/
+
+static size_t
+ReadLBA(struct gendisk *hd, kdev_t dev, u64 _lba, u8 *buffer, size_t count)
+{
+ struct buffer_head *bh;
+ size_t totalreadcount = 0, bytesread;
+ int lba = (_lba & 0x7FFFFFFF), i, blockstoread, blocksize;
+ debug_printk(efi_printk_level "ReadLBA(%p,%s,%x,%p,%x)\n",
+ hd, kdevname(dev), lba, buffer, count);
+
+ if (!hd || !buffer || !count) return 0;
+
+
+ blocksize = get_hardblocksize(dev);
+ if (!blocksize) blocksize = 512;
+ blockstoread = count / blocksize;
+ if (count % blocksize) blockstoread += 1;
+ debug_printk(efi_printk_level "about to read %d blocks\n",
+ blockstoread);
+
+
+ for (i=0; i<blockstoread; i++) {
+ bh = bread(dev, lba+i, blocksize);
+ if (!bh) {
+ /* We hit the end of the disk */
+ debug_printk(efi_printk_level
+ "bread returned NULL.\n");
+ return totalreadcount;
+ }
+
+ bytesread = (count > bh->b_size ? bh->b_size : count);
+ memcpy(buffer, bh->b_data, bytesread);
+
+ buffer += bytesread; /* Advance the buffer pointer */
+ totalreadcount += bytesread; /* Advance the total read count */
+ count -= bytesread; /* Subtract bytesread from count */
+
+ brelse(bh);
+ }
+
+ return totalreadcount;
+}
+
+void
+PrintGuidPartitionTableHeader(GuidPartitionTableHeader_t *gpt)
+{
+ debug_printk(efi_printk_level "GUID Partition Table Header\n");
+ if (!gpt) return;
+ debug_printk(efi_printk_level "Signature : %lx\n",
+ gpt->Signature);
+ debug_printk(efi_printk_level "Revision : %x\n",
+ gpt->Revision);
+ debug_printk(efi_printk_level "HeaderSize : %x\n",
+ gpt->HeaderSize);
+ debug_printk(efi_printk_level "HeaderCRC32 : %x\n",
+ gpt->HeaderCRC32);
+ debug_printk(efi_printk_level "MyLBA : %lx\n",
+ gpt->MyLBA);
+ debug_printk(efi_printk_level "AlternateLBA : %lx\n",
+ gpt->AlternateLBA);
+ debug_printk(efi_printk_level "FirstUsableLBA : %lx\n",
+ gpt->FirstUsableLBA);
+ debug_printk(efi_printk_level "LastUsableLBA : %lx\n",
+ gpt->LastUsableLBA);
+
+ debug_printk(efi_printk_level "PartitionEntryLBA : %lx\n",
+ gpt->PartitionEntryLBA);
+ debug_printk(efi_printk_level "NumberOfPartitionEntries : %x\n",
+ gpt->NumberOfPartitionEntries);
+ debug_printk(efi_printk_level "SizeOfPartitionEntry : %x\n",
+ gpt->SizeOfPartitionEntry);
+ debug_printk(efi_printk_level "PartitionEntryArrayCRC32 : %x\n",
+ gpt->PartitionEntryArrayCRC32);
+
+ return;
+}
+
+
+
+/************************************************************
+ * ReadGuidPartitionEntries()
+ * Requires:
+ * - filedes is an open file descriptor, suitable for reading
+ * - lba is the Logical Block Address of the partition table
+ * - gpt is a buffer into which the GPT will be put
+ * Modifies:
+ * - filedes file and pointer
+ * - gpt
+ * Returns:
+ * pte on success
+ * NULL on error
+ * Notes: remember to free pte when you're done!
+ ************************************************************/
+GuidPartitionEntry_t *
+ReadGuidPartitionEntries(struct gendisk *hd, kdev_t dev,
+ GuidPartitionTableHeader_t *gpt)
+{
+ size_t count;
+ GuidPartitionEntry_t *pte;
+ if (!hd || !gpt) return NULL;
+
+ count = gpt->NumberOfPartitionEntries * gpt->SizeOfPartitionEntry;
+ debug_printk(efi_printk_level "ReadGPTEs() kmallocing %x bytes\n",
+ count);
+ if (!count) return NULL;
+ pte = kmalloc(count, GFP_KERNEL);
+ if (!pte) return NULL;
+ memset(pte, 0, count);
+
+ if (ReadLBA(hd, dev, gpt->PartitionEntryLBA, (u8 *)pte,
+ count) < count) {
+ kfree(pte);
+ return NULL;
+ }
+ return pte;
+}
+
+
+
+/************************************************************
+ * ReadGuidPartitionTableHeader()
+ * Requires:
+ * - hd is our struct gendisk
+ * - dev is our device major number
+ * - lba is the Logical Block Address of the partition table
+ * - gpt is a buffer into which the GPT will be put
+ * - pte is a buffer into which the PTEs will be put
+ * Modifies:
+ * - gpt and pte
+ * Returns:
+ * 1 on success
+ * 0 on error
+ ************************************************************/
+
+GuidPartitionTableHeader_t *
+ReadGuidPartitionTableHeader(struct gendisk *hd, kdev_t dev, u64 lba)
+
+{
+ GuidPartitionTableHeader_t *gpt;
+ if (!hd) return NULL;
+
+ gpt = kmalloc(sizeof(GuidPartitionTableHeader_t), GFP_KERNEL);
+ if (!gpt) return NULL;
+ memset(gpt, 0, sizeof(GuidPartitionTableHeader_t));
+
+ debug_printk(efi_printk_level "GPTH() calling ReadLBA().\n");
+ if (ReadLBA(hd, dev, lba, (u8 *)gpt,
+ sizeof(GuidPartitionTableHeader_t)) <
+ sizeof(GuidPartitionTableHeader_t)) {
+ debug_printk(efi_printk_level "ReadGPTH(%lx) read failed.\n",
+ lba);
+ kfree(gpt);
+ return NULL;
+ }
+ PrintGuidPartitionTableHeader(gpt);
+
+ return gpt;
+}
+
+
+
+/************************************************************
+ * IsGuidPartitionTableValid()
+ * Requires:
+ * - gd points to our struct gendisk
+ * - dev is our device major number
+ * - lba is the logical block address of the GPTH to test
+ * - gpt is a GPTH if it's valid
+ * - ptes is a PTEs if it's valid
+ * Modifies:
+ * - gpt and ptes
+ * Returns:
+ * 1 if valid
+ * 0 on error
+ ************************************************************/
+static int
+IsGuidPartitionTableValid(struct gendisk *hd, kdev_t dev, u64 lba,
+ GuidPartitionTableHeader_t **gpt,
+ GuidPartitionEntry_t **ptes)
+{
+ u32 crc, origcrc;
+
+ if (!hd || !gpt || !ptes) return 0;
+ if (!(*gpt = ReadGuidPartitionTableHeader(hd, dev, lba))) return 0;
+
+ /* Check the GUID Partition Table Signature */
+ if ((*gpt)->Signature != GUID_PT_HEADER_SIGNATURE) {
+ debug_printk(efi_printk_level "GUID Partition Table Header Signature is wrong: %x != %x\n", (*gpt)->Signature, GUID_PT_HEADER_SIGNATURE);
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+
+ /* Check the GUID Partition Table CRC */
+ origcrc = (*gpt)->HeaderCRC32;
+ (*gpt)->HeaderCRC32 = 0;
+ crc = CalculateCrc(*gpt, (*gpt)->HeaderSize);
+
+
+ if (crc != origcrc) {
+ debug_printk(efi_printk_level "GUID Partition Table Header CRC is wrong: %x != %x\n", (*gpt)->HeaderCRC32, origcrc);
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+ (*gpt)->HeaderCRC32 = origcrc;
+
+ /* Check that the MyLBA entry points to the LBA that contains
+ * the GUID Partition Table */
+ if ((*gpt)->MyLBA != lba) {
+ debug_printk(efi_printk_level "GPT MyLBA incorrect: %lx != %lx\n", (*gpt)->MyLBA, lba);
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+
+ if (!(*ptes = ReadGuidPartitionEntries(hd, dev, *gpt))) {
+ debug_printk(efi_printk_level "read PTEs failed.\n");
+ kfree(*gpt);
+ *gpt = NULL;
+ return 0;
+ }
+
+ /* Check the GUID Partition Entry Array CRC */
+ crc = CalculateCrc(*ptes, (*gpt)->NumberOfPartitionEntries *
+ (*gpt)->SizeOfPartitionEntry);
+
+ if (crc != (*gpt)->PartitionEntryArrayCRC32) {
+ debug_printk(efi_printk_level "GUID Partition Entry Array CRC check failed.\n");
+ kfree(*gpt);
+ *gpt = NULL;
+ kfree(*ptes);
+ *ptes = NULL;
+ return 0;
+ }
+
+
+ /* We're done, all's well */
+ return 1;
+}
+
+
+
+/************************************************************
+ * FindValidGPT()
+ * Requires:
+ * - hd points to our struct gendisk
+ * - dev is our device number
+ * - gpt points to a GPTH if it's valid
+ * - ptes points to the PTEs if they're valid
+ * Modifies:
+ * - gpt & ptes
+ * Returns:
+ * 1 if valid
+ * 0 on error
+ ************************************************************/
+static int
+FindValidGPT(struct gendisk *hd, kdev_t dev,
+ GuidPartitionTableHeader_t **gpt,
+ GuidPartitionEntry_t **ptes)
+{
+ int rc = 0;
+ GuidPartitionTableHeader_t *pgpt = NULL, *agpt = NULL;
+ GuidPartitionEntry_t *pptes = NULL, *aptes = NULL;
+ u64 lastlba;
+ if (!hd || !gpt || !ptes) return 0;
+
+ lastlba = LastLBA(hd, dev);
+ /* Check the Primary GPT */
+ rc = IsGuidPartitionTableValid(hd, dev, 1, &pgpt, &pptes);
+ if (rc) {
+ /* Primary GPT is OK, check the alternate and warn if bad */
+ rc = IsGuidPartitionTableValid(hd, dev, pgpt->AlternateLBA,
+ &agpt, &aptes);
+ if (!rc){
+ printk(KERN_WARNING "Alternate GPT is invalid, using primary GPT.\n");
+ }
+
+ *gpt = pgpt;
+ *ptes = pptes;
+ if (agpt) kfree(agpt);
+ if (aptes) kfree(aptes);
+ return 1;
+ } /* if primary is valid */
+ else {
+ /* Primary GPT is bad, check the Alternate GPT */
+ rc = IsGuidPartitionTableValid(hd, dev, lastlba,
+ &agpt, &aptes);
+ if (rc) {
+ /* Primary is bad, alternate is good.
+ Return values from the alternate and warn.
+ */
+ printk(KERN_WARNING "Primary GPT is invalid, using alternate GPT.\n");
+ *gpt = agpt;
+ *ptes = aptes;
+ return 1;
+ }
+ else {
+ /* Primary is bad, alternate is bad, try "other"
+ * alternate. This is necessary because if we
+ * have an odd-sized disk, user-space might
+ * have put the alternate in block lastlba-1.
+ */
+ if (!(lastlba & 1)) {
+ lastlba--;
+ rc = IsGuidPartitionTableValid(hd, dev,
+ lastlba,
+ &agpt, &aptes);
+ if (rc) {
+ /* Primary is bad, alternate is good.
+ * Return values from the alternate
+ * and warn.
+ */
+ printk("Primary GPT is invalid, using alternate GPT.\n");
+ *gpt = agpt;
+ *ptes = aptes;
+ return 1;
+ }
+ }
+ }
+ }
+ /* Both primary and alternate GPTs are bad.
+ * This isn't our disk, return 0.
+ */
+ return 0;
+}
+
+
+
+/*
+ * Create devices for each entry in the GUID Partition Table Entries.
+ * The first block of the disk is a Legacy MBR.
+ *
+ * We do not create a Linux partition for GPT, but
+ * only for the actual data partitions.
+ * Returns:
+ * -1 if unable to read the partition table
+ * 0 if this isn't our partition table
+ * 1 if successful
+ *
+ */
+
+static int
+add_gpt_partitions(struct gendisk *hd, kdev_t dev, int nextminor)
+{
+ GuidPartitionTableHeader_t *gpt = NULL;
+ GuidPartitionEntry_t *ptes = NULL;
+ u32 i, nummade=0;
+
+ efi_guid_t unusedGuid = UNUSED_ENTRY_GUID;
+#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+ efi_guid_t raidGuid = PARTITION_LINUX_RAID_GUID;
+#endif
+
+ if (!hd) return -1;
+
+ if (!FindValidGPT(hd, dev, &gpt, &ptes) ||
+ !gpt || !ptes) {
+ if (gpt) kfree(gpt);
+ if (ptes) kfree(ptes);
+ return 0;
+ }
+
+ debug_printk(efi_printk_level "GUID Partition Table is valid! Yea!\n");
+ for (i = 0; i < gpt->NumberOfPartitionEntries &&
+ nummade < (hd->max_p - 1); i++) {
+ if (!efi_guidcmp(unusedGuid, ptes[i].PartitionTypeGuid))
+ continue;
+
+ add_gd_partition(hd, nextminor, ptes[i].StartingLBA,
+ (ptes[i].EndingLBA-ptes[i].StartingLBA + 1));
+
+ /* If this is a RAID volume, tell md */
+#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+ if (!efi_guidcmp(raidGuid, ptes[i].PartitionTypeGuid)) {
+ md_autodetect_dev(MKDEV(MAJOR(dev),nextminor));
+ }
+#endif
+ nummade++;
+ nextminor++;
+
+ }
+ kfree(ptes);
+ kfree(gpt);
+ printk("\n");
+ return 1;
+
+}
+
+
+/*
+ * efi_partition()
+ *
+ * If the first block on the disk is a legacy MBR,
+ * it got handled already by msdos_partition().
+ * If it's a Protective MBR, we'll handle it here.
+ *
+ * Returns:
+ * -1 if unable to read the partition table
+ * 0 if this isn't our partition table
+ * 1 if successful
+ *
+ */
+
+int
+efi_partition(struct gendisk *hd, kdev_t dev,
+ unsigned long first_sector, int first_part_minor) {
+ int hardblocksize = get_hardblocksize(dev);
+ int orig_blksize_size = BLOCK_SIZE;
+ int rc = 0;
+
+ /* not good, but choose something! */
+ if (!hardblocksize) hardblocksize = 512;
+
+ /* Need to change the block size that the block layer uses */
+ if (blksize_size[MAJOR(dev)]){
+ orig_blksize_size = blksize_size[MAJOR(dev)][MINOR(dev)];
+ }
+
+ if (orig_blksize_size != hardblocksize)
+ set_blocksize(dev, hardblocksize);
+
+ rc = add_gpt_partitions(hd, dev, first_part_minor);
+
+ /* change back */
+ if (orig_blksize_size != hardblocksize)
+ set_blocksize(dev, orig_blksize_size);
+
+ return rc;
+}
+
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
+
+
+
diff -urN linux-davidm/fs/partitions/efi.h linux-2.4.0-test10-lia/fs/partitions/efi.h
--- linux-davidm/fs/partitions/efi.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test10-lia/fs/partitions/efi.h Wed Nov 1 23:35:07 2000
@@ -0,0 +1,154 @@
+/************************************************************
+ * EFI GUID Partition Table
+ * Per Intel EFI Specification v0.99
+ * http://developer.intel.com/technology/efi/efi.htm
+ *
+ * By Matt Domsch <Matt_Domsch@dell.com> Fri Sep 22 22:15:56 CDT 2000
+ * Copyright 2000 Dell Computer Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ ************************************************************/
+
+#ifndef FS_PART_EFI_H_INCLUDED
+#define FS_PART_EFI_H_INCLUDED
+
+#include <linux/types.h>
+#include <asm/efi.h>
+
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/blk.h>
+
+#define MSDOS_MBR_SIGNATURE 0xaa55
+#define EFI_PMBR_OSTYPE_EFI 0xEF
+#define EFI_PMBR_OSTYPE_EFI_GPT 0xEE
+
+#define GUID_PT_BLOCK_SIZE 512
+
+#define GUID_PT_HEADER_SIGNATURE 0x5452415020494645L
+#define GUID_PT_HEADER_REVISION_V1 0x00010000
+#define GUID_PT_HEADER_REVISION_V0_99 0x00000099
+#define UNUSED_ENTRY_GUID \
+ ((efi_guid_t) { 0x00000000, 0x0000, 0x0000, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }})
+#define PARTITION_SYSTEM_GUID \
+((efi_guid_t) { 0xC12A7328, 0xF81F, 0x11d2, { 0xBA, 0x4B, 0x00, 0xA0, 0xC9, 0x3E, 0xC9, 0x3B }})
+#define LEGACY_MBR_PARTITION_GUID \
+ ((efi_guid_t) { 0x024DEE41, 0x33E7, 0x11d3, { 0x9D, 0x69, 0x00, 0x08, 0xC7, 0x81, 0xF3, 0x9F }})
+#define PARTITION_MSFT_RESERVED_GUID \
+ ((efi_guid_t) { 0xE3C9E316, 0x0B5C, 0x4DB8, { 0x81, 0x7D, 0xF9, 0x2D, 0xF0, 0x02, 0x15, 0xAE }})
+#define PARTITION_BASIC_DATA_GUID \
+ ((efi_guid_t) { 0xEBD0A0A2, 0xB9E5, 0x4433, { 0x87, 0xC0, 0x68, 0xB6, 0xB7, 0x26, 0x99, 0xC7 }})
+#define PARTITION_LINUX_RAID_GUID \
+ ((efi_guid_t) { 0xa19d880f, 0x05fc, 0x4d3b, { 0xa0, 0x06, 0x74, 0x3f, 0x0f, 0x84, 0x91, 0x1e }})
+#define PARTITION_LINUX_SWAP_GUID \
+ ((efi_guid_t) { 0x0657fd6d, 0xa4ab, 0x43c4, { 0x84, 0xe5, 0x09, 0x33, 0xc8, 0x4b, 0x4f, 0x4f }})
+
+typedef struct _GuidPartitionTableHeader_t {
+ u64 Signature;
+ u32 Revision;
+ u32 HeaderSize;
+ u32 HeaderCRC32;
+ u32 Reserved1;
+ u64 MyLBA;
+ u64 AlternateLBA;
+ u64 FirstUsableLBA;
+ u64 LastUsableLBA;
+ efi_guid_t DiskGUID;
+ u64 PartitionEntryLBA;
+ u32 NumberOfPartitionEntries;
+ u32 SizeOfPartitionEntry;
+ u32 PartitionEntryArrayCRC32;
+ u8 Reserved2[GUID_PT_BLOCK_SIZE - 92];
+} GuidPartitionTableHeader_t;
+
+typedef struct _GuidPartitionEntryAttributes_t {
+ __u64 RequiredToFunction:1;
+ __u64 Reserved:63;
+} GuidPartitionEntryAttributes_t;
+
+typedef struct _GuidPartitionEntry_t {
+ efi_guid_t PartitionTypeGuid;
+ efi_guid_t UniquePartitionGuid;
+ u64 StartingLBA;
+ u64 EndingLBA;
+ GuidPartitionEntryAttributes_t Attributes;
+ efi_char16_t PartitionName[72/sizeof(efi_char16_t)];
+} GuidPartitionEntry_t;
+
+
+
+typedef struct _PartitionRecord_t {
+ u8 BootIndicator; /* Not used by EFI firmware. Set to 0x80 to indicate that this
+ is the bootable legacy partition. */
+ u8 StartHead; /* Start of partition in CHS address, not used by EFI firmware. */
+ u8 StartSector; /* Start of partition in CHS address, not used by EFI firmware. */
+ u8 StartTrack; /* Start of partition in CHS address, not used by EFI firmware. */
+ u8 OSType; /* OS type. A value of 0xEF defines an EFI system partition.
+ Other values are reserved for legacy operating systems, and
+ allocated independently of the EFI specification. */
+ u8 EndHead; /* End of partition in CHS address, not used by EFI firmware. */
+ u8 EndSector; /* End of partition in CHS address, not used by EFI firmware. */
+ u8 EndTrack; /* End of partition in CHS address, not used by EFI firmware. */
+ u32 StartingLBA; /* Starting LBA address of the partition on the disk. Used by
+ EFI firmware to define the start of the partition. */
+ u32 SizeInLBA; /* Size of partition in LBA. Used by EFI firmware to determine
+ the size of the partition. */
+} PartitionRecord_t;
+
+typedef struct _LegacyMBR_t {
+ u8 BootCode[440];
+ u32 UniqueMBRSignature;
+ u16 Unknown;
+ PartitionRecord_t PartitionRecord[4];
+ u16 Signature;
+} __attribute__ ((packed)) LegacyMBR_t;
+
+
+
+#define EFI_GPT_PRIMARY_PARTITION_TABLE_LBA 1
+
+/* Functions */
+extern int
+efi_partition(struct gendisk *hd, kdev_t dev,
+ unsigned long first_sector, int first_part_minor);
+
+
+
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * --------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
diff -urN linux-davidm/fs/partitions/msdos.c linux-2.4.0-test10-lia/fs/partitions/msdos.c
--- linux-davidm/fs/partitions/msdos.c Tue Jul 18 23:29:16 2000
+++ linux-2.4.0-test10-lia/fs/partitions/msdos.c Thu Nov 2 00:10:33 2000
@@ -36,6 +36,10 @@
#include "check.h"
#include "msdos.h"
+#ifdef CONFIG_EFI_PARTITION
+#include "efi.h"
+#endif
+
#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
extern void md_autodetect_dev(kdev_t dev);
#endif
@@ -378,6 +382,16 @@
bforget(bh);
return 0;
}
+#ifdef CONFIG_EFI_PARTITION
+ p = (struct partition *) (0x1be + data);
+ for (i=1 ; i<=4 ; i++,p++) {
+ /* If this is an EFI GPT disk, msdos should ignore it. */
+ if (SYS_IND(p) == EFI_PMBR_OSTYPE_EFI_GPT) {
+ bforget(bh);
+ return 0;
+ }
+ }
+#endif
p = (struct partition *) (0x1be + data);
#ifdef CONFIG_BLK_DEV_IDE
diff -urN linux-davidm/include/asm-ia64/cache.h linux-2.4.0-test10-lia/include/asm-ia64/cache.h
--- linux-davidm/include/asm-ia64/cache.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/cache.h Wed Nov 1 23:32:05 2000
@@ -9,11 +9,11 @@
*/
/* Bytes per L1 (data) cache line. */
-#define LOG_L1_CACHE_BYTES 6
-#define L1_CACHE_BYTES (1 << LOG_L1_CACHE_BYTES)
+#define L1_CACHE_SHIFT 6
+#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#ifdef CONFIG_SMP
-# define SMP_LOG_CACHE_BYTES LOG_L1_CACHE_BYTES
+# define SMP_CACHE_SHIFT L1_CACHE_SHIFT
# define SMP_CACHE_BYTES L1_CACHE_BYTES
#else
/*
@@ -21,7 +21,7 @@
* safe and provides an easy way to avoid wasting space on a
* uni-processor:
*/
-# define SMP_LOG_CACHE_BYTES 3
+# define SMP_CACHE_SHIFT 3
# define SMP_CACHE_BYTES (1 << 3)
#endif
diff -urN linux-davidm/include/asm-ia64/machvec_dig.h linux-2.4.0-test10-lia/include/asm-ia64/machvec_dig.h
--- linux-davidm/include/asm-ia64/machvec_dig.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec_dig.h Wed Nov 1 23:22:49 2000
@@ -5,6 +5,7 @@
extern ia64_mv_irq_init_t dig_irq_init;
extern ia64_mv_pci_fixup_t dig_pci_fixup;
extern ia64_mv_map_nr_t map_nr_dense;
+extern ia64_mv_pci_fixup_t iosapic_pci_fixup;
/*
* This stuff has dual use!
diff -urN linux-davidm/include/asm-ia64/module.h linux-2.4.0-test10-lia/include/asm-ia64/module.h
--- linux-davidm/include/asm-ia64/module.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/module.h Wed Nov 1 23:53:20 2000
@@ -75,10 +75,10 @@
/*
* Pointers are reasonable, add the module unwind table
*/
- archdata->unw_table = unw_add_unwind_table(mod->name, archdata->segment_base,
+ archdata->unw_table = unw_add_unwind_table(mod->name,
+ (unsigned long) archdata->segment_base,
(unsigned long) archdata->gp,
- (unsigned long) archdata->unw_start,
- (unsigned long) archdata->unw_end);
+ archdata->unw_start, archdata->unw_end);
#endif /* CONFIG_IA64_NEW_UNWIND */
return 0;
}
@@ -98,7 +98,7 @@
archdata = (struct archdata *)(mod->archdata_start);
if (archdata->unw_table != NULL)
- unw_remove_unwind_table(archdata->unw_table);
+ unw_remove_unwind_table((void *) archdata->unw_table);
}
#endif /* CONFIG_IA64_NEW_UNWIND */
diff -urN linux-davidm/include/asm-ia64/pgalloc.h linux-2.4.0-test10-lia/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/pgalloc.h Wed Nov 1 23:32:10 2000
@@ -196,13 +196,6 @@
extern int do_check_pgt_cache (int, int);
/*
- * This establishes kernel virtual mappings (e.g., as a result of a
- * vmalloc call). Since ia-64 uses a separate kernel page table,
- * there is nothing to do here... :)
- */
-#define set_pgdir(vmaddr, entry) do { } while(0)
-
-/*
* Now for some TLB flushing routines. This is the kind of stuff that
* can be very expensive, so try to avoid them whenever possible.
*/
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-test10-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Thu Nov 2 00:16:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/pgtable.h Wed Nov 1 23:32:30 2000
@@ -24,6 +24,9 @@
* matches the VHPT short format, the first doubleword of the VHPD long
* format, and the first doubleword of the TLB insertion format.
*/
+#define _PAGE_A_BIT 5
+#define _PAGE_D_BIT 6
+
#define _PAGE_P (1 << 0) /* page present bit */
#define _PAGE_MA_WB (0x0 << 2) /* write back memory attribute */
#define _PAGE_MA_UC (0x4 << 2) /* uncacheable memory attribute */
@@ -46,8 +49,8 @@
#define _PAGE_AR_X_RX (7 << 9) /* exec & promote / read & exec */
#define _PAGE_AR_MASK (7 << 9)
#define _PAGE_AR_SHIFT 9
-#define _PAGE_A (1 << 5) /* page accessed bit */
-#define _PAGE_D (1 << 6) /* page dirty bit */
+#define _PAGE_A (1 << _PAGE_A_BIT) /* page accessed bit */
+#define _PAGE_D (1 << _PAGE_D_BIT) /* page dirty bit */
#define _PAGE_PPN_MASK (((__IA64_UL(1) << IA64_MAX_PHYS_BITS) - 1) & ~0xfffUL)
#define _PAGE_ED (__IA64_UL(1) << 52) /* exception deferral */
#define _PAGE_PROTNONE (__IA64_UL(1) << 63)
@@ -186,34 +189,12 @@
} while (0)
/* Quick test to see if ADDR is a (potentially) valid physical address. */
-static __inline__ long
+static inline long
ia64_phys_addr_valid (unsigned long addr)
{
	return (addr & (my_cpu_data.unimpl_pa_mask)) == 0;
}
-/* Quick test to see if ADDR is a (potentially) valid physical address. */
-static __inline__ long
-ia64_phys_addr_valid (unsigned long addr)
-{
-	return (addr & (my_cpu_data.unimpl_pa_mask)) == 0;
-}
-
-/*
- * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
- * memory.  For the return value to be meaningful, ADDR must be >=
- * PAGE_OFFSET.  This operation can be relatively expensive (e.g.,
- * require a hash-, or multi-level tree-lookup or something of that
- * sort) but it guarantees to return TRUE only if accessing the page
- * at that address does not cause an error. Note that there may be
- * addresses for which kern_addr_valid() returns FALSE even though an
- * access would not cause an error (e.g., this is typically true for
- * memory mapped I/O regions.
- *
- * XXX Need to implement this for IA-64.
- */
-#define kern_addr_valid(addr) (1)
-
/*
* kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
 * memory.  For the return value to be meaningful, ADDR must be >=
@@ -340,7 +321,7 @@
/*
* Return the region index for virtual address ADDRESS.
*/
-static __inline__ unsigned long
+static inline unsigned long
rgn_index (unsigned long address)
{
ia64_va a;
@@ -352,7 +333,7 @@
/*
* Return the region offset for virtual address ADDRESS.
*/
-static __inline__ unsigned long
+static inline unsigned long
rgn_offset (unsigned long address)
{
ia64_va a;
@@ -364,7 +345,7 @@
#define RGN_SIZE (1UL << 61)
#define RGN_KERNEL 7
-static __inline__ unsigned long
+static inline unsigned long
pgd_index (unsigned long address)
{
unsigned long region = address >> 61;
@@ -375,7 +356,7 @@
/* The offset in the 1-level directory is given by the 3 region bits
(61..63) and the seven level-1 bits (33-39). */
-static __inline__ pgd_t*
+static inline pgd_t*
pgd_offset (struct mm_struct *mm, unsigned long address)
{
return mm->pgd + pgd_index(address);
@@ -394,6 +375,49 @@
#define pte_offset(dir,addr) \
((pte_t *) pmd_page(*(dir)) + (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
+/* atomic versions of the some PTE manipulations: */
+
+static inline int
+ptep_test_and_clear_young (pte_t *ptep)
+{
+ return test_and_clear_bit(_PAGE_A_BIT, ptep);
+}
+
+static inline int
+ptep_test_and_clear_dirty (pte_t *ptep)
+{
+ return test_and_clear_bit(_PAGE_D_BIT, ptep);
+}
+
+static inline pte_t
+ptep_get_and_clear (pte_t *ptep)
+{
+ return __pte(xchg((long *) ptep, 0));
+}
+
+/* XXX this should be called ptep_set_wrprotect!!! */
+static inline void
+ptep_clear_wrprotect (pte_t *ptep)
+{
+ unsigned long new, old;
+
+ do {
+ old = pte_val(*ptep);
+ new = pte_val(pte_wrprotect(__pte (old)));
+ } while (cmpxchg((unsigned long *) ptep, old, new) != old);
+}
+
+static inline void
+ptep_mkdirty (pte_t *ptep)
+{
+ set_bit(_PAGE_D_BIT, ptep);
+}
+
+static inline int
+pte_same (pte_t a, pte_t b)
+{
+	return pte_val(a) == pte_val(b);
+}
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern void paging_init (void);
@@ -459,8 +483,6 @@
*/
extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
-#include <asm-generic/pgtable.h>
# endif /* !__ASSEMBLY__ */
diff -urN linux-davidm/include/linux/skbuff.h linux-2.4.0-test10-lia/include/linux/skbuff.h
--- linux-davidm/include/linux/skbuff.h Thu Nov 2 00:16:42 2000
+++ linux-2.4.0-test10-lia/include/linux/skbuff.h Wed Nov 1 23:33:21 2000
@@ -896,11 +896,7 @@
{
struct sk_buff *skb;
-#ifdef CONFIG_SKB_BELOW_4GB
- skb = alloc_skb(length+16, GFP_ATOMIC | GFP_DMA);
-#else
skb = alloc_skb(length+16, GFP_ATOMIC);
-#endif
if (skb)
skb_reserve(skb,16);
return skb;
_______________________________________________
Linux-IA64 mailing list
Linux-IA64@linuxia64.org
http://lists.linuxia64.org/lists/listinfo/linux-ia64
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.0-test10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (25 preceding siblings ...)
2000-11-02 10:39 ` Pimenov, Sergei
@ 2000-11-16 7:59 ` David Mosberger
2000-12-07 8:26 ` [Linux-ia64] kernel update (relative to 2.4.0-test11) David Mosberger
` (188 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-11-16 7:59 UTC (permalink / raw)
To: linux-ia64
Here is the latest IA-64 kernel update. It's still relative to
test10. This patch is rather big as it contains the first batch of
code required for the SGI SN1 machine. Other than that, the following
changes were made:
- Asit: added workaround for dbr[] access errata
- Jonathan Nicklin, Patrick O'Rourke: changed locore code to execute
in virtual mode; with this change, only the (rare) page table accesses are
now done in physical mode
- Stephane: cleanup of sal.h; fix unaligned handler to always send SIGBUS
(not SIGSEGV)
- Kanoj: fix for ia64_pal_call_static
- added send_ipi/inX/outX machvec entries; note: both IPI and inX/outX
are architected in IA-64 and there shouldn't be any need for these
machvecs; however, there are unfortunately some machines that don't
follow the architecture and that's why we had to add them; if you're
a system designer, please follow the IA-64 architecture and do not
use these machvec entries
- DCR is now initialized such that all faults are deferred (as requested
by Intel)
- cleaned up PCI code and pgtable.h a bit
- ptce loop info is now maintained per CPU (mostly for cleanliness)
- width of region id registers is now determined at boot time via
a call to PAL
- fix unwind code so it doesn't hang when removing a kernel module
As usual, the full patch can be found at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64
The file name is linux-2.4.0-test10-ia64-001115.diff*
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-test10-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/config.in Wed Nov 15 17:52:03 2000
@@ -48,6 +48,10 @@
bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIFIC
bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIFIC
fi
+ bool ' Enable Itanium C-step specific code' CONFIG_ITANIUM_CSTEP_SPECIFIC
+ if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
+ bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
+ fi
bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-test10-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/entry.S Wed Nov 15 17:52:56 2000
@@ -11,6 +11,17 @@
* Copyright (C) 1999 Don Dugger <Don.Dugger@intel.com>
*/
/*
+ * ia64_switch_to now places correct virtual mapping in TR2 for
+ * kernel stack. This allows us to handle interrupts without changing
+ * to physical mode.
+ *
+ * ar.k4 is now used to hold last virtual map address
+ *
+ * Jonathan Nicklin <nicklin@missioncriticallinux.com>
+ * Patrick O'Rourke <orourke@missioncriticallinux.com>
+ * 11/07/2000
+ */
+/*
* Global (preserved) predicate usage on syscall entry/exit path:
*
* pKern: See entry.h.
@@ -27,7 +38,8 @@
#include <asm/processor.h>
#include <asm/unistd.h>
#include <asm/asmmacro.h>
-
+#include <asm/pgtable.h>
+
#include "entry.h"
.text
@@ -98,6 +110,8 @@
br.ret.sptk.many rp
END(sys_clone)
+#define KSTACK_TR 2
+
/*
* prev_task <- ia64_switch_to(struct task_struct *next)
*/
@@ -108,22 +122,55 @@
UNW(.body)
adds r22=IA64_TASK_THREAD_KSP_OFFSET,r13
- dep r18=-1,r0,0,61 // build mask 0x1fffffffffffffff
+ mov r27=ar.k4
+ dep r20=0,in0,61,3 // physical address of "current"
+ ;;
+ st8 [r22]=sp // save kernel stack pointer of old task
+ shr.u r26=r20,_PAGE_SIZE_256M
+ ;;
+ cmp.eq p7,p6=r26,r0 // check < 256M
adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
;;
- st8 [r22]=sp // save kernel stack pointer of old task
- ld8 sp=[r21] // load kernel stack pointer of new task
- and r20=in0,r18 // physical address of "current"
- ;;
- mov ar.k6=r20 // copy "current" into ar.k6
- mov r8=r13 // return pointer to previously running task
- mov r13=in0 // set "current" pointer
+ /*
+ * If we've already mapped this task's page, we can skip doing it
+ * again.
+ */
+(p6) cmp.eq p7,p6=r26,r27
+(p6) br.cond.dpnt.few .map
+ ;;
+.done: ld8 sp=[r21] // load kernel stack pointer of new task
+(p6) ssm psr.ic // if we had to map, re-enable the psr.ic bit FIRST!!!
;;
+(p6) srlz.d
+ mov ar.k6=r20 // copy "current" into ar.k6
+ mov r8=r13 // return pointer to previously running task
+ mov r13=in0 // set "current" pointer
+ ;;
+(p6) ssm psr.i // re-enable psr.i AFTER the ic bit is serialized
DO_LOAD_SWITCH_STACK( )
+
#ifdef CONFIG_SMP
- sync.i // ensure "fc"s done by this CPU are visible on other CPUs
-#endif
- br.ret.sptk.few rp
+ sync.i // ensure "fc"s done by this CPU are visible on other CPUs
+#endif
+ br.ret.sptk.few rp // boogie on out in new context
+
+.map:
+ rsm psr.i | psr.ic
+ movl r25=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX
+ ;;
+ srlz.d
+ or r23=r25,r20 // construct PA | page properties
+ mov r25=_PAGE_SIZE_256M<<2
+ ;;
+ mov cr.itir=r25
+ mov cr.ifa=in0 // VA of next task...
+ ;;
+ mov r25=KSTACK_TR // use tr entry #2...
+ mov ar.k4=r26 // remember last page we mapped...
+ ;;
+ itr.d dtr[r25]=r23 // wire in new mapping...
+ br.cond.sptk.many .done
+ ;;
END(ia64_switch_to)
#ifndef CONFIG_IA64_NEW_UNWIND
@@ -611,14 +658,13 @@
mov ar.ccv=r1
mov ar.fpsr=r13
mov b0=r14
- // turn off interrupts, interrupt collection, & data translation
- rsm psr.i | psr.ic | psr.dt
+ // turn off interrupts, interrupt collection
+ rsm psr.i | psr.ic
;;
srlz.i // EAS 2.5
mov b7=r15
;;
invala // invalidate ALAT
- dep r12=0,r12,61,3 // convert sp to physical address
bsw.0;; // switch back to bank 0 (must be last in insn group)
;;
#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
@@ -1091,7 +1137,7 @@
data8 sys_setpriority
data8 sys_statfs
data8 sys_fstatfs
- data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1105
data8 sys_semget
data8 sys_semop
data8 sys_semctl
diff -urN linux-davidm/arch/ia64/kernel/fw-emu.c linux-2.4.0-test10-lia/arch/ia64/kernel/fw-emu.c
--- linux-davidm/arch/ia64/kernel/fw-emu.c Mon Oct 9 17:54:54 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/fw-emu.c Wed Nov 15 17:53:32 2000
@@ -402,7 +402,6 @@
sal_systab->sal_rev_minor = 1;
sal_systab->sal_rev_major = 0;
sal_systab->entry_count = 1;
- sal_systab->ia32_bios_present = 0;
#ifdef CONFIG_IA64_GENERIC
strcpy(sal_systab->oem_id, "Generic");
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.4.0-test10-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/head.S Wed Nov 15 17:54:03 2000
@@ -168,6 +168,11 @@
add r19=IA64_NUM_DBG_REGS*8,in0
;;
1: mov r16=dbr[r18]
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_C0_SPECIFIC)
+ ;;
+ srlz.d
+#endif
mov r17=ibr[r18]
add r18=1,r18
;;
@@ -195,6 +200,11 @@
add r18=1,r18
;;
mov dbr[r18]=r16
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_C0_SPECIFIC)
+ ;;
+ srlz.d
+#endif
mov ibr[r18]=r17
br.cloop.sptk.few 1b
;;
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.0-test10-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/ivt.S Wed Nov 15 17:54:59 2000
@@ -44,20 +44,10 @@
#include <asm/system.h>
#include <asm/unistd.h>
-#define MINSTATE_START_SAVE_MIN /* no special action needed */
-#define MINSTATE_END_SAVE_MIN \
- or r2=r2,r14; /* make first base a kernel virtual address */ \
- or r12=r12,r14; /* make sp a kernel virtual address */ \
- or r13=r13,r14; /* make `current' a kernel virtual address */ \
- bsw.1; /* switch back to bank 1 (must be last in insn group) */ \
- ;;
-
+#define MINSTATE_VIRT /* needed by minstate.h */
#include "minstate.h"
#define FAULT(n) \
- rsm psr.dt; /* avoid nested faults due to TLB misses... */ \
- ;; \
- srlz.d; /* ensure everyone knows psr.dt is off... */ \
mov r31=pr; \
mov r19=n;; /* prepare to save predicates */ \
br.cond.sptk.many dispatch_to_fault_handler
@@ -419,6 +409,10 @@
//-----------------------------------------------------------------------------------
// call do_page_fault (predicates are in r31, psr.dt is off, r16 is faulting address)
page_fault:
+ ssm psr.dt
+ ;;
+ srlz.i
+ ;;
SAVE_MIN_WITH_COVER
//
// Copy control registers to temporary registers, then turn on psr bits,
@@ -430,7 +424,7 @@
mov r9=cr.isr
adds r3=8,r2 // set up second base pointer
;;
- ssm psr.ic | psr.dt
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -725,16 +719,14 @@
mov r16=cr.iim
mov r17=__IA64_BREAK_SYSCALL
mov r31=pr // prepare to save predicates
- rsm psr.dt // avoid nested faults due to TLB misses...
;;
- srlz.d // ensure everyone knows psr.dt is off...
cmp.eq p0,p7=r16,r17 // is this a system call? (p7 <- false, if so)
(p7) br.cond.spnt.many non_syscall
SAVE_MIN // uses r31; defines r2:
- // turn interrupt collection and data translation back on:
- ssm psr.ic | psr.dt
+ // turn interrupt collection back on:
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
@@ -795,17 +787,14 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
- rsm psr.dt // avoid nested faults due to TLB misses...
- ;;
- srlz.d // ensure everyone knows psr.dt is off...
mov r31=pr // prepare to save predicates
;;
SAVE_MIN_WITH_COVER // uses r31; defines r2 and r3
- ssm psr.ic | psr.dt // turn interrupt collection and data translation back on
+ ssm psr.ic // turn interrupt collection
;;
adds r3=8,r2 // set up second base pointer for SAVE_REST
- srlz.i // ensure everybody knows psr.ic and psr.dt are back on
+ srlz.i // ensure everybody knows psr.ic is back on
;;
SAVE_REST
;;
@@ -855,7 +844,7 @@
// The "alloc" can cause a mandatory store which could lead to
// an "Alt DTLB" fault which we can handle only if psr.ic is on.
//
- ssm psr.ic | psr.dt
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -900,7 +889,7 @@
SAVE_MIN
;;
mov r14=cr.isr
- ssm psr.ic | psr.dt
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -985,8 +974,8 @@
mov r8=cr.iim // get break immediate (must be done while psr.ic is off)
adds r3=8,r2 // set up second base pointer for SAVE_REST
- // turn interrupt collection and data translation back on:
- ssm psr.ic | psr.dt
+ // turn interrupt collection back on:
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -1023,7 +1012,7 @@
// wouldn't get the state to recover.
//
mov r15=cr.ifa
- ssm psr.ic | psr.dt
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -1055,7 +1044,6 @@
//
// Input:
// psr.ic: off
- // psr.dt: off
// r19: fault vector number (e.g., 24 for General Exception)
// r31: contains saved predicates (pr)
//
@@ -1071,7 +1059,7 @@
mov r10=cr.iim
mov r11=cr.itir
;;
- ssm psr.ic | psr.dt
+ ssm psr.ic
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -1145,9 +1133,7 @@
// 0x5400 Entry 24 (size 16 bundles) General Exception (5,32,34,36,38,39)
mov r16=cr.isr
mov r31=pr
- rsm psr.dt // avoid nested faults due to TLB misses...
;;
- srlz.d // ensure everyone knows psr.dt is off...
cmp4.eq p6,p0=0,r16
(p6) br.sptk dispatch_illegal_op_fault
;;
@@ -1157,7 +1143,7 @@
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5500 Entry 25 (size 16 bundles) Disabled FP-Register (35)
- rsm psr.dt | psr.dfh // ensure we can access fph
+ rsm psr.dfh // ensure we can access fph
;;
srlz.d
mov r31=pr
@@ -1218,11 +1204,9 @@
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5a00 Entry 30 (size 16 bundles) Unaligned Reference (57)
- rsm psr.dt // avoid nested faults due to TLB misses...
mov r16=cr.ipsr
mov r31=pr // prepare to save predicates
;;
- srlz.d // ensure everyone knows psr.dt is off
br.cond.sptk.many dispatch_unaligned_handler
.align 256
@@ -1304,9 +1288,6 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6a00 Entry 46 (size 16 bundles) IA-32 Intercept (30,31,59,70,71)
#ifdef CONFIG_IA32_SUPPORT
- rsm psr.dt
- ;;
- srlz.d
mov r31=pr
mov r16=cr.isr
;;
@@ -1334,9 +1315,6 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6b00 Entry 47 (size 16 bundles) IA-32 Interrupt (74)
#ifdef CONFIG_IA32_SUPPORT
- rsm psr.dt
- ;;
- srlz.d
mov r31=pr
br.cond.sptk.many dispatch_to_ia32_handler
#else
diff -urN linux-davidm/arch/ia64/kernel/machvec.c linux-2.4.0-test10-lia/arch/ia64/kernel/machvec.c
--- linux-davidm/arch/ia64/kernel/machvec.c Sun Aug 13 10:17:16 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/machvec.c Wed Nov 15 17:57:18 2000
@@ -1,10 +1,12 @@
#include <linux/config.h>
+
+#ifdef CONFIG_IA64_GENERIC
+
#include <linux/kernel.h>
+#include <linux/string.h>
#include <asm/page.h>
#include <asm/machvec.h>
-
-#ifdef CONFIG_IA64_GENERIC
struct ia64_machine_vector ia64_mv;
diff -urN linux-davidm/arch/ia64/kernel/mca.c linux-2.4.0-test10-lia/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/mca.c Wed Nov 15 17:57:31 2000
@@ -366,7 +366,7 @@
void
ia64_mca_wakeup(int cpu)
{
- ia64_send_ipi(cpu, IA64_MCA_WAKEUP_INT_VECTOR, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(cpu, IA64_MCA_WAKEUP_INT_VECTOR, IA64_IPI_DM_INT, 0);
ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
}
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.0-test10-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/mca_asm.S Wed Nov 15 17:57:45 2000
@@ -3,8 +3,9 @@
//
// Mods by cfleck to integrate into kernel build
// 00/03/15 davidm Added various stop bits to get a clean compile
-// 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp kstack,
-// switch modes, jump to C INIT handler
+//
+// 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp
+// kstack, switch modes, jump to C INIT handler
//
#include <asm/pgtable.h>
#include <asm/processor.h>
@@ -15,14 +16,7 @@
* When we get a machine check, the kernel stack pointer is no longer
* valid, so we need to set a new stack pointer.
*/
-#define MINSTATE_START_SAVE_MIN \
-(pKern) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
- ;;
-
-#define MINSTATE_END_SAVE_MIN \
- or r12=r12,r14; /* make sp a kernel virtual address */ \
- or r13=r13,r14; /* make `current' a kernel virtual address */ \
- ;;
+#define MINSTATE_PHYS /* Make sure stack access is physical for MINSTATE */
#include "minstate.h"
diff -urN linux-davidm/arch/ia64/kernel/minstate.h linux-2.4.0-test10-lia/arch/ia64/kernel/minstate.h
--- linux-davidm/arch/ia64/kernel/minstate.h Mon Oct 9 17:54:54 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/minstate.h Wed Nov 15 22:22:30 2000
@@ -20,6 +20,72 @@
#define rR1 r20
/*
+ * Here start the source-dependent macros.
+ */
+
+/*
+ * For ivt.s we want to access the stack virtually so we don't have to disable translation
+ * on interrupts.
+ */
+#define MINSTATE_START_SAVE_MIN_VIRT \
+ dep r1=-1,r1,61,3; /* r1 = current (virtual) */ \
+(p7) mov ar.rsc=r0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
+ ;; \
+(p7) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of RBS */ \
+(p7) mov rARRNAT=ar.rnat; \
+(pKern) mov r1=sp; /* get sp */ \
+ ;; \
+(p7) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
+(p7) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
+ ;; \
+(pKern) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
+(p7) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
+ ;; \
+(p7) mov r18=ar.bsp; \
+(p7) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
+
+#define MINSTATE_END_SAVE_MIN_VIRT \
+ or r13=r13,r14; /* make `current' a kernel virtual address */ \
+ bsw.1; /* switch back to bank 1 (must be last in insn group) */ \
+ ;;
+
+/*
+ * For mca_asm.S we want to access the stack physically since the state is saved before we
+ * go virtual and we don't want to destroy the iip or ipsr.
+ */
+#define MINSTATE_START_SAVE_MIN_PHYS \
+(pKern) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
+(p7) mov ar.rsc=r0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
+(p7) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
+ ;; \
+(p7) mov rARRNAT=ar.rnat; \
+(pKern) dep r1=0,sp,61,3; /* compute physical addr of sp */ \
+(p7) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
+(p7) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
+(p7) dep rKRBS=-1,rKRBS,61,3; /* compute kernel virtual addr of RBS */\
+ ;; \
+(pKern) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
+(p7) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
+ ;; \
+(p7) mov r18=ar.bsp; \
+(p7) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
+
+#define MINSTATE_END_SAVE_MIN_PHYS \
+ or r12=r12,r14; /* make sp a kernel virtual address */ \
+ or r13=r13,r14; /* make `current' a kernel virtual address */ \
+ ;;
+
+#ifdef MINSTATE_VIRT
+# define MINSTATE_START_SAVE_MIN MINSTATE_START_SAVE_MIN_VIRT
+# define MINSTATE_END_SAVE_MIN MINSTATE_END_SAVE_MIN_VIRT
+#endif
+
+#ifdef MINSTATE_PHYS
+# define MINSTATE_START_SAVE_MIN MINSTATE_START_SAVE_MIN_PHYS
+# define MINSTATE_END_SAVE_MIN MINSTATE_END_SAVE_MIN_PHYS
+#endif
+
+/*
* DO_SAVE_MIN switches to the kernel stacks (if necessary) and saves
* the minimum state necessary that allows us to turn psr.ic back
* on.
@@ -31,7 +97,6 @@
*
* Upon exit, the state is as follows:
* psr.ic: off
- * psr.dt: off
* r2 = points to &pt_regs.r16
* r12 = kernel sp (kernel virtual address)
* r13 = points to current task_struct (kernel virtual address)
@@ -50,7 +115,7 @@
mov rCRIPSR=cr.ipsr; \
mov rB6=b6; /* rB6 = branch reg 6 */ \
mov rCRIIP=cr.iip; \
- mov r1=ar.k6; /* r1 = current */ \
+ mov r1=ar.k6; /* r1 = current (physical) */ \
;; \
invala; \
extr.u r16=rCRIPSR,32,2; /* extract psr.cpl */ \
@@ -58,25 +123,11 @@
cmp.eq pKern,p7=r0,r16; /* are we in kernel mode already? (psr.cpl=0) */ \
/* switch from user to kernel RBS: */ \
COVER; \
- ;; \
- MINSTATE_START_SAVE_MIN \
-(p7) mov ar.rsc=r0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
-(p7) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
- ;; \
-(p7) mov rARRNAT=ar.rnat; \
-(pKern) dep r1=0,sp,61,3; /* compute physical addr of sp */ \
-(p7) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
-(p7) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
-(p7) dep rKRBS=-1,rKRBS,61,3; /* compute kernel virtual addr of RBS */ \
;; \
-(pKern) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
-(p7) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
+ MINSTATE_START_SAVE_MIN \
;; \
-(p7) mov r18=ar.bsp; \
-(p7) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
- \
- mov r16=r1; /* initialize first base pointer */ \
- adds r17=8,r1; /* initialize second base pointer */ \
+ mov r16=r1; /* initialize first base pointer */ \
+ adds r17=8,r1; /* initialize second base pointer */ \
;; \
st8 [r16]=rCRIPSR,16; /* save cr.ipsr */ \
st8 [r17]=rCRIIP,16; /* save cr.iip */ \
diff -urN linux-davidm/arch/ia64/kernel/pal.S linux-2.4.0-test10-lia/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Mon Oct 9 17:54:54 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/pal.S Wed Nov 15 17:58:01 2000
@@ -52,10 +52,9 @@
/*
* Make a PAL call using the static calling convention.
*
- * in0 Pointer to struct ia64_pal_retval
- * in1 Index of PAL service
- * in2 - in4 Remaining PAL arguments
- * in5 1 => clear psr.ic, 0 => don't clear psr.ic
+ * in0 Index of PAL service
+ * in1 - in3 Remaining PAL arguments
+ * in4 1 => clear psr.ic, 0 => don't clear psr.ic
*
*/
GLOBAL_ENTRY(ia64_pal_call_static)
@@ -69,7 +68,7 @@
}
;;
ld8 loc2 = [loc2] // loc2 <- entry point
- tbit.nz p6,p7 = in5, 0
+ tbit.nz p6,p7 = in4, 0
adds r8 = 1f-1b,r8
;;
mov loc3 = psr
diff -urN linux-davidm/arch/ia64/kernel/pci.c linux-2.4.0-test10-lia/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/pci.c Wed Nov 15 17:58:20 2000
@@ -1,10 +1,8 @@
/*
- * pci.c - Low-Level PCI Access in IA64
+ * pci.c - Low-Level PCI Access in IA-64
*
* Derived from bios32.c of i386 tree.
- *
*/
-
#include <linux/config.h>
#include <linux/types.h>
@@ -44,15 +42,11 @@
* This interrupt-safe spinlock protects all accesses to PCI
* configuration space.
*/
-
spinlock_t pci_lock = SPIN_LOCK_UNLOCKED;
-struct pci_fixup pcibios_fixups[] = { { 0 } };
-
-#define PCI_NO_CHECKS 0x400
-#define PCI_NO_PEER_FIXUP 0x800
-
-static unsigned int pci_probe = PCI_NO_CHECKS;
+struct pci_fixup pcibios_fixups[] = {
+ { 0 }
+};
/* Macro to build a PCI configuration address to be passed as a parameter to SAL. */
@@ -110,7 +104,7 @@
return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS(dev, where), 4, value);
}
-static struct pci_ops pci_conf = {
+struct pci_ops pci_conf = {
pci_conf_read_config_byte,
pci_conf_read_config_word,
pci_conf_read_config_dword,
@@ -185,17 +179,16 @@
return 0;
}
+void
+pcibios_align_resource (void *data, struct resource *res, unsigned long size)
+{
+}
+
/*
* PCI BIOS setup, always defaults to SAL interface
*/
char * __init
pcibios_setup (char *str)
{
- pci_probe = PCI_NO_CHECKS;
return NULL;
-}
-
-void
-pcibios_align_resource (void *data, struct resource *res, unsigned long size)
-{
}
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.0-test10-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/ptrace.c Wed Nov 15 17:58:36 2000
@@ -1058,7 +1058,7 @@
goto out_tsk;
if (child->state != TASK_STOPPED) {
- if (request != PTRACE_KILL && request != PTRACE_PEEKUSR)
+ if (request != PTRACE_KILL)
goto out_tsk;
}
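The ptrace hunk above tightens the TASK_STOPPED check; the surrounding sys_ptrace lookup is also where Manfred's race fix applies: the task reference must be taken *before* tasklist_lock is dropped, or the child can die between lookup and use. A minimal user-space sketch of that pattern, with hypothetical stand-in types (the real kernel uses `read_lock(&tasklist_lock)`, `find_task_by_pid()`, and `get_task_struct()`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for kernel types, for illustration only. */
struct task_struct { int pid; int refcount; };

static struct task_struct tasks[4] = { {300, 1}, {301, 1}, {302, 1}, {303, 1} };

static struct task_struct *find_task_by_pid(int pid)
{
    for (size_t i = 0; i < 4; i++)
        if (tasks[i].pid == pid)
            return &tasks[i];
    return NULL;
}

static struct task_struct *ptrace_get_task(int pid)
{
    /* read_lock(&tasklist_lock); */
    struct task_struct *child = find_task_by_pid(pid);
    if (child)
        child->refcount++;  /* get_task_struct(child): pin it while still locked */
    /* read_unlock(&tasklist_lock); */
    return child;           /* safe to use (and later put) outside the lock */
}
```

Taking the reference inside the lock is the whole fix; dropping the lock first reintroduces the race.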
diff -urN linux-davidm/arch/ia64/kernel/sal.c linux-2.4.0-test10-lia/arch/ia64/kernel/sal.c
--- linux-davidm/arch/ia64/kernel/sal.c Mon Oct 9 17:54:54 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/sal.c Wed Nov 15 17:58:45 2000
@@ -104,9 +104,11 @@
if (strncmp(systab->signature, "SST_", 4) != 0)
printk("bad signature in system table!");
- printk("SAL v%u.%02u: ia32bios=%s, oem=%.32s, product=%.32s\n",
+ /*
+ * revisions are coded in BCD, so %x does the job for us
+ */
+ printk("SAL v%x.%02x: oem=%.32s, product=%.32s\n",
systab->sal_rev_major, systab->sal_rev_minor,
- systab->ia32_bios_present ? "present" : "absent",
systab->oem_id, systab->product_id);
min = ~0UL;
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-test10-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Mon Oct 9 17:54:55 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/setup.c Wed Nov 15 22:02:05 2000
@@ -408,6 +408,8 @@
{
extern void __init ia64_rid_init (void);
extern void __init ia64_tlb_init (void);
+ pal_vm_info_2_u_t vmi;
+ unsigned int max_ctx;
identify_cpu(&my_cpu_data);
@@ -415,15 +417,12 @@
memset(ia64_task_regs(current), 0, sizeof(struct pt_regs));
/*
- * Initialize default control register to defer speculative
- * faults. On a speculative load, we want to defer access
- * right, key miss, and key permission faults. We currently
- * do NOT defer TLB misses, page-not-present, access bit, or
- * debug faults but kernel code should not rely on any
- * particular setting of these bits.
- ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_PP);
+ * Initialize default control register to defer all speculative faults. The
+ * kernel MUST NOT depend on a particular setting of these bits (in other words,
+ * the kernel must have recovery code for all speculative accesses).
*/
- ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX );
+ ia64_set_dcr( IA64_DCR_DM | IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_DR
+ | IA64_DCR_DA | IA64_DCR_DD);
#ifndef CONFIG_SMP
ia64_set_fpu_owner(0); /* initialize ar.k5 */
#endif
@@ -444,4 +443,17 @@
#ifdef CONFIG_SMP
normal_xtp();
#endif
+
+ /* set ia64_ctx.max_rid to the maximum RID that is supported by all CPUs: */
+ if (ia64_pal_vm_summary(NULL, &vmi) == 0)
+ max_ctx = (1U << (vmi.pal_vm_info_2_s.rid_size - 3)) - 1;
+ else {
+ printk("ia64_rid_init: PAL VM summary failed, assuming 18 RID bits\n");
+ max_ctx = (1U << 15) - 1; /* use architected minimum */
+ }
+ while (max_ctx < ia64_ctx.max_ctx) {
+ unsigned int old = ia64_ctx.max_ctx;
+ if (cmpxchg(&ia64_ctx.max_ctx, old, max_ctx) == old)
+ break;
+ }
}
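The setup.c hunk above lowers the shared `ia64_ctx.max_ctx` with a `cmpxchg()` retry loop, so concurrent CPUs can only ever shrink it. A sketch of the same "lower a shared maximum" loop using C11 atomics in place of the kernel's `cmpxchg()` (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>

static _Atomic unsigned int max_ctx = ~0U;

static void lower_max_ctx(unsigned int new_max)
{
    unsigned int old = atomic_load(&max_ctx);
    while (new_max < old) {
        /* Succeeds only if nobody changed max_ctx meanwhile; on failure
         * 'old' is refreshed with the current value and we re-check. */
        if (atomic_compare_exchange_weak(&max_ctx, &old, new_max))
            break;
    }
}
```

As in the patch, a larger candidate value is simply ignored, so the shared limit is monotonically non-increasing.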
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.4.0-test10-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Mon Oct 9 17:54:55 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/signal.c Wed Nov 15 18:03:14 2000
@@ -91,7 +91,7 @@
scr->pt.r10 = -1;
}
while (1) {
- set_current_state(TASK_INTERRUPTIBLE);
+ current->state = TASK_INTERRUPTIBLE;
schedule();
if (ia64_do_signal(&oldset, scr, 1))
return -EINTR;
@@ -499,9 +499,10 @@
/* Let the debugger run. */
current->exit_code = signr;
current->thread.siginfo = &info;
- set_current_state(TASK_STOPPED);
+ current->state = TASK_STOPPED;
notify_parent(current, SIGCHLD);
schedule();
+
signr = current->exit_code;
current->thread.siginfo = 0;
@@ -557,7 +558,7 @@
/* FALLTHRU */
case SIGSTOP:
- set_current_state(TASK_STOPPED);
+ current->state = TASK_STOPPED;
current->exit_code = signr;
if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags
& SA_NOCLDSTOP))
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-test10-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/smp.c Wed Nov 15 18:04:37 2000
@@ -279,7 +279,7 @@
return;
set_bit(op, &ipi_op[dest_cpu]);
- ia64_send_ipi(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
}
static inline void
@@ -429,7 +429,7 @@
int i;
for (i = 0; i < smp_num_cpus; i++) {
if (i != smp_processor_id())
- ia64_send_ipi(i, IPI_IRQ, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(i, IPI_IRQ, IA64_IPI_DM_INT, 0);
}
goto retry;
#else
@@ -587,7 +587,7 @@
cpu_now_booting = cpu;
/* Kick the AP in the butt */
- ia64_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
/* wait up to 10s for the AP to start */
for (timeout = 0; timeout < 100000; timeout++) {
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c linux-2.4.0-test10-lia/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/unaligned.c Wed Nov 15 18:08:22 2000
@@ -1564,9 +1564,13 @@
DPRINT(("ret=%d\n", ret));
if (ret) {
- lock_kernel();
- force_sig(SIGSEGV, current);
- unlock_kernel();
+ struct siginfo si;
+
+ si.si_signo = SIGBUS;
+ si.si_errno = 0;
+ si.si_code = BUS_ADRALN;
+ si.si_addr = (void *) ifa;
+ force_sig_info(SIGBUS, &si, current);
} else {
/*
* given today's architecture this case is not likely to happen
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-test10-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/kernel/unwind.c Wed Nov 15 22:08:29 2000
@@ -1996,8 +1996,8 @@
void
unw_remove_unwind_table (void *handle)
{
- struct unw_table *table, *prevt;
- struct unw_script *tmp, *prev;
+ struct unw_table *table, *prev;
+ struct unw_script *tmp;
unsigned long flags;
long index;
@@ -2016,41 +2016,35 @@
{
/* first, delete the table: */
- for (prevt = (struct unw_table *) &unw.tables; prevt; prevt = prevt->next)
- if (prevt->next == table)
+ for (prev = (struct unw_table *) &unw.tables; prev; prev = prev->next)
+ if (prev->next == table)
break;
- if (!prevt) {
+ if (!prev) {
dprintk("unwind: failed to find unwind table %p\n", (void *) table);
spin_unlock_irqrestore(&unw.lock, flags);
return;
}
- prevt->next = table->next;
+ prev->next = table->next;
+ }
+ spin_unlock_irqrestore(&unw.lock, flags);
- /* next, remove hash table entries for this table */
+ /* next, remove hash table entries for this table */
- for (index = 0; index <= UNW_HASH_SIZE; ++index) {
- if (unw.hash[index] >= UNW_CACHE_SIZE)
- continue;
-
- tmp = unw.cache + unw.hash[index];
- prev = 0;
- while (1) {
- write_lock(&tmp->lock);
- {
- if (tmp->ip >= table->start && tmp->ip < table->end) {
- if (prev)
- prev->coll_chain = tmp->coll_chain;
- else
- unw.hash[index] = -1;
- tmp->ip = 0;
- } else
- prev = tmp;
- }
- write_unlock(&tmp->lock);
+ for (index = 0; index <= UNW_HASH_SIZE; ++index) {
+ tmp = unw.cache + unw.hash[index];
+ if (unw.hash[index] >= UNW_CACHE_SIZE
+ || tmp->ip < table->start || tmp->ip >= table->end)
+ continue;
+
+ write_lock(&tmp->lock);
+ {
+ if (tmp->ip >= table->start && tmp->ip < table->end) {
+ unw.hash[index] = tmp->coll_chain;
+ tmp->ip = 0;
}
}
+ write_unlock(&tmp->lock);
}
- spin_unlock_irqrestore(&unw.lock, flags);
kfree(table);
}
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-test10-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Mon Oct 9 17:54:56 2000
+++ linux-2.4.0-test10-lia/arch/ia64/mm/init.c Wed Nov 15 18:39:03 2000
@@ -305,7 +305,7 @@
return 0;
}
flush_page_to_ram(page);
- set_pte(pte, page_pte_prot(page, PAGE_GATE));
+ set_pte(pte, mk_pte(page, PAGE_GATE));
/* no need for flush_tlb */
return page;
}
@@ -423,6 +423,17 @@
extern char __start_gate_section[];
long reserved_pages, codesize, datasize, initsize;
+#ifdef CONFIG_SWIOTLB
+ {
+ /*
+ * This needs to be called _after_ the command line has been parsed but
+ * _before_ any drivers that may need the sw I/O TLB are initialized or
+ * bootmem has been freed.
+ */
+ extern void setup_swiotlb (void);
+ setup_swiotlb();
+ }
+#endif
if (!mem_map)
BUG();
diff -urN linux-davidm/arch/ia64/mm/tlb.c linux-2.4.0-test10-lia/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/arch/ia64/mm/tlb.c Wed Nov 15 22:08:44 2000
@@ -36,15 +36,10 @@
struct ia64_ctx ia64_ctx = {
lock: SPIN_LOCK_UNLOCKED,
next: 1,
- limit: (1UL << IA64_HW_CONTEXT_BITS)
+ limit: (1 << 15) - 1, /* start out with the safe (architected) limit */
+ max_ctx: ~0U
};
- /*
- * Put everything in a struct so we avoid the global offset table whenever
- * possible.
- */
-ia64_ptce_info_t ia64_ptce_info;
-
/*
* Serialize usage of ptc.g
*/
@@ -133,12 +128,12 @@
void
wrap_mmu_context (struct mm_struct *mm)
{
+ unsigned long tsk_context, max_ctx = ia64_ctx.max_ctx;
struct task_struct *tsk;
- unsigned long tsk_context;
- if (ia64_ctx.next >= (1UL << IA64_HW_CONTEXT_BITS))
+ if (ia64_ctx.next > max_ctx)
ia64_ctx.next = 300; /* skip daemons */
- ia64_ctx.limit = (1UL << IA64_HW_CONTEXT_BITS);
+ ia64_ctx.limit = max_ctx + 1;
/*
* Scan all the task's mm->context and set proper safe range
@@ -153,9 +148,9 @@
if (tsk_context == ia64_ctx.next) {
if (++ia64_ctx.next >= ia64_ctx.limit) {
/* empty range: reset the range limit and start over */
- if (ia64_ctx.next >= (1UL << IA64_HW_CONTEXT_BITS))
+ if (ia64_ctx.next > max_ctx)
ia64_ctx.next = 300;
- ia64_ctx.limit = (1UL << IA64_HW_CONTEXT_BITS);
+ ia64_ctx.limit = max_ctx + 1;
goto repeat;
}
}
@@ -169,12 +164,13 @@
void
__flush_tlb_all (void)
{
- unsigned long i, j, flags, count0, count1, stride0, stride1, addr = ia64_ptce_info.base;
+ unsigned long i, j, flags, count0, count1, stride0, stride1, addr;
- count0 = ia64_ptce_info.count[0];
- count1 = ia64_ptce_info.count[1];
- stride0 = ia64_ptce_info.stride[0];
- stride1 = ia64_ptce_info.stride[1];
+ addr = my_cpu_data.ptce_base;
+ count0 = my_cpu_data.ptce_count[0];
+ count1 = my_cpu_data.ptce_count[1];
+ stride0 = my_cpu_data.ptce_stride[0];
+ stride1 = my_cpu_data.ptce_stride[1];
local_irq_save(flags);
for (i = 0; i < count0; ++i) {
@@ -246,6 +242,14 @@
void __init
ia64_tlb_init (void)
{
- ia64_get_ptce(&ia64_ptce_info);
+ ia64_ptce_info_t ptce_info;
+
+ ia64_get_ptce(&ptce_info);
+ my_cpu_data.ptce_base = ptce_info.base;
+ my_cpu_data.ptce_count[0] = ptce_info.count[0];
+ my_cpu_data.ptce_count[1] = ptce_info.count[1];
+ my_cpu_data.ptce_stride[0] = ptce_info.stride[0];
+ my_cpu_data.ptce_stride[1] = ptce_info.stride[1];
+
__flush_tlb_all(); /* nuke left overs from bootstrapping... */
}
diff -urN linux-davidm/arch/ia64/tools/print_offsets.c linux-2.4.0-test10-lia/arch/ia64/tools/print_offsets.c
--- linux-davidm/arch/ia64/tools/print_offsets.c Fri Jul 14 16:08:12 2000
+++ linux-2.4.0-test10-lia/arch/ia64/tools/print_offsets.c Wed Nov 15 18:10:14 2000
@@ -149,7 +149,7 @@
{ "IA64_SWITCH_STACK_AR_UNAT_OFFSET", offsetof (struct switch_stack, ar_unat) },
{ "IA64_SWITCH_STACK_AR_RNAT_OFFSET", offsetof (struct switch_stack, ar_rnat) },
{ "IA64_SWITCH_STACK_AR_BSPSTORE_OFFSET", offsetof (struct switch_stack, ar_bspstore) },
- { "IA64_SWITCH_STACK_PR_OFFSET", offsetof (struct switch_stack, b0) },
+ { "IA64_SWITCH_STACK_PR_OFFSET", offsetof (struct switch_stack, pr) },
{ "IA64_SIGCONTEXT_AR_BSP_OFFSET", offsetof (struct sigcontext, sc_ar_bsp) },
{ "IA64_SIGCONTEXT_AR_RNAT_OFFSET", offsetof (struct sigcontext, sc_ar_rnat) },
{ "IA64_SIGCONTEXT_FLAGS_OFFSET", offsetof (struct sigcontext, sc_flags) },
diff -urN linux-davidm/include/asm-generic/pgtable.h linux-2.4.0-test10-lia/include/asm-generic/pgtable.h
--- linux-davidm/include/asm-generic/pgtable.h Thu Oct 19 15:51:16 2000
+++ linux-2.4.0-test10-lia/include/asm-generic/pgtable.h Wed Nov 15 18:13:56 2000
@@ -26,7 +26,7 @@
return pte;
}
-static inline void ptep_clear_wrprotect(pte_t *ptep)
+static inline void ptep_set_wrprotect(pte_t *ptep)
{
pte_t old_pte = *ptep;
set_pte(ptep, pte_wrprotect(old_pte));
diff -urN linux-davidm/include/asm-i386/pgtable.h linux-2.4.0-test10-lia/include/asm-i386/pgtable.h
--- linux-davidm/include/asm-i386/pgtable.h Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/include/asm-i386/pgtable.h Wed Nov 15 18:14:06 2000
@@ -283,7 +283,7 @@
static inline int ptep_test_and_clear_dirty(pte_t *ptep) { return test_and_clear_bit(_PAGE_BIT_DIRTY, ptep); }
static inline int ptep_test_and_clear_young(pte_t *ptep) { return test_and_clear_bit(_PAGE_BIT_ACCESSED, ptep); }
-static inline void ptep_clear_wrprotect(pte_t *ptep) { clear_bit(_PAGE_BIT_RW, ptep); }
+static inline void ptep_set_wrprotect(pte_t *ptep) { clear_bit(_PAGE_BIT_RW, ptep); }
static inline void ptep_mkdirty(pte_t *ptep) { set_bit(_PAGE_BIT_RW, ptep); }
/*
diff -urN linux-davidm/include/asm-ia64/hw_irq.h linux-2.4.0-test10-lia/include/asm-ia64/hw_irq.h
--- linux-davidm/include/asm-ia64/hw_irq.h Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/hw_irq.h Wed Nov 15 22:54:13 2000
@@ -10,6 +10,7 @@
#include <linux/types.h>
+#include <asm/machvec.h>
#include <asm/ptrace.h>
#include <asm/smp.h>
@@ -77,7 +78,7 @@
static inline void
hw_resend_irq (struct hw_interrupt_type *h, unsigned int vector)
{
- ia64_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
}
#endif /* _ASM_IA64_HW_IRQ_H */
diff -urN linux-davidm/include/asm-ia64/io.h linux-2.4.0-test10-lia/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Mon Oct 9 17:54:58 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/io.h Wed Nov 15 22:54:12 2000
@@ -29,6 +29,7 @@
# ifdef __KERNEL__
+#include <asm/machvec.h>
#include <asm/page.h>
#include <asm/system.h>
@@ -54,8 +55,7 @@
#define bus_to_virt phys_to_virt
#define virt_to_bus virt_to_phys
-# else /* !KERNEL */
-# endif /* !KERNEL */
+# endif /* KERNEL */
/*
* Memory fence w/accept. This should never be used in code that is
@@ -100,7 +100,7 @@
*/
static inline unsigned int
-__inb (unsigned long port)
+__ia64_inb (unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
unsigned char ret;
@@ -111,7 +111,7 @@
}
static inline unsigned int
-__inw (unsigned long port)
+__ia64_inw (unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
unsigned short ret;
@@ -122,7 +122,7 @@
}
static inline unsigned int
-__inl (unsigned long port)
+__ia64_inl (unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
unsigned int ret;
@@ -133,112 +133,148 @@
}
static inline void
-__insb (unsigned long port, void *dst, unsigned long count)
+__ia64_outb (unsigned char val, unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
- unsigned char *dp = dst;
+ *addr = val;
__ia64_mf_a();
- while (count--) {
- *dp++ = *addr;
- }
- __ia64_mf_a();
- return;
}
static inline void
-__insw (unsigned long port, void *dst, unsigned long count)
+__ia64_outw (unsigned short val, unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
- unsigned short *dp = dst;
+ *addr = val;
__ia64_mf_a();
- while (count--) {
- *dp++ = *addr;
- }
- __ia64_mf_a();
- return;
}
static inline void
-__insl (unsigned long port, void *dst, unsigned long count)
+__ia64_outl (unsigned int val, unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
- unsigned int *dp = dst;
+ *addr = val;
__ia64_mf_a();
- while (count--) {
- *dp++ = *addr;
- }
- __ia64_mf_a();
- return;
}
static inline void
-__outb (unsigned char val, unsigned long port)
+__insb (unsigned long port, void *dst, unsigned long count)
{
- volatile unsigned char *addr = __ia64_mk_io_addr(port);
+ unsigned char *dp = dst;
- *addr = val;
- __ia64_mf_a();
+ if (platform_inb == __ia64_inb) {
+ volatile unsigned char *addr = __ia64_mk_io_addr(port);
+
+ __ia64_mf_a();
+ while (count--)
+ *dp++ = *addr;
+ __ia64_mf_a();
+ } else
+ while (count--)
+ *dp++ = platform_inb(port);
+ return;
}
static inline void
-__outw (unsigned short val, unsigned long port)
+__insw (unsigned long port, void *dst, unsigned long count)
{
- volatile unsigned short *addr = __ia64_mk_io_addr(port);
+ unsigned short *dp = dst;
- *addr = val;
- __ia64_mf_a();
+ if (platform_inw == __ia64_inw) {
+ volatile unsigned short *addr = __ia64_mk_io_addr(port);
+
+ __ia64_mf_a();
+ while (count--)
+ *dp++ = *addr;
+ __ia64_mf_a();
+ } else
+ while (count--)
+ *dp++ = platform_inw(port);
+ return;
}
static inline void
-__outl (unsigned int val, unsigned long port)
+__insl (unsigned long port, void *dst, unsigned long count)
{
- volatile unsigned int *addr = __ia64_mk_io_addr(port);
+ unsigned int *dp = dst;
- *addr = val;
- __ia64_mf_a();
+ if (platform_inl == __ia64_inl) {
+ volatile unsigned int *addr = __ia64_mk_io_addr(port);
+
+ __ia64_mf_a();
+ while (count--)
+ *dp++ = *addr;
+ __ia64_mf_a();
+ } else
+ while (count--)
+ *dp++ = platform_inl(port);
+ return;
}
static inline void
__outsb (unsigned long port, const void *src, unsigned long count)
{
- volatile unsigned char *addr = __ia64_mk_io_addr(port);
const unsigned char *sp = src;
- while (count--) {
- *addr = *sp++;
- }
- __ia64_mf_a();
+ if (platform_outb == __ia64_outb) {
+ volatile unsigned char *addr = __ia64_mk_io_addr(port);
+
+ while (count--)
+ *addr = *sp++;
+ __ia64_mf_a();
+ } else
+ while (count--)
+ platform_outb(*sp++, port);
return;
}
static inline void
__outsw (unsigned long port, const void *src, unsigned long count)
{
- volatile unsigned short *addr = __ia64_mk_io_addr(port);
const unsigned short *sp = src;
- while (count--) {
- *addr = *sp++;
- }
- __ia64_mf_a();
+ if (platform_outw == __ia64_outw) {
+ volatile unsigned short *addr = __ia64_mk_io_addr(port);
+
+ while (count--)
+ *addr = *sp++;
+ __ia64_mf_a();
+ } else
+ while (count--)
+ platform_outw(*sp++, port);
return;
}
static inline void
__outsl (unsigned long port, void *src, unsigned long count)
{
- volatile unsigned int *addr = __ia64_mk_io_addr(port);
const unsigned int *sp = src;
- while (count--) {
- *addr = *sp++;
- }
- __ia64_mf_a();
+ if (platform_outl == __ia64_outl) {
+ volatile unsigned int *addr = __ia64_mk_io_addr(port);
+
+ while (count--)
+ *addr = *sp++;
+ __ia64_mf_a();
+ } else
+ while (count--)
+ platform_outl(*sp++, port);
return;
}
+
+/*
+ * Unfortunately, some platforms are broken and do not follow the
+ * IA-64 architecture specification regarding legacy I/O support.
+ * Thus, we have to make these operations platform dependent...
+ */
+#define __inb platform_inb
+#define __inw platform_inw
+#define __inl platform_inl
+#define __outb platform_outb
+#define __outw platform_outw
+#define __outl platform_outl
#define inb __inb
#define inw __inw
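The io.h rework above makes the string I/O helpers compare the machine-vector hook against the architected default: when they match, the loop hits the memory-mapped address directly; otherwise each element goes through the platform hook. A small user-space sketch of that dispatch pattern, with illustrative stand-in names (not the kernel's actual symbols):

```c
#include <assert.h>

typedef unsigned int in_fn(unsigned long port);

static unsigned int default_inb(unsigned long port) { return (unsigned int)(port & 0xff); }
static unsigned int quirky_inb(unsigned long port)  { (void)port; return 0xaa; }

/* stands in for the platform_inb machine-vector slot */
static in_fn *platform_inb_hook = default_inb;

static void insb_demo(unsigned long port, unsigned char *dst, unsigned long count)
{
    if (platform_inb_hook == default_inb) {
        /* fast path: the kernel version reads a memory-mapped I/O address directly */
        while (count--)
            *dst++ = (unsigned char)default_inb(port);
    } else {
        /* broken-platform path: one hook call per element */
        while (count--)
            *dst++ = (unsigned char)platform_inb_hook(port);
    }
}
```

The pointer comparison keeps the common architected case cheap while still letting a non-conforming platform override every port access.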
diff -urN linux-davidm/include/asm-ia64/machvec.h linux-2.4.0-test10-lia/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec.h Wed Nov 15 22:54:12 2000
@@ -31,6 +31,22 @@
typedef void ia64_mv_mca_handler_t (void);
typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
typedef void ia64_mv_log_print_t (void);
+typedef void ia64_mv_send_ipi_t (int, int, int, int);
+/*
+ * WARNING: The legacy I/O space is _architected_. Platforms are
+ * expected to follow this architected model (see Section 10.7 in the
+ * IA-64 Architecture Software Developer's Manual). Unfortunately,
+ * some broken machines do not follow that model, which is why we have
+ * to make the inX/outX operations part of the machine vector.
+ * Platform designers should follow the architected model whenever
+ * possible.
+ */
+typedef unsigned int ia64_mv_inb_t (unsigned long);
+typedef unsigned int ia64_mv_inw_t (unsigned long);
+typedef unsigned int ia64_mv_inl_t (unsigned long);
+typedef void ia64_mv_outb_t (unsigned char, unsigned long);
+typedef void ia64_mv_outw_t (unsigned short, unsigned long);
+typedef void ia64_mv_outl_t (unsigned int, unsigned long);
extern void machvec_noop (void);
@@ -54,6 +70,13 @@
# define platform_cmci_handler ia64_mv.cmci_handler
# define platform_log_print ia64_mv.log_print
# define platform_pci_fixup ia64_mv.pci_fixup
+# define platform_send_ipi ia64_mv.send_ipi
+# define platform_inb ia64_mv.inb
+# define platform_inw ia64_mv.inw
+# define platform_inl ia64_mv.inl
+# define platform_outb ia64_mv.outb
+# define platform_outw ia64_mv.outw
+# define platform_outl ia64_mv.outl
# endif
struct ia64_machine_vector {
@@ -66,6 +89,13 @@
ia64_mv_mca_handler_t *mca_handler;
ia64_mv_cmci_handler_t *cmci_handler;
ia64_mv_log_print_t *log_print;
+ ia64_mv_send_ipi_t *send_ipi;
+ ia64_mv_inb_t *inb;
+ ia64_mv_inw_t *inw;
+ ia64_mv_inl_t *inl;
+ ia64_mv_outb_t *outb;
+ ia64_mv_outw_t *outw;
+ ia64_mv_outl_t *outl;
};
#define MACHVEC_INIT(name) \
@@ -78,7 +108,14 @@
platform_mca_init, \
platform_mca_handler, \
platform_cmci_handler, \
- platform_log_print \
+ platform_log_print, \
+ platform_send_ipi, \
+ platform_inb, \
+ platform_inw, \
+ platform_inl, \
+ platform_outb, \
+ platform_outw, \
+ platform_outl \
}
extern struct ia64_machine_vector ia64_mv;
@@ -112,6 +149,27 @@
#endif
#ifndef platform_pci_fixup
# define platform_pci_fixup ((ia64_mv_pci_fixup_t *) machvec_noop)
+#endif
+#ifndef platform_send_ipi
+# define platform_send_ipi ia64_send_ipi /* default to architected version */
+#endif
+#ifndef platform_inb
+# define platform_inb __ia64_inb
+#endif
+#ifndef platform_inw
+# define platform_inw __ia64_inw
+#endif
+#ifndef platform_inl
+# define platform_inl __ia64_inl
+#endif
+#ifndef platform_outb
+# define platform_outb __ia64_outb
+#endif
+#ifndef platform_outw
+# define platform_outw __ia64_outw
+#endif
+#ifndef platform_outl
+# define platform_outl __ia64_outl
#endif
#endif /* _ASM_IA64_MACHVEC_H */
diff -urN linux-davidm/include/asm-ia64/machvec_dig.h linux-2.4.0-test10-lia/include/asm-ia64/machvec_dig.h
--- linux-davidm/include/asm-ia64/machvec_dig.h Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec_dig.h Wed Nov 15 18:15:39 2000
@@ -3,9 +3,8 @@
extern ia64_mv_setup_t dig_setup;
extern ia64_mv_irq_init_t dig_irq_init;
-extern ia64_mv_pci_fixup_t dig_pci_fixup;
-extern ia64_mv_map_nr_t map_nr_dense;
extern ia64_mv_pci_fixup_t iosapic_pci_fixup;
+extern ia64_mv_map_nr_t map_nr_dense;
/*
* This stuff has dual use!
diff -urN linux-davidm/include/asm-ia64/machvec_hpsim.h linux-2.4.0-test10-lia/include/asm-ia64/machvec_hpsim.h
--- linux-davidm/include/asm-ia64/machvec_hpsim.h Fri Jul 14 16:08:12 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec_hpsim.h Wed Nov 15 18:15:49 2000
@@ -15,7 +15,6 @@
#define platform_name "hpsim"
#define platform_setup hpsim_setup
#define platform_irq_init hpsim_irq_init
-#define platform_pci_fixup hpsim_pci_fixup
#define platform_map_nr map_nr_dense
#endif /* _ASM_IA64_MACHVEC_HPSIM_h */
diff -urN linux-davidm/include/asm-ia64/machvec_init.h linux-2.4.0-test10-lia/include/asm-ia64/machvec_init.h
--- linux-davidm/include/asm-ia64/machvec_init.h Fri Aug 11 19:09:06 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec_init.h Wed Nov 15 22:08:12 2000
@@ -4,6 +4,14 @@
#include <asm/machvec.h>
+extern ia64_mv_send_ipi_t ia64_send_ipi;
+extern ia64_mv_inb_t __ia64_inb;
+extern ia64_mv_inw_t __ia64_inw;
+extern ia64_mv_inl_t __ia64_inl;
+extern ia64_mv_outb_t __ia64_outb;
+extern ia64_mv_outw_t __ia64_outw;
+extern ia64_mv_outl_t __ia64_outl;
+
#define MACHVEC_HELPER(name) \
struct ia64_machine_vector machvec_##name __attribute__ ((unused, __section__ (".machvec"))) \
= MACHVEC_INIT(name);
diff -urN linux-davidm/include/asm-ia64/machvec_sn1.h linux-2.4.0-test10-lia/include/asm-ia64/machvec_sn1.h
--- linux-davidm/include/asm-ia64/machvec_sn1.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/machvec_sn1.h Wed Nov 15 18:16:14 2000
@@ -4,6 +4,7 @@
extern ia64_mv_setup_t sn1_setup;
extern ia64_mv_irq_init_t sn1_irq_init;
extern ia64_mv_map_nr_t sn1_map_nr;
+extern ia64_mv_send_ipi_t sn1_send_IPI;
/*
* This stuff has dual use!
@@ -16,5 +17,6 @@
#define platform_setup sn1_setup
#define platform_irq_init sn1_irq_init
#define platform_map_nr sn1_map_nr
+#define platform_send_ipi sn1_send_IPI
#endif /* _ASM_IA64_MACHVEC_SN1_h */
diff -urN linux-davidm/include/asm-ia64/mmu_context.h linux-2.4.0-test10-lia/include/asm-ia64/mmu_context.h
--- linux-davidm/include/asm-ia64/mmu_context.h Mon Oct 9 17:54:59 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/mmu_context.h Wed Nov 15 22:54:13 2000
@@ -32,20 +32,11 @@
#define IA64_REGION_ID_KERNEL 0 /* the kernel's region id (tlb.c depends on this being 0) */
-#define IA64_REGION_ID_BITS 18
-
-#ifdef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
-# define IA64_HW_CONTEXT_BITS IA64_REGION_ID_BITS
-#else
-# define IA64_HW_CONTEXT_BITS (IA64_REGION_ID_BITS - 3)
-#endif
-
-#define IA64_HW_CONTEXT_MASK ((1UL << IA64_HW_CONTEXT_BITS) - 1)
-
struct ia64_ctx {
spinlock_t lock;
unsigned int next; /* next context number to use */
unsigned int limit; /* next >= limit => must call wrap_mmu_context() */
+ unsigned int max_ctx; /* max. context value supported by all CPUs */
};
extern struct ia64_ctx ia64_ctx;
@@ -60,11 +51,7 @@
static inline unsigned long
ia64_rid (unsigned long context, unsigned long region_addr)
{
-# ifdef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
- return context;
-# else
return context << 3 | (region_addr >> 61);
-# endif
}
static inline void
@@ -108,12 +95,8 @@
unsigned long rid_incr = 0;
unsigned long rr0, rr1, rr2, rr3, rr4;
- rid = mm->context;
-
-#ifndef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
- rid <<= 3; /* make space for encoding the region number */
+ rid = mm->context << 3; /* make space for encoding the region number */
rid_incr = 1 << 8;
-#endif
/* encode the region id, preferred page size, and VHPT enable bit: */
rr0 = (rid << 8) | (PAGE_SHIFT << 2) | 1;
@@ -132,11 +115,10 @@
}
/*
- * Switch from address space PREV to address space NEXT. Note that
- * TSK may be NULL.
+ * Switch from address space PREV to address space NEXT.
*/
static inline void
-switch_mm (struct mm_struct *prev, struct mm_struct *next, struct task_struct *tsk, unsigned cpu)
+activate_mm (struct mm_struct *prev, struct mm_struct *next)
{
/*
* We may get interrupts here, but that's OK because interrupt
@@ -147,7 +129,6 @@
reload_context(next);
}
-#define activate_mm(prev,next) \
- switch_mm((prev), (next), NULL, smp_processor_id())
+#define switch_mm(prev_mm,next_mm,next_task,cpu) activate_mm(prev_mm, next_mm)
#endif /* _ASM_IA64_MMU_CONTEXT_H */
diff -urN linux-davidm/include/asm-ia64/page.h linux-2.4.0-test10-lia/include/asm-ia64/page.h
--- linux-davidm/include/asm-ia64/page.h Mon Oct 9 17:54:59 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/page.h Wed Nov 15 22:54:12 2000
@@ -58,7 +58,6 @@
#define pgprot_val(x) ((x).pgprot)
#define __pte(x) ((pte_t) { (x) } )
-#define __pgd(x) ((pgd_t) { (x) } )
#define __pgprot(x) ((pgprot_t) { (x) } )
# else /* !STRICT_MM_TYPECHECKS */
@@ -102,7 +101,7 @@
#ifdef CONFIG_IA64_GENERIC
# include <asm/machvec.h>
# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
-#elif defined (CONFIG_IA64_SN_SN1)
+#elif defined (CONFIG_IA64_SN_SGI_SN1)
# define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr))
#else
# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-test10-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/pgtable.h Wed Nov 15 22:54:13 2000
@@ -82,7 +82,7 @@
#define PGDIR_SIZE (__IA64_UL(1) << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
#define PTRS_PER_PGD (__IA64_UL(1) << (PAGE_SHIFT-3))
-#define USER_PTRS_PER_PGD PTRS_PER_PGD
+#define USER_PTRS_PER_PGD (5*PTRS_PER_PGD/8) /* regions 0-4 are user regions */
#define FIRST_USER_PGD_NR 0
/*
@@ -101,9 +101,6 @@
*/
#define PTRS_PER_PTE (__IA64_UL(1) << (PAGE_SHIFT-3))
-/* Number of pointers that fit on a page: this will go away. */
-#define PTRS_PER_PAGE (__IA64_UL(1) << (PAGE_SHIFT-3))
-
# ifndef __ASSEMBLY__
#include <asm/bitops.h>
@@ -136,19 +133,19 @@
#define __P001 PAGE_READONLY
#define __P010 PAGE_READONLY /* write to priv pg -> copy & make writable */
#define __P011 PAGE_READONLY /* ditto */
-#define __P100 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __P101 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __P110 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __P111 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
+#define __P100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
+#define __P101 PAGE_COPY
+#define __P110 PAGE_COPY
+#define __P111 PAGE_COPY
#define __S000 PAGE_NONE
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED /* we don't have (and don't need) write-only */
#define __S011 PAGE_SHARED
-#define __S100 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __S101 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __S110 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RWX)
-#define __S111 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RWX)
+#define __S100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
+#define __S101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
+#define __S110 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
+#define __S111 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
#define pgd_ERROR(e) printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
#define pmd_ERROR(e) printk("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
@@ -223,7 +220,7 @@
#define VMALLOC_START (0xa000000000000000+2*PAGE_SIZE)
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
-#define VMALLOC_END 0xbfffffffffffffff
+#define VMALLOC_END (0xa000000000000000+ (1UL << (4*PAGE_SHIFT - 13)))
/*
* BAD_PAGETABLE is used when we need a bogus page-table, while
@@ -285,8 +282,8 @@
*/
#define pte_read(pte) (((pte_val(pte) & _PAGE_AR_MASK) >> _PAGE_AR_SHIFT) < 6)
#define pte_write(pte) ((unsigned) (((pte_val(pte) & _PAGE_AR_MASK) >> _PAGE_AR_SHIFT) - 2) < 4)
-#define pte_dirty(pte) (pte_val(pte) & _PAGE_D)
-#define pte_young(pte) (pte_val(pte) & _PAGE_A)
+#define pte_dirty(pte) ((pte_val(pte) & _PAGE_D) != 0)
+#define pte_young(pte) ((pte_val(pte) & _PAGE_A) != 0)
/*
* Note: we convert AR_RWX to AR_RX and AR_RW to AR_R by clearing the
* 2nd bit in the access rights:
@@ -380,37 +377,68 @@
static inline int
ptep_test_and_clear_young (pte_t *ptep)
{
+#ifdef CONFIG_SMP
return test_and_clear_bit(_PAGE_A_BIT, ptep);
+#else
+ pte_t pte = *ptep;
+ if (!pte_young(pte))
+ return 0;
+ set_pte(ptep, pte_mkold(pte));
+ return 1;
+#endif
}
static inline int
ptep_test_and_clear_dirty (pte_t *ptep)
{
+#ifdef CONFIG_SMP
return test_and_clear_bit(_PAGE_D_BIT, ptep);
+#else
+ pte_t pte = *ptep;
+ if (!pte_dirty(pte))
+ return 0;
+ set_pte(ptep, pte_mkclean(pte));
+ return 1;
+#endif
}
static inline pte_t
ptep_get_and_clear (pte_t *ptep)
{
+#ifdef CONFIG_SMP
return __pte(xchg((long *) ptep, 0));
+#else
+ pte_t pte = *ptep;
+ pte_clear(ptep);
+ return pte;
+#endif
}
-/* XXX this should be called ptep_set_wrprotect!!! */
static inline void
-ptep_clear_wrprotect (pte_t *ptep)
+ptep_set_wrprotect (pte_t *ptep)
{
+#ifdef CONFIG_SMP
unsigned long new, old;
do {
old = pte_val(*ptep);
new = pte_val(pte_wrprotect(__pte (old)));
} while (cmpxchg((unsigned long *) ptep, old, new) != old);
+#else
+ pte_t old_pte = *ptep;
+ set_pte(ptep, pte_wrprotect(old_pte));
+#endif
}
static inline void
ptep_mkdirty (pte_t *ptep)
{
+#ifdef CONFIG_SMP
set_bit(_PAGE_D_BIT, ptep);
+#else
+ pte_t old_pte = *ptep;
+ set_pte(ptep, pte_mkdirty(old_pte));
+#endif
}
static inline int
@@ -444,16 +472,13 @@
# define update_mmu_cache(vma,address,pte) \
do { \
/* \
- * XXX fix me!! \
- * \
- * It's not clear this is a win. We may end up pollute the \
+ * This is usually not a win. We may end up polluting the \
* dtlb with itlb entries and vice versa (e.g., consider stack \
* pages that are normally marked executable). It would be \
* better to insert the TLB entry for the TLB cache that we \
* know needs the new entry. However, the update_mmu_cache() \
* arguments don't tell us whether we got here through a data \
- * access or through an instruction fetch. Talk to Linus to \
- * fix this. \
+ * access or through an instruction fetch. \
* \
* If you re-enable this code, you must disable the ptc code in \
* Entry 20 of the ivt. \
@@ -467,7 +492,7 @@
#endif
#define SWP_TYPE(entry) (((entry).val >> 1) & 0xff)
-#define SWP_OFFSET(entry) ((entry).val >> 9)
+#define SWP_OFFSET(entry) (((entry).val << 1) >> 10)
#define SWP_ENTRY(type,offset) ((swp_entry_t) { ((type) << 1) | ((offset) << 9) })
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-test10-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/processor.h Wed Nov 15 22:54:12 2000
@@ -248,6 +248,9 @@
__u64 usec_per_cyc; /* 2^IA64_USEC_PER_CYC_SHIFT*1000000/itc_freq */
__u64 unimpl_va_mask; /* mask of unimplemented virtual address bits (from PAL) */
__u64 unimpl_pa_mask; /* mask of unimplemented physical address bits (from PAL) */
+ __u64 ptce_base;
+ __u32 ptce_count[2];
+ __u32 ptce_stride[2];
#ifdef CONFIG_SMP
__u64 loops_per_sec;
__u64 ipi_count;
diff -urN linux-davidm/include/asm-ia64/sal.h linux-2.4.0-test10-lia/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h Mon Oct 9 17:55:00 2000
+++ linux-2.4.0-test10-lia/include/asm-ia64/sal.h Wed Nov 15 22:54:23 2000
@@ -24,7 +24,9 @@
extern spinlock_t sal_lock;
-#define __SAL_CALL(result,args...) result = (*ia64_sal)(args)
+/* SAL spec _requires_ eight args for each call. */
+#define __SAL_CALL(result,a0,a1,a2,a3,a4,a5,a6,a7) \
+ result = (*ia64_sal)(a0,a1,a2,a3,a4,a5,a6,a7)
#ifdef CONFIG_SMP
# define SAL_CALL(result,args...) do { \
@@ -60,10 +62,10 @@
* informational value should be printed (e.g., "reboot for
* change to take effect").
*/
- s64 status;
- u64 v0;
- u64 v1;
- u64 v2;
+ s64 status;
+ u64 v0;
+ u64 v1;
+ u64 v2;
};
typedef struct ia64_sal_retval (*ia64_sal_handler) (u64, ...);
@@ -78,24 +80,27 @@
* The SAL system table is followed by a variable number of variable
* length descriptors. The structure of these descriptors follows
* below.
+ * The definition follows the SAL specs from July 2000
*/
struct ia64_sal_systab {
- char signature[4]; /* should be "SST_" */
- int size; /* size of this table in bytes */
- unsigned char sal_rev_minor;
- unsigned char sal_rev_major;
- unsigned short entry_count; /* # of entries in variable portion */
- unsigned char checksum;
- char ia32_bios_present;
- unsigned short reserved1;
- char oem_id[32]; /* ASCII NUL terminated OEM id
- (terminating NUL is missing if
- string is exactly 32 bytes long). */
- char product_id[32]; /* ASCII product id */
- char reserved2[16];
+ u8 signature[4]; /* should be "SST_" */
+ u32 size; /* size of this table in bytes */
+ u8 sal_rev_minor;
+ u8 sal_rev_major;
+ u16 entry_count; /* # of entries in variable portion */
+ u8 checksum;
+ u8 reserved1[7];
+ u8 sal_a_rev_minor;
+ u8 sal_a_rev_major;
+ u8 sal_b_rev_minor;
+ u8 sal_b_rev_major;
+ /* oem_id & product_id: terminating NUL is missing if string is exactly 32 bytes long. */
+ u8 oem_id[32];
+ u8 product_id[32]; /* ASCII product id */
+ u8 reserved2[8];
};
-enum SAL_Systab_Entry_Type {
+enum sal_systab_entry_type {
SAL_DESC_ENTRY_POINT = 0,
SAL_DESC_MEMORY = 1,
SAL_DESC_PLATFORM_FEATURE = 2,
@@ -115,75 +120,78 @@
*/
#define SAL_DESC_SIZE(type) "\060\040\020\040\020\020"[(unsigned) type]
-struct ia64_sal_desc_entry_point {
- char type;
- char reserved1[7];
- s64 pal_proc;
- s64 sal_proc;
- s64 gp;
- char reserved2[16];
-};
-
-struct ia64_sal_desc_memory {
- char type;
- char used_by_sal; /* needs to be mapped for SAL? */
- char mem_attr; /* current memory attribute setting */
- char access_rights; /* access rights set up by SAL */
- char mem_attr_mask; /* mask of supported memory attributes */
- char reserved1;
- char mem_type; /* memory type */
- char mem_usage; /* memory usage */
- s64 addr; /* physical address of memory */
- unsigned int length; /* length (multiple of 4KB pages) */
- unsigned int reserved2;
- char oem_reserved[8];
-};
+typedef struct ia64_sal_desc_entry_point {
+ u8 type;
+ u8 reserved1[7];
+ u64 pal_proc;
+ u64 sal_proc;
+ u64 gp;
+ u8 reserved2[16];
+}ia64_sal_desc_entry_point_t;
+
+typedef struct ia64_sal_desc_memory {
+ u8 type;
+ u8 used_by_sal; /* needs to be mapped for SAL? */
+ u8 mem_attr; /* current memory attribute setting */
+ u8 access_rights; /* access rights set up by SAL */
+ u8 mem_attr_mask; /* mask of supported memory attributes */
+ u8 reserved1;
+ u8 mem_type; /* memory type */
+ u8 mem_usage; /* memory usage */
+ u64 addr; /* physical address of memory */
+ u32 length; /* length (multiple of 4KB pages) */
+ u32 reserved2;
+ u8 oem_reserved[8];
+} ia64_sal_desc_memory_t;
#define IA64_SAL_PLATFORM_FEATURE_BUS_LOCK (1 << 0)
#define IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT (1 << 1)
#define IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT (1 << 2)
-struct ia64_sal_desc_platform_feature {
- char type;
- unsigned char feature_mask;
- char reserved1[14];
-};
-
-struct ia64_sal_desc_tr {
- char type;
- char tr_type; /* 0 = instruction, 1 = data */
- char regnum; /* translation register number */
- char reserved1[5];
- s64 addr; /* virtual address of area covered */
- s64 page_size; /* encoded page size */
- char reserved2[8];
-};
+typedef struct ia64_sal_desc_platform_feature {
+ u8 type;
+ u8 feature_mask;
+ u8 reserved1[14];
+} ia64_sal_desc_platform_feature_t;
+
+typedef struct ia64_sal_desc_tr {
+ u8 type;
+ u8 tr_type; /* 0 = instruction, 1 = data */
+ u8 regnum; /* translation register number */
+ u8 reserved1[5];
+ u64 addr; /* virtual address of area covered */
+ u64 page_size; /* encoded page size */
+ u8 reserved2[8];
+} ia64_sal_desc_tr_t;
typedef struct ia64_sal_desc_ptc {
- char type;
- char reserved1[3];
- unsigned int num_domains; /* # of coherence domains */
- s64 domain_info; /* physical address of domain info table */
+ u8 type;
+ u8 reserved1[3];
+ u32 num_domains; /* # of coherence domains */
+ u64 domain_info; /* physical address of domain info table */
} ia64_sal_desc_ptc_t;
typedef struct ia64_sal_ptc_domain_info {
- unsigned long proc_count; /* number of processors in domain */
- long proc_list; /* physical address of LID array */
+ u64 proc_count; /* number of processors in domain */
+ u64 proc_list; /* physical address of LID array */
} ia64_sal_ptc_domain_info_t;
typedef struct ia64_sal_ptc_domain_proc_entry {
- unsigned char id; /* id of processor */
- unsigned char eid; /* eid of processor */
+ u64 reserved : 16;
+ u64 eid : 8; /* eid of processor */
+ u64 id : 8; /* id of processor */
+ u64 ignored : 32;
} ia64_sal_ptc_domain_proc_entry_t;
+
#define IA64_SAL_AP_EXTERNAL_INT 0
-struct ia64_sal_desc_ap_wakeup {
- char type;
- char mechanism; /* 0 = external interrupt */
- char reserved1[6];
- long vector; /* interrupt vector in range 0x10-0xff */
-};
+typedef struct ia64_sal_desc_ap_wakeup {
+ u8 type;
+ u8 mechanism; /* 0 = external interrupt */
+ u8 reserved1[6];
+ u64 vector; /* interrupt vector in range 0x10-0xff */
+} ia64_sal_desc_ap_wakeup_t;
extern ia64_sal_handler ia64_sal;
extern struct ia64_sal_desc_ptc *ia64_ptc_domain_info;
@@ -218,24 +226,24 @@
/* Encodings for vectors which can be registered by the OS with SAL */
enum {
- SAL_VECTOR_OS_MCA = 0,
- SAL_VECTOR_OS_INIT = 1,
- SAL_VECTOR_OS_BOOT_RENDEZ = 2
+ SAL_VECTOR_OS_MCA = 0,
+ SAL_VECTOR_OS_INIT = 1,
+ SAL_VECTOR_OS_BOOT_RENDEZ = 2
};
/* Definition of the SAL Error Log from the SAL spec */
/* Definition of timestamp according to SAL spec for logging purposes */
-typedef struct sal_log_timestamp_s {
- u8 slh_century; /* Century (19, 20, 21, ...) */
- u8 slh_year; /* Year (00..99) */
- u8 slh_month; /* Month (1..12) */
- u8 slh_day; /* Day (1..31) */
- u8 slh_reserved;
- u8 slh_hour; /* Hour (0..23) */
- u8 slh_minute; /* Minute (0..59) */
- u8 slh_second; /* Second (0..59) */
+typedef struct sal_log_timestamp {
+ u8 slh_century; /* Century (19, 20, 21, ...) */
+ u8 slh_year; /* Year (00..99) */
+ u8 slh_month; /* Month (1..12) */
+ u8 slh_day; /* Day (1..31) */
+ u8 slh_reserved;
+ u8 slh_hour; /* Hour (0..23) */
+ u8 slh_minute; /* Minute (0..59) */
+ u8 slh_second; /* Second (0..59) */
} sal_log_timestamp_t;
@@ -243,126 +251,126 @@
#define MAX_TLB_ERRORS 6
#define MAX_BUS_ERRORS 1
-typedef struct sal_log_processor_info_s {
+typedef struct sal_log_processor_info {
struct {
- u64 slpi_psi : 1,
- slpi_cache_check: MAX_CACHE_ERRORS,
- slpi_tlb_check : MAX_TLB_ERRORS,
- slpi_bus_check : MAX_BUS_ERRORS,
- slpi_reserved2 : (31 - (MAX_TLB_ERRORS + MAX_CACHE_ERRORS
- + MAX_BUS_ERRORS)),
- slpi_minstate : 1,
- slpi_bank1_gr : 1,
- slpi_br : 1,
- slpi_cr : 1,
- slpi_ar : 1,
- slpi_rr : 1,
- slpi_fr : 1,
- slpi_reserved1 : 25;
+ u64 slpi_psi : 1,
+ slpi_cache_check: MAX_CACHE_ERRORS,
+ slpi_tlb_check : MAX_TLB_ERRORS,
+ slpi_bus_check : MAX_BUS_ERRORS,
+ slpi_reserved2 : (31 - (MAX_TLB_ERRORS + MAX_CACHE_ERRORS
+ + MAX_BUS_ERRORS)),
+ slpi_minstate : 1,
+ slpi_bank1_gr : 1,
+ slpi_br : 1,
+ slpi_cr : 1,
+ slpi_ar : 1,
+ slpi_rr : 1,
+ slpi_fr : 1,
+ slpi_reserved1 : 25;
} slpi_valid;
- pal_processor_state_info_t slpi_processor_state_info;
+ pal_processor_state_info_t slpi_processor_state_info;
struct {
- pal_cache_check_info_t slpi_cache_check;
- u64 slpi_target_address;
+ pal_cache_check_info_t slpi_cache_check;
+ u64 slpi_target_address;
} slpi_cache_check_info[MAX_CACHE_ERRORS];
- pal_tlb_check_info_t slpi_tlb_check_info[MAX_TLB_ERRORS];
+ pal_tlb_check_info_t slpi_tlb_check_info[MAX_TLB_ERRORS];
struct {
- pal_bus_check_info_t slpi_bus_check;
- u64 slpi_requestor_addr;
- u64 slpi_responder_addr;
- u64 slpi_target_addr;
+ pal_bus_check_info_t slpi_bus_check;
+ u64 slpi_requestor_addr;
+ u64 slpi_responder_addr;
+ u64 slpi_target_addr;
} slpi_bus_check_info[MAX_BUS_ERRORS];
- pal_min_state_area_t slpi_min_state_area;
- u64 slpi_br[8];
- u64 slpi_cr[128];
- u64 slpi_ar[128];
- u64 slpi_rr[8];
- u64 slpi_fr[128];
+ pal_min_state_area_t slpi_min_state_area;
+ u64 slpi_br[8];
+ u64 slpi_cr[128];
+ u64 slpi_ar[128];
+ u64 slpi_rr[8];
+ u64 slpi_fr[128];
} sal_log_processor_info_t;
/* platform error log structures */
typedef struct platerr_logheader {
- u64 nextlog; /* next log offset if present */
- u64 loglength; /* log length */
- u64 logsubtype; /* log subtype memory/bus/component */
- u64 eseverity; /* error severity */
+ u64 nextlog; /* next log offset if present */
+ u64 loglength; /* log length */
+ u64 logsubtype; /* log subtype memory/bus/component */
+ u64 eseverity; /* error severity */
} ehdr_t;
typedef struct sysmem_errlog {
- ehdr_t lhdr; /* header */
- u64 vflag; /* valid bits for each field in the log */
- u64 addr; /* memory address */
- u64 data; /* memory data */
- u64 cmd; /* command bus value if any */
- u64 ctrl; /* control bus value if any */
- u64 addrsyndrome; /* memory address ecc/parity syndrome bits */
- u64 datasyndrome; /* data ecc/parity syndrome */
- u64 cacheinfo; /* platform cache info as defined in pal spec. table 7-34 */
+ ehdr_t lhdr; /* header */
+ u64 vflag; /* valid bits for each field in the log */
+ u64 addr; /* memory address */
+ u64 data; /* memory data */
+ u64 cmd; /* command bus value if any */
+ u64 ctrl; /* control bus value if any */
+ u64 addrsyndrome; /* memory address ecc/parity syndrome bits */
+ u64 datasyndrome; /* data ecc/parity syndrome */
+ u64 cacheinfo; /* platform cache info as defined in pal spec. table 7-34 */
} merrlog_t;
typedef struct sysbus_errlog {
- ehdr_t lhdr; /* linkded list header */
- u64 vflag; /* valid bits for each field in the log */
- u64 busnum; /* bus number in error */
- u64 reqaddr; /* requestor address */
- u64 resaddr; /* responder address */
- u64 taraddr; /* target address */
- u64 data; /* requester r/w data */
- u64 cmd; /* bus commands */
- u64 ctrl; /* bus controls (be# &-0) */
- u64 addrsyndrome; /* addr bus ecc/parity bits */
- u64 datasyndrome; /* data bus ecc/parity bits */
- u64 cmdsyndrome; /* command bus ecc/parity bits */
- u64 ctrlsyndrome; /* control bus ecc/parity bits */
+ ehdr_t lhdr; /* linked list header */
+ u64 vflag; /* valid bits for each field in the log */
+ u64 busnum; /* bus number in error */
+ u64 reqaddr; /* requestor address */
+ u64 resaddr; /* responder address */
+ u64 taraddr; /* target address */
+ u64 data; /* requester r/w data */
+ u64 cmd; /* bus commands */
+ u64 ctrl; /* bus controls (be# &-0) */
+ u64 addrsyndrome; /* addr bus ecc/parity bits */
+ u64 datasyndrome; /* data bus ecc/parity bits */
+ u64 cmdsyndrome; /* command bus ecc/parity bits */
+ u64 ctrlsyndrome; /* control bus ecc/parity bits */
} berrlog_t;
/* platform error log structures */
typedef struct syserr_chdr { /* one header per component */
- u64 busnum; /* bus number on which the component resides */
- u64 devnum; /* same as device select */
- u64 funcid; /* function id of the device */
- u64 devid; /* pci device id */
- u64 classcode; /* pci class code for the device */
- u64 cmdreg; /* pci command reg value */
- u64 statreg; /* pci status reg value */
+ u64 busnum; /* bus number on which the component resides */
+ u64 devnum; /* same as device select */
+ u64 funcid; /* function id of the device */
+ u64 devid; /* pci device id */
+ u64 classcode; /* pci class code for the device */
+ u64 cmdreg; /* pci command reg value */
+ u64 statreg; /* pci status reg value */
} chdr_t;
typedef struct cfginfo {
- u64 cfgaddr;
- u64 cfgval;
+ u64 cfgaddr;
+ u64 cfgval;
} cfginfo_t;
typedef struct sys_comperr { /* per component */
- ehdr_t lhdr; /* linked list header */
- u64 vflag; /* valid bits for each field in the log */
- chdr_t scomphdr;
- u64 numregpair; /* number of reg addr/value pairs */
+ ehdr_t lhdr; /* linked list header */
+ u64 vflag; /* valid bits for each field in the log */
+ chdr_t scomphdr;
+ u64 numregpair; /* number of reg addr/value pairs */
cfginfo_t cfginfo;
} cerrlog_t;
typedef struct sel_records {
- ehdr_t lhdr;
- u64 seldata;
+ ehdr_t lhdr;
+ u64 seldata;
} isel_t;
typedef struct plat_errlog {
- u64 mbcsvalid; /* valid bits for each type of log */
- merrlog_t smemerrlog; /* platform memory error logs */
- berrlog_t sbuserrlog; /* platform bus error logs */
- cerrlog_t scomperrlog; /* platform chipset error logs */
- isel_t selrecord; /* ipmi sel record */
+ u64 mbcsvalid; /* valid bits for each type of log */
+ merrlog_t smemerrlog; /* platform memory error logs */
+ berrlog_t sbuserrlog; /* platform bus error logs */
+ cerrlog_t scomperrlog; /* platform chipset error logs */
+ isel_t selrecord; /* ipmi sel record */
} platforminfo_t;
/* over all log structure (processor+platform) */
typedef union udev_specific_log {
- sal_log_processor_info_t proclog;
- platforminfo_t platlog;
+ sal_log_processor_info_t proclog;
+ platforminfo_t platlog;
} devicelog_t;
@@ -378,21 +386,18 @@
#define sal_log_processor_info_rr_valid slpi_valid.slpi_rr
#define sal_log_processor_info_fr_valid slpi_valid.slpi_fr
-typedef struct sal_log_header_s {
- u64 slh_next_log; /* Offset of the next log from the
- * beginning of this structure.
- */
- uint slh_log_len; /* Length of this error log in bytes */
- ushort slh_log_type; /* Type of log (0 - cpu ,1 - platform) */
- ushort slh_log_sub_type; /* SGI specific sub type */
- sal_log_timestamp_t slh_log_timestamp; /* Timestamp */
+typedef struct sal_log_header {
+ u64 slh_next_log; /* Offset of the next log from the beginning of this structure */
+ u32 slh_log_len; /* Length of this error log in bytes */
+ u16 slh_log_type; /* Type of log (0 - cpu ,1 - platform) */
+ u16 slh_log_sub_type; /* SGI specific sub type */
+ sal_log_timestamp_t slh_log_timestamp; /* Timestamp */
} sal_log_header_t;
/* SAL PSI log structure */
-typedef struct psilog
-{
- sal_log_header_t sal_elog_header;
- devicelog_t devlog;
+typedef struct psilog {
+ sal_log_header_t sal_elog_header;
+ devicelog_t devlog;
} ia64_psilog_t;
/*
@@ -405,7 +410,7 @@
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_FREQ_BASE, which);
+ SAL_CALL(isrv, SAL_FREQ_BASE, which, 0, 0, 0, 0, 0, 0);
*ticks_per_second = isrv.v0;
*drift_info = isrv.v1;
return isrv.status;
@@ -416,7 +421,7 @@
ia64_sal_cache_flush (u64 cache_type)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_CACHE_FLUSH, cache_type);
+ SAL_CALL(isrv, SAL_CACHE_FLUSH, cache_type, 0, 0, 0, 0, 0, 0);
return isrv.status;
}
@@ -427,7 +432,7 @@
ia64_sal_cache_init (void)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_CACHE_INIT);
+ SAL_CALL(isrv, SAL_CACHE_INIT, 0, 0, 0, 0, 0, 0, 0);
return isrv.status;
}
@@ -438,7 +443,8 @@
ia64_sal_clear_state_info (u64 sal_info_type, u64 sal_info_sub_type)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_CLEAR_STATE_INFO, sal_info_type, sal_info_sub_type);
+ SAL_CALL(isrv, SAL_CLEAR_STATE_INFO, sal_info_type, sal_info_sub_type,
+ 0, 0, 0, 0, 0);
return isrv.status;
}
@@ -450,7 +456,8 @@
ia64_sal_get_state_info (u64 sal_info_type, u64 sal_info_sub_type, u64 *sal_info)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_GET_STATE_INFO, sal_info_type, sal_info_sub_type, sal_info);
+ SAL_CALL(isrv, SAL_GET_STATE_INFO, sal_info_type, sal_info_sub_type,
+ sal_info, 0, 0, 0, 0);
if (isrv.status)
return 0;
return isrv.v0;
@@ -462,7 +469,8 @@
ia64_sal_get_state_info_size (u64 sal_info_type, u64 sal_info_sub_type)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_GET_STATE_INFO_SIZE, sal_info_type, sal_info_sub_type);
+ SAL_CALL(isrv, SAL_GET_STATE_INFO_SIZE, sal_info_type, sal_info_sub_type,
+ 0, 0, 0, 0, 0);
if (isrv.status)
return 0;
return isrv.v0;
@@ -475,7 +483,7 @@
ia64_sal_mc_rendez (void)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_MC_RENDEZ);
+ SAL_CALL(isrv, SAL_MC_RENDEZ, 0, 0, 0, 0, 0, 0, 0);
return isrv.status;
}
@@ -487,7 +495,8 @@
ia64_sal_mc_set_params (u64 param_type, u64 i_or_m, u64 i_or_m_val, u64 timeout)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_MC_SET_PARAMS, param_type, i_or_m, i_or_m_val, timeout);
+ SAL_CALL(isrv, SAL_MC_SET_PARAMS, param_type, i_or_m, i_or_m_val, timeout,
+ 0, 0, 0);
return isrv.status;
}
@@ -505,7 +514,7 @@
*/
spin_lock_irqsave(&ivr_read_lock, flags);
#endif
- SAL_CALL(isrv, SAL_PCI_CONFIG_READ, pci_config_addr, size);
+ SAL_CALL(isrv, SAL_PCI_CONFIG_READ, pci_config_addr, size, 0, 0, 0, 0, 0);
#ifdef CONFIG_ITANIUM_A1_SPECIFIC
spin_unlock_irqrestore(&ivr_read_lock, flags);
#endif
@@ -528,7 +537,8 @@
*/
spin_lock_irqsave(&ivr_read_lock, flags);
#endif
- SAL_CALL(isrv, SAL_PCI_CONFIG_WRITE, pci_config_addr, size, value);
+ SAL_CALL(isrv, SAL_PCI_CONFIG_WRITE, pci_config_addr, size, value,
+ 0, 0, 0, 0);
#ifdef CONFIG_ITANIUM_A1_SPECIFIC
spin_unlock_irqrestore(&ivr_read_lock, flags);
#endif
@@ -543,7 +553,8 @@
ia64_sal_register_physical_addr (u64 phys_entry, u64 phys_addr)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_REGISTER_PHYSICAL_ADDR, phys_entry, phys_addr);
+ SAL_CALL(isrv, SAL_REGISTER_PHYSICAL_ADDR, phys_entry, phys_addr,
+ 0, 0, 0, 0, 0);
return isrv.status;
}
@@ -569,7 +580,8 @@
u64 *error_code, u64 *scratch_buf_size_needed)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_UPDATE_PAL, param_buf, scratch_buf, scratch_buf_size);
+ SAL_CALL(isrv, SAL_UPDATE_PAL, param_buf, scratch_buf, scratch_buf_size,
+ 0, 0, 0, 0);
if (error_code)
*error_code = isrv.v0;
if (scratch_buf_size_needed)
diff -urN linux-davidm/kernel/printk.c linux-2.4.0-test10-lia/kernel/printk.c
--- linux-davidm/kernel/printk.c Wed Nov 15 23:09:41 2000
+++ linux-2.4.0-test10-lia/kernel/printk.c Wed Nov 15 19:49:26 2000
@@ -512,7 +512,7 @@
#include <asm/io.h>
-#define VGABASE ((char *)0x00000000000b8000)
+#define VGABASE ((char *)0xc0000000000b8000)
static int current_ypos = 50, current_xpos = 0;
diff -urN linux-davidm/mm/memory.c linux-2.4.0-test10-lia/mm/memory.c
--- linux-davidm/mm/memory.c Mon Oct 30 14:32:57 2000
+++ linux-2.4.0-test10-lia/mm/memory.c Wed Nov 15 19:49:41 2000
@@ -227,7 +227,7 @@
/* If it's a COW mapping, write protect it both in the parent and the child */
if (cow) {
- ptep_clear_wrprotect(src_pte);
+ ptep_set_wrprotect(src_pte);
pte = *src_pte;
}
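[Editorial sketch, not part of the patch: the new SWP_OFFSET definition in the pgtable.h hunk above can be modeled in user space like this. The macro bodies are copied from the patch; everything else is illustrative scaffolding.]

```c
#include <assert.h>

/* Swap-entry encoding as in the pgtable.h hunk: the type sits in
 * bits 1..8 and the offset starts at bit 9.  The new SWP_OFFSET
 * shifts left once before shifting right, so bit 63 of the entry
 * (which is not part of the offset) is discarded rather than
 * leaking into the decoded offset. */
typedef struct { unsigned long val; } swp_entry_t;

#define SWP_TYPE(entry)		(((entry).val >> 1) & 0xff)
#define SWP_OFFSET(entry)	(((entry).val << 1) >> 10)
#define SWP_ENTRY(type,offset)	((swp_entry_t) { ((type) << 1) | ((unsigned long)(offset) << 9) })
```

With the old `((entry).val >> 9)` decode, a stray bit 63 would show up as a huge bogus offset; the `<< 1 >> 10` form still round-trips correctly.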
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.0-test11)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (26 preceding siblings ...)
2000-11-16 7:59 ` David Mosberger
@ 2000-12-07 8:26 ` David Mosberger
2000-12-07 21:57 ` David Mosberger
` (187 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-12-07 8:26 UTC (permalink / raw)
To: linux-ia64
The directory at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
now has files linux-2.4.0-test11-ia64-001206.diff* which contain the
latest IA-64 kernel diff relative to Linus' 2.4.0-test11.
This patch makes a significant change to the virtual memory space of
IA-64 Linux: so far, within each region, the first N bytes and the
last N bytes could be mapped by a process (with N=2^39 when 8KB pages
are in use). This patch changes this so that all mappable space is at
the beginning of a region. That is, region offsets 0-(2*N-1) are now
mappable instead. This change was necessary to fix the bug that Asit
discovered under heavy swap activity when the memory size was 256MB
(there is nothing magic about this size; the problem could have
occurred with other sizes as well). Normal applications should not
notice this change, but applications that hardcode stack-related
addresses will need to be updated (it's usually a bad idea to
hardcode such things anyhow).
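The old and new layouts can be compared with a small sketch (assuming 8KB pages, so N=2^39; the constants and helper names below are illustrative only, not kernel code):

```c
#include <stdint.h>

/* Illustrative constants for the 8KB-page case described above. */
#define N        (1ULL << 39)	/* mappable bytes per end of a region (old layout) */
#define RGN_SIZE (1ULL << 61)	/* each of the 8 regions spans 2^61 bytes */

/* Old layout: the first N and the last N bytes of a region were mappable. */
static int old_mappable (uint64_t roff)
{
	return roff < N || roff >= RGN_SIZE - N;
}

/* New layout: all mappable space sits at the start, offsets 0..(2*N - 1). */
static int new_mappable (uint64_t roff)
{
	return roff < 2 * N;
}
```

With the new layout the check reduces to a single range comparison against 2*N, which is what the reworked do_mmap2() test in the diff amounts to (OCTANT_SIZE = 2^40 with 8KB pages).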
Other changes:
- Matt: EFI CRC updates to avoid copyright issues
- Andreas: fix typo in __NR_pivot_root
- Jonathan: don't acquire xtime_lock in timer interrupt
of application processors
- Kanoj: updates for SN1 machine
- Intel: lots of ACPI updates (including ACPI 2.0 support)
- Asit: place the virtual linear page table at the end of a
region and make it big enough to map the entire implemented
virtual address space within a region
- Asit (I think): fix for software I/O TLB problems
- export more kernel symbols for modules
- platform_pci_fixup() is now called twice: once before
the PCI busscan and once after
- NaT Consumption faults and Unsupported Data Reference faults
now result in SIGILL with no kernel messages printed; you can
tell from the siginfo's si_code and si_imm values what caused
the signal: si_code=ILL_ILLOPN and si_imm=26 implies
NaT Consumption, si_imm=31 implies Unsupported Data Reference
- fix a bug in the IA-64 page fault handler which caused it to
pick up the wrong VMA for permission checking when
the VMA was growing towards higher addresses; this hasn't caused
any known errors, but was a potential source of problems and
had to be fixed; thanks to Matthew Willcox for pointing this out
- fix for fcntl(F_GETOWN) failure reported by SCO (this is untested;
could someone from SCO rerun the test and let me know if it does
indeed fix the problem?)
CAVEAT: the kernel now relies on PAL_VM_SUMMARY returning the correct
value for IMPL_VA_MSB. If you have old firmware, this may cause the kernel
to fail to boot. To find out, look for the message of the form:
CPU 0: XX virtual and YY physical address bits
if XX is anything other than 50, you need to upgrade your firmware.
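To check, you can grep the boot log for that line; this is a sketch run against a sample message (in practice you would pipe dmesg into it):

```shell
# Extract the virtual-address-bit count from the boot message and
# complain if it isn't 50 (sample line shown; normally use dmesg).
line="CPU 0: 50 virtual and 44 physical address bits"
bits=$(echo "$line" | awk '{print $3}')
if [ "$bits" -eq 50 ]; then
	echo "firmware OK"
else
	echo "firmware upgrade needed"
fi
```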
This kernel is known to build and run on 2P Big Sur and the HP Ski
simulator. I expect it to work fine on 4P Lion and UP as well.
As usual, the attached relative diff is fyi only.
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/Makefile lia64/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile Mon Oct 9 17:54:53 2000
+++ lia64/arch/ia64/Makefile Wed Dec 6 22:09:11 2000
@@ -19,22 +19,28 @@
EXTRA =
CFLAGS := $(CFLAGS) -pipe $(EXTRA) -Wa,-x -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
-funwind-tables
CFLAGS_KERNEL := -mconstant-gp
ifeq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
CFLAGS += -ma-step
endif
+ifeq ($(CONFIG_ITANIUM_BSTEP_SPECIFIC),y)
+ CFLAGS += -mb-step
+endif
ifdef CONFIG_IA64_GENERIC
CORE_FILES := arch/$(ARCH)/hp/hp.a \
arch/$(ARCH)/sn/sn.a \
arch/$(ARCH)/dig/dig.a \
+ arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
SUBDIRS := arch/$(ARCH)/hp \
arch/$(ARCH)/sn/sn1 \
arch/$(ARCH)/sn \
arch/$(ARCH)/dig \
+ arch/$(ARCH)/sn/io \
$(SUBDIRS)
else # !GENERIC
@@ -47,10 +53,7 @@
endif
ifdef CONFIG_IA64_SGI_SN1
-CFLAGS := $(CFLAGS) -DSN -I. -DBRINGUP -DDIRECT_L1_CONSOLE \
- -DNUMA_BASE -DSIMULATED_KLGRAPH -DNUMA_MIGR_CONTROL \
- -DLITTLE_ENDIAN -DREAL_HARDWARE -DLANGUAGE_C=1 \
- -D_LANGUAGE_C=1
+CFLAGS += -DBRINGUP
SUBDIRS := arch/$(ARCH)/sn/sn1 \
arch/$(ARCH)/sn \
arch/$(ARCH)/sn/io \
@@ -96,7 +99,7 @@
arch/$(ARCH)/vmlinux.lds: arch/$(ARCH)/vmlinux.lds.S FORCE
$(CPP) -D__ASSEMBLY__ -C -P -I$(HPATH) -I$(HPATH)/asm-$(ARCH) \
- arch/$(ARCH)/vmlinux.lds.S > $@
+ -traditional arch/$(ARCH)/vmlinux.lds.S > $@
FORCE: ;
diff -urN linux-davidm/arch/ia64/config.in lia64/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/config.in Wed Dec 6 22:09:46 2000
@@ -59,6 +59,7 @@
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
+ bool ' Enable ACPI 2.0 with errata 1.3' CONFIG_ACPI20
bool ' ACPI kernel configuration manager (EXPERIMENTAL)' CONFIG_ACPI_KERNEL_CONFIG
if [ "$CONFIG_ACPI_KERNEL_CONFIG" = "y" ]; then
define_bool CONFIG_PM y
@@ -79,8 +80,9 @@
define_bool CONFIG_DEVFS_FS y
define_bool CONFIG_IA64_BRL_EMU y
define_bool CONFIG_IA64_MCA y
- define_bool CONFIG_IA64_SGI_IO y
define_bool CONFIG_ITANIUM y
+ define_bool CONFIG_SGI_IOC3_ETH y
+ bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM y
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
diff -urN linux-davidm/arch/ia64/hp/hpsim_setup.c lia64/arch/ia64/hp/hpsim_setup.c
--- linux-davidm/arch/ia64/hp/hpsim_setup.c Fri Jul 14 16:08:11 2000
+++ lia64/arch/ia64/hp/hpsim_setup.c Wed Dec 6 22:10:09 2000
@@ -63,12 +63,6 @@
}
void __init
-hpsim_pci_fixup (void)
-{
-}
-
-
-void __init
hpsim_setup (char **cmdline_p)
{
ROOT_DEV = to_kdev_t(0x0801); /* default to first SCSI drive */
diff -urN linux-davidm/arch/ia64/kernel/acpi.c lia64/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/acpi.c Wed Dec 6 22:18:40 2000
@@ -8,6 +8,10 @@
* Copyright (C) 1999,2000 Walt Drummond <drummond@valinux.com>
* Copyright (C) 2000 Hewlett-Packard Co.
* Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
+ * ACPI based kernel configuration manager.
+ * ACPI 2.0 & IA64 ext 0.71
*/
#include <linux/config.h>
@@ -41,27 +45,82 @@
asm (".weak iosapic_register_legacy_irq");
asm (".weak iosapic_init");
+const char *
+acpi_get_sysname (void)
+{
+ /* the following should go away once we have an ACPI parser: */
+#ifdef CONFIG_IA64_GENERIC
+ return "hpsim";
+#else
+# if defined (CONFIG_IA64_HP_SIM)
+ return "hpsim";
+# elif defined (CONFIG_IA64_SGI_SN1)
+ return "sn1";
+# elif defined (CONFIG_IA64_DIG)
+ return "dig";
+# else
+# error Unknown platform. Fix acpi.c.
+# endif
+#endif
+
+}
+
/*
- * Identify usable CPU's and remember them for SMP bringup later.
+ * Configure legacy IRQ information.
*/
static void __init
-acpi_lsapic (char *p)
+acpi_legacy_irq (char *p)
{
- int add = 1;
-
- acpi_entry_lsapic_t *lsapic = (acpi_entry_lsapic_t *) p;
+ acpi_entry_int_override_t *legacy = (acpi_entry_int_override_t *) p;
+ unsigned long polarity = 0, edge_triggered = 0;
if ((lsapic->flags & LSAPIC_PRESENT) == 0)
+ /*
+ * If the platform we're running doesn't define
+ * iosapic_register_legacy_irq(), we ignore this info...
+ */
+ if (!iosapic_register_legacy_irq)
return;
+ switch (legacy->flags) {
+ case 0x5: polarity = 1; edge_triggered = 1; break;
+ case 0x7: polarity = 0; edge_triggered = 1; break;
+ case 0xd: polarity = 1; edge_triggered = 0; break;
+ case 0xf: polarity = 0; edge_triggered = 0; break;
+ default:
+ printk(" ACPI Legacy IRQ 0x%02x: Unknown flags 0x%x\n", legacy->isa_irq,
+ legacy->flags);
+ break;
+ }
+ iosapic_register_legacy_irq(legacy->isa_irq, legacy->pin, polarity, edge_triggered);
+}
+
+/*
+ * ACPI 2.0 tables parsing functions
+ */
+
+static unsigned long
+readl_unaligned(void *p)
+{
+ unsigned long ret;
+
+ memcpy(&ret, p, sizeof(long));
+ return ret;
+}
+
+/*
+ * Identify usable CPU's and remember them for SMP bringup later.
+ */
+static void __init
+acpi20_lsapic (char *p)
+{
+ int add = 1;
+
+ acpi20_entry_lsapic_t *lsapic = (acpi20_entry_lsapic_t *) p;
printk(" CPU %d (%.04x:%.04x): ", total_cpus, lsapic->eid, lsapic->id);
if ((lsapic->flags & LSAPIC_ENABLED) == 0) {
printk("Disabled.\n");
add = 0;
- } else if (lsapic->flags & LSAPIC_PERFORMANCE_RESTRICTED) {
- printk("Performance Restricted; ignoring.\n");
- add = 0;
}
#ifdef CONFIG_SMP
@@ -78,33 +137,223 @@
}
/*
- * Configure legacy IRQ information.
+ * Info on platform interrupt sources: NMI, PMI, INIT, etc.
*/
static void __init
-acpi_legacy_irq (char *p)
+acpi20_platform (char *p)
{
- acpi_entry_int_override_t *legacy = (acpi_entry_int_override_t *) p;
- unsigned long polarity = 0, edge_triggered = 0;
+ acpi20_entry_platform_src_t *plat = (acpi20_entry_platform_src_t *) p;
+
+ printk("PLATFORM: IOSAPIC %x -> Vector %x on CPU %.04u:%.04u\n",
+ plat->iosapic_vector, plat->global_vector, plat->eid, plat->id);
+}
+
+/*
+ * Override the physical address of the local APIC in the MADT stable header.
+ */
+static void __init
+acpi20_lapic_addr_override (char *p)
+{
+ acpi20_entry_lapic_addr_override_t * lapic = (acpi20_entry_lapic_addr_override_t *) p;
+
+ if (lapic->lapic_address) {
+ iounmap((void *)ipi_base_addr);
+ ipi_base_addr = (unsigned long) ioremap(lapic->lapic_address, 0);
+
+ printk("LOCAL ACPI override to 0x%lx(p=0x%lx)\n",
+ ipi_base_addr, lapic->lapic_address);
+ }
+}
+
+/*
+ * Parse the ACPI Multiple APIC Description Table
+ */
+static void __init
+acpi20_parse_madt (acpi_madt_t *madt)
+{
+ acpi_entry_iosapic_t *iosapic;
+ char *p, *end;
+
+ /* Base address of IPI Message Block */
+ if (madt->lapic_address) {
+ ipi_base_addr = (unsigned long) ioremap(madt->lapic_address, 0);
+ printk("Lapic address set to 0x%lx\n", ipi_base_addr);
+ } else
+ printk("Lapic address set to default 0x%lx\n", ipi_base_addr);
+
+ p = (char *) (madt + 1);
+ end = p + (madt->header.length - sizeof(acpi_madt_t));
/*
- * If the platform we're running doesn't define
- * iosapic_register_legacy_irq(), we ignore this info...
+ * Splitted entry parsing to ensure ordering.
*/
- if (!iosapic_register_legacy_irq)
+
+ while (p < end) {
+ switch (*p) {
+ case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE:
+ printk("ACPI 2.0 MADT: LOCAL APIC Override\n");
+ acpi20_lapic_addr_override(p);
+ break;
+
+ case ACPI20_ENTRY_LOCAL_SAPIC:
+ printk("ACPI 2.0 MADT: LOCAL SAPIC\n");
+ acpi20_lsapic(p);
+ break;
+
+ case ACPI20_ENTRY_IO_SAPIC:
+ iosapic = (acpi_entry_iosapic_t *) p;
+ if (iosapic_init)
+ iosapic_init(iosapic->address, iosapic->irq_base);
+ break;
+
+ case ACPI20_ENTRY_PLATFORM_INT_SOURCE:
+ printk("ACPI 2.0 MADT: PLATFORM INT SOUCE\n");
+ acpi20_platform(p);
+ break;
+
+ case ACPI20_ENTRY_LOCAL_APIC:
+ printk("ACPI 2.0 MADT: LOCAL APIC entry\n"); break;
+ case ACPI20_ENTRY_IO_APIC:
+ printk("ACPI 2.0 MADT: IO APIC entry\n"); break;
+ case ACPI20_ENTRY_NMI_SOURCE:
+ printk("ACPI 2.0 MADT: NMI SOURCE entry\n"); break;
+ case ACPI20_ENTRY_LOCAL_APIC_NMI:
+ printk("ACPI 2.0 MADT: LOCAL APIC NMI entry\n"); break;
+ case ACPI20_ENTRY_INT_SRC_OVERRIDE:
+ break;
+ default:
+ printk("ACPI 2.0 MADT: unknown entry skip\n"); break;
+ break;
+ }
+
+ p += p[1];
+ }
+
+ p = (char *) (madt + 1);
+ end = p + (madt->header.length - sizeof(acpi_madt_t));
+
+ while (p < end) {
+
+ switch (*p) {
+ case ACPI20_ENTRY_INT_SRC_OVERRIDE:
+ printk("ACPI 2.0 MADT: INT SOURCE Override\n");
+ acpi_legacy_irq(p);
+ break;
+ default:
+ break;
+ }
+
+ p += p[1];
+ }
+
+ /* Make bootup pretty */
+ printk(" %d CPUs available, %d CPUs total\n",
+ available_cpus, total_cpus);
+}
+
+int __init
+acpi20_parse (acpi20_rsdp_t *rsdp20)
+{
+ acpi_xsdt_t *xsdt;
+ acpi_desc_table_hdr_t *hdrp;
+ int tables, i;
+
+ if (strncmp(rsdp20->signature, ACPI_RSDP_SIG, ACPI_RSDP_SIG_LEN)) {
+ printk("ACPI 2.0 RSDP signature incorrect!\n");
+ return 0;
+ } else {
+ printk("ACPI 2.0 Root System Description Ptr at 0x%lx\n",
+ (unsigned long)rsdp20);
+ }
+
+ xsdt = __va(rsdp20->xsdt);
+ hdrp = &xsdt->header;
+ if (strncmp(hdrp->signature,
+ ACPI_XSDT_SIG, ACPI_XSDT_SIG_LEN)) {
+ printk("ACPI 2.0 XSDT signature incorrect. Trying RSDT\n");
+ /* RSDT parsing here */
+ return 0;
+ } else {
+ printk("ACPI 2.0 XSDT at 0x%lx (p=0x%lx)\n",
+ (unsigned long)xsdt, (unsigned long)rsdp20->xsdt);
+ }
+
+ printk("ACPI 2.0: %.6s %.8s %d.%d\n",
+ hdrp->oem_id,
+ hdrp->oem_table_id,
+ hdrp->oem_revision >> 16,
+ hdrp->oem_revision & 0xffff);
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+ acpi_cf_init((void *)rsdp20);
+#endif
+
+ tables =(hdrp->length -sizeof(acpi_desc_table_hdr_t))>>3;
+
+ for (i = 0; i < tables; i++) {
+ hdrp = (acpi_desc_table_hdr_t *) __va(readl_unaligned(&xsdt->entry_ptrs[i]));
+ printk(" :table %4.4s found\n", hdrp->signature);
+
+ /* Only interested int the MADT table for now ... */
+ if (strncmp(hdrp->signature,
+ ACPI_MADT_SIG, ACPI_MADT_SIG_LEN) != 0)
+ continue;
+
+ acpi20_parse_madt((acpi_madt_t *) hdrp);
+ }
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+ acpi_cf_terminate();
+#endif
+
+#ifdef CONFIG_SMP
+ if (available_cpus == 0) {
+ printk("ACPI: Found 0 CPUS; assuming 1\n");
+ available_cpus = 1; /* We've got at least one of these, no? */
+ }
+ smp_boot_data.cpu_count = available_cpus;
+#endif
+ return 1;
+}
+/*
+ * ACPI 1.0b with 0.71 IA64 extensions functions; should be removed once all
+ * platforms start supporting ACPI 2.0
+ */
+
+/*
+ * Identify usable CPU's and remember them for SMP bringup later.
+ */
+static void __init
+acpi_lsapic (char *p)
+{
+ int add = 1;
+
+ acpi_entry_lsapic_t *lsapic = (acpi_entry_lsapic_t *) p;
+
+ if ((lsapic->flags & LSAPIC_PRESENT) == 0)
return;
- /* See MPS 1.4 section 4.3.4 */
- switch (legacy->flags) {
- case 0x5: polarity = 1; edge_triggered = 1; break;
- case 0x8: polarity = 0; edge_triggered = 1; break;
- case 0xd: polarity = 1; edge_triggered = 0; break;
- case 0xf: polarity = 0; edge_triggered = 0; break;
- default:
- printk(" ACPI Legacy IRQ 0x%02x: Unknown flags 0x%x\n", legacy->isa_irq,
- legacy->flags);
- break;
+ printk(" CPU %d (%.04x:%.04x): ", total_cpus, lsapic->eid, lsapic->id);
+
+ if ((lsapic->flags & LSAPIC_ENABLED) == 0) {
+ printk("Disabled.\n");
+ add = 0;
+ } else if (lsapic->flags & LSAPIC_PERFORMANCE_RESTRICTED) {
+ printk("Performance Restricted; ignoring.\n");
+ add = 0;
}
- iosapic_register_legacy_irq(legacy->isa_irq, legacy->pin, polarity, edge_triggered);
+
+#ifdef CONFIG_SMP
+ smp_boot_data.cpu_phys_id[total_cpus] = -1;
+#endif
+ if (add) {
+ printk("Available.\n");
+ available_cpus++;
+#ifdef CONFIG_SMP
+ smp_boot_data.cpu_phys_id[total_cpus] = (lsapic->id << 8) | lsapic->eid;
+#endif /* CONFIG_SMP */
+ }
+ total_cpus++;
}
/*
@@ -115,7 +364,7 @@
{
acpi_entry_platform_src_t *plat = (acpi_entry_platform_src_t *) p;
- printk("PLATFORM: IOSAPIC %x -> Vector %lx on CPU %.04u:%.04u\n",
+ printk("PLATFORM: IOSAPIC %x -> Vector %x on CPU %.04u:%.04u\n",
plat->iosapic_vector, plat->global_vector, plat->eid, plat->id);
}
@@ -173,18 +422,12 @@
acpi_desc_table_hdr_t *hdrp;
long tables, i;
- if (!rsdp) {
- printk("Uh-oh, no ACPI Root System Description Pointer table!\n");
- return 0;
- }
-
if (strncmp(rsdp->signature, ACPI_RSDP_SIG, ACPI_RSDP_SIG_LEN)) {
printk("Uh-oh, ACPI RSDP signature incorrect!\n");
return 0;
}
- rsdp->rsdt = __va(rsdp->rsdt);
- rsdt = rsdp->rsdt;
+ rsdt = __va(rsdp->rsdt);
if (strncmp(rsdt->header.signature, ACPI_RSDT_SIG, ACPI_RSDT_SIG_LEN)) {
printk("Uh-oh, ACPI RDST signature incorrect!\n");
return 0;
@@ -220,23 +463,4 @@
smp_boot_data.cpu_count = available_cpus;
#endif
return 1;
-}
-
-const char *
-acpi_get_sysname (void)
-{
- /* the following should go away once we have an ACPI parser: */
-#ifdef CONFIG_IA64_GENERIC
- return "hpsim";
-#else
-# if defined (CONFIG_IA64_HP_SIM)
- return "hpsim";
-# elif defined (CONFIG_IA64_SGI_SN1)
- return "sn1";
-# elif defined (CONFIG_IA64_DIG)
- return "dig";
-# else
-# error Unknown platform. Fix acpi.c.
-# endif
-#endif
}
diff -urN linux-davidm/arch/ia64/kernel/efi.c lia64/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/efi.c Wed Dec 6 22:20:03 2000
@@ -332,6 +332,9 @@
if (efi_guidcmp(config_tables[i].guid, MPS_TABLE_GUID) == 0) {
efi.mps = __va(config_tables[i].table);
printk(" MPS=0x%lx", config_tables[i].table);
+ } else if (efi_guidcmp(config_tables[i].guid, ACPI_20_TABLE_GUID) == 0) {
+ efi.acpi20 = __va(config_tables[i].table);
+ printk(" ACPI 2.0=0x%lx", config_tables[i].table);
} else if (efi_guidcmp(config_tables[i].guid, ACPI_TABLE_GUID) == 0) {
efi.acpi = __va(config_tables[i].table);
printk(" ACPI=0x%lx", config_tables[i].table);
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c lia64/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Mon Oct 9 17:54:54 2000
+++ lia64/arch/ia64/kernel/ia64_ksyms.c Wed Dec 6 22:20:30 2000
@@ -92,6 +92,9 @@
#include <asm/uaccess.h>
EXPORT_SYMBOL(__copy_user);
EXPORT_SYMBOL(__do_clear_user);
+EXPORT_SYMBOL(__strlen_user);
+EXPORT_SYMBOL(__strncpy_from_user);
+EXPORT_SYMBOL(__strnlen_user);
#include <asm/unistd.h>
EXPORT_SYMBOL(__ia64_syscall);
diff -urN linux-davidm/arch/ia64/kernel/iosapic.c lia64/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/iosapic.c Wed Dec 6 22:22:38 2000
@@ -443,11 +443,14 @@
}
void
-iosapic_pci_fixup (void)
+iosapic_pci_fixup (int phase)
{
struct pci_dev *dev;
unsigned char pin;
int vector;
+
+ if (phase != 1)
+ return;
pci_for_each_dev(dev) {
pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
diff -urN linux-davidm/arch/ia64/kernel/ivt.S lia64/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/ivt.S Wed Dec 6 22:21:29 2000
@@ -112,15 +112,14 @@
(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
srlz.d // ensure "rsm psr.dt" has taken effect
(p6) movl r19=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
-(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
-(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-3
;;
(p6) dep r17=r18,r19,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
cmp.eq p7,p6=0,r21 // unused address bits all zeroes?
shr.u r18=r16,PMD_SHIFT // shift L2 index into position
;;
-(p6) cmp.eq p7,p6=-1,r21 // unused address bits all ones?
ld8 r17=[r17] // fetch the L1 entry (may be 0)
;;
(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
@@ -222,15 +221,14 @@
(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
srlz.d // ensure "rsm psr.dt" has taken effect
(p6) movl r19=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
-(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
-(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-3
;;
(p6) dep r17=r18,r19,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
cmp.eq p7,p6=0,r21 // unused address bits all zeroes?
shr.u r18=r16,PMD_SHIFT // shift L2 index into position
;;
-(p6) cmp.eq p7,p6=-1,r21 // unused address bits all ones?
ld8 r17=[r17] // fetch the L1 entry (may be 0)
;;
(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
@@ -309,15 +307,14 @@
(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
srlz.d // ensure "rsm psr.dt" has taken effect
(p6) movl r19=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
-(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
-(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-3
;;
(p6) dep r17=r18,r19,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
cmp.eq p7,p6=0,r21 // unused address bits all zeroes?
shr.u r18=r16,PMD_SHIFT // shift L2 index into position
;;
-(p6) cmp.eq p7,p6=-1,r21 // unused address bits all ones?
ld8 r17=[r17] // fetch the L1 entry (may be 0)
;;
(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
@@ -351,28 +348,31 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
mov r16=cr.ifa // get address that caused the TLB miss
-#ifdef CONFIG_DISABLE_VHPT
+ movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX
+ mov r21=cr.ipsr
mov r31=pr
;;
- shr.u r21=r16,61 // get the region number into r21
- ;;
- cmp.gt p6,p0=6,r21 // user mode
-(p6) br.cond.dptk.many itlb_fault
+#ifdef CONFIG_DISABLE_VHPT
+ shr.u r22=r16,61 // get the region number into r21
;;
- mov pr=r31,-1
+ cmp.gt p8,p0=6,r22 // user mode
+(p8) br.cond.dptk.many itlb_fault
#endif
- movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX
- ;;
+ extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
shr.u r18=r16,57 // move address bit 61 to bit 4
- dep r16=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits
+ dep r19=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits
;;
andcm r18=0x10,r18 // bit 4=~address-bit(61)
- dep r16=r17,r16,0,12 // insert PTE control bits into r16
+ cmp.ne p8,p0=r0,r23 // psr.cpl != 0?
+ dep r19=r17,r19,0,12 // insert PTE control bits into r19
;;
- or r16=r16,r18 // set bit 4 (uncached) if the access was to region 6
+ or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6
+(p8) br.cond.spnt.many page_fault
;;
- itc.i r16 // insert the TLB entry
+ itc.i r19 // insert the TLB entry
+ mov pr=r31,-1
rfi
+ ;;
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
@@ -389,21 +389,24 @@
cmp.gt p8,p0=6,r22 // user mode
(p8) br.cond.dptk.many dtlb_fault
#endif
+ extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on?
shr.u r18=r16,57 // move address bit 61 to bit 4
- dep r16=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits
+ dep r19=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits
;;
- dep r21=-1,r21,IA64_PSR_ED_BIT,1
andcm r18=0x10,r18 // bit 4=~address-bit(61)
- dep r16=r17,r16,0,12 // insert PTE control bits into r16
+ cmp.ne p8,p0=r0,r23
+(p8) br.cond.spnt.many page_fault
+
+ dep r21=-1,r21,IA64_PSR_ED_BIT,1
+ dep r19=r17,r19,0,12 // insert PTE control bits into r19
;;
- or r16=r16,r18 // set bit 4 (uncached) if the access was to region 6
+ or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6
(p6) mov cr.ipsr=r21
;;
-(p7) itc.d r16 // insert the TLB entry
+(p7) itc.d r19 // insert the TLB entry
mov pr=r31,-1
rfi
-
;;
//-----------------------------------------------------------------------------------
@@ -870,6 +873,7 @@
cmp.ne p6,p0=0,r8
(p6) br.call.dpnt b6=b6 // call returns to ia64_leave_kernel
br.sptk ia64_leave_kernel
+ ;;
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
diff -urN linux-davidm/arch/ia64/kernel/pci-dma.c lia64/arch/ia64/kernel/pci-dma.c
--- linux-davidm/arch/ia64/kernel/pci-dma.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/pci-dma.c Wed Dec 6 22:22:55 2000
@@ -24,7 +24,8 @@
#include <linux/init.h>
#include <linux/bootmem.h>
-#define ALIGN(val, align) ((unsigned long) (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
+#define ALIGN(val, align) ((unsigned long) \
+ (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
/*
* log of the size of each IO TLB slab. The number of slabs is command line
@@ -109,7 +110,8 @@
{
unsigned long flags;
char *dma_addr;
- unsigned int i, nslots, stride, index, wrap;
+ unsigned int nslots, stride, index, wrap;
+ int i;
/*
* For mappings greater than a page size, we limit the stride (and hence alignment)
@@ -133,7 +135,7 @@
wrap = index = ALIGN(io_tlb_index, stride);
if (index >= io_tlb_nslabs)
- index = 0;
+ wrap = index = 0;
do {
/*
@@ -142,14 +144,19 @@
* entries as '0' indicating unavailable.
*/
if (io_tlb_list[index] >= nslots) {
+ int count = 0;
+
for (i = index; i < index + nslots; i++)
io_tlb_list[i] = 0;
+ for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
+ io_tlb_list[i] = ++count;
dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
/*
* Update the indices to avoid searching in the next round.
*/
- io_tlb_index = (index + nslots) < io_tlb_nslabs ? (index + nslots) : 0;
+ io_tlb_index = ((index + nslots) < io_tlb_nslabs
+ ? (index + nslots) : 0);
goto found;
}
@@ -209,15 +216,17 @@
{
int count = ((index + nslots) < io_tlb_nslabs ? io_tlb_list[index + nslots] : 0);
/*
- * Step 1: return the slots to the free list, merging the slots with superceeding slots
+ * Step 1: return the slots to the free list, merging the slots with
+ * superceeding slots
*/
for (i = index + nslots - 1; i >= index; i--)
io_tlb_list[i] = ++count;
/*
- * Step 2: merge the returned slots with the preceeding slots, if available (non zero)
+ * Step 2: merge the returned slots with the preceeding slots, if
+ * available (non zero)
*/
for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
- io_tlb_list[i] += io_tlb_list[index];
+ io_tlb_list[i] = ++count;
}
spin_unlock_irqrestore(&io_tlb_lock, flags);
}
diff -urN linux-davidm/arch/ia64/kernel/pci.c lia64/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/pci.c Wed Dec 6 22:23:15 2000
@@ -122,10 +122,13 @@
# define PCI_BUSES_TO_SCAN 255
int i;
+ platform_pci_fixup(0); /* phase 0 initialization (before PCI bus has been scanned) */
+
printk("PCI: Probing PCI hardware\n");
for (i = 0; i < PCI_BUSES_TO_SCAN; i++)
pci_scan_bus(i, &pci_conf, NULL);
- platform_pci_fixup();
+
+ platform_pci_fixup(1); /* phase 1 initialization (after PCI bus has been scanned) */
return;
}
diff -urN linux-davidm/arch/ia64/kernel/setup.c lia64/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/setup.c Wed Dec 6 22:23:29 2000
@@ -235,6 +235,12 @@
machvec_init(acpi_get_sysname());
#endif
+#ifdef CONFIG_ACPI20
+ if (efi.acpi20) {
+ /* Parse the ACPI 2.0 tables */
+ acpi20_parse(efi.acpi20);
+ } else
+#endif
if (efi.acpi) {
/* Parse the ACPI tables */
acpi_parse(efi.acpi);
@@ -376,15 +382,7 @@
status = ia64_pal_vm_summary(&vm1, &vm2);
if (status == PAL_STATUS_SUCCESS) {
-#if 1
- /*
- * XXX the current PAL code returns IMPL_VA_MSB=60, which is dead-wrong.
- * --davidm 00/05/26
- s*/
- impl_va_msb = 50;
-#else
impl_va_msb = vm2.pal_vm_info_2_s.impl_va_msb;
-#endif
phys_addr_size = vm1.pal_vm_info_1_s.phys_add_size;
}
printk("CPU %d: %lu virtual and %lu physical address bits\n",
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c lia64/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Mon Oct 9 17:54:55 2000
+++ lia64/arch/ia64/kernel/sys_ia64.c Wed Dec 6 22:24:07 2000
@@ -95,10 +95,10 @@
static inline unsigned long
do_mmap2 (unsigned long addr, unsigned long len, int prot, int flags, int fd, unsigned long pgoff)
{
- unsigned long loff, hoff;
+ unsigned long roff;
struct file *file = 0;
/* the virtual address space that is mappable in each region: */
-# define OCTANT_SIZE ((PTRS_PER_PGD<<PGDIR_SHIFT)/8)
+# define OCTANT_SIZE (1UL << (4*PAGE_SHIFT - 12))
/*
* A zero mmap always succeeds in Linux, independent of
@@ -107,15 +107,12 @@
if (PAGE_ALIGN(len) == 0)
return addr;
- /* Don't permit mappings into or across the address hole in a region: */
- loff = rgn_offset(addr);
- hoff = loff - (RGN_SIZE - OCTANT_SIZE/2);
- if ((len | loff | (loff + len)) >= OCTANT_SIZE/2
- && (len | hoff | (hoff + len)) >= OCTANT_SIZE/2)
+ /* don't permit mappings into unmapped space or the virtual page table of a region: */
+ roff = rgn_offset(addr);
+ if ((len | roff | (roff + len)) >= OCTANT_SIZE)
return -EINVAL;
- /* Don't permit mappings that would cross a region boundary: */
-
+ /* don't permit mappings that would cross a region boundary: */
if (rgn_index(addr) != rgn_index(addr + len))
return -EINVAL;
diff -urN linux-davidm/arch/ia64/kernel/time.c lia64/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Mon Oct 9 17:54:55 2000
+++ lia64/arch/ia64/kernel/time.c Wed Dec 6 22:24:20 2000
@@ -152,19 +152,7 @@
{
int cpu = smp_processor_id();
unsigned long new_itm;
-#if 0
- static unsigned long last_time;
- static unsigned char count;
- int printed = 0;
-#endif
- /*
- * Here we are in the timer irq handler. We have irqs locally
- * disabled, but we don't know if the timer_bh is running on
- * another CPU. We need to avoid to SMP race by acquiring the
- * xtime_lock.
- */
- write_lock(&xtime_lock);
new_itm = itm.next[cpu].count;
if (!time_after(ia64_get_itc(), new_itm))
@@ -173,48 +161,33 @@
while (1) {
/*
- * Do kernel PC profiling here. We multiply the
- * instruction number by four so that we can use a
- * prof_shift of 2 to get instruction-level instead of
- * just bundle-level accuracy.
+ * Do kernel PC profiling here. We multiply the instruction number by
+ * four so that we can use a prof_shift of 2 to get instruction-level
+ * instead of just bundle-level accuracy.
*/
if (!user_mode(regs))
do_profile(regs->cr_iip + 4*ia64_psr(regs)->ri);
#ifdef CONFIG_SMP
smp_do_timer(regs);
- if (smp_processor_id() == 0)
- do_timer(regs);
-#else
- do_timer(regs);
#endif
+ if (smp_processor_id() == 0) {
+ /*
+ * Here we are in the timer irq handler. We have irqs locally
+ * disabled, but we don't know if the timer_bh is running on
+ * another CPU. We need to avoid to SMP race by acquiring the
+ * xtime_lock.
+ */
+ write_lock(&xtime_lock);
+ do_timer(regs);
+ write_unlock(&xtime_lock);
+ }
new_itm += itm.delta;
itm.next[cpu].count = new_itm;
if (time_after(new_itm, ia64_get_itc()))
break;
-
-#if 0
- /*
- * SoftSDV in SMP mode is _slow_, so we do "lose" ticks,
- * but it's really OK...
- */
- if (count > 0 && jiffies - last_time > 5*HZ)
- count = 0;
- if (count++ == 0) {
- last_time = jiffies;
- if (!printed) {
- printk("Lost clock tick on CPU %d (now=%lx, next=%lx)!!\n",
- cpu, ia64_get_itc(), itm.next[cpu].count);
- printed = 1;
-# ifdef CONFIG_IA64_DEBUG_IRQ
- printk("last_cli_ip=%lx\n", last_cli_ip);
-# endif
- }
- }
-#endif
}
- write_unlock(&xtime_lock);
/*
* If we're too close to the next clock tick for comfort, we
diff -urN linux-davidm/arch/ia64/kernel/traps.c lia64/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/kernel/traps.c Wed Dec 6 22:24:51 2000
@@ -78,7 +78,7 @@
die_if_kernel (char *str, struct pt_regs *regs, long err)
{
if (user_mode(regs)) {
-#if 1
+#if 0
/* XXX for debugging only */
printk ("!!die_if_kernel: %s(%d): %s %ld\n",
current->comm, current->pid, str, err);
@@ -484,6 +484,20 @@
sprintf(buf, "Disabled FPL fault---not supposed to happen!");
break;
+ case 26: /* NaT Consumption */
+ case 31: /* Unsupported Data Reference */
+ if (user_mode(regs)) {
+ siginfo.si_signo = SIGILL;
+ siginfo.si_code = ILL_ILLOPN;
+ siginfo.si_errno = 0;
+ siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
+ siginfo.si_imm = vector;
+ force_sig_info(SIGILL, &siginfo, current);
+ return;
+ }
sprintf(buf, (vector == 26) ? "NaT consumption" : "Unsupported data reference");
+ break;
+
case 29: /* Debug */
case 35: /* Taken Branch Trap */
case 36: /* Single Step Trap */
@@ -522,10 +536,10 @@
case 34: /* Unimplemented Instruction Address Trap */
if (user_mode(regs)) {
- printk("Woah! Unimplemented Instruction Address Trap!\n");
- siginfo.si_code = ILL_BADIADDR;
siginfo.si_signo = SIGILL;
+ siginfo.si_code = ILL_BADIADDR;
siginfo.si_errno = 0;
+ siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
force_sig_info(SIGILL, &siginfo, current);
return;
}
@@ -544,7 +558,8 @@
case 46:
printk("Unexpected IA-32 intercept trap (Trap 46)\n");
- printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n", regs->cr_iip, ifa, isr, iim);
+ printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n",
+ regs->cr_iip, ifa, isr, iim);
force_sig(SIGSEGV, current);
return;
diff -urN linux-davidm/arch/ia64/lib/memcpy.S lia64/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/lib/memcpy.S Wed Dec 6 22:25:15 2000
@@ -106,6 +106,13 @@
(p7) br.cond.spnt.few memcpy_short
(p6) br.cond.spnt.few memcpy_long
;;
+ nop.m 0
+ ;;
+ nop.m 0
+ nop.i 0
+ ;;
+ nop.m 0
+ ;;
.rotr val[N]
.rotp p[N]
.align 32
@@ -139,6 +146,15 @@
;;
mov ar.lc=cnt
;;
+ nop.m 0
+ ;;
+ nop.m 0
+ nop.i 0
+ ;;
+ nop.m 0
+ ;;
+ nop.m 0
+ ;;
/*
* It is faster to put a stop bit in the loop here because it makes
* the pipeline shorter (and latency is what matters on short copies).
@@ -249,6 +265,13 @@
add t4=t0,t4
mov pr=cnt,0x38 // set (p5,p4,p3) to # of last-word bytes to copy
mov ar.lc=t2
+ ;;
+ nop.m 0
+ ;;
+ nop.m 0
+ nop.i 0
+ ;;
+ nop.m 0
;;
(p6) ld8 val[1]=[src2],8 // prime the pump...
mov b6=t4
diff -urN linux-davidm/arch/ia64/mm/fault.c lia64/arch/ia64/mm/fault.c
--- linux-davidm/arch/ia64/mm/fault.c Thu Jun 22 07:09:45 2000
+++ lia64/arch/ia64/mm/fault.c Wed Dec 6 22:25:31 2000
@@ -121,17 +121,19 @@
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
- } else if (expand_backing_store(prev_vma, address))
- goto bad_area;
+ } else {
+ vma = prev_vma;
+ if (expand_backing_store(vma, address))
+ goto bad_area;
+ }
goto good_area;
bad_area:
up(&mm->mmap_sem);
if (isr & IA64_ISR_SP) {
/*
- * This fault was due to a speculative load set the
- * "ed" bit in the psr to ensure forward progress
- * (target register will get a NaT).
+ * This fault was due to a speculative load; set the "ed" bit in the psr to
+ * ensure forward progress (target register will get a NaT).
*/
ia64_psr(regs)->ed = 1;
return;
@@ -146,6 +148,15 @@
}
no_context:
+ if (isr & IA64_ISR_SP) {
+ /*
+ * This fault was due to a speculative load; set the "ed" bit in the psr to
+ * ensure forward progress (target register will get a NaT).
+ */
+ ia64_psr(regs)->ed = 1;
+ return;
+ }
+
fix = search_exception_table(regs->cr_iip);
if (fix) {
regs->r8 = -EFAULT;
diff -urN linux-davidm/arch/ia64/mm/init.c lia64/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/mm/init.c Wed Dec 6 22:26:21 2000
@@ -313,7 +313,12 @@
void __init
ia64_rid_init (void)
{
- unsigned long flags, rid, pta, impl_va_msb;
+ unsigned long flags, rid, pta, impl_va_bits;
+#ifdef CONFIG_DISABLE_VHPT
+# define VHPT_ENABLE_BIT 0
+#else
+# define VHPT_ENABLE_BIT 1
+#endif
/* Set up the kernel identity mappings (regions 6 & 7) and the vmalloc area (region 5): */
ia64_clear_ic(flags);
@@ -330,44 +335,46 @@
__restore_flags(flags);
/*
- * Check if the virtually mapped linear page table (VMLPT)
- * overlaps with a mapped address space. The IA-64
- * architecture guarantees that at least 50 bits of virtual
- * address space are implemented but if we pick a large enough
- * page size (e.g., 64KB), the VMLPT is big enough that it
- * will overlap with the upper half of the kernel mapped
- * region. I assume that once we run on machines big enough
- * to warrant 64KB pages, IMPL_VA_MSB will be significantly
- * bigger, so we can just adjust the number below to get
- * things going. Alternatively, we could truncate the upper
- * half of each regions address space to not permit mappings
- * that would overlap with the VMLPT. --davidm 99/11/13
+ * Check if the virtually mapped linear page table (VMLPT) overlaps with a mapped
+ * address space. The IA-64 architecture guarantees that at least 50 bits of
+ * virtual address space are implemented but if we pick a large enough page size
+ * (e.g., 64KB), the mapped address space is big enough that it will overlap with
+ * VMLPT. I assume that once we run on machines big enough to warrant 64KB pages,
+ * IMPL_VA_MSB will be significantly bigger, so this is unlikely to become a
+ * problem in practice. Alternatively, we could truncate the top of the mapped
+ * address space to not permit mappings that would overlap with the VMLPT.
+ * --davidm 00/12/06
+ */
+# define pte_bits 3
+# define mapped_space_bits (3*(PAGE_SHIFT - pte_bits) + PAGE_SHIFT)
+ /*
+ * The virtual page table has to cover the entire implemented address space within
+ * a region even though not all of this space may be mappable. The reason for
+ * this is that the Access bit and Dirty bit fault handlers perform
+ * non-speculative accesses to the virtual page table, so the address range of the
+ * virtual page table itself needs to be covered by the virtual page table.
*/
-# define ld_pte_size 3
-# define ld_max_addr_space_pages 3*(PAGE_SHIFT - ld_pte_size) /* max # of mappable pages */
-# define ld_max_addr_space_size (ld_max_addr_space_pages + PAGE_SHIFT)
-# define ld_max_vpt_size (ld_max_addr_space_pages + ld_pte_size)
+# define vmlpt_bits (impl_va_bits - PAGE_SHIFT + pte_bits)
# define POW2(n) (1ULL << (n))
- impl_va_msb = ffz(~my_cpu_data.unimpl_va_mask) - 1;
- if (impl_va_msb < 50 || impl_va_msb > 60)
- panic("Bogus impl_va_msb value of %lu!\n", impl_va_msb);
+ impl_va_bits = ffz(~my_cpu_data.unimpl_va_mask);
- if (POW2(ld_max_addr_space_size - 1) + POW2(ld_max_vpt_size) > POW2(impl_va_msb))
+ if (impl_va_bits < 51 || impl_va_bits > 61)
+ panic("CPU has bogus IMPL_VA_MSB value of %lu!\n", impl_va_bits - 1);
+
+ /* place the VMLPT at the end of each page-table mapped region: */
+ pta = POW2(61) - POW2(vmlpt_bits);
+
+ if (POW2(mapped_space_bits) >= pta)
panic("mm/init: overlap between virtually mapped linear page table and "
"mapped kernel space!");
- pta = POW2(61) - POW2(impl_va_msb);
-#ifndef CONFIG_DISABLE_VHPT
/*
* Set the (virtually mapped linear) page table address. Bit
* 8 selects between the short and long format, bits 2-7 the
* size of the table, and bit 0 whether the VHPT walker is
* enabled.
*/
- ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 1);
-#else
- ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 0);
-#endif
+ ia64_set_pta(pta | (0 << 8) | (vmlpt_bits << 2) | VHPT_ENABLE_BIT);
}
/*
diff -urN linux-davidm/arch/ia64/mm/tlb.c lia64/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Wed Dec 6 23:15:23 2000
+++ lia64/arch/ia64/mm/tlb.c Wed Dec 6 22:26:48 2000
@@ -194,7 +194,11 @@
if (mm != current->active_mm) {
/* this does happen, but perhaps it's not worth optimizing for? */
+#ifdef CONFIG_SMP
+ flush_tlb_all();
+#else
mm->context = 0;
+#endif
return;
}
diff -urN linux-davidm/drivers/acpi/Makefile lia64/drivers/acpi/Makefile
--- linux-davidm/drivers/acpi/Makefile Wed Dec 6 23:15:23 2000
+++ lia64/drivers/acpi/Makefile Mon Nov 20 17:45:48 2000
@@ -2,7 +2,7 @@
# Makefile for the Linux ACPI interpreter
#
-SUB_DIRS :
+SUB_DIRS :=
MOD_SUB_DIRS := $(SUB_DIRS)
MOD_IN_SUB_DIRS :
ALL_SUB_DIRS := $(SUB_DIRS)
@@ -12,27 +12,33 @@
M_OBJS :
export ACPI_CFLAGS
-#ACPI_CFLAGS := -D_LINUX -DCONFIG_ACPI_KERNEL_CONFIG_CBN_FIX -DCONFIG_ACPI_KERNEL_CONFIG_ONLY -DCONFIG_ACPI_KERNEL_CONFIG_DEBUG -DCONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
-ACPI_CFLAGS := -D_LINUX -DCONFIG_ACPI_KERNEL_CONFIG_CBN_FIX -DCONFIG_ACPI_KERNEL_CONFIG_ONLY
+ifdef CONFIG_ACPI_KERNEL_CONFIG
+ ACPI_CFLAGS := -D_LINUX -DCONFIG_ACPI_KERNEL_CONFIG_ONLY
+else
+ ACPI_CFLAGS := -D_LINUX
+endif
EXTRA_CFLAGS += -I./include
EXTRA_CFLAGS += $(ACPI_CFLAGS)
# if the interpreter is used, it overrides arch/i386/kernel/acpi.c
-ifeq ($(CONFIG_ACPI_INTERPRETER),y)
+ifeq ($(CONFIG_ACPI),y)
+
SUB_DIRS += common dispatcher events hardware\
interpreter namespace parser resources tables
+
ACPI_OBJS := $(patsubst %,%.o,$(SUB_DIRS))
# ACPI_OBJS += $(patsubst %.c,%.o,$(wildcard *.c))
ifdef CONFIG_ACPI_KERNEL_CONFIG
ACPI_OBJS += acpiconf.o osconf.o os.o
else
- ACPI_OBJS += driver.o ec.o cpu.o os.o sys.o table.o
+ ACPI_OBJS += driver.o cmbatt.o cpu.o ec.o ksyms.o os.o sys.o table.o
endif
O_OBJS += $(ACPI_OBJS)
+ OX_OBJS = ksyms.o
endif
include $(TOPDIR)/Rules.make
diff -urN linux-davidm/fs/fcntl.c lia64/fs/fcntl.c
--- linux-davidm/fs/fcntl.c Wed Dec 6 18:33:28 2000
+++ lia64/fs/fcntl.c Wed Dec 6 22:36:29 2000
@@ -269,6 +269,7 @@
* to fix this will be in libc.
*/
err = filp->f_owner.pid;
+ force_successful_syscall_return();
break;
case F_SETOWN:
lock_kernel();
diff -urN linux-davidm/fs/partitions/efi.c lia64/fs/partitions/efi.c
--- linux-davidm/fs/partitions/efi.c Wed Dec 6 23:15:23 2000
+++ lia64/fs/partitions/efi.c Wed Dec 6 22:36:43 2000
@@ -4,8 +4,12 @@
* http://developer.intel.com/technology/efi/efi.htm
* efi.[ch] by Matt Domsch <Matt_Domsch@dell.com>
* Copyright 2000 Dell Computer Corporation
- * CRC routines taken from the EFI Sample Implementation,
- * 1999.12.31, lib/crc.c
+ *
+ * Note, the EFI Specification, v0.99, has a reference to
+ * Dr. Dobbs Journal, May 1994 (actually it's in May 1992)
+ * but that isn't the CRC function being used by EFI. Intel's
+ * EFI Sample Implementation shows that they use the same function
+ * as was COPYRIGHT (C) 1986 Gary S. Brown.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -25,6 +29,13 @@
* TODO:
*
* Changelog:
+ * Tue Dec 5 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Moved crc32() to linux/lib, added efi_crc32().
+ *
+ * Thu Nov 30 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Replaced Intel's CRC32 function with an equivalent
+ * non-license-restricted version.
+ *
* Wed Oct 25 2000 Matt Domsch <Matt_Domsch@dell.com>
* - Fixed the LastLBA() call to return the proper last block
*
@@ -44,6 +55,7 @@
#include <linux/smp_lock.h>
#include <asm/system.h>
#include <asm/efi.h>
+#include <linux/crc32.h>
#include "check.h"
#include "efi.h"
@@ -61,79 +73,27 @@
#define debug_printk(...)
#endif
-/* CRC routines taken from the EFI Sample Implementation,
- * 1999.12.31, lib/crc.c
- *
- * Note, the EFI Specification, v0.99, has a reference to
- * Dr. Dobbs Journal, May 1994 (actually it's in May 1992)
- * but that isn't the CRC function being used by EFI.
- */
-
-static u32 CRCTable[256] = {
- 0x00000000, 0x77073096, 0xEE0E612C, 0x990951BA, 0x076DC419, 0x706AF48F,
- 0xE963A535, 0x9E6495A3, 0x0EDB8832, 0x79DCB8A4, 0xE0D5E91E, 0x97D2D988,
- 0x09B64C2B, 0x7EB17CBD, 0xE7B82D07, 0x90BF1D91, 0x1DB71064, 0x6AB020F2,
- 0xF3B97148, 0x84BE41DE, 0x1ADAD47D, 0x6DDDE4EB, 0xF4D4B551, 0x83D385C7,
- 0x136C9856, 0x646BA8C0, 0xFD62F97A, 0x8A65C9EC, 0x14015C4F, 0x63066CD9,
- 0xFA0F3D63, 0x8D080DF5, 0x3B6E20C8, 0x4C69105E, 0xD56041E4, 0xA2677172,
- 0x3C03E4D1, 0x4B04D447, 0xD20D85FD, 0xA50AB56B, 0x35B5A8FA, 0x42B2986C,
- 0xDBBBC9D6, 0xACBCF940, 0x32D86CE3, 0x45DF5C75, 0xDCD60DCF, 0xABD13D59,
- 0x26D930AC, 0x51DE003A, 0xC8D75180, 0xBFD06116, 0x21B4F4B5, 0x56B3C423,
- 0xCFBA9599, 0xB8BDA50F, 0x2802B89E, 0x5F058808, 0xC60CD9B2, 0xB10BE924,
- 0x2F6F7C87, 0x58684C11, 0xC1611DAB, 0xB6662D3D, 0x76DC4190, 0x01DB7106,
- 0x98D220BC, 0xEFD5102A, 0x71B18589, 0x06B6B51F, 0x9FBFE4A5, 0xE8B8D433,
- 0x7807C9A2, 0x0F00F934, 0x9609A88E, 0xE10E9818, 0x7F6A0DBB, 0x086D3D2D,
- 0x91646C97, 0xE6635C01, 0x6B6B51F4, 0x1C6C6162, 0x856530D8, 0xF262004E,
- 0x6C0695ED, 0x1B01A57B, 0x8208F4C1, 0xF50FC457, 0x65B0D9C6, 0x12B7E950,
- 0x8BBEB8EA, 0xFCB9887C, 0x62DD1DDF, 0x15DA2D49, 0x8CD37CF3, 0xFBD44C65,
- 0x4DB26158, 0x3AB551CE, 0xA3BC0074, 0xD4BB30E2, 0x4ADFA541, 0x3DD895D7,
- 0xA4D1C46D, 0xD3D6F4FB, 0x4369E96A, 0x346ED9FC, 0xAD678846, 0xDA60B8D0,
- 0x44042D73, 0x33031DE5, 0xAA0A4C5F, 0xDD0D7CC9, 0x5005713C, 0x270241AA,
- 0xBE0B1010, 0xC90C2086, 0x5768B525, 0x206F85B3, 0xB966D409, 0xCE61E49F,
- 0x5EDEF90E, 0x29D9C998, 0xB0D09822, 0xC7D7A8B4, 0x59B33D17, 0x2EB40D81,
- 0xB7BD5C3B, 0xC0BA6CAD, 0xEDB88320, 0x9ABFB3B6, 0x03B6E20C, 0x74B1D29A,
- 0xEAD54739, 0x9DD277AF, 0x04DB2615, 0x73DC1683, 0xE3630B12, 0x94643B84,
- 0x0D6D6A3E, 0x7A6A5AA8, 0xE40ECF0B, 0x9309FF9D, 0x0A00AE27, 0x7D079EB1,
- 0xF00F9344, 0x8708A3D2, 0x1E01F268, 0x6906C2FE, 0xF762575D, 0x806567CB,
- 0x196C3671, 0x6E6B06E7, 0xFED41B76, 0x89D32BE0, 0x10DA7A5A, 0x67DD4ACC,
- 0xF9B9DF6F, 0x8EBEEFF9, 0x17B7BE43, 0x60B08ED5, 0xD6D6A3E8, 0xA1D1937E,
- 0x38D8C2C4, 0x4FDFF252, 0xD1BB67F1, 0xA6BC5767, 0x3FB506DD, 0x48B2364B,
- 0xD80D2BDA, 0xAF0A1B4C, 0x36034AF6, 0x41047A60, 0xDF60EFC3, 0xA867DF55,
- 0x316E8EEF, 0x4669BE79, 0xCB61B38C, 0xBC66831A, 0x256FD2A0, 0x5268E236,
- 0xCC0C7795, 0xBB0B4703, 0x220216B9, 0x5505262F, 0xC5BA3BBE, 0xB2BD0B28,
- 0x2BB45A92, 0x5CB36A04, 0xC2D7FFA7, 0xB5D0CF31, 0x2CD99E8B, 0x5BDEAE1D,
- 0x9B64C2B0, 0xEC63F226, 0x756AA39C, 0x026D930A, 0x9C0906A9, 0xEB0E363F,
- 0x72076785, 0x05005713, 0x95BF4A82, 0xE2B87A14, 0x7BB12BAE, 0x0CB61B38,
- 0x92D28E9B, 0xE5D5BE0D, 0x7CDCEFB7, 0x0BDBDF21, 0x86D3D2D4, 0xF1D4E242,
- 0x68DDB3F8, 0x1FDA836E, 0x81BE16CD, 0xF6B9265B, 0x6FB077E1, 0x18B74777,
- 0x88085AE6, 0xFF0F6A70, 0x66063BCA, 0x11010B5C, 0x8F659EFF, 0xF862AE69,
- 0x616BFFD3, 0x166CCF45, 0xA00AE278, 0xD70DD2EE, 0x4E048354, 0x3903B3C2,
- 0xA7672661, 0xD06016F7, 0x4969474D, 0x3E6E77DB, 0xAED16A4A, 0xD9D65ADC,
- 0x40DF0B66, 0x37D83BF0, 0xA9BCAE53, 0xDEBB9EC5, 0x47B2CF7F, 0x30B5FFE9,
- 0xBDBDF21C, 0xCABAC28A, 0x53B39330, 0x24B4A3A6, 0xBAD03605, 0xCDD70693,
- 0x54DE5729, 0x23D967BF, 0xB3667A2E, 0xC4614AB8, 0x5D681B02, 0x2A6F2B94,
- 0xB40BBE37, 0xC30C8EA1, 0x5A05DF1B, 0x2D02EF8D
-};
+/************************************************************
+ * efi_crc32()
+ * Requires:
+ * - a buffer of length len
+ * Modifies: nothing
+ * Returns:
+ * EFI-style CRC32 value for buf
+ *
+ * This function uses the crc32 function by Gary S. Brown,
+ * but seeds the function with ~0, and xor's with ~0 at the end.
+ ************************************************************/
-static u32
-CalculateCrc (void *_pt, u32 Size)
+static inline u32
+efi_crc32(const void *buf, unsigned long len)
{
- u8 *pt = (u8 *)_pt;
- register u32 Crc;
-
- /* compute crc */
- Crc = 0xffffffff;
- while (Size) {
- Crc = (Crc >> 8) ^ CRCTable[(__u8) Crc ^ *pt];
- pt += 1;
- Size -= 1;
- }
- Crc = Crc ^ 0xffffffff;
- return Crc;
+ return (crc32(buf, len, ~0L) ^ ~0L);
}
+
/************************************************************
* IsLegacyMBRValid()
* Requires:
@@ -387,7 +347,7 @@
/* Check the GUID Partition Table CRC */
origcrc = (*gpt)->HeaderCRC32;
(*gpt)->HeaderCRC32 = 0;
- crc = CalculateCrc(*gpt, (*gpt)->HeaderSize);
+ crc = efi_crc32((const unsigned char *)(*gpt), (*gpt)->HeaderSize);
if (crc != origcrc) {
@@ -415,7 +375,8 @@
}
/* Check the GUID Partition Entry Array CRC */
- crc = CalculateCrc(*ptes, (*gpt)->NumberOfPartitionEntries *
+ crc = efi_crc32((const unsigned char *)(*ptes),
+ (*gpt)->NumberOfPartitionEntries *
(*gpt)->SizeOfPartitionEntry);
if (crc != (*gpt)->PartitionEntryArrayCRC32) {
diff -urN linux-davidm/include/asm-ia64/a.out.h lia64/include/asm-ia64/a.out.h
--- linux-davidm/include/asm-ia64/a.out.h Sun Feb 6 18:42:40 2000
+++ lia64/include/asm-ia64/a.out.h Wed Dec 6 22:36:55 2000
@@ -7,14 +7,13 @@
* probably would be better to clean up binfmt_elf.c so it does not
* necessarily depend on there being a.out support.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/types.h>
-struct exec
-{
+struct exec {
unsigned long a_info;
unsigned long a_text;
unsigned long a_data;
@@ -31,7 +30,7 @@
#define N_TXTOFF(x) 0
#ifdef __KERNEL__
-# define STACK_TOP 0xa000000000000000UL
+# define STACK_TOP (0x8000000000000000UL + (1UL << (4*PAGE_SHIFT - 12)))
# define IA64_RBS_BOT (STACK_TOP - 0x80000000L) /* bottom of register backing store */
#endif
diff -urN linux-davidm/include/asm-ia64/acpi-ext.h lia64/include/asm-ia64/acpi-ext.h
--- linux-davidm/include/asm-ia64/acpi-ext.h Mon Oct 9 17:54:57 2000
+++ lia64/include/asm-ia64/acpi-ext.h Wed Dec 6 22:37:07 2000
@@ -8,19 +8,27 @@
*
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
+ * ACPI 2.0 specification
*/
#include <linux/types.h>
+#pragma pack(1)
#define ACPI_RSDP_SIG "RSD PTR " /* Trailing space required */
#define ACPI_RSDP_SIG_LEN 8
typedef struct {
char signature[8];
u8 checksum;
char oem_id[6];
- char reserved; /* Must be 0 */
- struct acpi_rsdt *rsdt;
-} acpi_rsdp_t;
+ u8 revision;
+ u32 rsdt;
+ u32 length;
+ struct acpi_xsdt *xsdt;
+ u8 ext_checksum;
+ u8 reserved[3];
+} acpi20_rsdp_t;
typedef struct {
char signature[4];
@@ -32,20 +40,73 @@
u32 oem_revision;
u32 creator_id;
u32 creator_revision;
- char reserved[4];
} acpi_desc_table_hdr_t;
#define ACPI_RSDT_SIG "RSDT"
#define ACPI_RSDT_SIG_LEN 4
-typedef struct acpi_rsdt {
+typedef struct {
+ acpi_desc_table_hdr_t header;
+ u8 reserved[4];
+ u32 entry_ptrs[1]; /* Not really . . . */
+} acpi20_rsdt_t;
+
+#define ACPI_XSDT_SIG "XSDT"
+#define ACPI_XSDT_SIG_LEN 4
+typedef struct acpi_xsdt {
acpi_desc_table_hdr_t header;
unsigned long entry_ptrs[1]; /* Not really . . . */
+} acpi_xsdt_t;
+
+/* Common structures for ACPI 2.0 and 0.71 */
+
+typedef struct acpi_entry_iosapic {
+ u8 type;
+ u8 length;
+ u8 id;
+ u8 reserved;
+ u32 irq_base; /* start of IRQ's this IOSAPIC is responsible for. */
+ unsigned long address; /* Address of this IOSAPIC */
+} acpi_entry_iosapic_t;
+
+/* Local SAPIC flags */
+#define LSAPIC_ENABLED (1<<0)
+#define LSAPIC_PERFORMANCE_RESTRICTED (1<<1)
+#define LSAPIC_PRESENT (1<<2)
+
+/* Defines legacy IRQ->pin mapping */
+typedef struct {
+ u8 type;
+ u8 length;
+ u8 bus; /* Constant 0 = ISA */
+ u8 isa_irq; /* ISA IRQ # */
+ u32 pin; /* called vector in spec; really IOSAPIC pin number */
+ u16 flags; /* Edge/Level trigger & High/Low active */
+} acpi_entry_int_override_t;
+
+#define INT_OVERRIDE_ACTIVE_LOW 0x03
+#define INT_OVERRIDE_LEVEL_TRIGGER 0x0d
+
+/* IA64 ext 0.71 */
+
+typedef struct {
+ char signature[8];
+ u8 checksum;
+ char oem_id[6];
+ char reserved; /* Must be 0 */
+ struct acpi_rsdt *rsdt;
+} acpi_rsdp_t;
+
+typedef struct {
+ acpi_desc_table_hdr_t header;
+ u8 reserved[4];
+ unsigned long entry_ptrs[1]; /* Not really . . . */
} acpi_rsdt_t;
#define ACPI_SAPIC_SIG "SPIC"
#define ACPI_SAPIC_SIG_LEN 4
typedef struct {
acpi_desc_table_hdr_t header;
+ u8 reserved[4];
unsigned long interrupt_block;
} acpi_sapic_t;
@@ -55,11 +116,6 @@
#define ACPI_ENTRY_INT_SRC_OVERRIDE 2
#define ACPI_ENTRY_PLATFORM_INT_SOURCE 3 /* Unimplemented */
-/* Local SAPIC flags */
-#define LSAPIC_ENABLED (1<<0)
-#define LSAPIC_PERFORMANCE_RESTRICTED (1<<1)
-#define LSAPIC_PRESENT (1<<2)
-
typedef struct acpi_entry_lsapic {
u8 type;
u8 length;
@@ -69,42 +125,71 @@
u8 eid;
} acpi_entry_lsapic_t;
-typedef struct acpi_entry_iosapic {
+typedef struct {
u8 type;
u8 length;
- u16 reserved;
- u32 irq_base; /* start of IRQ's this IOSAPIC is responsible for. */
- unsigned long address; /* Address of this IOSAPIC */
-} acpi_entry_iosapic_t;
+ u16 flags;
+ u8 int_type;
+ u8 id;
+ u8 eid;
+ u8 iosapic_vector;
+ u8 reserved[4];
+ u32 global_vector;
+} acpi_entry_platform_src_t;
-/* Defines legacy IRQ->pin mapping */
+/* ACPI 2.0 with 1.3 errata specific structures */
+
+#define ACPI_MADT_SIG "APIC"
+#define ACPI_MADT_SIG_LEN 4
typedef struct {
+ acpi_desc_table_hdr_t header;
+ u32 lapic_address;
+ u32 flags;
+} acpi_madt_t;
+
+/* acpi 2.0 MADT structure types */
+#define ACPI20_ENTRY_LOCAL_APIC 0
+#define ACPI20_ENTRY_IO_APIC 1
+#define ACPI20_ENTRY_INT_SRC_OVERRIDE 2
+#define ACPI20_ENTRY_NMI_SOURCE 3
+#define ACPI20_ENTRY_LOCAL_APIC_NMI 4
+#define ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE 5
+#define ACPI20_ENTRY_IO_SAPIC 6
+#define ACPI20_ENTRY_LOCAL_SAPIC 7
+#define ACPI20_ENTRY_PLATFORM_INT_SOURCE 8
+
+typedef struct acpi20_entry_lsapic {
u8 type;
u8 length;
- u8 bus; /* Constant 0 = ISA */
- u8 isa_irq; /* ISA IRQ # */
- u8 pin; /* called vector in spec; really IOSAPIC pin number */
- u32 flags; /* Edge/Level trigger & High/Low active */
- u8 reserved[6];
-} acpi_entry_int_override_t;
-#define INT_OVERRIDE_ACTIVE_LOW 0x03
-#define INT_OVERRIDE_LEVEL_TRIGGER 0x0d
+ u8 acpi_processor_id;
+ u8 id;
+ u8 eid;
+ u8 reserved[3];
+ u32 flags;
+} acpi20_entry_lsapic_t;
+
+typedef struct acpi20_entry_lapic_addr_override {
+ u8 type;
+ u8 length;
+ u8 reserved[2];
+ unsigned long lapic_address;
+} acpi20_entry_lapic_addr_override_t;
typedef struct {
u8 type;
u8 length;
- u32 flags;
+ u16 flags;
u8 int_type;
u8 id;
u8 eid;
u8 iosapic_vector;
- unsigned long reserved;
- unsigned long global_vector;
-} acpi_entry_platform_src_t;
+ u32 global_vector;
+} acpi20_entry_platform_src_t;
+extern int acpi20_parse(acpi20_rsdp_t *);
extern int acpi_parse(acpi_rsdp_t *);
extern const char *acpi_get_sysname (void);
extern void (*acpi_idle) (void); /* power-management idle function, if any */
-
+#pragma pack()
#endif /* _ASM_IA64_ACPI_EXT_H */
diff -urN linux-davidm/include/asm-ia64/efi.h lia64/include/asm-ia64/efi.h
--- linux-davidm/include/asm-ia64/efi.h Mon Oct 9 17:54:58 2000
+++ lia64/include/asm-ia64/efi.h Wed Dec 6 23:13:14 2000
@@ -168,6 +168,9 @@
#define ACPI_TABLE_GUID \
((efi_guid_t) { 0xeb9d2d30, 0x2d88, 0x11d3, { 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d }})
+#define ACPI_20_TABLE_GUID \
+ ((efi_guid_t) { 0x8868e871, 0xe4f1, 0x11d3, { 0xbc, 0x22, 0x0, 0x80, 0xc7, 0x3c, 0x88, 0x81 }})
+
#define SMBIOS_TABLE_GUID \
((efi_guid_t) { 0xeb9d2d31, 0x2d88, 0x11d3, { 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d }})
@@ -204,7 +207,8 @@
extern struct efi {
efi_system_table_t *systab; /* EFI system table */
void *mps; /* MPS table */
- void *acpi; /* ACPI table */
+ void *acpi; /* ACPI table (IA64 ext 0.71) */
+ void *acpi20; /* ACPI table (ACPI 2.0) */
void *smbios; /* SM BIOS table */
void *sal_systab; /* SAL system table */
void *boot_info; /* boot info table */
diff -urN linux-davidm/include/asm-ia64/iosapic.h lia64/include/asm-ia64/iosapic.h
--- linux-davidm/include/asm-ia64/iosapic.h Wed Dec 6 23:15:30 2000
+++ lia64/include/asm-ia64/iosapic.h Wed Dec 6 22:37:31 2000
@@ -56,7 +56,7 @@
extern void __init iosapic_init (unsigned long address, unsigned int base_irq);
extern void iosapic_register_legacy_irq (unsigned long irq, unsigned long pin,
unsigned long polarity, unsigned long trigger);
-extern void iosapic_pci_fixup (void);
+extern void iosapic_pci_fixup (int);
# endif /* !__ASSEMBLY__ */
#endif /* __ASM_IA64_IOSAPIC_H */
diff -urN linux-davidm/include/asm-ia64/machvec.h lia64/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Wed Dec 6 23:15:30 2000
+++ lia64/include/asm-ia64/machvec.h Wed Dec 6 23:13:06 2000
@@ -25,7 +25,7 @@
typedef void ia64_mv_setup_t (char **);
typedef void ia64_mv_irq_init_t (void);
-typedef void ia64_mv_pci_fixup_t (void);
+typedef void ia64_mv_pci_fixup_t (int);
typedef unsigned long ia64_mv_map_nr_t (unsigned long);
typedef void ia64_mv_mca_init_t (void);
typedef void ia64_mv_mca_handler_t (void);
diff -urN linux-davidm/include/asm-ia64/machvec_sn1.h lia64/include/asm-ia64/machvec_sn1.h
--- linux-davidm/include/asm-ia64/machvec_sn1.h Wed Dec 6 23:15:31 2000
+++ lia64/include/asm-ia64/machvec_sn1.h Wed Dec 6 22:37:56 2000
@@ -5,6 +5,13 @@
extern ia64_mv_irq_init_t sn1_irq_init;
extern ia64_mv_map_nr_t sn1_map_nr;
extern ia64_mv_send_ipi_t sn1_send_IPI;
+extern ia64_mv_pci_fixup_t sn1_pci_fixup;
+extern ia64_mv_inb_t sn1_inb;
+extern ia64_mv_inw_t sn1_inw;
+extern ia64_mv_inl_t sn1_inl;
+extern ia64_mv_outb_t sn1_outb;
+extern ia64_mv_outw_t sn1_outw;
+extern ia64_mv_outl_t sn1_outl;
/*
* This stuff has dual use!
@@ -18,5 +25,12 @@
#define platform_irq_init sn1_irq_init
#define platform_map_nr sn1_map_nr
#define platform_send_ipi sn1_send_IPI
+#define platform_pci_fixup sn1_pci_fixup
+#define platform_inb sn1_inb
+#define platform_inw sn1_inw
+#define platform_inl sn1_inl
+#define platform_outb sn1_outb
+#define platform_outw sn1_outw
+#define platform_outl sn1_outl
#endif /* _ASM_IA64_MACHVEC_SN1_h */
diff -urN linux-davidm/include/asm-ia64/mca.h lia64/include/asm-ia64/mca.h
--- linux-davidm/include/asm-ia64/mca.h Wed Dec 6 23:15:31 2000
+++ lia64/include/asm-ia64/mca.h Wed Dec 6 23:14:18 2000
@@ -46,11 +46,11 @@
u64 cmcv_regval;
struct {
u64 cmcr_vector : 8;
- u64 cmcr_ignored1 : 47;
+ u64 cmcr_reserved1 : 4;
+ u64 cmcr_ignored1 : 1;
+ u64 cmcr_reserved2 : 3;
u64 cmcr_mask : 1;
- u64 cmcr_reserved1 : 3;
- u64 cmcr_ignored2 : 1;
- u64 cmcr_reserved2 : 4;
+ u64 cmcr_ignored2 : 47;
} cmcv_reg_s;
} cmcv_reg_t;
diff -urN linux-davidm/include/asm-ia64/offsets.h lia64/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Mon Oct 9 17:54:59 2000
+++ lia64/include/asm-ia64/offsets.h Wed Dec 6 22:45:34 2000
@@ -11,7 +11,7 @@
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 3328 /* 0xd00 */
+#define IA64_TASK_SIZE 3472 /* 0xd90 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -21,10 +21,10 @@
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 1424 /* 0x590 */
-#define IA64_TASK_THREAD_KSP_OFFSET 1424 /* 0x590 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 3184 /* 0xc70 */
-#define IA64_TASK_PID_OFFSET 188 /* 0xbc */
+#define IA64_TASK_THREAD_OFFSET 960 /* 0x3c0 */
+#define IA64_TASK_THREAD_KSP_OFFSET 960 /* 0x3c0 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 3328 /* 0xd00 */
+#define IA64_TASK_PID_OFFSET 196 /* 0xc4 */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
#define IA64_PT_REGS_CR_IIP_OFFSET 8 /* 0x8 */
@@ -115,7 +115,7 @@
#define IA64_SWITCH_STACK_AR_UNAT_OFFSET 528 /* 0x210 */
#define IA64_SWITCH_STACK_AR_RNAT_OFFSET 536 /* 0x218 */
#define IA64_SWITCH_STACK_AR_BSPSTORE_OFFSET 544 /* 0x220 */
-#define IA64_SWITCH_STACK_PR_OFFSET 464 /* 0x1d0 */
+#define IA64_SWITCH_STACK_PR_OFFSET 552 /* 0x228 */
#define IA64_SIGCONTEXT_AR_BSP_OFFSET 72 /* 0x48 */
#define IA64_SIGCONTEXT_AR_RNAT_OFFSET 80 /* 0x50 */
#define IA64_SIGCONTEXT_FLAGS_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/page.h lia64/include/asm-ia64/page.h
--- linux-davidm/include/asm-ia64/page.h Wed Dec 6 23:15:31 2000
+++ lia64/include/asm-ia64/page.h Wed Dec 6 23:13:06 2000
@@ -92,21 +92,17 @@
*/
#define MAP_NR_DENSE(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
-/*
- * This variant works well for the SGI SN1 architecture (which does have huge
- * holes in the memory address space).
- */
-#define MAP_NR_SN1(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
-
#ifdef CONFIG_IA64_GENERIC
# include <asm/machvec.h>
-# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
-#elif defined (CONFIG_IA64_SN_SGI_SN1)
-# define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr))
+# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
+#elif defined (CONFIG_IA64_SGI_SN1)
+# ifndef CONFIG_DISCONTIGMEM
+# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
+# endif
#else
-# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
+# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
#endif
-#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
+#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
typedef union ia64_va {
struct {
diff -urN linux-davidm/include/asm-ia64/pgtable.h lia64/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Wed Dec 6 23:15:31 2000
+++ lia64/include/asm-ia64/pgtable.h Wed Dec 6 23:13:10 2000
@@ -135,7 +135,7 @@
#define __P011 PAGE_READONLY /* ditto */
#define __P100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
#define __P101 PAGE_COPY
- #define __P110 PAGE_COPY
+#define __P110 PAGE_COPY
#define __P111 PAGE_COPY
#define __S000 PAGE_NONE
@@ -214,13 +214,13 @@
/*
* On some architectures, special things need to be done when setting
- * the PTE in a page table. Nothing special needs to be on ia-64.
+ * the PTE in a page table. Nothing special needs to be on IA-64.
*/
#define set_pte(ptep, pteval) (*(ptep) = (pteval))
-#define VMALLOC_START (0xa000000000000000+2*PAGE_SIZE)
+#define VMALLOC_START (0xa000000000000000 + 2*PAGE_SIZE)
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
-#define VMALLOC_END (0xa000000000000000+ (1UL << (4*PAGE_SHIFT - 13)))
+#define VMALLOC_END (0xa000000000000000 + (1UL << (4*PAGE_SHIFT - 12)))
/*
* BAD_PAGETABLE is used when we need a bogus page-table, while
diff -urN linux-davidm/include/asm-ia64/ptrace.h lia64/include/asm-ia64/ptrace.h
--- linux-davidm/include/asm-ia64/ptrace.h Mon Oct 9 17:55:00 2000
+++ lia64/include/asm-ia64/ptrace.h Wed Dec 6 23:13:06 2000
@@ -2,8 +2,8 @@
#define _ASM_IA64_PTRACE_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
*
* 12/07/98 S. Eranian added pt_regs & switch_stack
@@ -74,6 +74,9 @@
#ifndef __ASSEMBLY__
+#include <asm/current.h>
+#include <asm/page.h>
+
/*
* This struct defines the way the registers are saved on system
* calls.
@@ -236,7 +239,14 @@
extern void ia64_increment_ip (struct pt_regs *pt);
extern void ia64_decrement_ip (struct pt_regs *pt);
-#endif
+
+static inline void
+force_successful_syscall_return (void)
+{
+ ia64_task_regs(current)->r8 = 0;
+}
+
+#endif /* !__KERNEL__ */
#endif /* !__ASSEMBLY__ */
diff -urN linux-davidm/include/asm-ia64/unistd.h lia64/include/asm-ia64/unistd.h
--- linux-davidm/include/asm-ia64/unistd.h Wed Dec 6 23:15:31 2000
+++ lia64/include/asm-ia64/unistd.h Wed Dec 6 22:39:55 2000
@@ -196,7 +196,7 @@
#define __NR_getsockopt 1204
#define __NR_sendmsg 1205
#define __NR_recvmsg 1206
-#define __NR_sys_pivot_root 1207
+#define __NR_pivot_root 1207
#define __NR_mincore 1208
#define __NR_madvise 1209
#define __NR_stat 1210
diff -urN linux-davidm/include/linux/crc32.h lia64/include/linux/crc32.h
--- linux-davidm/include/linux/crc32.h Wed Dec 31 16:00:00 1969
+++ lia64/include/linux/crc32.h Wed Dec 6 22:40:05 2000
@@ -0,0 +1,17 @@
+/*
+ * crc32.h
+ * See linux/lib/crc32.c for license and changes
+ */
+#ifndef _LINUX_CRC32_H
+#define _LINUX_CRC32_H
+
+#include <linux/types.h>
+
+/*
+ * This computes a 32 bit CRC of the data in the buffer, and returns the CRC.
+ * The polynomial used is 0xedb88320.
+ */
+
+extern u32 crc32 (const void *buf, unsigned long len, u32 seed);
+
+#endif /* _LINUX_CRC32_H */
diff -urN linux-davidm/lib/Makefile lia64/lib/Makefile
--- linux-davidm/lib/Makefile Fri Jul 7 16:41:11 2000
+++ lia64/lib/Makefile Wed Dec 6 22:41:02 2000
@@ -7,7 +7,7 @@
#
L_TARGET := lib.a
-L_OBJS := errno.o ctype.o string.o vsprintf.o brlock.o
+L_OBJS := errno.o ctype.o string.o vsprintf.o brlock.o crc32.o
LX_OBJS := cmdline.o
ifneq ($(CONFIG_HAVE_DEC_LOCK),y)
diff -urN linux-davidm/lib/crc32.c lia64/lib/crc32.c
--- linux-davidm/lib/crc32.c Wed Dec 31 16:00:00 1969
+++ lia64/lib/crc32.c Wed Dec 6 22:41:15 2000
@@ -0,0 +1,125 @@
+/*
+ * Dec 5, 2000 Matt Domsch <Matt_Domsch@dell.com>
+ * - Copied crc32.c from the linux/drivers/net/cipe directory.
+ * - Now pass seed as an arg
+ * - changed unsigned long to u32, added #include<linux/types.h>
+ * - changed len to be an unsigned long
+ * - changed crc32val to be a register
+ * - License remains unchanged! It's still GPL-compatible!
+ */
+
+ /* =============================== */
+ /* COPYRIGHT (C) 1986 Gary S. Brown. You may use this program, or */
+ /* code or tables extracted from it, as desired without restriction. */
+ /* */
+ /* First, the polynomial itself and its table of feedback terms. The */
+ /* polynomial is */
+ /* X^32+X^26+X^23+X^22+X^16+X^12+X^11+X^10+X^8+X^7+X^5+X^4+X^2+X^1+X^0 */
+ /* */
+ /* Note that we take it "backwards" and put the highest-order term in */
+ /* the lowest-order bit. The X^32 term is "implied"; the LSB is the */
+ /* X^31 term, etc. The X^0 term (usually shown as "+1") results in */
+ /* the MSB being 1. */
+ /* */
+ /* Note that the usual hardware shift register implementation, which */
+ /* is what we're using (we're merely optimizing it by doing eight-bit */
+ /* chunks at a time) shifts bits into the lowest-order term. In our */
+ /* implementation, that means shifting towards the right. Why do we */
+ /* do it this way? Because the calculated CRC must be transmitted in */
+ /* order from highest-order term to lowest-order term. UARTs transmit */
+ /* characters in order from LSB to MSB. By storing the CRC this way, */
+ /* we hand it to the UART in the order low-byte to high-byte; the UART */
+ /* sends each low-bit to high-bit; and the result is transmission bit */
+ /* by bit from highest- to lowest-order term without requiring any bit */
+ /* shuffling on our part. Reception works similarly. */
+ /* */
+ /* The feedback terms table consists of 256, 32-bit entries. Notes: */
+ /* */
+ /* The table can be generated at runtime if desired; code to do so */
+ /* is shown later. It might not be obvious, but the feedback */
+ /* terms simply represent the results of eight shift/xor opera- */
+ /* tions for all combinations of data and CRC register values. */
+ /* */
+ /* The values must be right-shifted by eight bits by the "updcrc" */
+ /* logic; the shift must be unsigned (bring in zeroes). On some */
+ /* hardware you could probably optimize the shift in assembler by */
+ /* using byte-swap instructions. */
+ /* polynomial $edb88320 */
+ /* */
+ /* -------------------------------------------------------------------- */
+
+#include <linux/crc32.h>
+
+static u32 crc32_tab[] = {
+ 0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L,
+ 0x706af48fL, 0xe963a535L, 0x9e6495a3L, 0x0edb8832L, 0x79dcb8a4L,
+ 0xe0d5e91eL, 0x97d2d988L, 0x09b64c2bL, 0x7eb17cbdL, 0xe7b82d07L,
+ 0x90bf1d91L, 0x1db71064L, 0x6ab020f2L, 0xf3b97148L, 0x84be41deL,
+ 0x1adad47dL, 0x6ddde4ebL, 0xf4d4b551L, 0x83d385c7L, 0x136c9856L,
+ 0x646ba8c0L, 0xfd62f97aL, 0x8a65c9ecL, 0x14015c4fL, 0x63066cd9L,
+ 0xfa0f3d63L, 0x8d080df5L, 0x3b6e20c8L, 0x4c69105eL, 0xd56041e4L,
+ 0xa2677172L, 0x3c03e4d1L, 0x4b04d447L, 0xd20d85fdL, 0xa50ab56bL,
+ 0x35b5a8faL, 0x42b2986cL, 0xdbbbc9d6L, 0xacbcf940L, 0x32d86ce3L,
+ 0x45df5c75L, 0xdcd60dcfL, 0xabd13d59L, 0x26d930acL, 0x51de003aL,
+ 0xc8d75180L, 0xbfd06116L, 0x21b4f4b5L, 0x56b3c423L, 0xcfba9599L,
+ 0xb8bda50fL, 0x2802b89eL, 0x5f058808L, 0xc60cd9b2L, 0xb10be924L,
+ 0x2f6f7c87L, 0x58684c11L, 0xc1611dabL, 0xb6662d3dL, 0x76dc4190L,
+ 0x01db7106L, 0x98d220bcL, 0xefd5102aL, 0x71b18589L, 0x06b6b51fL,
+ 0x9fbfe4a5L, 0xe8b8d433L, 0x7807c9a2L, 0x0f00f934L, 0x9609a88eL,
+ 0xe10e9818L, 0x7f6a0dbbL, 0x086d3d2dL, 0x91646c97L, 0xe6635c01L,
+ 0x6b6b51f4L, 0x1c6c6162L, 0x856530d8L, 0xf262004eL, 0x6c0695edL,
+ 0x1b01a57bL, 0x8208f4c1L, 0xf50fc457L, 0x65b0d9c6L, 0x12b7e950L,
+ 0x8bbeb8eaL, 0xfcb9887cL, 0x62dd1ddfL, 0x15da2d49L, 0x8cd37cf3L,
+ 0xfbd44c65L, 0x4db26158L, 0x3ab551ceL, 0xa3bc0074L, 0xd4bb30e2L,
+ 0x4adfa541L, 0x3dd895d7L, 0xa4d1c46dL, 0xd3d6f4fbL, 0x4369e96aL,
+ 0x346ed9fcL, 0xad678846L, 0xda60b8d0L, 0x44042d73L, 0x33031de5L,
+ 0xaa0a4c5fL, 0xdd0d7cc9L, 0x5005713cL, 0x270241aaL, 0xbe0b1010L,
+ 0xc90c2086L, 0x5768b525L, 0x206f85b3L, 0xb966d409L, 0xce61e49fL,
+ 0x5edef90eL, 0x29d9c998L, 0xb0d09822L, 0xc7d7a8b4L, 0x59b33d17L,
+ 0x2eb40d81L, 0xb7bd5c3bL, 0xc0ba6cadL, 0xedb88320L, 0x9abfb3b6L,
+ 0x03b6e20cL, 0x74b1d29aL, 0xead54739L, 0x9dd277afL, 0x04db2615L,
+ 0x73dc1683L, 0xe3630b12L, 0x94643b84L, 0x0d6d6a3eL, 0x7a6a5aa8L,
+ 0xe40ecf0bL, 0x9309ff9dL, 0x0a00ae27L, 0x7d079eb1L, 0xf00f9344L,
+ 0x8708a3d2L, 0x1e01f268L, 0x6906c2feL, 0xf762575dL, 0x806567cbL,
+ 0x196c3671L, 0x6e6b06e7L, 0xfed41b76L, 0x89d32be0L, 0x10da7a5aL,
+ 0x67dd4accL, 0xf9b9df6fL, 0x8ebeeff9L, 0x17b7be43L, 0x60b08ed5L,
+ 0xd6d6a3e8L, 0xa1d1937eL, 0x38d8c2c4L, 0x4fdff252L, 0xd1bb67f1L,
+ 0xa6bc5767L, 0x3fb506ddL, 0x48b2364bL, 0xd80d2bdaL, 0xaf0a1b4cL,
+ 0x36034af6L, 0x41047a60L, 0xdf60efc3L, 0xa867df55L, 0x316e8eefL,
+ 0x4669be79L, 0xcb61b38cL, 0xbc66831aL, 0x256fd2a0L, 0x5268e236L,
+ 0xcc0c7795L, 0xbb0b4703L, 0x220216b9L, 0x5505262fL, 0xc5ba3bbeL,
+ 0xb2bd0b28L, 0x2bb45a92L, 0x5cb36a04L, 0xc2d7ffa7L, 0xb5d0cf31L,
+ 0x2cd99e8bL, 0x5bdeae1dL, 0x9b64c2b0L, 0xec63f226L, 0x756aa39cL,
+ 0x026d930aL, 0x9c0906a9L, 0xeb0e363fL, 0x72076785L, 0x05005713L,
+ 0x95bf4a82L, 0xe2b87a14L, 0x7bb12baeL, 0x0cb61b38L, 0x92d28e9bL,
+ 0xe5d5be0dL, 0x7cdcefb7L, 0x0bdbdf21L, 0x86d3d2d4L, 0xf1d4e242L,
+ 0x68ddb3f8L, 0x1fda836eL, 0x81be16cdL, 0xf6b9265bL, 0x6fb077e1L,
+ 0x18b74777L, 0x88085ae6L, 0xff0f6a70L, 0x66063bcaL, 0x11010b5cL,
+ 0x8f659effL, 0xf862ae69L, 0x616bffd3L, 0x166ccf45L, 0xa00ae278L,
+ 0xd70dd2eeL, 0x4e048354L, 0x3903b3c2L, 0xa7672661L, 0xd06016f7L,
+ 0x4969474dL, 0x3e6e77dbL, 0xaed16a4aL, 0xd9d65adcL, 0x40df0b66L,
+ 0x37d83bf0L, 0xa9bcae53L, 0xdebb9ec5L, 0x47b2cf7fL, 0x30b5ffe9L,
+ 0xbdbdf21cL, 0xcabac28aL, 0x53b39330L, 0x24b4a3a6L, 0xbad03605L,
+ 0xcdd70693L, 0x54de5729L, 0x23d967bfL, 0xb3667a2eL, 0xc4614ab8L,
+ 0x5d681b02L, 0x2a6f2b94L, 0xb40bbe37L, 0xc30c8ea1L, 0x5a05df1bL,
+ 0x2d02ef8dL
+ };
+
+/* Return a 32-bit CRC of the contents of the buffer. */
+
+u32
+crc32(const void *buf, unsigned long len, u32 seed)
+{
+ unsigned long i;
+ register u32 crc32val;
+ const unsigned char *s = buf;
+
+ crc32val = seed;
+ for (i = 0; i < len; i ++)
+ {
+ crc32val = crc32_tab[(crc32val ^ s[i]) & 0xff] ^
+ (crc32val >> 8);
+ }
+ return crc32val;
+}
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.0-test11)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (27 preceding siblings ...)
2000-12-07 8:26 ` [Linux-ia64] kernel update (relative to 2.4.0-test11) David Mosberger
@ 2000-12-07 21:57 ` David Mosberger
2000-12-15 5:00 ` [Linux-ia64] kernel update (relative to 2.4.0-test12) David Mosberger
` (186 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-12-07 21:57 UTC (permalink / raw)
To: linux-ia64
David> CAVEAT: the kernel now relies on PAL_VM_SUMMARY returning the
David> correct value for IMPL_VA_MSB. If you have old firmware,
David> this may cause the kernel to fail to boot. To find out, look
David> for the message of the form:
David> CPU 0: XX virtual and YY physical address bits
David> if XX is anything other than 50, you need to upgrade your
David> firmware.
Make that 51 (IMPL_VA_MSB is 50 on Itanium, i.e., the number of
virtual address bits is 51; some day I'll get this right... ;-)
Thanks to Matt for pointing this out and sorry for the confusion.
You can check whether your firmware is recent enough by doing:
fgrep Virtual /proc/pal/cpu0/vm_info
If it displays "51 bits", you're in fine shape.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.0-test12)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (28 preceding siblings ...)
2000-12-07 21:57 ` David Mosberger
@ 2000-12-15 5:00 ` David Mosberger
2000-12-15 22:43 ` Nathan Straz
` (185 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2000-12-15 5:00 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in files linux-2.4.0-test12-ia64-001214.diff*
Summary of changes:
- Jonathan Nicklin: per-CPU interrupts now have their own IRQ handler
to avoid unnecessary serialization due to irq descriptor locking
- Kanoj et al: various SGI SN1 updates
- Support for A1 step CPUs is gone
- The software I/O TLB implementation has moved from kernel/pci-dma.c
to lib/swiotlb.c and the function names got renamed accordingly.
This makes room for other (platform-dependent) implementations of
the PCI DMA interface.
- Fix alternate TLB handlers to always set access rights to RWX
- Fix Dirty bit handler to also set the Access bit (this is just
a performance optimization)
- NaT demining: avoid crashing the kernel when the user passes NaT
values to a system call
- re-structure the way i-cache flushing is done
- fix GENERIC build
Caveat: I consider this patch somewhat experimental because the
i-cache flush restructuring changed the approach to cache flushing
quite fundamentally. The new approach is all-improved and gives a
nice performance boost when execve()ing binaries repeatedly. However,
there is a small risk that something doesn't quite work the way it's
supposed to. Thus, I'd like to see everyone giving this kernel a good
workout and to report any new problems.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.0-test12-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/Documentation/Configure.help Thu Dec 14 15:08:15 2000
@@ -16931,12 +16931,6 @@
with an A-step CPU. You have an A-step CPU if the "revision" field in
/proc/cpuinfo is 0.
-Enable Itanium A1-step specific code
-CONFIG_ITANIUM_A1_SPECIFIC
- Select this option to build a kernel for an Itanium prototype system
- with an A1-step CPU. If you don't know whether you have an A1-step CPU,
- you probably don't and you can answer "no" here.
-
Enable Itanium B-step specific code
CONFIG_ITANIUM_BSTEP_SPECIFIC
Select this option to build a kernel for an Itanium prototype system
diff -urN linux-davidm/Documentation/IO-mapping.txt linux-2.4.0-test12-lia/Documentation/IO-mapping.txt
--- linux-davidm/Documentation/IO-mapping.txt Fri Oct 27 10:58:02 2000
+++ linux-2.4.0-test12-lia/Documentation/IO-mapping.txt Thu Dec 14 14:06:14 2000
@@ -1,3 +1,9 @@
+[ NOTE: The virt_to_bus() and bus_to_virt() functions have been
+ superseded by the functionality provided by the PCI DMA
+ interface (see Documentation/DMA-mapping.txt). They continue
+ to be documented below for historical purposes, but new code
+ must not use them. --davidm 00/12/12 ]
+
[ This is a mail message in response to a query on IO mapping, thus the
strange format for a "document" ]
diff -urN linux-davidm/arch/ia64/boot/Makefile linux-2.4.0-test12-lia/arch/ia64/boot/Makefile
--- linux-davidm/arch/ia64/boot/Makefile Thu Jun 22 07:09:44 2000
+++ linux-2.4.0-test12-lia/arch/ia64/boot/Makefile Thu Dec 14 14:07:03 2000
@@ -16,13 +16,12 @@
$(CC) $(AFLAGS) -traditional -c -o $*.o $<
OBJECTS = bootloader.o
-TARGETS =
+targets-y =
-ifdef CONFIG_IA64_HP_SIM
- TARGETS += bootloader
-endif
+targets-$(CONFIG_IA64_HP_SIM) += bootloader
+targets-$(CONFIG_IA64_GENERIC) += bootloader
-all: $(TARGETS)
+all: $(targets-y)
bootloader: $(OBJECTS)
$(LD) $(LINKFLAGS) $(OBJECTS) $(TOPDIR)/lib/lib.a $(TOPDIR)/arch/$(ARCH)/lib/lib.a \
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-test12-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/config.in Thu Dec 14 19:51:13 2000
@@ -18,7 +18,6 @@
comment 'General setup'
define_bool CONFIG_IA64 y
-define_bool CONFIG_SWIOTLB y # for now...
define_bool CONFIG_ISA n
define_bool CONFIG_EISA n
@@ -41,9 +40,6 @@
define_bool CONFIG_ITANIUM y
define_bool CONFIG_IA64_BRL_EMU y
bool ' Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_ASTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium A1-step specific code' CONFIG_ITANIUM_A1_SPECIFIC
- fi
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
@@ -75,7 +71,6 @@
bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
fi
bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM n
- bool ' Enable SGI hack for version 1.0 synergy bugs' CONFIG_IA64_SGI_SYNERGY_1_0_HACKS n
define_bool CONFIG_DEVFS_DEBUG y
define_bool CONFIG_DEVFS_FS y
define_bool CONFIG_IA64_BRL_EMU y
@@ -83,6 +78,7 @@
define_bool CONFIG_ITANIUM y
define_bool CONFIG_SGI_IOC3_ETH y
bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM y
+ bool ' Enable NUMA support' CONFIG_NUMA y
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
diff -urN linux-davidm/arch/ia64/kernel/Makefile linux-2.4.0-test12-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/Makefile Thu Dec 14 14:08:10 2000
@@ -10,7 +10,7 @@
all: kernel.o head.o init_task.o
obj-y := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_ia64.o irq_sapic.o ivt.o \
- machvec.o pal.o pci-dma.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
+ machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.0-test12-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/ia64_ksyms.c Thu Dec 14 14:08:47 2000
@@ -24,10 +24,6 @@
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strtok);
-#include <linux/pci.h>
-EXPORT_SYMBOL(pci_alloc_consistent);
-EXPORT_SYMBOL(pci_free_consistent);
-
#include <linux/in6.h>
#include <asm/checksum.h>
/* not coded yet?? EXPORT_SYMBOL(csum_ipv6_magic); */
@@ -48,14 +44,6 @@
#include <asm/page.h>
EXPORT_SYMBOL(clear_page);
-
-#include <asm/pci.h>
-EXPORT_SYMBOL(pci_dma_sync_sg);
-EXPORT_SYMBOL(pci_dma_sync_single);
-EXPORT_SYMBOL(pci_map_sg);
-EXPORT_SYMBOL(pci_map_single);
-EXPORT_SYMBOL(pci_unmap_sg);
-EXPORT_SYMBOL(pci_unmap_single);
#include <asm/processor.h>
EXPORT_SYMBOL(cpu_data);
diff -urN linux-davidm/arch/ia64/kernel/irq.c linux-2.4.0-test12-lia/arch/ia64/kernel/irq.c
--- linux-davidm/arch/ia64/kernel/irq.c Wed Dec 13 17:29:20 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/irq.c Thu Dec 14 14:09:00 2000
@@ -541,6 +541,18 @@
spin_unlock_irqrestore(&desc->lock, flags);
}
+void do_IRQ_per_cpu(unsigned long irq, struct pt_regs *regs)
+{
+ irq_desc_t *desc = irq_desc + irq;
+ int cpu = smp_processor_id();
+
+ kstat.irqs[cpu][irq]++;
+
+ desc->handler->ack(irq);
+ handle_IRQ_event(irq, regs, desc->action);
+ desc->handler->end(irq);
+}
+
/*
* do_IRQ handles all normal device IRQ's (the special
* SMP cross-CPU interrupts have their own specific
@@ -581,8 +593,7 @@
if (!(status & (IRQ_DISABLED | IRQ_INPROGRESS))) {
action = desc->action;
status &= ~IRQ_PENDING; /* we commit to handling */
- if (!(status & IRQ_PER_CPU))
- status |= IRQ_INPROGRESS; /* we are handling it */
+ status |= IRQ_INPROGRESS; /* we are handling it */
}
desc->status = status;
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.4.0-test12-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/irq_ia64.c Thu Dec 14 14:09:33 2000
@@ -38,10 +38,6 @@
#define IRQ_DEBUG 0
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
-spinlock_t ivr_read_lock;
-#endif
-
/* default base addr of IPI table */
unsigned long ipi_base_addr = (__IA64_UNCACHED_OFFSET | IPI_DEFAULT_BASE_ADDR);
@@ -65,22 +61,6 @@
return next_irq++;
}
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
-
-int usbfix;
-
-static int __init
-usbfix_option (char *str)
-{
- printk("irq: enabling USB workaround\n");
- usbfix = 1;
- return 1;
-}
-
-__setup("usbfix", usbfix_option);
-
-#endif /* CONFIG_ITANIUM_A1_SPECIFIC */
-
/*
* That's where the IVT branches when we get an external
* interrupt. This branches to the correct hardware IRQ handler via
@@ -90,42 +70,6 @@
ia64_handle_irq (unsigned long vector, struct pt_regs *regs)
{
unsigned long saved_tpr;
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- unsigned long eoi_ptr;
-
-# ifdef CONFIG_USB
- extern void reenable_usb (void);
- extern void disable_usb (void);
-
- if (usbfix)
- disable_usb();
-# endif
- /*
- * Stop IPIs by getting the ivr_read_lock
- */
- spin_lock(&ivr_read_lock);
- {
- unsigned int tmp;
- /*
- * Disable PCI writes
- */
- outl(0x80ff81c0, 0xcf8);
- tmp = inl(0xcfc);
- outl(tmp | 0x400, 0xcfc);
- eoi_ptr = inl(0xcfc);
- vector = ia64_get_ivr();
- /*
- * Enable PCI writes
- */
- outl(tmp, 0xcfc);
- }
- spin_unlock(&ivr_read_lock);
-
-# ifdef CONFIG_USB
- if (usbfix)
- reenable_usb();
-# endif
-#endif /* CONFIG_ITANIUM_A1_SPECIFIC */
#if IRQ_DEBUG
{
@@ -174,7 +118,10 @@
ia64_set_tpr(vector);
ia64_srlz_d();
- do_IRQ(vector, regs);
+ if ((irq_desc[vector].status & IRQ_PER_CPU) != 0)
+ do_IRQ_per_cpu(vector, regs);
+ else
+ do_IRQ(vector, regs);
/*
* Disable interrupts and send EOI:
@@ -182,9 +129,6 @@
local_irq_disable();
ia64_set_tpr(saved_tpr);
ia64_eoi();
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- break;
-#endif
vector = ia64_get_ivr();
} while (vector != IA64_SPURIOUS_INT);
}
@@ -207,8 +151,8 @@
* Disable all local interrupts
*/
ia64_set_itv(0, 1);
- ia64_set_lrr0(0, 1);
- ia64_set_lrr1(0, 1);
+ ia64_set_lrr0(0, 1);
+ ia64_set_lrr1(0, 1);
irq_desc[IA64_SPURIOUS_INT].handler = &irq_type_ia64_sapic;
#ifdef CONFIG_SMP
@@ -235,9 +179,6 @@
unsigned long ipi_addr;
unsigned long ipi_data;
unsigned long phys_cpu_id;
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- unsigned long flags;
-#endif
#ifdef CONFIG_SMP
phys_cpu_id = cpu_physical_id(cpu);
@@ -252,13 +193,5 @@
ipi_data = (delivery_mode << 8) | (vector & 0xff);
ipi_addr = ipi_base_addr | (phys_cpu_id << 4) | ((redirect & 1) << 3);
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- spin_lock_irqsave(&ivr_read_lock, flags);
-#endif
-
writeq(ipi_data, ipi_addr);
-
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- spin_unlock_irqrestore(&ivr_read_lock, flags);
-#endif
}
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.0-test12-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/ivt.S Thu Dec 14 20:11:48 2000
@@ -348,7 +348,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
mov r16=cr.ifa // get address that caused the TLB miss
- movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX
+ movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX
mov r21=cr.ipsr
mov r31=pr
;;
@@ -378,7 +378,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1000 Entry 4 (size 64 bundles) Alt DTLB (7,46)
mov r16=cr.ifa // get address that caused the TLB miss
- movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RW
+ movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX
mov r20=cr.isr
mov r21=cr.ipsr
mov r31=pr
@@ -532,7 +532,7 @@
;;
1: ld8 r18=[r17]
;; // avoid RAW on r18
- or r18=_PAGE_D,r18 // set the dirty bit
+ or r18=_PAGE_D|_PAGE_A,r18 // set the dirty and accessed bits
mov b0=r29 // restore b0
;;
st8 [r17]=r18 // store back updated PTE
@@ -549,7 +549,7 @@
1: ld8 r18=[r17]
;; // avoid RAW on r18
mov ar.ccv=r18 // set compare value for cmpxchg
- or r25=_PAGE_D,r18 // set the dirty bit
+ or r25=_PAGE_D|_PAGE_A,r18 // set the dirty and accessed bits
;;
cmpxchg8.acq r26=[r17],r25,ar.ccv
mov r24=PAGE_SHIFT<<2
@@ -741,14 +741,13 @@
adds r3=8,r2 // set up second base pointer for SAVE_REST
;;
SAVE_REST
- ;; // avoid WAW on r2 & r3
+ br.call.sptk rp=demine_args // clear NaT bits in (potential) syscall args
mov r3=255
adds r15=-1024,r15 // r15 contains the syscall number---subtract 1024
adds r2=IA64_TASK_PTRACE_OFFSET,r13 // r2 = &current->ptrace
-
;;
- cmp.geu.unc p6,p7=r3,r15 // (syscall > 0 && syscall <= 1024+255) ?
+ cmp.geu p6,p7=r3,r15 // (syscall > 0 && syscall <= 1024+255) ?
movl r16=sys_call_table
;;
(p6) shladd r16=r15,3,r16
@@ -787,6 +786,33 @@
br.call.sptk.few rp=ia64_trace_syscall // rp will be overwritten (ignored)
// NOT REACHED
+ .proc demine_args
+demine_args:
+ alloc r2=ar.pfs,8,0,0,0
+ tnat.nz p8,p0=in0
+ tnat.nz p9,p0=in1
+ ;;
+(p8) mov in0=-1
+ tnat.nz p10,p0=in2
+ tnat.nz p11,p0=in3
+
+(p9) mov in1=-1
+ tnat.nz p12,p0=in4
+ tnat.nz p13,p0=in5
+ ;;
+(p10) mov in2=-1
+ tnat.nz p14,p0=in6
+ tnat.nz p15,p0=in7
+
+(p11) mov in3=-1
+(p12) mov in4=-1
+(p13) mov in5=-1
+ ;;
+(p14) mov in6=-1
+(p15) mov in7=-1
+ br.ret.sptk.few rp
+ .endp demine_args
+
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
@@ -802,11 +828,7 @@
SAVE_REST
;;
alloc r14=ar.pfs,0,0,2,0 // must be first in an insn group
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- mov out0=r0 // defer reading of cr.ivr to handle_irq...
-#else
mov out0=cr.ivr // pass cr.ivr as first arg
-#endif
add out1=16,sp // pass pointer to pt_regs as second arg
;;
srlz.d // make sure we see the effect of cr.ivr
@@ -1091,12 +1113,11 @@
// 0x5000 Entry 20 (size 16 bundles) Page Not Present (10,22,49)
mov r16=cr.ifa
rsm psr.dt
-#if 1
- // If you disable this, you MUST re-enable the update_mmu_cache() code in pgtable.h
+ // The Linux page fault handler doesn't expect non-present pages to be in
+ // the TLB. Flush the existing entry now, so we meet that expectation.
mov r17=_PAGE_SIZE_4K<<2
;;
ptc.l r16,r17
-#endif
;;
mov r31=pr
srlz.d
diff -urN linux-davidm/arch/ia64/kernel/pci-dma.c linux-2.4.0-test12-lia/arch/ia64/kernel/pci-dma.c
--- linux-davidm/arch/ia64/kernel/pci-dma.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/pci-dma.c Wed Dec 31 16:00:00 1969
@@ -1,530 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- */
-
-#include <linux/config.h>
-
-#include <linux/mm.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#ifdef CONFIG_SWIOTLB
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define ALIGN(val, align) ((unsigned long) \
- (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
-
-/*
- * log of the size of each IO TLB slab. The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-/*
- * Used to do a quick range check in pci_unmap_single and pci_sync_single, to see if the
- * memory was in fact allocated by this API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and io_tlb_end.
- * This is command line adjustable via setup_io_tlb_npages.
- */
-unsigned long io_tlb_nslabs = 1024;
-
-/*
- * This is a free list describing the number of free entries available from each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry for the sync
- * operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-spinlock_t io_tlb_lock = SPIN_LOCK_UNLOCKED;
-
-static int __init
-setup_io_tlb_npages (char *str)
-{
- io_tlb_nslabs = simple_strtoul(str, NULL, 0) << (PAGE_SHIFT - IO_TLB_SHIFT);
- return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer
- * data structures for the software IO TLB used to implement the PCI DMA API
- */
-void
-setup_swiotlb (void)
-{
- int i;
-
- /*
- * Get IO TLB memory from the low pages
- */
- io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs * (1 << IO_TLB_SHIFT));
- if (!io_tlb_start)
- BUG();
- io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
- /*
- * Allocate and initialize the free list array. This array is used
- * to find contiguous free memory regions of size 2^IO_TLB_SHIFT between
- * io_tlb_start and io_tlb_end.
- */
- io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
- for (i = 0; i < io_tlb_nslabs; i++)
- io_tlb_list[i] = io_tlb_nslabs - i;
- io_tlb_index = 0;
- io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
- printk("Placing software IO TLB between 0x%p - 0x%p\n",
- (void *) io_tlb_start, (void *) io_tlb_end);
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-__pci_map_single (struct pci_dev *hwdev, char *buffer, size_t size, int direction)
-{
- unsigned long flags;
- char *dma_addr;
- unsigned int nslots, stride, index, wrap;
- int i;
-
- /*
- * For mappings greater than a page size, we limit the stride (and hence alignment)
- * to a page size.
- */
- nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
- if (size > (1 << PAGE_SHIFT))
- stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
- else
- stride = nslots;
-
- if (!nslots)
- BUG();
-
- /*
- * Find suitable number of IO TLB entries size that will fit this request and
- * allocate a buffer from that IO TLB pool.
- */
- spin_lock_irqsave(&io_tlb_lock, flags);
- {
- wrap = index = ALIGN(io_tlb_index, stride);
-
- if (index >= io_tlb_nslabs)
- wrap = index = 0;
-
- do {
- /*
- * If we find a slot that indicates we have 'nslots' number of
- * contiguous buffers, we allocate the buffers from that slot and mark the
- * entries as '0' indicating unavailable.
- */
- if (io_tlb_list[index] >= nslots) {
- int count = 0;
-
- for (i = index; i < index + nslots; i++)
- io_tlb_list[i] = 0;
- for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
- io_tlb_list[i] = ++count;
- dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
- /*
- * Update the indices to avoid searching in the next round.
- */
- io_tlb_index = ((index + nslots) < io_tlb_nslabs
- ? (index + nslots) : 0);
-
- goto found;
- }
- index += stride;
- if (index >= io_tlb_nslabs)
- index = 0;
- } while (index != wrap);
-
- /*
- * XXX What is a suitable recovery mechanism here? We cannot
- * sleep because we are called from within interrupts!
- */
- panic("__pci_map_single: could not allocate software IO TLB (%ld bytes)", size);
-found:
- }
- spin_unlock_irqrestore(&io_tlb_lock, flags);
-
- /*
- * Save away the mapping from the original address to the DMA address. This is needed
- * when we sync the memory. Then we sync the buffer if needed.
- */
- io_tlb_orig_addr[index] = buffer;
- if (direction == PCI_DMA_TODEVICE || direction == PCI_DMA_BIDIRECTIONAL)
- memcpy(dma_addr, buffer, size);
-
- return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-__pci_unmap_single (struct pci_dev *hwdev, char *dma_addr, size_t size, int direction)
-{
- unsigned long flags;
- int i, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
- int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
- char *buffer = io_tlb_orig_addr[index];
-
- /*
- * First, sync the memory before unmapping the entry
- */
- if ((direction == PCI_DMA_FROMDEVICE) || (direction == PCI_DMA_BIDIRECTIONAL))
- /*
- * bounce... copy the data back into the original buffer
- * and delete the bounce buffer.
- */
- memcpy(buffer, dma_addr, size);
-
- /*
- * Return the buffer to the free list by setting the corresponding entries to indicate
- * the number of contiguous entries available.
- * While returning the entries to the free list, we merge the entries with slots below
- * and above the pool being returned.
- */
- spin_lock_irqsave(&io_tlb_lock, flags);
- {
- int count = ((index + nslots) < io_tlb_nslabs ? io_tlb_list[index + nslots] : 0);
- /*
- * Step 1: return the slots to the free list, merging the slots with
- * succeeding slots
- */
- for (i = index + nslots - 1; i >= index; i--)
- io_tlb_list[i] = ++count;
- /*
- * Step 2: merge the returned slots with the preceding slots, if
- * available (non zero)
- */
- for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
- io_tlb_list[i] = ++count;
- }
- spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-__pci_sync_single (struct pci_dev *hwdev, char *dma_addr, size_t size, int direction)
-{
- int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
- char *buffer = io_tlb_orig_addr[index];
-
- /*
- * bounce... copy the data back into/from the original buffer
- * XXX How do you handle PCI_DMA_BIDIRECTIONAL here ?
- */
- if (direction == PCI_DMA_FROMDEVICE)
- memcpy(buffer, dma_addr, size);
- else if (direction == PCI_DMA_TODEVICE)
- memcpy(dma_addr, buffer, size);
- else
- BUG();
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.
- * The PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory
- * until either pci_unmap_single or pci_dma_sync_single is performed.
- */
-dma_addr_t
-pci_map_single (struct pci_dev *hwdev, void *ptr, size_t size, int direction)
-{
- unsigned long pci_addr = virt_to_phys(ptr);
-
- if (direction == PCI_DMA_NONE)
- BUG();
- /*
- * Check if the PCI device can DMA to ptr... if so, just return ptr
- */
- if ((pci_addr & ~hwdev->dma_mask) == 0)
- /*
- * Device is bit capable of DMA'ing to the
- * buffer... just return the PCI address of ptr
- */
- return pci_addr;
-
- /*
- * get a bounce buffer:
- */
- pci_addr = virt_to_phys(__pci_map_single(hwdev, ptr, size, direction));
-
- /*
- * Ensure that the address returned is DMA'ble:
- */
- if ((pci_addr & ~hwdev->dma_mask) != 0)
- panic("__pci_map_single: bounce buffer is not DMA'ble");
-
- return pci_addr;
-}
-
-/*
- * Unmap a single streaming mode DMA translation. The dma_addr and size
- * must match what was provided for in a previous pci_map_single call. All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-pci_unmap_single (struct pci_dev *hwdev, dma_addr_t pci_addr, size_t size, int direction)
-{
- char *dma_addr = phys_to_virt(pci_addr);
-
- if (direction == PCI_DMA_NONE)
- BUG();
- if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
- __pci_unmap_single(hwdev, dma_addr, size, direction);
-}
-
-/*
- * Make physical memory consistent for a single
- * streaming mode DMA translation after a transfer.
- *
- * If you perform a pci_map_single() but wish to interrogate the
- * buffer using the cpu, yet do not wish to teardown the PCI dma
- * mapping, you must call this function before doing so. At the
- * next point you give the PCI dma address back to the card, the
- * device again owns the buffer.
- */
-void
-pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t pci_addr, size_t size, int direction)
-{
- char *dma_addr = phys_to_virt(pci_addr);
-
- if (direction == PCI_DMA_NONE)
- BUG();
- if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
- __pci_sync_single(hwdev, dma_addr, size, direction);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming
- * mode for DMA. This is the scatter-gather version of the
- * above pci_map_single interface. Here the scatter gather list
- * elements are each tagged with the appropriate dma address
- * and length. They are obtained via sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- * DMA address/length pairs than there are SG table elements.
- * (for example via virtual mapping capabilities)
- * The routine returns the number of addr/length pairs actually
- * used, at most nents.
- *
- * Device ownership issues as mentioned above for pci_map_single are
- * the same here.
- */
-int
-pci_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
-{
- int i;
-
- if (direction == PCI_DMA_NONE)
- BUG();
-
- for (i = 0; i < nelems; i++, sg++) {
- sg->orig_address = sg->address;
- if ((virt_to_phys(sg->address) & ~hwdev->dma_mask) != 0) {
- sg->address = __pci_map_single(hwdev, sg->address, sg->length, direction);
- }
- }
- return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.
- * Again, cpu read rules concerning calls here are the same as for
- * pci_unmap_single() above.
- */
-void
-pci_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
-{
- int i;
-
- if (direction == PCI_DMA_NONE)
- BUG();
-
- for (i = 0; i < nelems; i++, sg++)
- if (sg->orig_address != sg->address) {
- __pci_unmap_single(hwdev, sg->address, sg->length, direction);
- sg->address = sg->orig_address;
- }
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA
- * translations after a transfer.
- *
- * The same as pci_dma_sync_single but for a scatter-gather list,
- * same rules and usage.
- */
-void
-pci_dma_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
-{
- int i;
-
- if (direction == PCI_DMA_NONE)
- BUG();
-
- for (i = 0; i < nelems; i++, sg++)
- if (sg->orig_address != sg->address)
- __pci_sync_single(hwdev, sg->address, sg->length, direction);
-}
-
-#else
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.
- * The 32-bit bus address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory
- * until either pci_unmap_single or pci_dma_sync_single is performed.
- */
-dma_addr_t
-pci_map_single (struct pci_dev *hwdev, void *ptr, size_t size, int direction)
-{
- if (direction == PCI_DMA_NONE)
- BUG();
- return virt_to_bus(ptr);
-}
-
-/*
- * Unmap a single streaming mode DMA translation. The dma_addr and size
- * must match what was provided for in a previous pci_map_single call. All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-pci_unmap_single (struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size, int direction)
-{
- if (direction == PCI_DMA_NONE)
- BUG();
- /* Nothing to do */
-}
-/*
- * Map a set of buffers described by scatterlist in streaming
- * mode for DMA. This is the scatter-gather version of the
- * above pci_map_single interface. Here the scatter gather list
- * elements are each tagged with the appropriate dma address
- * and length. They are obtained via sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- * DMA address/length pairs than there are SG table elements.
- * (for example via virtual mapping capabilities)
- * The routine returns the number of addr/length pairs actually
- * used, at most nents.
- *
- * Device ownership issues as mentioned above for pci_map_single are
- * the same here.
- */
-int
-pci_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
-{
- if (direction == PCI_DMA_NONE)
- BUG();
- return nents;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.
- * Again, cpu read rules concerning calls here are the same as for
- * pci_unmap_single() above.
- */
-void
-pci_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
-{
- if (direction == PCI_DMA_NONE)
- BUG();
- /* Nothing to do */
-}
-/*
- * Make physical memory consistent for a single
- * streaming mode DMA translation after a transfer.
- *
- * If you perform a pci_map_single() but wish to interrogate the
- * buffer using the cpu, yet do not wish to teardown the PCI dma
- * mapping, you must call this function before doing so. At the
- * next point you give the PCI dma address back to the card, the
- * device again owns the buffer.
- */
-void
-pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction)
-{
- if (direction == PCI_DMA_NONE)
- BUG();
- /* Nothing to do */
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA
- * translations after a transfer.
- *
- * The same as pci_dma_sync_single but for a scatter-gather list,
- * same rules and usage.
- */
-void
-pci_dma_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
-{
- if (direction == PCI_DMA_NONE)
- BUG();
- /* Nothing to do */
-}
-
-#endif /* CONFIG_SWIOTLB */
-
-void *
-pci_alloc_consistent (struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle)
-{
- unsigned long pci_addr;
- int gfp = GFP_ATOMIC;
- void *ret;
-
- if (!hwdev || hwdev->dma_mask <= 0xffffffff)
- gfp |= GFP_DMA; /* XXX fix me: should change this to GFP_32BIT or ZONE_32BIT */
- ret = (void *)__get_free_pages(gfp, get_order(size));
- if (!ret)
- return NULL;
-
- memset(ret, 0, size);
- pci_addr = virt_to_phys(ret);
- if ((pci_addr & ~hwdev->dma_mask) != 0)
- panic("pci_alloc_consistent: allocated memory is out of range for PCI device");
- *dma_handle = pci_addr;
- return ret;
-}
-
-void
-pci_free_consistent (struct pci_dev *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
-{
- free_pages((unsigned long) vaddr, get_order(size));
-}
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-test12-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/setup.c Thu Dec 14 14:12:41 2000
@@ -261,13 +261,6 @@
paging_init();
platform_setup(cmdline_p);
-
-#ifdef CONFIG_SWIOTLB
- {
- extern void setup_swiotlb (void);
- setup_swiotlb();
- }
-#endif
}
/*
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-test12-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/smp.c Thu Dec 14 20:11:27 2000
@@ -81,10 +81,6 @@
};
static volatile struct smp_call_struct *smp_call_function_data;
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
-extern spinlock_t ivr_read_lock;
-#endif
-
#define IPI_RESCHEDULE 0
#define IPI_CALL_FUNC 1
#define IPI_CPU_STOP 2
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-test12-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/kernel/unwind.c Thu Dec 14 14:14:49 2000
@@ -521,6 +521,10 @@
struct unw_reg_state *rs;
rs = alloc_reg_state();
+ if (!rs) {
+ printk("unwind: cannot stack reg state!\n");
+ return;
+ }
memcpy(rs, &sr->curr, sizeof(*rs));
rs->next = sr->stack;
sr->stack = rs;
diff -urN linux-davidm/arch/ia64/lib/Makefile linux-2.4.0-test12-lia/arch/ia64/lib/Makefile
--- linux-davidm/arch/ia64/lib/Makefile Mon Oct 9 17:54:56 2000
+++ linux-2.4.0-test12-lia/arch/ia64/lib/Makefile Thu Dec 14 14:15:00 2000
@@ -11,7 +11,8 @@
__divdi3.o __udivdi3.o __moddi3.o __umoddi3.o \
checksum.o clear_page.o csum_partial_copy.o copy_page.o \
copy_user.o clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o \
- flush.o do_csum.o
+ flush.o do_csum.o \
+ swiotlb.o
ifneq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
L_OBJS += memcpy.o memset.o strlen.o
diff -urN linux-davidm/arch/ia64/lib/flush.S linux-2.4.0-test12-lia/arch/ia64/lib/flush.S
--- linux-davidm/arch/ia64/lib/flush.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.0-test12-lia/arch/ia64/lib/flush.S Thu Dec 14 14:15:15 2000
@@ -12,29 +12,33 @@
.psr lsb
.lsb
-GLOBAL_ENTRY(ia64_flush_icache_page)
+ /*
+ * flush_icache_range(start,end)
+ * Must flush range from start to end-1 but nothing else (need to
+ * be careful not to touch addresses that may be unmapped).
+ */
+GLOBAL_ENTRY(flush_icache_range)
UNW(.prologue)
- alloc r2=ar.pfs,1,0,0,0
+ alloc r2=ar.pfs,2,0,0,0
+ sub r8=in1,in0,1
+ ;;
+ shr.u r8=r8,5 // we flush 32 bytes per iteration
UNW(.save ar.lc, r3)
mov r3=ar.lc // save ar.lc
+ ;;
.body
- mov r8=PAGE_SIZE/64-1 // repeat/until loop
- ;;
mov ar.lc=r8
- add r8=32,in0
;;
-.Loop1: fc in0 // issuable on M0 only
- add in0=64,in0
- fc r8
- add r8=64,r8
- br.cloop.sptk.few .Loop1
+.Loop: fc in0 // issuable on M0 only
+ add in0=32,in0
+ br.cloop.sptk.few .Loop
;;
sync.i
;;
srlz.i
;;
mov ar.lc=r3 // restore ar.lc
- br.ret.sptk.few rp
-END(ia64_flush_icache_page)
+ br.ret.sptk.many rp
+END(flush_icache_range)
diff -urN linux-davidm/arch/ia64/lib/io.c linux-2.4.0-test12-lia/arch/ia64/lib/io.c
--- linux-davidm/arch/ia64/lib/io.c Mon Oct 9 17:54:56 2000
+++ linux-2.4.0-test12-lia/arch/ia64/lib/io.c Thu Dec 14 14:15:32 2000
@@ -1,3 +1,4 @@
+#include <linux/config.h>
#include <linux/types.h>
#include <asm/io.h>
@@ -48,3 +49,54 @@
}
}
+#ifdef CONFIG_IA64_GENERIC
+
+unsigned int
+ia64_inb (unsigned long port)
+{
+ return __ia64_inb(port);
+}
+
+unsigned int
+ia64_inw (unsigned long port)
+{
+ return __ia64_inw(port);
+}
+
+unsigned int
+ia64_inl (unsigned long port)
+{
+ return __ia64_inl(port);
+}
+
+void
+ia64_outb (unsigned char val, unsigned long port)
+{
+ __ia64_outb(val, port);
+}
+
+void
+ia64_outw (unsigned short val, unsigned long port)
+{
+ __ia64_outw(val, port);
+}
+
+void
+ia64_outl (unsigned int val, unsigned long port)
+{
+ __ia64_outl(val, port);
+}
+
+/* define aliases: */
+
+asm (".global __ia64_inb, __ia64_inw, __ia64_inl");
+asm ("__ia64_inb = ia64_inb");
+asm ("__ia64_inw = ia64_inw");
+asm ("__ia64_inl = ia64_inl");
+
+asm (".global __ia64_outb, __ia64_outw, __ia64_outl");
+asm ("__ia64_outb = ia64_outb");
+asm ("__ia64_outw = ia64_outw");
+asm ("__ia64_outl = ia64_outl");
+
+#endif /* CONFIG_IA64_GENERIC */
diff -urN linux-davidm/arch/ia64/lib/swiotlb.c linux-2.4.0-test12-lia/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-test12-lia/arch/ia64/lib/swiotlb.c Thu Dec 14 14:19:36 2000
@@ -0,0 +1,454 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ *
+ * 00/12/13 davidm Rename to swiotlb.c and add mark_clean() to avoid
+ * unnecessary i-cache flushing.
+ */
+
+#include <linux/config.h>
+
+#include <linux/mm.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define ALIGN(val, align) ((unsigned long) \
+ (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
+
+/*
+ * log of the size of each IO TLB slab. The number of slabs is command line controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and swiotlb_sync_single, to see
+ * if the memory was in fact allocated by this API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and io_tlb_end.
+ * This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs = 1024;
+
+/*
+ * This is a free list describing the number of free entries available from each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry for the sync
+ * operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static spinlock_t io_tlb_lock = SPIN_LOCK_UNLOCKED;
+
+static int __init
+setup_io_tlb_npages (char *str)
+{
+ io_tlb_nslabs = simple_strtoul(str, NULL, 0) << (PAGE_SHIFT - IO_TLB_SHIFT);
+ return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data structures for
+ * the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init (void)
+{
+ int i;
+
+ /*
+ * Get IO TLB memory from the low pages
+ */
+ io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs * (1 << IO_TLB_SHIFT));
+ if (!io_tlb_start)
+ BUG();
+ io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+ /*
+ * Allocate and initialize the free list array. This array is used
+ * to find contiguous free memory regions of size 2^IO_TLB_SHIFT between
+ * io_tlb_start and io_tlb_end.
+ */
+ io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+ for (i = 0; i < io_tlb_nslabs; i++)
+ io_tlb_list[i] = io_tlb_nslabs - i;
+ io_tlb_index = 0;
+ io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+ printk("Placing software IO TLB between 0x%p - 0x%p\n",
+ (void *) io_tlb_start, (void *) io_tlb_end);
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single (struct pci_dev *hwdev, char *buffer, size_t size, int direction)
+{
+ unsigned long flags;
+ char *dma_addr;
+ unsigned int nslots, stride, index, wrap;
+ int i;
+
+ /*
+ * For mappings greater than a page size, we limit the stride (and hence alignment)
+ * to a page size.
+ */
+ nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+ if (size > (1 << PAGE_SHIFT))
+ stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+ else
+ stride = nslots;
+
+ if (!nslots)
+ BUG();
+
+ /*
+ * Find suitable number of IO TLB entries size that will fit this request and
+ * allocate a buffer from that IO TLB pool.
+ */
+ spin_lock_irqsave(&io_tlb_lock, flags);
+ {
+ wrap = index = ALIGN(io_tlb_index, stride);
+
+ if (index >= io_tlb_nslabs)
+ wrap = index = 0;
+
+ do {
+ /*
+ * If we find a slot that indicates we have 'nslots' number of
+ * contiguous buffers, we allocate the buffers from that slot and
+ * mark the entries as '0' indicating unavailable.
+ */
+ if (io_tlb_list[index] >= nslots) {
+ int count = 0;
+
+ for (i = index; i < index + nslots; i++)
+ io_tlb_list[i] = 0;
+ for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
+ io_tlb_list[i] = ++count;
+ dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+ /*
+ * Update the indices to avoid searching in the next round.
+ */
+ io_tlb_index = ((index + nslots) < io_tlb_nslabs
+ ? (index + nslots) : 0);
+
+ goto found;
+ }
+ index += stride;
+ if (index >= io_tlb_nslabs)
+ index = 0;
+ } while (index != wrap);
+
+ /*
+ * XXX What is a suitable recovery mechanism here? We cannot
+ * sleep because we are called from within interrupts!
+ */
+ panic("map_single: could not allocate software IO TLB (%ld bytes)", size);
+found:
+ }
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+ /*
+ * Save away the mapping from the original address to the DMA address. This is
+ * needed when we sync the memory. Then we sync the buffer if needed.
+ */
+ io_tlb_orig_addr[index] = buffer;
+ if (direction == PCI_DMA_TODEVICE || direction == PCI_DMA_BIDIRECTIONAL)
+ memcpy(dma_addr, buffer, size);
+
+ return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single (struct pci_dev *hwdev, char *dma_addr, size_t size, int direction)
+{
+ unsigned long flags;
+ int i, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+ int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+ char *buffer = io_tlb_orig_addr[index];
+
+ /*
+ * First, sync the memory before unmapping the entry
+ */
+ if ((direction == PCI_DMA_FROMDEVICE) || (direction == PCI_DMA_BIDIRECTIONAL))
+ /*
+ * bounce... copy the data back into the original buffer and delete the
+ * bounce buffer.
+ */
+ memcpy(buffer, dma_addr, size);
+
+ /*
+ * Return the buffer to the free list by setting the corresponding entries to
+ * indicate the number of contiguous entries available. While returning the
+ * entries to the free list, we merge the entries with slots below and above the
+ * pool being returned.
+ */
+ spin_lock_irqsave(&io_tlb_lock, flags);
+ {
+ int count = ((index + nslots) < io_tlb_nslabs ? io_tlb_list[index + nslots] : 0);
+ /*
+ * Step 1: return the slots to the free list, merging the slots with
+ * succeeding slots
+ */
+ for (i = index + nslots - 1; i >= index; i--)
+ io_tlb_list[i] = ++count;
+ /*
+ * Step 2: merge the returned slots with the preceding slots, if
+ * available (non zero)
+ */
+ for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
+ io_tlb_list[i] = ++count;
+ }
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single (struct pci_dev *hwdev, char *dma_addr, size_t size, int direction)
+{
+ int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+ char *buffer = io_tlb_orig_addr[index];
+
+ /*
+ * bounce... copy the data back into/from the original buffer
+ * XXX How do you handle PCI_DMA_BIDIRECTIONAL here ?
+ */
+ if (direction == PCI_DMA_FROMDEVICE)
+ memcpy(buffer, dma_addr, size);
+ else if (direction == PCI_DMA_TODEVICE)
+ memcpy(dma_addr, buffer, size);
+ else
+ BUG();
+}
+
+void *
+swiotlb_alloc_consistent (struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle)
+{
+ unsigned long pci_addr;
+ int gfp = GFP_ATOMIC;
+ void *ret;
+
+ if (!hwdev || hwdev->dma_mask <= 0xffffffff)
+ gfp |= GFP_DMA; /* XXX fix me: should change this to GFP_32BIT or ZONE_32BIT */
+ ret = (void *)__get_free_pages(gfp, get_order(size));
+ if (!ret)
+ return NULL;
+
+ memset(ret, 0, size);
+ pci_addr = virt_to_phys(ret);
+ if ((pci_addr & ~hwdev->dma_mask) != 0)
+ panic("swiotlb_alloc_consistent: allocated memory is out of range for PCI device");
+ *dma_handle = pci_addr;
+ return ret;
+}
+
+void
+swiotlb_free_consistent (struct pci_dev *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+{
+ free_pages((unsigned long) vaddr, get_order(size));
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode. The PCI address
+ * to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until either
+ * swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single (struct pci_dev *hwdev, void *ptr, size_t size, int direction)
+{
+ unsigned long pci_addr = virt_to_phys(ptr);
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /*
+ * Check if the PCI device can DMA to ptr... if so, just return ptr
+ */
+ if ((pci_addr & ~hwdev->dma_mask) == 0)
+ /*
+ * Device is capable of DMA'ing to the buffer... just return the PCI
+ * address of ptr
+ */
+ return pci_addr;
+
+ /*
+ * get a bounce buffer:
+ */
+ pci_addr = virt_to_phys(map_single(hwdev, ptr, size, direction));
+
+ /*
+ * Ensure that the address returned is DMA'ble:
+ */
+ if ((pci_addr & ~hwdev->dma_mask) != 0)
+ panic("map_single: bounce buffer is not DMA'ble");
+
+ return pci_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that update_mmu_cache() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean (void *addr, size_t size)
+{
+ unsigned long pg_addr, end;
+
+ pg_addr = PAGE_ALIGN((unsigned long) addr);
+ end = (unsigned long) addr + size;
+ while (pg_addr + PAGE_SIZE <= end) {
+ set_bit(PG_arch_1, virt_to_page(pg_addr));
+ pg_addr += PAGE_SIZE;
+ }
+}
+
+/*
+ * Unmap a single streaming mode DMA translation. The dma_addr and size must match what
+ * was provided for in a previous swiotlb_map_single call. All other usages are
+ * undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see whatever the
+ * device wrote there.
+ */
+void
+swiotlb_unmap_single (struct pci_dev *hwdev, dma_addr_t pci_addr, size_t size, int direction)
+{
+ char *dma_addr = phys_to_virt(pci_addr);
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+ unmap_single(hwdev, dma_addr, size, direction);
+ else if (direction == PCI_DMA_FROMDEVICE)
+ mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation after a
+ * transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer using the cpu,
+ * yet do not wish to teardown the PCI dma mapping, you must call this function before
+ * doing so. At the next point you give the PCI dma address back to the card, the device
+ * again owns the buffer.
+ */
+void
+swiotlb_sync_single (struct pci_dev *hwdev, dma_addr_t pci_addr, size_t size, int direction)
+{
+ char *dma_addr = phys_to_virt(pci_addr);
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+ sync_single(hwdev, dma_addr, size, direction);
+ else if (direction == PCI_DMA_FROMDEVICE)
+ mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA. This is the
+ * scatter-gather version of the above swiotlb_map_single interface. Here the scatter
+ * gather list elements are each tagged with the appropriate dma address and length. They
+ * are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the same here.
+ */
+int
+swiotlb_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
+{
+ int i;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ for (i = 0; i < nelems; i++, sg++) {
+ sg->orig_address = sg->address;
+ if ((virt_to_phys(sg->address) & ~hwdev->dma_mask) != 0) {
+ sg->address = map_single(hwdev, sg->address, sg->length,
+ direction);
+ }
+ }
+ return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations. Again, cpu read rules concerning calls
+ * here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
+{
+ int i;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ for (i = 0; i < nelems; i++, sg++)
+ if (sg->orig_address != sg->address) {
+ unmap_single(hwdev, sg->address, sg->length, direction);
+ sg->address = sg->orig_address;
+ } else if (direction == PCI_DMA_FROMDEVICE)
+ mark_clean(sg->address, sg->length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations after a
+ * transfer.
+ *
+ * The same as swiotlb_dma_sync_single but for a scatter-gather list, same rules and
+ * usage.
+ */
+void
+swiotlb_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
+{
+ int i;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ for (i = 0; i < nelems; i++, sg++)
+ if (sg->orig_address != sg->address)
+ sync_single(hwdev, sg->address, sg->length, direction);
+}
+
+unsigned long
+swiotlb_dma_address (struct scatterlist *sg)
+{
+ return virt_to_phys(sg->address);
+}
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-test12-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/mm/init.c Thu Dec 14 14:36:33 2000
@@ -1,8 +1,8 @@
/*
* Initialize MMU support.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
#include <linux/kernel.h>
@@ -19,6 +19,7 @@
#include <asm/efi.h>
#include <asm/ia32.h>
#include <asm/io.h>
+#include <asm/machvec.h>
#include <asm/pgalloc.h>
#include <asm/sal.h>
#include <asm/system.h>
@@ -428,17 +429,15 @@
extern char __start_gate_section[];
long reserved_pages, codesize, datasize, initsize;
-#ifdef CONFIG_SWIOTLB
- {
- /*
- * This needs to be called _after_ the command line has been parsed but
- * _before_ any drivers that may need the sw I/O TLB are initialized or
- * bootmem has been freed.
- */
- extern void setup_swiotlb (void);
- setup_swiotlb();
- }
+#ifdef CONFIG_PCI
+ /*
+ * This needs to be called _after_ the command line has been parsed but _before_
+ * any drivers that may need the PCI DMA interface are initialized or bootmem has
+ * been freed.
+ */
+ platform_pci_dma_init();
#endif
+
if (!mem_map)
BUG();
diff -urN linux-davidm/arch/ia64/sn/fprom/fpmem.c linux-2.4.0-test12-lia/arch/ia64/sn/fprom/fpmem.c
--- linux-davidm/arch/ia64/sn/fprom/fpmem.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/fprom/fpmem.c Wed Dec 13 18:59:33 2000
@@ -176,7 +176,7 @@
 if (bank == 0) {
 hole = (cnode == 0) ? KERNEL_SIZE : PROMRESERVED_SIZE;
numbytes -= hole;
- build_mem_desc(md, EFI_RUNTIME_SERVICES_CODE, paddr, hole);
+ build_mem_desc(md, EFI_RUNTIME_SERVICES_DATA, paddr, hole);
paddr += hole;
count++ ;
md += mdsize;
diff -urN linux-davidm/arch/ia64/sn/fprom/fw-emu.c linux-2.4.0-test12-lia/arch/ia64/sn/fprom/fw-emu.c
--- linux-davidm/arch/ia64/sn/fprom/fw-emu.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/fprom/fw-emu.c Wed Dec 13 18:59:33 2000
@@ -379,6 +379,9 @@
memcpy(acpi_rsdt->header.signature, "RSDT",4);
acpi_rsdt->header.length = sizeof(acpi_rsdt_t);
+ memcpy(acpi_rsdt->header.oem_id, "SGI", 3);
+ memcpy(acpi_rsdt->header.oem_table_id, "SN1", 3);
+ acpi_rsdt->header.oem_revision = 0x00010001;
acpi_rsdt->entry_ptrs[0] = __fwtab_pa(base_nasid, acpi_sapic);
memcpy(acpi_sapic->header.signature, "SPIC ", 4);
@@ -407,7 +410,7 @@
sal_systab->entry_count = 3;
strcpy(sal_systab->oem_id, "SGI");
- strcpy(sal_systab->product_id, "sn1");
+ strcpy(sal_systab->product_id, "SN1");
/* fill in an entry point: */
sal_ed->type = SAL_DESC_ENTRY_POINT;
@@ -464,7 +467,7 @@
bp->efi_memmap = __fwtab_pa(base_nasid, efi_memmap);
bp->efi_memmap_size = num_memmd*mdsize;
bp->efi_memdesc_size = mdsize;
- bp->efi_memdesc_version = 1;
+ bp->efi_memdesc_version = 0x101;
bp->command_line = __fwtab_pa(base_nasid, cmd_line);
bp->console_info.num_cols = 80;
bp->console_info.num_rows = 25;
diff -urN linux-davidm/arch/ia64/sn/io/klgraph_hack.c linux-2.4.0-test12-lia/arch/ia64/sn/io/klgraph_hack.c
--- linux-davidm/arch/ia64/sn/io/klgraph_hack.c Thu Dec 14 19:58:05 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/io/klgraph_hack.c Wed Dec 13 18:59:33 2000
@@ -139,11 +139,12 @@
uint64_t *tmp;
volatile u32 *tmp32;
+#if 0
/* Preset some values */
/* Write IOERR clear to clear the CRAZY bit in the status */
tmp = (uint64_t *)0xc0000a0001c001f8; *tmp = (uint64_t)0xffffffff;
/* set widget control register...setting bedrock widget id to b */
- tmp = (uint64_t *)0xc0000a0001c00020; *tmp = (uint64_t)0x801b;
+ /* tmp = (uint64_t *)0xc0000a0001c00020; *tmp = (uint64_t)0x801b; */
/* set io outbound widget access...allow all */
tmp = (uint64_t *)0xc0000a0001c00110; *tmp = (uint64_t)0xff01;
/* set io inbound widget access...allow all */
@@ -163,6 +164,7 @@
*tmp32 = 0xba98;
tmp32 = (volatile u32 *)0xc0000a000f000288L;
*tmp32 = 0xba98;
+#endif
printk("Widget ID Address 0x%p Value 0x%lx\n", (uint64_t *)0xc0000a0001e00000, *( (volatile uint64_t *)0xc0000a0001e00000) );
diff -urN linux-davidm/arch/ia64/sn/io/ml_SN_intr.c linux-2.4.0-test12-lia/arch/ia64/sn/io/ml_SN_intr.c
--- linux-davidm/arch/ia64/sn/io/ml_SN_intr.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/io/ml_SN_intr.c Wed Dec 13 18:59:33 2000
@@ -19,6 +19,7 @@
#include <linux/types.h>
#include <linux/config.h>
#include <linux/slab.h>
+#include <asm/smp.h>
#include <asm/sn/sgi.h>
#include <asm/sn/iograph.h>
#include <asm/sn/invent.h>
@@ -678,6 +679,7 @@
int local_cpu_num;
cpu = cnode_slice_to_cpuid(cnode, slice);
+ cpu = cpu_logical_id(cpu);
 if (cpu == CPU_NONE)
continue;
diff -urN linux-davidm/arch/ia64/sn/io/pci_bus_cvlink.c linux-2.4.0-test12-lia/arch/ia64/sn/io/pci_bus_cvlink.c
--- linux-davidm/arch/ia64/sn/io/pci_bus_cvlink.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/io/pci_bus_cvlink.c Wed Dec 13 18:59:33 2000
@@ -8,9 +8,11 @@
* Copyright (C) 2000 by Colin Ngam
*/
+#include <linux/init.h>
#include <linux/types.h>
#include <linux/config.h>
#include <linux/pci.h>
+#include <linux/pci_ids.h>
#include <linux/sched.h>
#include <linux/ioport.h>
#include <asm/sn/types.h>
@@ -149,6 +151,34 @@
}
/*
+ * Most drivers currently do not properly tell the arch specific pci dma
+ * interfaces whether they can handle A64. Here is where we privately
+ * keep track of this.
+ */
+static void __init
+set_sn1_pci64(struct pci_dev *dev)
+{
+ unsigned short vendor = dev->vendor;
+ unsigned short device = dev->device;
+
+ if (vendor == PCI_VENDOR_ID_QLOGIC) {
+ if ((device == PCI_DEVICE_ID_QLOGIC_ISP2100) ||
+ (device == PCI_DEVICE_ID_QLOGIC_ISP2200)) {
+ SET_PCIA64(dev);
+ return;
+ }
+ }
+
+ if (vendor == PCI_VENDOR_ID_SGI) {
+ if (device == PCI_DEVICE_ID_SGI_IOC3) {
+ SET_PCIA64(dev);
+ return;
+ }
+ }
+
+}
+
+/*
* sn1_pci_fixup() - This routine is called when platform_pci_fixup() is
* invoked at the end of pcibios_init() to link the Linux pci
 * infrastructure to SGI IO Infrastructure - ia64/kernel/pci.c
@@ -172,6 +202,7 @@
sn1_pci_find_bios();
return;
}
+
#if 0
{
devfs_handle_t bridge_vhdl = pci_bus_to_vertex(0);
@@ -236,7 +267,9 @@
device_sysdata = kmalloc(sizeof(struct sn1_device_sysdata),
GFP_KERNEL);
device_sysdata->vhdl = devfn_to_vertex(device_dev->bus->number, device_dev->devfn);
+ device_sysdata->isa64 = 0;
device_dev->sysdata = (void *) device_sysdata;
+ set_sn1_pci64(device_dev);
pci_read_config_word(device_dev, PCI_COMMAND, &cmd);
/*
diff -urN linux-davidm/arch/ia64/sn/io/pci_dma.c linux-2.4.0-test12-lia/arch/ia64/sn/io/pci_dma.c
--- linux-davidm/arch/ia64/sn/io/pci_dma.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/io/pci_dma.c Wed Dec 13 18:59:33 2000
@@ -1,5 +1,4 @@
-/* $Id$
- *
+/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
@@ -12,6 +11,18 @@
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/devfs_fs_kernel.h>
+
+#ifndef LANGUAGE_C
+#define LANGUAGE_C 99
+#endif
+#ifndef _LANGUAGE_C
+#define _LANGUAGE_C 99
+#endif
+#ifndef CONFIG_IA64_SGI_IO
+#define CONFIG_IA64_SGI_IO 99
+#endif
#include <asm/io.h>
#include <asm/sn/sgi.h>
@@ -20,11 +31,27 @@
#include <asm/sn/pci/pcibr.h>
#include <asm/sn/pci/pcibr_private.h>
#include <asm/sn/iobus.h>
+#include <asm/sn/pci/pci_bus_cvlink.h>
+#include <asm/sn/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/invent.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/alenlist.h>
-#ifdef BRINGUP
-#ifndef BRIDGE_DIRECT_MAP_DMA
-#define BRIDGE_DIRECT_MAP_DMA 0xb180000000000000ull
+/*
+ * this is REALLY ugly, blame it on gcc's lame inlining that we
+ * have to put procedures in header files
+ */
+#if LANGUAGE_C == 99
+#undef LANGUAGE_C
#endif
+#if _LANGUAGE_C == 99
+#undef _LANGUAGE_C
+#endif
+#if CONFIG_IA64_SGI_IO == 99
+#undef CONFIG_IA64_SGI_IO
#endif
/*
@@ -40,37 +67,267 @@
void *ret;
int gfp = GFP_ATOMIC;
devfs_handle_t vhdl;
- unsigned char slot;
+ struct sn1_device_sysdata *device_sysdata;
+ paddr_t temp_ptr;
+
+ *dma_handle = (dma_addr_t) NULL;
/*
* get vertex for the device
*/
- vhdl = (devfs_handle_t) hwdev->sysdata;
- slot = PCI_SLOT(hwdev->devfn);
+ device_sysdata = (struct sn1_device_sysdata *) hwdev->sysdata;
+ vhdl = device_sysdata->vhdl;
+
+ if ( ret = (void *)__get_free_pages(gfp, get_order(size)) ) {
+ memset(ret, 0, size);
+ } else {
+ return(NULL);
+ }
+
+ temp_ptr = (paddr_t) __pa(ret);
+ if (IS_PCIA64(hwdev)) {
+
+ /*
+ * This device supports 64bits DMA addresses.
+ */
+ *dma_handle = pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size,
+ PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD
+ | PCIIO_DMA_A64 );
+ return (ret);
+ }
/*
- * any device that can't dma into a 32 bit address space
- * really has no business in this system, but we'll do
- * what we can..
- */
- if (!hwdev || hwdev->dma_mask != 0xffffffff)
- gfp |= GFP_DMA;
- ret = (void *)__get_free_pages(gfp, get_order(size));
-
-#ifdef BRINGUP
- printk("%s : FIXME: not doing busaddr\n", __FUNCTION__);
- if (ret) {
- memset(ret, 0, size);
- *dma_handle = __pa(ret) | BRIDGE_DIRECT_MAP_DMA;
- }
-#else
- if (ret) {
- memset(ret, 0, size);
-
- *dma_handle = pciio_dmatrans_addr(vhdl, NULL, (paddr_t)ret, size,
+ * Devices that support 32-bit up to 63-bit DMA addresses get
+ * 32-bit DMA addresses.
+ *
+ * First try to get 32 Bit Direct Map Support.
+ */
+ if (IS_PCI32G(hwdev)) {
+ *dma_handle = pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size,
+ PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD);
+ if (*dma_handle) {
+ return (ret);
+ } else {
+ /*
+ * We need to map this request by using ATEs.
+ */
+ printk("sn1_pci_alloc_consistent: 32Bits DMA Page Map support not available yet!");
+ BUG();
+ }
+ }
+
+ if (IS_PCI32L(hwdev)) {
+ /*
+ * SNIA64 cannot support DMA Addresses smaller than 32 bits.
+ */
+ return (NULL);
+ }
+
+ return NULL;
+}
+
+void
+sn1_pci_free_consistent(struct pci_dev *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+{
+ free_pages((unsigned long) vaddr, get_order(size));
+}
+
+/*
+ * On sn1 we use the orig_address entry of the scatterlist to save
+ * the original (virtual) address while sg->address holds the DMA
+ * address handed to the device.
+ */
+int
+sn1_pci_map_sg (struct pci_dev *hwdev,
+ struct scatterlist *sg, int nents, int direction)
+{
+
+ int i;
+ devfs_handle_t vhdl;
+ dma_addr_t dma_addr;
+ paddr_t temp_ptr;
+ struct sn1_device_sysdata *device_sysdata;
+
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ /*
+ * Handle 64 bit cards.
+ */
+ device_sysdata = (struct sn1_device_sysdata *) hwdev->sysdata;
+ vhdl = device_sysdata->vhdl;
+ for (i = 0; i < nents; i++, sg++) {
+ sg->orig_address = sg->address;
+ dma_addr = 0;
+ temp_ptr = (paddr_t) __pa(sg->address);
+
+ /*
+ * Handle the most common case: 64-bit cards.
+ */
+ if (IS_PCIA64(hwdev)) {
+ dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL,
+ temp_ptr, sg->length,
+ PCIBR_BARRIER | PCIIO_BYTE_STREAM |
+ PCIIO_DMA_CMD | PCIIO_DMA_A64 );
+ sg->address = (char *)dma_addr;
+/* printk("pci_map_sg: 64Bits hwdev %p DMA Address 0x%p alt_address 0x%p orig_address 0x%p length 0x%x\n", hwdev, sg->address, sg->alt_address, sg->orig_address, sg->length); */
+ continue;
+ }
+
+ /*
+ * Handle cards with 32-bit or wider DMA addressing.
+ */
+ if (IS_PCI32G(hwdev)) {
+ dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL,
+ temp_ptr, sg->length,
+ PCIBR_BARRIER | PCIIO_BYTE_STREAM |
PCIIO_DMA_CMD);
- }
-#endif /* BRINGUP */
- return ret;
+ if (dma_addr) {
+ sg->address = (char *)dma_addr;
+/* printk("pci_map_single: 32Bit direct pciio_dmatrans_addr pcidev %p returns dma_addr 0x%lx\n", hwdev, dma_addr); */
+ continue;
+ } else {
+ /*
+ * We need to map this request by using ATEs.
+ */
+ printk("sn1_pci_map_sg: 32-bit DMA page map support not available yet!\n");
+ BUG();
+
+ }
+ }
+ }
+
+ return nents;
+
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+void
+sn1_pci_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
+{
+ int i;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ for (i = 0; i < nelems; i++, sg++)
+ if (sg->orig_address != sg->address) {
+ /* phys_to_virt((dma_addr_t)sg->address | ~0x80000000); */
+ sg->address = sg->orig_address;
+ sg->orig_address = 0;
+ }
+}
+
+/*
+ * We map this to the one step pciio_dmamap_trans interface rather than
+ * the two step pciio_dmamap_alloc/pciio_dmamap_addr because we have
+ * no way of saving the dmamap handle from the alloc to later free
+ * (which is pretty much unacceptable).
+ *
+ * TODO: simplify our interface;
+ * get rid of dev_desc and vhdl (seems redundant given a pci_dev);
+ * figure out how to save dmamap handle so can use two step.
+ */
+dma_addr_t sn1_pci_map_single (struct pci_dev *hwdev,
+ void *ptr, size_t size, int direction)
+{
+ devfs_handle_t vhdl;
+ dma_addr_t dma_addr;
+ paddr_t temp_ptr;
+ struct sn1_device_sysdata *device_sysdata;
+
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ if (IS_PCI32L(hwdev)) {
+ /*
+ * SNIA64 cannot support DMA addresses smaller than 32 bits.
+ */
+ return ((dma_addr_t) NULL);
+ }
+
+ /*
+ * find vertex for the device
+ */
+ device_sysdata = (struct sn1_device_sysdata *)hwdev->sysdata;
+ vhdl = device_sysdata->vhdl;
+/* printk("pci_map_single: Called vhdl = 0x%p ptr = 0x%p size = %d\n", vhdl, ptr, size); */
+ /*
+ * Call our dmamap interface
+ */
+ dma_addr = 0;
+ temp_ptr = (paddr_t) __pa(ptr);
+
+ if (IS_PCIA64(hwdev)) {
+ /*
+ * This device supports 64-bit DMA addresses.
+ */
+ dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL,
+ temp_ptr, size,
+ PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD
+ | PCIIO_DMA_A64 );
+/* printk("pci_map_single: 64Bit pciio_dmatrans_addr pcidev %p returns dma_addr 0x%lx\n", hwdev, dma_addr); */
+ return (dma_addr);
+ }
+
+ /*
+ * Devices that support 32-bit through 63-bit DMA addresses get
+ * 32-bit DMA addresses.
+ *
+ * First try to get 32 Bit Direct Map Support.
+ */
+ if (IS_PCI32G(hwdev)) {
+ dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL,
+ temp_ptr, size,
+ PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD);
+ if (dma_addr) {
+/* printk("pci_map_single: 32Bit direct pciio_dmatrans_addr pcidev %p returns dma_addr 0x%lx\n", hwdev, dma_addr); */
+ return (dma_addr);
+ } else {
+ /*
+ * We need to map this request by using ATEs.
+ */
+ printk("pci_map_single: 32-bit DMA page map support not available yet!\n");
+ BUG();
+ }
+ }
+
+ if (IS_PCI32L(hwdev)) {
+ /*
+ * SNIA64 cannot support DMA addresses smaller than 32 bits.
+ */
+ return ((dma_addr_t) NULL);
+ }
+
+ return ((dma_addr_t) NULL);
+
+}
+
+void
+sn1_pci_unmap_single (struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
+}
+
+void
+sn1_pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
+}
+
+void
+sn1_pci_dma_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
}
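The DMA routines above dispatch entirely on hwdev->dma_mask, via the IS_PCIA64/IS_PCI32G/IS_PCI32L tests added in pci_bus_cvlink.h: a full 64-bit mask gets direct 64-bit translation, a mask of 32 to 63 bits gets 32-bit direct map, and anything narrower is rejected. A minimal stand-alone sketch of that classification (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the sn1 dma_mask tests (IS_PCIA64 /
 * IS_PCI32G / IS_PCI32L): classify a device by the width of the
 * DMA addresses it can drive. */
enum dma_class { DMA_64BIT, DMA_32BIT_DIRECT, DMA_UNSUPPORTED };

static enum dma_class classify_dma_mask(uint64_t dma_mask)
{
	if (dma_mask == 0xffffffffffffffffULL)
		return DMA_64BIT;		/* full 64-bit addressing */
	if (dma_mask >= 0xffffffffULL)
		return DMA_32BIT_DIRECT;	/* 32..63 bits: 32-bit direct map */
	return DMA_UNSUPPORTED;			/* <32 bits: rejected on SNIA64 */
}
```

Note that the real macros also honor the per-device isa64 flag set by SET_PCIA64(), which this sketch omits.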
diff -urN linux-davidm/arch/ia64/sn/sn1/sn1_asm.S linux-2.4.0-test12-lia/arch/ia64/sn/sn1/sn1_asm.S
--- linux-davidm/arch/ia64/sn/sn1/sn1_asm.S Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/arch/ia64/sn/sn1/sn1_asm.S Wed Dec 13 18:59:33 2000
@@ -6,22 +6,3 @@
#include <linux/config.h>
-#ifdef CONFIG_IA64_SGI_SYNERGY_1_0_HACKS
-// Code to work around a SYNERGY 1.0 bug.
-
- .align 16
- .global enable_fsb_hack
- .proc enable_fsb_hack
-enable_fsb_hack:
- movl r16=0xe000000000000000
- movl r17=0x4ffffffff0000000 /* only trap 0-256MB: covered by DTR0 */
- mov r20=0
- mov r21=1
- ;;
- mov dbr[r20]=r16
- mov dbr[r21]=r17
- ;;
- srlz.d
- br.ret.sptk.few rp
- .endp enable_fsb_hack
-#endif /* CONFIG_IA64_SGI_SYNERGY_1_0_HACKS */
diff -urN linux-davidm/drivers/net/eepro100.c linux-2.4.0-test12-lia/drivers/net/eepro100.c
--- linux-davidm/drivers/net/eepro100.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/drivers/net/eepro100.c Thu Dec 14 14:42:16 2000
@@ -43,7 +43,7 @@
static int txdmacount = 128;
static int rxdmacount /* = 0 */;
-#ifdef __ia64__
+#if defined(__ia64__) || defined(__alpha__) || defined(__sparc__)
/* align rx buffers to 2 bytes so that IP header is aligned */
# define RX_ALIGN
# define RxFD_ALIGNMENT __attribute__ ((aligned (2), packed))
@@ -53,11 +53,7 @@
/* Set the copy breakpoint for the copy-only-tiny-buffer Rx method.
Lower values use more memory, but are faster. */
-#if defined(__alpha__) || defined(__sparc__)
-static int rx_copybreak = 1518;
-#else
static int rx_copybreak = 200;
-#endif
/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 20;
diff -urN linux-davidm/drivers/scsi/qla1280.c linux-2.4.0-test12-lia/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/drivers/scsi/qla1280.c Thu Dec 14 14:42:57 2000
@@ -242,9 +242,6 @@
STATIC void qla1280_removeq(scsi_lu_t *q, srb_t *sp);
STATIC void qla1280_mem_free(scsi_qla_host_t *ha);
static void qla1280_do_dpc(void *p);
-#ifdef QLA1280_UNUSED
-static void qla1280_set_flags(char * s);
-#endif
static char *qla1280_get_token(char *, char *);
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0)
STATIC inline void mdelay(int);
diff -urN linux-davidm/drivers/usb/uhci.c linux-2.4.0-test12-lia/drivers/usb/uhci.c
--- linux-davidm/drivers/usb/uhci.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/drivers/usb/uhci.c Thu Dec 14 14:43:23 2000
@@ -71,46 +71,6 @@
/* If a transfer is still active after this much time, turn off FSBR */
#define IDLE_TIMEOUT (HZ / 20) /* 50 ms */
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
-
-static struct uhci *guhci;
-
-void
-disable_usb (void)
-{
- unsigned short cmd;
- unsigned int io_addr;
-
- if (guhci == NULL)
- return;
-
- io_addr = guhci->io_addr;
-
- cmd = inw (io_addr + USBCMD);
-
- outw(cmd & ~ USBCMD_RS, io_addr+USBCMD);
-
- while ((inw (io_addr + USBSTS) & USBSTS_HCH) == 0);
-}
-
-void
-reenable_usb (void)
-{
- unsigned int io_addr;
- unsigned short cmd;
-
- if (guhci == NULL)
- return;
-
- io_addr = guhci->io_addr;
-
- cmd = inw (io_addr + USBCMD);
-
- outw(cmd | USBCMD_RS, io_addr+USBCMD);
-}
-
-#endif /* CONFIG_ITANIUM_A1_SPECIFIC */
-
/*
* Only the USB core should call uhci_alloc_dev and uhci_free_dev
*/
diff -urN linux-davidm/fs/partitions/check.c linux-2.4.0-test12-lia/fs/partitions/check.c
--- linux-davidm/fs/partitions/check.c Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/fs/partitions/check.c Thu Dec 14 14:43:39 2000
@@ -32,7 +32,10 @@
#include "sun.h"
#include "ibm.h"
#include "ultrix.h"
-#include "efi.h"
+
+#ifdef CONFIG_EFI_PARTITION
+# include "efi.h"
+#endif
extern void device_init(void);
extern int *blk_size[];
diff -urN linux-davidm/include/asm-ia64/hw_irq.h linux-2.4.0-test12-lia/include/asm-ia64/hw_irq.h
--- linux-davidm/include/asm-ia64/hw_irq.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/hw_irq.h Thu Dec 14 14:43:48 2000
@@ -8,6 +8,7 @@
#include <linux/config.h>
+#include <linux/sched.h>
#include <linux/types.h>
#include <asm/machvec.h>
diff -urN linux-davidm/include/asm-ia64/machvec.h linux-2.4.0-test12-lia/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/machvec.h Thu Dec 14 14:44:03 2000
@@ -14,14 +14,9 @@
#include <linux/types.h>
/* forward declarations: */
-struct hw_interrupt_type;
-struct irq_desc;
-struct mm_struct;
+struct pci_dev;
struct pt_regs;
-struct task_struct;
-struct timeval;
-struct vm_area_struct;
-struct acpi_entry_iosapic;
+struct scatterlist;
typedef void ia64_mv_setup_t (char **);
typedef void ia64_mv_irq_init_t (void);
@@ -32,6 +27,18 @@
typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
typedef void ia64_mv_log_print_t (void);
typedef void ia64_mv_send_ipi_t (int, int, int, int);
+
+/* PCI-DMA interface: */
+typedef void ia64_mv_pci_dma_init (void);
+typedef void *ia64_mv_pci_alloc_consistent (struct pci_dev *, size_t, dma_addr_t *);
+typedef void ia64_mv_pci_free_consistent (struct pci_dev *, size_t, void *, dma_addr_t);
+typedef dma_addr_t ia64_mv_pci_map_single (struct pci_dev *, void *, size_t, int);
+typedef void ia64_mv_pci_unmap_single (struct pci_dev *, dma_addr_t, size_t, int);
+typedef int ia64_mv_pci_map_sg (struct pci_dev *, struct scatterlist *, int, int);
+typedef void ia64_mv_pci_unmap_sg (struct pci_dev *, struct scatterlist *, int, int);
+typedef void ia64_mv_pci_dma_sync_single (struct pci_dev *, dma_addr_t, size_t, int);
+typedef void ia64_mv_pci_dma_sync_sg (struct pci_dev *, struct scatterlist *, int, int);
+typedef unsigned long ia64_mv_pci_dma_address (struct scatterlist *);
/*
* WARNING: The legacy I/O space is _architected_. Platforms are
* expected to follow this architected model (see Section 10.7 in the
@@ -71,6 +78,16 @@
# define platform_log_print ia64_mv.log_print
# define platform_pci_fixup ia64_mv.pci_fixup
# define platform_send_ipi ia64_mv.send_ipi
+# define platform_pci_dma_init ia64_mv.dma_init
+# define platform_pci_alloc_consistent ia64_mv.alloc_consistent
+# define platform_pci_free_consistent ia64_mv.free_consistent
+# define platform_pci_map_single ia64_mv.map_single
+# define platform_pci_unmap_single ia64_mv.unmap_single
+# define platform_pci_map_sg ia64_mv.map_sg
+# define platform_pci_unmap_sg ia64_mv.unmap_sg
+# define platform_pci_dma_sync_single ia64_mv.sync_single
+# define platform_pci_dma_sync_sg ia64_mv.sync_sg
+# define platform_pci_dma_address ia64_mv.dma_address
# define platform_inb ia64_mv.inb
# define platform_inw ia64_mv.inw
# define platform_inl ia64_mv.inl
@@ -90,6 +107,16 @@
ia64_mv_cmci_handler_t *cmci_handler;
ia64_mv_log_print_t *log_print;
ia64_mv_send_ipi_t *send_ipi;
+ ia64_mv_pci_dma_init *dma_init;
+ ia64_mv_pci_alloc_consistent *alloc_consistent;
+ ia64_mv_pci_free_consistent *free_consistent;
+ ia64_mv_pci_map_single *map_single;
+ ia64_mv_pci_unmap_single *unmap_single;
+ ia64_mv_pci_map_sg *map_sg;
+ ia64_mv_pci_unmap_sg *unmap_sg;
+ ia64_mv_pci_dma_sync_single *sync_single;
+ ia64_mv_pci_dma_sync_sg *sync_sg;
+ ia64_mv_pci_dma_address *dma_address;
ia64_mv_inb_t *inb;
ia64_mv_inw_t *inw;
ia64_mv_inl_t *inl;
@@ -110,6 +137,16 @@
platform_cmci_handler, \
platform_log_print, \
platform_send_ipi, \
+ platform_pci_dma_init, \
+ platform_pci_alloc_consistent, \
+ platform_pci_free_consistent, \
+ platform_pci_map_single, \
+ platform_pci_unmap_single, \
+ platform_pci_map_sg, \
+ platform_pci_unmap_sg, \
+ platform_pci_dma_sync_single, \
+ platform_pci_dma_sync_sg, \
+ platform_pci_dma_address, \
platform_inb, \
platform_inw, \
platform_inl, \
@@ -126,6 +163,20 @@
# endif /* CONFIG_IA64_GENERIC */
/*
+ * Declare default routines which aren't declared anywhere else:
+ */
+extern ia64_mv_pci_dma_init swiotlb_init;
+extern ia64_mv_pci_alloc_consistent swiotlb_alloc_consistent;
+extern ia64_mv_pci_free_consistent swiotlb_free_consistent;
+extern ia64_mv_pci_map_single swiotlb_map_single;
+extern ia64_mv_pci_unmap_single swiotlb_unmap_single;
+extern ia64_mv_pci_map_sg swiotlb_map_sg;
+extern ia64_mv_pci_unmap_sg swiotlb_unmap_sg;
+extern ia64_mv_pci_dma_sync_single swiotlb_sync_single;
+extern ia64_mv_pci_dma_sync_sg swiotlb_sync_sg;
+extern ia64_mv_pci_dma_address swiotlb_dma_address;
+
+/*
* Define default versions so we can extend machvec for new platforms without having
* to update the machvec files for all existing platforms.
*/
@@ -152,6 +203,36 @@
#endif
#ifndef platform_send_ipi
# define platform_send_ipi ia64_send_ipi /* default to architected version */
+#endif
+#ifndef platform_pci_dma_init
+# define platform_pci_dma_init swiotlb_init
+#endif
+#ifndef platform_pci_alloc_consistent
+# define platform_pci_alloc_consistent swiotlb_alloc_consistent
+#endif
+#ifndef platform_pci_free_consistent
+# define platform_pci_free_consistent swiotlb_free_consistent
+#endif
+#ifndef platform_pci_map_single
+# define platform_pci_map_single swiotlb_map_single
+#endif
+#ifndef platform_pci_unmap_single
+# define platform_pci_unmap_single swiotlb_unmap_single
+#endif
+#ifndef platform_pci_map_sg
+# define platform_pci_map_sg swiotlb_map_sg
+#endif
+#ifndef platform_pci_unmap_sg
+# define platform_pci_unmap_sg swiotlb_unmap_sg
+#endif
+#ifndef platform_pci_dma_sync_single
+# define platform_pci_dma_sync_single swiotlb_sync_single
+#endif
+#ifndef platform_pci_dma_sync_sg
+# define platform_pci_dma_sync_sg swiotlb_sync_sg
+#endif
+#ifndef platform_pci_dma_address
+# define platform_pci_dma_address swiotlb_dma_address
#endif
#ifndef platform_inb
# define platform_inb __ia64_inb
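The #ifndef blocks above are the machvec convention: a platform header may #define platform_pci_map_single (and friends) to its own routine, and any hook left undefined falls back to the swiotlb default. A toy illustration of the pattern, with hypothetical names:

```c
#include <assert.h>

/* Sketch of the machvec default mechanism: platform_hook resolves to
 * a platform override if one was #defined earlier, otherwise to the
 * default (swiotlb_* in the real header). All names are illustrative. */
typedef int hook_t(int);

static int swiotlb_like_default(int x)
{
	return x + 1;	/* stands in for the generic swiotlb routine */
}

#ifndef platform_hook
# define platform_hook swiotlb_like_default	/* no override: use default */
#endif

static int call_hook(int x)
{
	return platform_hook(x);	/* resolved at compile time */
}
```

Because resolution happens in the preprocessor, non-generic kernels pay no indirection cost; only CONFIG_IA64_GENERIC routes through the ia64_mv function-pointer table.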
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.0-test12-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/offsets.h Thu Dec 14 14:44:18 2000
@@ -8,7 +8,7 @@
*/
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 3968 /* 0xf80 */
+#define IA64_TASK_SIZE 3360 /* 0xd20 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -20,7 +20,7 @@
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
#define IA64_TASK_THREAD_OFFSET 1456 /* 0x5b0 */
#define IA64_TASK_THREAD_KSP_OFFSET 1456 /* 0x5b0 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 3824 /* 0xef0 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 3216 /* 0xc90 */
#define IA64_TASK_PID_OFFSET 196 /* 0xc4 */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/page.h linux-2.4.0-test12-lia/include/asm-ia64/page.h
--- linux-davidm/include/asm-ia64/page.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/page.h Thu Dec 14 14:44:29 2000
@@ -40,9 +40,6 @@
extern void clear_page (void *page);
extern void copy_page (void *to, void *from);
-#define clear_user_page(page, vaddr) clear_page(page)
-#define copy_user_page(to, from, vaddr) copy_page(to, from)
-
# ifdef STRICT_MM_TYPECHECKS
/*
* These are used to make use of C type-checking..
diff -urN linux-davidm/include/asm-ia64/pci.h linux-2.4.0-test12-lia/include/asm-ia64/pci.h
--- linux-davidm/include/asm-ia64/pci.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/pci.h Thu Dec 14 14:44:39 2000
@@ -22,125 +22,42 @@
struct pci_dev;
-static inline void pcibios_set_master(struct pci_dev *dev)
+static inline void
+pcibios_set_master (struct pci_dev *dev)
{
/* No special bus mastering setup handling */
}
-static inline void pcibios_penalize_isa_irq(int irq)
+static inline void
+pcibios_penalize_isa_irq (int irq)
{
/* We don't do dynamic PCI IRQ allocation */
}
/*
- * Dynamic DMA mapping API.
+ * Dynamic DMA mapping API. See Documentation/DMA-mapping.txt for details.
*/
+#define pci_alloc_consistent platform_pci_alloc_consistent
+#define pci_free_consistent platform_pci_free_consistent
+#define pci_map_single platform_pci_map_single
+#define pci_unmap_single platform_pci_unmap_single
+#define pci_map_sg platform_pci_map_sg
+#define pci_unmap_sg platform_pci_unmap_sg
+#define pci_dma_sync_single platform_pci_dma_sync_single
+#define pci_dma_sync_sg platform_pci_dma_sync_sg
+#define sg_dma_address platform_pci_dma_address
/*
- * Allocate and map kernel buffer using consistent mode DMA for a device.
- * hwdev should be valid struct pci_dev pointer for PCI devices,
- * NULL for PCI-like buses (ISA, EISA).
- * Returns non-NULL cpu-view pointer to the buffer if successful and
- * sets *dma_addrp to the pci side dma address as well, else *dma_addrp
- * is undefined.
- */
-extern void *pci_alloc_consistent (struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle);
-
-/*
- * Free and unmap a consistent DMA buffer.
- * cpu_addr is what was returned from pci_alloc_consistent,
- * size must be the same as what as passed into pci_alloc_consistent,
- * and likewise dma_addr must be the same as what *dma_addrp was set to.
- *
- * References to the memory and mappings associated with cpu_addr/dma_addr
- * past this call are illegal.
- */
-extern void pci_free_consistent (struct pci_dev *hwdev, size_t size,
- void *vaddr, dma_addr_t dma_handle);
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.
- * The 32-bit bus address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory
- * until either pci_unmap_single or pci_dma_sync_single is performed.
- */
-extern dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction);
-
-/*
- * Unmap a single streaming mode DMA translation. The dma_addr and size
- * must match what was provided for in a previous pci_map_single call. All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guarenteed to see
- * whatever the device wrote there.
- */
-extern void pci_unmap_single (struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size, int direction);
-
-/*
- * Map a set of buffers described by scatterlist in streaming
- * mode for DMA. This is the scatter-gather version of the
- * above pci_map_single interface. Here the scatter gather list
- * elements are each tagged with the appropriate dma address
- * and length. They are obtained via sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- * DMA address/length pairs than there are SG table elements.
- * (for example via virtual mapping capabilities)
- * The routine returns the number of addr/length pairs actually
- * used, at most nents.
- *
- * Device ownership issues as mentioned above for pci_map_single are
- * the same here.
- */
-extern int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction);
-
-/*
- * Unmap a set of streaming mode DMA translations.
- * Again, cpu read rules concerning calls here are the same as for
- * pci_unmap_single() above.
- */
-extern void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction);
-
-/*
- * Make physical memory consistent for a single
- * streaming mode DMA translation after a transfer.
- *
- * If you perform a pci_map_single() but wish to interrogate the
- * buffer using the cpu, yet do not wish to teardown the PCI dma
- * mapping, you must call this function before doing so. At the
- * next point you give the PCI dma address back to the card, the
- * device again owns the buffer.
- */
-extern void pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction);
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA
- * translations after a transfer.
- *
- * The same as pci_dma_sync_single but for a scatter-gather list,
- * same rules and usage.
- */
-extern void pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction);
-
-/* Return whether the given PCI device DMA address mask can
- * be supported properly. For example, if your device can
- * only drive the low 24-bits during PCI bus mastering, then
+ * Return whether the given PCI device DMA address mask can be supported properly. For
+ * example, if your device can only drive the low 24-bits during PCI bus mastering, then
* you would pass 0x00ffffff as the mask to this function.
*/
static inline int
-pci_dma_supported(struct pci_dev *hwdev, dma_addr_t mask)
+pci_dma_supported (struct pci_dev *hwdev, dma_addr_t mask)
{
return 1;
}
-/* These macros should be used after a pci_map_sg call has been done
- * to get bus addresses of each of the SG entries and their lengths.
- * You should only work with the number of sg entries pci_map_sg
- * returns, or alternatively stop on the first sg_dma_len(sg) which
- * is 0.
- */
-#define sg_dma_address(sg) (virt_to_bus((sg)->address))
#define sg_dma_len(sg) ((sg)->length)
#endif /* _ASM_IA64_PCI_H */
diff -urN linux-davidm/include/asm-ia64/pgalloc.h linux-2.4.0-test12-lia/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/pgalloc.h Thu Dec 14 14:44:49 2000
@@ -15,6 +15,7 @@
#include <linux/config.h>
+#include <linux/mm.h>
#include <linux/threads.h>
#include <asm/mmu_context.h>
@@ -260,6 +261,73 @@
printk("flush_tlb_pgtables: can't flush across regions!!\n");
}
flush_tlb_range(mm, ia64_thash(start), ia64_thash(end));
+}
+
+/*
+ * Now for some cache flushing routines. This is the kind of stuff
+ * that can be very expensive, so try to avoid them whenever possible.
+ */
+
+/* Caches aren't brain-dead on the IA-64. */
+#define flush_cache_all() do { } while (0)
+#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_range(mm, start, end) do { } while (0)
+#define flush_cache_page(vma, vmaddr) do { } while (0)
+#define flush_page_to_ram(page) do { } while (0)
+
+extern void flush_icache_range (unsigned long start, unsigned long end);
+
+static inline void
+flush_dcache_page (struct page *page)
+{
+ clear_bit(PG_arch_1, &page->flags);
+}
+
+static inline void
+clear_user_page (void *addr, unsigned long vaddr, struct page *page)
+{
+ clear_page(addr);
+ flush_dcache_page(page);
+}
+
+static inline void
+copy_user_page (void *to, void *from, unsigned long vaddr, struct page *page)
+{
+ copy_page(to, from);
+ flush_dcache_page(page);
+}
+
+/*
+ * IA-64 doesn't have any external MMU info: the page tables contain
+ * all the necessary information. However, we can use this macro
+ * to pre-install (override) a PTE that we know is needed anyhow.
+ */
+static inline void
+update_mmu_cache (struct vm_area_struct *vma, unsigned long address, pte_t pte)
+{
+ struct page *page;
+
+ if ((vma->vm_flags & PROT_EXEC) == 0)
+ return; /* not an executable page... */
+
+ page = pte_page(pte);
+ address &= PAGE_MASK;
+
+ /*
+ * Avoid flushing pages that can't possibly contain code. All newly created
+ * anonymous pages are such pages. However, once the page gets swapped out and
+ * then read back in, the page may contain code (since the user may have written
+ * code into that page). Fortunately, page->mapping tells us which case applies:
+ * it's non-NULL if and only if the page is in the page cache (whether due to
+ * regular mappings or due to swap-cache pages).
+ */
+ if (!page->mapping)
+ return;
+
+ if (test_and_set_bit(PG_arch_1, &page->flags))
+ return;
+
+ flush_icache_range(address, address + PAGE_SIZE);
}
#endif /* _ASM_IA64_PGALLOC_H */
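The new update_mmu_cache()/flush_dcache_page() pair implements a lazy icache-flush protocol: PG_arch_1 records "icache is in sync for this page", writes clear it via flush_dcache_page(), and the first executable fault afterwards pays for exactly one flush_icache_range(). A simplified model of that state machine (all names are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the PG_arch_1-based deferred icache flush: flush at most
 * once per dirtying of the page, and only for executable mappings of
 * page-cache pages. */
struct fake_page {
	bool arch_1;	/* analogue of PG_arch_1: icache in sync */
	int flushes;	/* counts simulated flush_icache_range() calls */
};

static void fake_flush_dcache_page(struct fake_page *p)
{
	p->arch_1 = false;	/* page dirtied: must flush again later */
}

static void fake_update_mmu_cache(struct fake_page *p, bool executable,
				  bool in_page_cache)
{
	if (!executable || !in_page_cache)
		return;		/* cannot contain code we care about */
	if (p->arch_1)
		return;		/* already flushed since last dirtying */
	p->arch_1 = true;
	p->flushes++;		/* stands in for flush_icache_range() */
}
```

The in_page_cache test mirrors the page->mapping check in the patch: freshly created anonymous pages have no mapping and are skipped.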
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-test12-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/pgtable.h Thu Dec 14 14:50:34 2000
@@ -163,28 +163,6 @@
*/
#define page_address(page) ((page)->virtual)
-/*
- * Now for some cache flushing routines. This is the kind of stuff
- * that can be very expensive, so try to avoid them whenever possible.
- */
-
-/* Caches aren't brain-dead on the ia-64. */
-#define flush_cache_all() do { } while (0)
-#define flush_cache_mm(mm) do { } while (0)
-#define flush_cache_range(mm, start, end) do { } while (0)
-#define flush_cache_page(vma, vmaddr) do { } while (0)
-#define flush_page_to_ram(page) do { } while (0)
-#define flush_dcache_page(page) do { } while (0)
-#define flush_icache_range(start, end) do { } while (0)
-
-extern void ia64_flush_icache_page (unsigned long addr);
-
-#define flush_icache_page(vma,pg) \
-do { \
- if ((vma)->vm_flags & PROT_EXEC) \
- ia64_flush_icache_page((unsigned long) page_address(pg)); \
-} while (0)
-
/* Quick test to see if ADDR is a (potentially) valid physical address. */
static inline long
ia64_phys_addr_valid (unsigned long addr)
@@ -449,47 +427,6 @@
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern void paging_init (void);
-
-/*
- * IA-64 doesn't have any external MMU info: the page tables contain
- * all the necessary information. However, we can use this macro
- * to pre-install (override) a PTE that we know is needed anyhow.
- *
- * Asit says that on Itanium, it is generally faster to let the VHPT
- * walker pick up a newly installed PTE (and VHPT misses should be
- * extremely rare compared to normal misses). Also, since
- * pre-installing the PTE has the problem that we may evict another
- * TLB entry needlessly because we don't know for sure whether we need
- * to update the iTLB or dTLB, I tend to prefer this solution, too.
- * Also, this avoids nasty issues with forward progress (what if the
- * newly installed PTE gets replaced before we return to the previous
- * execution context?).
- *
- */
-#if 1
-# define update_mmu_cache(vma,address,pte)
-#else
-# define update_mmu_cache(vma,address,pte) \
-do { \
- /* \
- * This is usually not a win. We may end up polluting the \
- * dtlb with itlb entries and vice versa (e.g., consider stack \
- * pages that are normally marked executable). It would be \
- * better to insert the TLB entry for the TLB cache that we \
- * know needs the new entry. However, the update_mmu_cache() \
- * arguments don't tell us whether we got here through a data \
- * access or through an instruction fetch. \
- * \
- * If you re-enable this code, you must disable the ptc code in \
- * Entry 20 of the ivt. \
- */ \
- unsigned long flags; \
- \
- ia64_clear_ic(flags); \
- ia64_itc((vma->vm_flags & PROT_EXEC) ? 0x3 : 0x2, address, pte_val(pte), PAGE_SHIFT); \
- __restore_flags(flags); \
-} while (0)
-#endif
#define SWP_TYPE(entry) (((entry).val >> 1) & 0xff)
#define SWP_OFFSET(entry) (((entry).val << 1) >> 10)
diff -urN linux-davidm/include/asm-ia64/sal.h linux-2.4.0-test12-lia/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/sal.h Thu Dec 14 14:50:49 2000
@@ -505,19 +505,7 @@
ia64_sal_pci_config_read (u64 pci_config_addr, u64 size, u64 *value)
{
struct ia64_sal_retval isrv;
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- extern spinlock_t ivr_read_lock;
- unsigned long flags;
-
- /*
- * Avoid PCI configuration read/write overwrite -- A0 Interrupt loss workaround
- */
- spin_lock_irqsave(&ivr_read_lock, flags);
-#endif
SAL_CALL(isrv, SAL_PCI_CONFIG_READ, pci_config_addr, size, 0, 0, 0, 0, 0);
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- spin_unlock_irqrestore(&ivr_read_lock, flags);
-#endif
if (value)
*value = isrv.v0;
return isrv.status;
@@ -528,20 +516,8 @@
ia64_sal_pci_config_write (u64 pci_config_addr, u64 size, u64 value)
{
struct ia64_sal_retval isrv;
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- extern spinlock_t ivr_read_lock;
- unsigned long flags;
-
- /*
- * Avoid PCI configuration read/write overwrite -- A0 Interrupt loss workaround
- */
- spin_lock_irqsave(&ivr_read_lock, flags);
-#endif
SAL_CALL(isrv, SAL_PCI_CONFIG_WRITE, pci_config_addr, size, value,
0, 0, 0, 0);
-#ifdef CONFIG_ITANIUM_A1_SPECIFIC
- spin_unlock_irqrestore(&ivr_read_lock, flags);
-#endif
return isrv.status;
}
diff -urN linux-davidm/include/asm-ia64/sn/mmzone.h linux-2.4.0-test12-lia/include/asm-ia64/sn/mmzone.h
--- linux-davidm/include/asm-ia64/sn/mmzone.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/sn/mmzone.h Wed Dec 13 19:35:14 2000
@@ -6,6 +6,7 @@
#define _LINUX_ASM_SN_MMZONE_H
#include <asm/sn/mmzone_sn1.h>
+#include <asm/sn/sn_cpuid.h>
/*
* Memory is conceptually divided into chunks. A chunk is either
@@ -104,5 +105,7 @@
#define MAP_NR_SN1(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
#endif /* CONFIG_DISCONTIGMEM */
+
+#define numa_node_id() cpuid_to_cnodeid(smp_processor_id())
#endif /* !_LINUX_ASM_SN_MMZONE_H */
diff -urN linux-davidm/include/asm-ia64/sn/mmzone_sn1.h linux-2.4.0-test12-lia/include/asm-ia64/sn/mmzone_sn1.h
--- linux-davidm/include/asm-ia64/sn/mmzone_sn1.h Thu Dec 14 19:58:06 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/sn/mmzone_sn1.h Wed Dec 13 18:59:33 2000
@@ -5,8 +5,10 @@
* Copyright, 2000, Silicon Graphics, sprasad@engr.sgi.com
*/
-/* SN1 will first attempt a 64 cpu config = 16 nodes X 4 cpus */
-#define MAXNODES 16
+/* Maximum configuration supported by SNIA hardware. There are other
+ * restrictions that may limit us to a smaller max configuration.
+ */
+#define MAXNODES 128
#define MAXNASIDS 128
#define CHUNKSZ (64*1024*1024)
diff -urN linux-davidm/include/asm-ia64/sn/pci/pci_bus_cvlink.h linux-2.4.0-test12-lia/include/asm-ia64/sn/pci/pci_bus_cvlink.h
--- linux-davidm/include/asm-ia64/sn/pci/pci_bus_cvlink.h Thu Dec 14 19:58:07 2000
+++ linux-2.4.0-test12-lia/include/asm-ia64/sn/pci/pci_bus_cvlink.h Wed Dec 13 18:59:33 2000
@@ -10,12 +10,20 @@
#ifndef _ASM_SN_PCI_CVLINK_H
#define _ASM_SN_PCI_CVLINK_H
+#define SET_PCIA64(dev) \
+ (((struct sn1_device_sysdata *)((dev)->sysdata))->isa64) = 1
+#define IS_PCIA64(dev) (((dev)->dma_mask == 0xffffffffffffffffUL) || \
+ (((struct sn1_device_sysdata *)((dev)->sysdata))->isa64))
+#define IS_PCI32G(dev) ((dev)->dma_mask >= 0xffffffff)
+#define IS_PCI32L(dev) ((dev)->dma_mask < 0xffffffff)
+
struct sn1_widget_sysdata {
devfs_handle_t vhdl;
};
struct sn1_device_sysdata {
devfs_handle_t vhdl;
+ int isa64;
};
#endif /* _ASM_SN_PCI_CVLINK_H */
diff -urN linux-davidm/include/linux/highmem.h linux-2.4.0-test12-lia/include/linux/highmem.h
--- linux-davidm/include/linux/highmem.h Wed Dec 13 17:30:34 2000
+++ linux-2.4.0-test12-lia/include/linux/highmem.h Thu Dec 14 14:51:06 2000
@@ -45,7 +45,7 @@
/* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */
static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
{
- clear_user_page(kmap(page), vaddr);
+ clear_user_page(kmap(page), vaddr, page);
kunmap(page);
}
@@ -87,7 +87,7 @@
vfrom = kmap(from);
vto = kmap(to);
- copy_user_page(vto, vfrom, vaddr);
+ copy_user_page(vto, vfrom, vaddr, to);
kunmap(from);
kunmap(to);
}
diff -urN linux-davidm/include/linux/irq.h linux-2.4.0-test12-lia/include/linux/irq.h
--- linux-davidm/include/linux/irq.h Thu Dec 14 19:58:07 2000
+++ linux-2.4.0-test12-lia/include/linux/irq.h Thu Dec 14 14:51:17 2000
@@ -57,6 +57,7 @@
#include <asm/hw_irq.h> /* the arch dependent stuff */
extern unsigned int do_IRQ (unsigned long irq, struct pt_regs *regs);
+extern void do_IRQ_per_cpu (unsigned long irq, struct pt_regs *regs);
extern int handle_IRQ_event(unsigned int, struct pt_regs *, struct irqaction *);
extern int setup_irq(unsigned int , struct irqaction * );
diff -urN linux-davidm/kernel/ptrace.c linux-2.4.0-test12-lia/kernel/ptrace.c
--- linux-davidm/kernel/ptrace.c Wed Dec 6 18:33:42 2000
+++ linux-2.4.0-test12-lia/kernel/ptrace.c Thu Dec 14 14:52:18 2000
@@ -53,14 +53,14 @@
flush_cache_page(vma, addr);
if (write) {
- maddr = kmap(page);
- memcpy(maddr + (addr & ~PAGE_MASK), buf, len);
+ maddr = kmap(page) + (addr & ~PAGE_MASK);
+ memcpy(maddr, buf, len);
flush_page_to_ram(page);
- flush_icache_page(vma, page);
+ flush_icache_range((unsigned long) maddr, (unsigned long)maddr + len);
kunmap(page);
} else {
- maddr = kmap(page);
- memcpy(buf, maddr + (addr & ~PAGE_MASK), len);
+ maddr = kmap(page) + (addr & ~PAGE_MASK);
+ memcpy(buf, maddr, len);
flush_page_to_ram(page);
kunmap(page);
}
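The rewritten ptrace access path computes the in-page offset with `addr & ~PAGE_MASK` and then flushes the icache over only the `len` bytes actually written, instead of the whole page. A small sketch of the offset arithmetic, assuming a 16KB page size (one common ia64 configuration; the arithmetic is identical for any power-of-two page size):

```c
#include <assert.h>

/* Mock page-size constants; 16KB is assumed for illustration. */
#define MOCK_PAGE_SIZE 16384UL
#define MOCK_PAGE_MASK (~(MOCK_PAGE_SIZE - 1))

/* Offset of addr within its page: the flush range then starts at
 * kmap(page) + this offset and is exactly len bytes long. */
static unsigned long write_offset_in_page(unsigned long addr)
{
    return addr & ~MOCK_PAGE_MASK;
}
```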
diff -urN linux-davidm/mm/memory.c linux-2.4.0-test12-lia/mm/memory.c
--- linux-davidm/mm/memory.c Thu Dec 14 19:58:07 2000
+++ linux-2.4.0-test12-lia/mm/memory.c Thu Dec 14 14:52:29 2000
@@ -1030,7 +1030,6 @@
return -1;
flush_page_to_ram(page);
- flush_icache_page(vma, page);
}
mm->rss++;
@@ -1118,7 +1117,6 @@
* handle that later.
*/
flush_page_to_ram(new_page);
- flush_icache_page(vma, new_page);
entry = mk_pte(new_page, vma->vm_page_prot);
if (write_access) {
entry = pte_mkwrite(pte_mkdirty(entry));
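These flush_icache_page() calls can go away because of the "lazy execute bit" approach mentioned in the changelog: a freshly created anonymous page is mapped without execute permission, and the icache is flushed only if and when the process faults trying to execute from it. A toy model of the idea (field and function names are illustrative):

```c
#include <assert.h>

/* Toy model of lazy icache flushing keyed on the PTE execute bit. */
struct mock_pte { int present, exec; };
static int icache_flushes;

static void map_anon_page(struct mock_pte *pte)
{
    pte->present = 1;
    pte->exec = 0;      /* defer: no flush, no exec permission yet */
}

static void exec_fault(struct mock_pte *pte)
{
    icache_flushes++;   /* flush only when code is actually run */
    pte->exec = 1;
}
```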
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.0-test12)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (29 preceding siblings ...)
2000-12-15 5:00 ` [Linux-ia64] kernel update (relative to 2.4.0-test12) David Mosberger
@ 2000-12-15 22:43 ` Nathan Straz
2001-01-09 9:48 ` [Linux-ia64] kernel update (relative to 2.4.0) David Mosberger
` (184 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Nathan Straz @ 2000-12-15 22:43 UTC (permalink / raw)
To: linux-ia64
On Thu, Dec 14, 2000 at 09:00:58PM -0800, David Mosberger wrote:
> in files linux-2.4.0-test12-ia64-001214.diff*
>
> supposed to. Thus, I'd like to see everyone give this kernel a good
> workout and report any new problems.
The kernel boots on my B1 BigSur, but it locks up as soon as I touch the
root file system after remounting it rw. Sometimes I have SysRq access
for a few seconds, sometimes not. No messages, no oops, just a good
solid hang. Anyone else having trouble?
I booted up with init=/bin/bash and followed along with
/etc/rc.d/rc.sysinit.
mount -n -o remount,rw / // Works fine
>/etc/mtab // locks the system.
I tried "rm /etc/mtab" for the fun of it (after remounting) and it
locked up, but after the system came back up and fscked, it found the
unlinked inode and placed it in /lost+found; all data was intact.
Anyone else running into problems?
--
Nate Straz nstraz@sgi.com
sgi, inc http://www.sgi.com/
Linux Test Project http://oss.sgi.com/projects/ltp/
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.0)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (30 preceding siblings ...)
2000-12-15 22:43 ` Nathan Straz
@ 2001-01-09 9:48 ` David Mosberger
2001-01-09 11:05 ` Sapariya Manish.j
` (183 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-01-09 9:48 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.0-ia64-010109.diff*
What changed since last time:
o Stephane's latest perfmon support
o Asit: update SAL header file for v3.0 and update MCA code accordingly.
o Jonathan Nicklin: Move IPI operation word into per-CPU data
structure to avoid cache line bouncing.
o Sync up with BJ Numa's latest qla1280/12160 SCSI driver
o Updates for 2.4.0, including new-style Makefiles.
o Fix & clean up IA-32 version of execve() (Don, you may want to double
check this, though it does work well for me.)
o Clean up interrupt register initialization (and do it on all CPUs, not
just the boot processor)
o Use a "lazy execute bit" approach in the PTEs to avoid flushing the cache
for newly created anonymous pages.
o Be more strict about enforcing the rule that no vm-area may cross
unimplemented address space. Also enforce 4GB addr limit for 32-bit
processes.
o Serialize SAL calls even on UP; also on MP interrupts are now disabled
while we're in a SAL call (again to enforce serialization)
This kernel has been tested on Lions, Big Surs, and the HP simulator.
In particular, I used it to compile kernels on a 4-way machine for
hours and hours with a concurrency level of five and didn't encounter
any problems, so I believe it to be fairly solid. But as always YMMV.
Enjoy,
--david
PS: As always, the diff below is only a (very rough) approximation of
what changed since the last IA-64 patch. To get the real sources,
get Linus's 2.4.0 tree and apply the above patch on top of it.
diff -urN linux-davidm/arch/ia64/Makefile linux-2.4.0-lia/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile Tue Jan 9 00:09:50 2001
+++ linux-2.4.0-lia/arch/ia64/Makefile Mon Jan 8 23:37:12 2001
@@ -5,7 +5,7 @@
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
-# Copyright (C) 1998-2000 by David Mosberger-Tang <davidm@hpl.hp.com>
+# Copyright (C) 1998-2001 by David Mosberger-Tang <davidm@hpl.hp.com>
#
NM := $(CROSS_COMPILE)nm -B
@@ -53,7 +53,7 @@
endif
ifdef CONFIG_IA64_SGI_SN1
-CFLAGS += -DBRINGUP
+ CFLAGS += -DBRINGUP
SUBDIRS := arch/$(ARCH)/sn/sn1 \
arch/$(ARCH)/sn \
arch/$(ARCH)/sn/io \
@@ -120,8 +120,6 @@
@$(MAKEBOOT) srmboot
archclean:
- @$(MAKE) -C arch/$(ARCH)/kernel clean
- @$(MAKE) -C arch/$(ARCH)/tools clean
@$(MAKEBOOT) clean
archmrproper:
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Tue Jan 9 00:09:50 2001
+++ linux-2.4.0-lia/arch/ia64/config.in Mon Jan 8 23:37:40 2001
@@ -18,6 +18,7 @@
comment 'General setup'
define_bool CONFIG_IA64 y
+define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data structures to 64 bytes
define_bool CONFIG_ISA n
define_bool CONFIG_EISA n
diff -urN linux-davidm/arch/ia64/dig/setup.c linux-2.4.0-lia/arch/ia64/dig/setup.c
--- linux-davidm/arch/ia64/dig/setup.c Tue Jan 9 00:09:50 2001
+++ linux-2.4.0-lia/arch/ia64/dig/setup.c Mon Oct 30 22:28:55 2000
@@ -95,14 +95,3 @@
outb(0xff, 0xA1);
outb(0xff, 0x21);
}
-
-void
-dig_irq_init (void)
-{
- /*
- * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
- * enabled.
- */
- outb(0xff, 0xA1);
- outb(0xff, 0x21);
-}
diff -urN linux-davidm/arch/ia64/ia32/binfmt_elf32.c linux-2.4.0-lia/arch/ia64/ia32/binfmt_elf32.c
--- linux-davidm/arch/ia64/ia32/binfmt_elf32.c Tue Jan 9 00:09:50 2001
+++ linux-2.4.0-lia/arch/ia64/ia32/binfmt_elf32.c Mon Jan 8 23:37:53 2001
@@ -98,6 +95,7 @@
current->thread.map_base = 0x40000000;
current->thread.task_size = 0xc0000000; /* use what Linux/x86 uses... */
+ set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
/* setup ia32 state for ia32_load_state */
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.0-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/ia32/sys_ia32.c Mon Jan 8 23:38:02 2001
@@ -68,85 +68,77 @@
extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long);
static int
-nargs(unsigned int arg, char **ap)
+nargs (unsigned int arg, char **ap)
{
int n, err, addr;
+ if (!arg)
+ return 0;
+
n = 0;
do {
err = get_user(addr, (int *)A(arg));
if (err)
return err;
- if (ap) { /* no access_ok needed, we allocated */
- err = __put_user((char *)A(addr), ap++);
- if (err)
- return err;
- }
+ if (ap)
+ *ap++ = (char *) A(addr);
arg += sizeof(unsigned int);
n++;
} while (addr);
- return(n - 1);
+ return n - 1;
}
asmlinkage long
-sys32_execve(
-char *filename,
-unsigned int argv,
-unsigned int envp,
-int dummy3,
-int dummy4,
-int dummy5,
-int dummy6,
-int dummy7,
-int stack)
+sys32_execve (char *filename, unsigned int argv, unsigned int envp,
+ int dummy3, int dummy4, int dummy5, int dummy6, int dummy7,
+ int stack)
{
struct pt_regs *regs = (struct pt_regs *)&stack;
+ unsigned long old_map_base, old_task_size;
char **av, **ae;
int na, ne, len;
long r;
na = nargs(argv, NULL);
if (na < 0)
- return(na);
+ return na;
ne = nargs(envp, NULL);
if (ne < 0)
- return(ne);
+ return ne;
len = (na + ne + 2) * sizeof(*av);
- /*
- * kmalloc won't work because the `sys_exec' code will attempt
- * to do a `get_user' on the arg list and `get_user' will fail
- * on a kernel address (simplifies `get_user'). Instead we
- * do an mmap to get a user address. Note that since a successful
- * `execve' frees all current memory we only have to do an
- * `munmap' if the `execve' failes.
- */
- down(&current->mm->mmap_sem);
-
- av = (char **) do_mmap_pgoff(0, 0UL, len, PROT_READ | PROT_WRITE,
- MAP_PRIVATE | MAP_ANONYMOUS, 0);
-
- up(&current->mm->mmap_sem);
+ av = kmalloc(len, GFP_KERNEL);
+ if (!av)
+ return -ENOMEM;
- if (IS_ERR(av))
- return (long)av;
ae = av + na + 1;
- r = __put_user(0, (av + na));
- if (r)
- goto out;
- r = __put_user(0, (ae + ne));
- if (r)
- goto out;
+ av[na] = NULL;
+ ae[ne] = NULL;
+
r = nargs(argv, av);
if (r < 0)
goto out;
r = nargs(envp, ae);
if (r < 0)
goto out;
+
+ old_map_base = current->thread.map_base;
+ old_task_size = current->thread.task_size;
+
+ /* we may be exec'ing a 64-bit process: reset map base & task-size: */
+ current->thread.map_base = DEFAULT_MAP_BASE;
+ current->thread.task_size = DEFAULT_TASK_SIZE;
+
+ set_fs(KERNEL_DS);
r = sys_execve(filename, av, ae, regs);
- if (r < 0)
-out:
- sys_munmap((unsigned long) av, len);
- return(r);
+ if (r < 0) {
+ /* oops, execve failed, switch back to old map base & task-size: */
+ current->thread.map_base = old_map_base;
+ current->thread.task_size = old_task_size;
+ out:
+ kfree(av);
+ }
+ set_fs(USER_DS); /* establish new task-size as the address-limit */
+ return r;
}
static inline int
@@ -179,7 +171,7 @@
struct stat s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newstat(filename, &s);
set_fs (old_fs);
if (putstat (statbuf, &s))
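For reference, nargs() above walks the IA-32 argv/envp array, which is a sequence of 32-bit user pointers ending in a NULL entry, and returns the count excluding the terminator so the kernel can size the 64-bit char * array it hands to sys_execve(). A hypothetical userspace sketch of the same count (no get_user(), since there is no user/kernel boundary here):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of what nargs() computes: count 32-bit pointer slots
 * up to, but not including, the NULL terminator. */
static int count_args32(const uint32_t *arg)
{
    int n = 0;
    if (!arg)
        return 0;
    while (arg[n])          /* stop at the NULL terminator */
        n++;
    return n;               /* terminator not counted, as in nargs() */
}
```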
diff -urN linux-davidm/arch/ia64/kernel/Makefile linux-2.4.0-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/Makefile Mon Jan 8 23:39:04 2001
@@ -11,7 +11,9 @@
O_TARGET := kernel.o
-obj-y := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_ia64.o irq_sapic.o ivt.o \
+export-objs := ia64_ksyms.o
+
+obj-y := acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_ia64.o irq_sapic.o ivt.o \
machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
@@ -21,9 +23,5 @@
obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_IA64_MCA) += mca.o mca_asm.o
obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
-
-export-objs := ia64_ksyms.o
-
-clean::
include $(TOPDIR)/Rules.make
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/entry.S Mon Jan 8 23:39:39 2001
@@ -586,21 +573,30 @@
back_from_resched:
{ .mii
adds r2=IA64_TASK_NEED_RESCHED_OFFSET,r13
- mov r3=ip
+ mov r3=ip // r3 <- &back_from_resched
adds r14=IA64_TASK_SIGPENDING_OFFSET,r13
}
+#ifdef CONFIG_PERFMON
+ adds r15=IA64_TASK_PFM_NOTIFY,r13
+#endif
;;
+#ifdef CONFIG_PERFMON
+ ld8 r15=[r15]
+#endif
ld8 r2=[r2]
ld4 r14=[r14]
mov rp=r3 // arrange for schedule() to return to back_from_resched
;;
- cmp.ne p6,p0=r2,r0
cmp.ne p2,p0=r14,r0 // NOTE: pKern is an alias for p2!!
- srlz.d
-(p6) br.call.spnt.many b6=invoke_schedule // ignore return value
-2:
- // check & deliver pending signals:
-(p2) br.call.spnt.few rp=handle_signal_delivery
+#ifdef CONFIG_PERFMON
+ cmp.ne p6,p0=r15,r0 // current->task.pfm_notify != 0?
+#endif
+ cmp.ne p7,p0=r2,r0 // current->need_resched != 0?
+#ifdef CONFIG_PERFMON
+(p6) br.call.spnt.many b6=pfm_overflow_notify
+#endif
+(p7) br.call.spnt.many b7=invoke_schedule
+(p2) br.call.spnt.many rp=handle_signal_delivery // check & deliver pending signals
.ret9:
#ifdef CONFIG_IA64_SOFTSDV_HACKS
// Check for lost ticks
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.0-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/ia64_ksyms.c Mon Jan 8 23:39:53 2001
@@ -45,6 +45,15 @@
EXPORT_SYMBOL(disable_irq);
EXPORT_SYMBOL(disable_irq_nosync);
+#include <asm/semaphore.h>
+EXPORT_SYMBOL_NOVERS(__down);
+EXPORT_SYMBOL_NOVERS(__down_interruptible);
+EXPORT_SYMBOL_NOVERS(__down_trylock);
+EXPORT_SYMBOL_NOVERS(__up);
+EXPORT_SYMBOL_NOVERS(__down_read_failed);
+EXPORT_SYMBOL_NOVERS(__down_write_failed);
+EXPORT_SYMBOL_NOVERS(__rwsem_wake);
+
#include <asm/page.h>
EXPORT_SYMBOL(clear_page);
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.4.0-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/irq_ia64.c Mon Jan 8 23:40:04 2001
@@ -147,13 +147,6 @@
void __init
init_IRQ (void)
{
- /*
- * Disable all local interrupts
- */
- ia64_set_itv(0, 1);
- ia64_set_lrr0(0, 1);
- ia64_set_lrr1(0, 1);
-
irq_desc[IA64_SPURIOUS_INT].handler = &irq_type_ia64_sapic;
#ifdef CONFIG_SMP
/*
@@ -163,14 +156,7 @@
irq_desc[IPI_IRQ].handler = &irq_type_ia64_sapic;
setup_irq(IPI_IRQ, &ipi_irqaction);
#endif
-
- ia64_set_pmv(1 << 16);
- ia64_set_cmcv(CMC_IRQ); /* XXX fix me */
-
platform_irq_init();
-
- /* clear TPR to enable all interrupt classes: */
- ia64_set_tpr(0);
}
void
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.0-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/ivt.S Thu Jan 4 23:05:50 2001
@@ -504,6 +504,7 @@
mov r28=ar.ccv // save ar.ccv
;;
1: ld8 r18=[r17]
+ ;;
# if defined(CONFIG_IA32_SUPPORT) && \
(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC))
//
@@ -511,7 +512,6 @@
// If the PTE is indicates the page is not present, then just turn this into a
// page fault.
//
- ;;
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
(p6) br.sptk page_fault // page wasn't present
# endif
diff -urN linux-davidm/arch/ia64/kernel/mca.c linux-2.4.0-lia/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/mca.c Mon Jan 8 23:40:28 2001
@@ -27,6 +27,7 @@
#include <asm/mca.h>
#include <asm/irq.h>
+#include <asm/machvec.h>
typedef struct ia64_fptr {
@@ -235,13 +236,15 @@
if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
SAL_MC_PARAM_MECHANISM_INT,
IA64_MCA_RENDEZ_INT_VECTOR,
- IA64_MCA_RENDEZ_TIMEOUT))
+ IA64_MCA_RENDEZ_TIMEOUT,
+ 0))
return;
/* Register the wakeup interrupt vector with SAL */
if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
SAL_MC_PARAM_MECHANISM_INT,
IA64_MCA_WAKEUP_INT_VECTOR,
+ 0,
0))
return;
@@ -543,8 +546,7 @@
cmci_handler_platform(cmc_irq, arg, ptregs);
/* Clear the CMC SAL logs now that they have been saved in the OS buffer */
- ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM);
+ ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC);
}
/*
@@ -618,8 +620,7 @@
init_handler_platform(regs); /* call platform specific routines */
/* Clear the INIT SAL logs now that they have been saved in the OS buffer */
- ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM);
+ ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT);
}
/*
@@ -658,7 +659,7 @@
/* Get the process state information */
log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type);
- if (!(total_len=ia64_sal_get_state_info(sal_info_type, sal_sub_info_type ,(u64 *)log_buffer)))
+ if (!(total_len=ia64_sal_get_state_info(sal_info_type,(u64 *)log_buffer)))
prfunc("ia64_mca_log_get : Getting processor log failed\n");
IA64_MCA_DEBUG("ia64_log_get: retrieved %d bytes of error information\n",total_len);
@@ -683,7 +684,7 @@
void
ia64_log_clear(int sal_info_type, int sal_sub_info_type, int clear_os_buffer, prfunc_t prfunc)
{
- if (ia64_sal_clear_state_info(sal_info_type, sal_sub_info_type))
+ if (ia64_sal_clear_state_info(sal_info_type))
prfunc("ia64_mca_log_get : Clearing processor log failed\n");
if (clear_os_buffer) {
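The mca.c changes track the SAL 3.0 calling convention: ia64_sal_mc_set_params() grows an extra argument, and ia64_sal_clear_state_info() loses its sub-info type, so a single call now clears the whole record (processor and platform) for an event type instead of two per-subtype calls. The stub below models only the new one-argument shape; the enum values and names are illustrative, not taken from the SAL specification.

```c
#include <assert.h>

/* Illustrative stand-ins for the SAL event-record types. */
enum { MOCK_SAL_INFO_TYPE_MCA, MOCK_SAL_INFO_TYPE_INIT, MOCK_SAL_INFO_TYPE_CMC };
static int clear_calls;

/* Mock of the SAL 3.0 style call: one argument, clears the whole record
 * for the given event type. Returns 0 for success, like a SAL status. */
static long mock_sal_clear_state_info(int sal_info_type)
{
    (void)sal_info_type;
    clear_calls++;
    return 0;
}
```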
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.0-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/mca_asm.S Wed Nov 15 17:57:45 2000
@@ -7,7 +7,6 @@
// 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp
// kstack, switch modes, jump to C INIT handler
//
-#include <linux/config.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.0-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/perfmon.c Mon Jan 8 23:40:51 2001
@@ -11,53 +11,35 @@
*/
#include <linux/config.h>
-
#include <linux/kernel.h>
-#include <linux/init.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/smp_lock.h>
#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <linux/vmalloc.h>
+#include <linux/wrapper.h>
+#include <linux/mm.h>
+#include <asm/bitops.h>
+#include <asm/efi.h>
#include <asm/errno.h>
#include <asm/hw_irq.h>
+#include <asm/page.h>
+#include <asm/pal.h>
+#include <asm/perfmon.h>
+#include <asm/pgtable.h>
#include <asm/processor.h>
+#include <asm/signal.h>
#include <asm/system.h>
#include <asm/uaccess.h>
-#include <asm/pal.h>
-
-/* Long blurb on how this works:
- * We set dcr.pp, psr.pp, and the appropriate pmc control values with
- * this. Notice that we go about modifying _each_ task's pt_regs to
- * set cr_ipsr.pp. This will start counting when "current" does an
- * _rfi_. Also, since each task's cr_ipsr.pp, and cr_ipsr is inherited
- * across forks, we do _not_ need additional code on context
- * switches. On stopping of the counters we dont need to go about
- * changing every task's cr_ipsr back to where it wuz, because we can
- * just set pmc[0]=1. But we do it anyways becuase we will probably
- * add thread specific accounting later.
- *
- * The obvious problem with this is that on SMP systems, it is a bit
- * of work (when someone wants to do it:-)) - it would be easier if we
- * just added code to the context-switch path, but if we wanted to support
- * per-thread accounting, the context-switch path might be long unless
- * we introduce a flag in the task_struct. Right now, the following code
- * will NOT work correctly on MP (for more than one reason:-)).
- *
- * The short answer is that to make this work on SMP, we would need
- * to lock the run queue to ensure no context switches, send
- * an IPI to each processor, and in that IPI handler, set processor regs,
- * and just modify the psr bit of only the _current_ thread, since we have
- * modified the psr bit correctly in the kernel stack for every process
- * which is not running. Also, we need pmd arrays per-processor, and
- * the READ_PMD command will need to get values off of other processors.
- * IPIs are the answer, irrespective of what the question is. Might
- * crash on SMP systems without the lock_kernel().
- */
#ifdef CONFIG_PERFMON
-#define MAX_PERF_COUNTER 4 /* true for Itanium, at least */
+#define PFM_VERSION "0.2"
+#define PFM_SMPL_HDR_VERSION 1
+
#define PMU_FIRST_COUNTER 4 /* first generic counter */
#define PFM_WRITE_PMCS 0xa0
@@ -67,6 +49,8 @@
#define PFM_START 0xa4
#define PFM_ENABLE 0xa5 /* unfreeze only */
#define PFM_DISABLE 0xa6 /* freeze only */
+#define PFM_RESTART 0xcf
+#define PFM_CREATE_CONTEXT 0xa7
/*
* Those 2 are just meant for debugging. I considered using sysctl() for
* that but it is a little bit too pervasive. This solution is at least
@@ -75,101 +59,869 @@
#define PFM_DEBUG_ON 0xe0
#define PFM_DEBUG_OFF 0xe1
+
+/*
+ * perfmon API flags
+ */
+#define PFM_FL_INHERIT_NONE 0x00 /* never inherit a context across fork (default) */
+#define PFM_FL_INHERIT_ONCE 0x01 /* clone pfm_context only once across fork() */
+#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */
+#define PFM_FL_SMPL_OVFL_NOBLOCK 0x04 /* do not block on sampling buffer overflow */
+#define PFM_FL_SYSTEMWIDE 0x08 /* create a systemwide context */
+
+/*
+ * PMC API flags
+ */
+#define PFM_REGFL_OVFL_NOTIFY 1 /* send notification on overflow */
+
+/*
+ * Private flags and masks
+ */
+#define PFM_FL_INHERIT_MASK (PFM_FL_INHERIT_NONE|PFM_FL_INHERIT_ONCE|PFM_FL_INHERIT_ALL)
+
#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
#else
#define cpu_is_online(i) 1
#endif
-#define PMC_IS_IMPL(i) (pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1))))
-#define PMD_IS_IMPL(i) (pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
+#define PMC_IS_IMPL(i) (i < pmu_conf.num_pmcs && pmu_conf.impl_regs[i>>6] & (1UL << (i & 63)))
+#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1UL << (i & 63)))
#define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
#define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
+/* This is the Itanium-specific PMC layout for counter config */
+typedef struct {
+ unsigned long pmc_plm:4; /* privilege level mask */
+ unsigned long pmc_ev:1; /* external visibility */
+ unsigned long pmc_oi:1; /* overflow interrupt */
+ unsigned long pmc_pm:1; /* privileged monitor */
+ unsigned long pmc_ig1:1; /* reserved */
+ unsigned long pmc_es:7; /* event select */
+ unsigned long pmc_ig2:1; /* reserved */
+ unsigned long pmc_umask:4; /* unit mask */
+ unsigned long pmc_thres:3; /* threshold */
+ unsigned long pmc_ig3:1; /* reserved (missing from table on p6-17) */
+ unsigned long pmc_ism:2; /* instruction set mask */
+ unsigned long pmc_ig4:38; /* reserved */
+} pmc_counter_reg_t;
+
+/* test for EAR/BTB configuration */
+#define PMU_DEAR_EVENT 0x67
+#define PMU_IEAR_EVENT 0x23
+#define PMU_BTB_EVENT 0x11
+
+#define PMC_IS_DEAR(a) (((pmc_counter_reg_t *)(a))->pmc_es == PMU_DEAR_EVENT)
+#define PMC_IS_IEAR(a) (((pmc_counter_reg_t *)(a))->pmc_es == PMU_IEAR_EVENT)
+#define PMC_IS_BTB(a) (((pmc_counter_reg_t *)(a))->pmc_es == PMU_BTB_EVENT)
+
/*
- * this structure needs to be enhanced
+ * This header is at the beginning of the sampling buffer returned to the user.
+ * It is exported as Read-Only at this point. It is directly followed by the
+ * first record.
*/
typedef struct {
- unsigned long pfr_reg_num; /* which register */
- unsigned long pfr_reg_value; /* configuration (PMC) or initial value (PMD) */
- unsigned long pfr_reg_reset; /* reset value on overflow (PMD) */
- void *pfr_smpl_buf; /* pointer to user buffer for EAR/BTB */
- unsigned long pfr_smpl_size; /* size of user buffer for EAR/BTB */
- pid_t pfr_notify_pid; /* process to notify */
- int pfr_notify_sig; /* signal for notification, 0=no notification */
-} perfmon_req_t;
+ int hdr_version; /* could be used to differentiate formats */
+ int hdr_reserved;
+ unsigned long hdr_entry_size; /* size of one entry in bytes */
+ unsigned long hdr_count; /* how many valid entries */
+ unsigned long hdr_pmds; /* which pmds are recorded */
+} perfmon_smpl_hdr_t;
-#if 0
+/*
+ * Each entry in the buffer starts with the following header.
+ * The header is directly followed by the PMDs to be saved, in increasing index order:
+ * PMD4, PMD5, .... How many PMDs are present is determined by the tool, which must
+ * keep track of it when generating the final trace file.
+ */
typedef struct {
- unsigned long pmu_reg_data; /* generic PMD register */
- unsigned long pmu_reg_num; /* which register number */
-} perfmon_reg_t;
-#endif
+ int pid; /* identification of process */
+ int cpu; /* which cpu was used */
+ unsigned long rate; /* initial value of this counter */
+ unsigned long stamp; /* timestamp */
+ unsigned long ip; /* where the overflow interrupt happened */
+ unsigned long regs; /* which registers overflowed (up to 64) */
+} perfmon_smpl_entry_t;
/*
- * This structure is initialize at boot time and contains
+ * There is one such data structure per perfmon context. It is used to describe the
+ * sampling buffer. It is to be shared among siblings whereas the pfm_context isn't.
+ * Therefore we maintain a refcnt which is incremented on fork().
+ * This buffer is private to the kernel; only the actual sampling buffer, including its
+ * header, is exposed to the user. This construct allows us to export the buffer read-write,
+ * if needed, without worrying about security problems.
+ */
+typedef struct {
+ atomic_t psb_refcnt; /* how many users for the buffer */
+ int reserved;
+ void *psb_addr; /* points to location of first entry */
+ unsigned long psb_entries; /* maximum number of entries */
+ unsigned long psb_size; /* aligned size of buffer */
+ unsigned long psb_index; /* next free entry slot */
+ unsigned long psb_entry_size; /* size of each entry including entry header */
+ perfmon_smpl_hdr_t *psb_hdr; /* points to sampling buffer header */
+} pfm_smpl_buffer_desc_t;
+
+
+/*
+ * This structure is initialized at boot time and contains
* a description of the PMU main characteristic as indicated
* by PAL
*/
typedef struct {
+ unsigned long pfm_is_disabled; /* nonzero if perfmon is disabled (PMU not usable) */
unsigned long perf_ovfl_val; /* overflow value for generic counters */
unsigned long max_counters; /* upper limit on counter pair (PMC/PMD) */
+ unsigned long num_pmcs; /* highest PMC implemented (may have holes) */
+ unsigned long num_pmds; /* highest PMD implemented (may have holes) */
unsigned long impl_regs[16]; /* buffer used to hold implemented PMC/PMD mask */
} pmu_config_t;
+#define PERFMON_IS_DISABLED() pmu_conf.pfm_is_disabled
+
+typedef struct {
+ __u64 val; /* virtual 64bit counter value */
+ __u64 ival; /* initial value from user */
+ __u64 smpl_rval; /* reset value on sampling overflow */
+ __u64 ovfl_rval; /* reset value on overflow */
+ int flags; /* notify/do not notify */
+} pfm_counter_t;
+#define PMD_OVFL_NOTIFY(ctx, i) ((ctx)->ctx_pmds[i].flags & PFM_REGFL_OVFL_NOTIFY)
+
+/*
+ * perfmon context. One per process, is cloned on fork() depending on inheritance flags
+ */
+typedef struct {
+ unsigned int inherit:2; /* inherit mode */
+ unsigned int noblock:1; /* block/don't block on overflow with notification */
+ unsigned int system:1; /* do system wide monitoring */
+ unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */
+ unsigned int reserved:27;
+} pfm_context_flags_t;
+
+typedef struct pfm_context {
+
+ pfm_smpl_buffer_desc_t *ctx_smpl_buf; /* sampling buffer descriptor, if any */
+ unsigned long ctx_dear_counter; /* which PMD holds D-EAR */
+ unsigned long ctx_iear_counter; /* which PMD holds I-EAR */
+ unsigned long ctx_btb_counter; /* which PMD holds BTB */
+
+ pid_t ctx_notify_pid; /* who to notify on overflow */
+ int ctx_notify_sig; /* XXX: SIGPROF or other */
+ pfm_context_flags_t ctx_flags; /* block/noblock */
+ pid_t ctx_creator; /* pid of creator (debug) */
+ unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
+ unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+
+ struct semaphore ctx_restart_sem; /* use for blocking notification mode */
+
+ pfm_counter_t ctx_pmds[IA64_NUM_PMD_COUNTERS]; /* XXX: size should be dynamic */
+} pfm_context_t;
+
+#define ctx_fl_inherit ctx_flags.inherit
+#define ctx_fl_noblock ctx_flags.noblock
+#define ctx_fl_system ctx_flags.system
+#define ctx_fl_frozen ctx_flags.frozen
+
+#define CTX_IS_DEAR(c,n) ((c)->ctx_dear_counter == (n))
+#define CTX_IS_IEAR(c,n) ((c)->ctx_iear_counter == (n))
+#define CTX_IS_BTB(c,n) ((c)->ctx_btb_counter == (n))
+#define CTX_OVFL_NOBLOCK(c) ((c)->ctx_fl_noblock == 1)
+#define CTX_INHERIT_MODE(c) ((c)->ctx_fl_inherit)
+#define CTX_HAS_SMPL(c) ((c)->ctx_smpl_buf != NULL)
+
static pmu_config_t pmu_conf;
/* for debug only */
-static unsigned long pfm_debug=1; /* 0= nodebug, >0= debug output on */
-#define DBprintk(a) {\
- if (pfm_debug >0) { printk a; } \
+static unsigned long pfm_debug=0; /* 0= nodebug, >0= debug output on */
+#define DBprintk(a) \
+ do { \
+ if (pfm_debug >0) { printk(__FUNCTION__" "); printk a; } \
+ } while (0)
+
+static void perfmon_softint(unsigned long ignored);
+static void ia64_reset_pmu(void);
+
+DECLARE_TASKLET(pfm_tasklet, perfmon_softint, 0);
+
+/*
+ * structure used to pass information between the interrupt handler
+ * and the tasklet.
+ */
+typedef struct {
+ pid_t to_pid; /* which process to notify */
+ pid_t from_pid; /* which process is source of overflow */
+ int sig; /* with which signal */
+ unsigned long bitvect; /* which counters have overflowed */
+} notification_info_t;
+
+#define notification_is_invalid(i) ((i)->to_pid < 2)
+
+/* will need to be cache line padded */
+static notification_info_t notify_info[NR_CPUS];
+
+/*
+ * We force cache line alignment to avoid false sharing
+ * given that we have one entry per CPU.
+ */
+static struct {
+ struct task_struct *owner;
+} ____cacheline_aligned pmu_owners[NR_CPUS];
+/* helper macros */
+#define SET_PMU_OWNER(t) do { pmu_owners[smp_processor_id()].owner = (t); } while(0)
+#define PMU_OWNER() pmu_owners[smp_processor_id()].owner
+
+/* for debug only */
+static struct proc_dir_entry *perfmon_dir;
+
+/*
+ * finds the number of PM(C|D) registers given
+ * the bitvector returned by PAL
+ */
+static unsigned long __init
+find_num_pm_regs(long *buffer)
+{
+ int i=3; /* 4 words per bitvector */
+
+ /* start from the most significant word */
+ while (i >= 0 && buffer[i] == 0) i--;
+ if (i< 0) {
+ printk(KERN_ERR "perfmon: No bit set in pm_buffer\n");
+ return 0;
+ }
+ return 1+ ia64_fls(buffer[i]) + 64 * i;
+}
+
+
+/*
+ * Generates a unique (per CPU) timestamp
+ */
+static inline unsigned long
+perfmon_get_stamp(void)
+{
+ unsigned long tmp;
+
+ /* XXX: need more to adjust for Itanium itc bug */
+ __asm__ __volatile__("mov %0=ar.itc" : "=r"(tmp) :: "memory");
+
+ return tmp;
+}
+
+/* Given PGD from the address space's page table, return the kernel
+ * virtual mapping of the physical memory mapped at ADR.
+ */
+static inline unsigned long
+uvirt_to_kva(pgd_t *pgd, unsigned long adr)
+{
+ unsigned long ret = 0UL;
+ pmd_t *pmd;
+ pte_t *ptep, pte;
+
+ if (!pgd_none(*pgd)) {
+ pmd = pmd_offset(pgd, adr);
+ if (!pmd_none(*pmd)) {
+ ptep = pte_offset(pmd, adr);
+ pte = *ptep;
+ if (pte_present(pte)) {
+ ret = (unsigned long) page_address(pte_page(pte));
+ ret |= (adr & (PAGE_SIZE - 1));
+ }
+ }
+ }
+ DBprintk(("uv2kva(%lx-->%lx)\n", adr, ret));
+ return ret;
+}
+
+
+/* Here we want the physical address of the memory.
+ * This is used when initializing the contents of the
+ * area and marking the pages as reserved.
+ */
+static inline unsigned long
+kvirt_to_pa(unsigned long adr)
+{
+ unsigned long va, kva, ret;
+
+ va = VMALLOC_VMADDR(adr);
+ kva = uvirt_to_kva(pgd_offset_k(va), va);
+ ret = __pa(kva);
+ DBprintk(("kv2pa(%lx-->%lx)\n", adr, ret));
+ return ret;
+}
+
+
+static void *
+rvmalloc(unsigned long size)
+{
+ void *mem;
+ unsigned long adr, page;
+
+ /* XXX: may have to revisit this part because
+ * vmalloc() does not necessarily return a page-aligned buffer.
+ * This may be a security problem when mapped at user level
+ */
+ mem=vmalloc(size);
+ if (mem) {
+ memset(mem, 0, size); /* Clear the ram out, no junk to the user */
+ adr=(unsigned long) mem;
+ while (size > 0) {
+ page = kvirt_to_pa(adr);
+ mem_map_reserve(virt_to_page(__va(page)));
+ adr+=PAGE_SIZE;
+ size-=PAGE_SIZE;
+ }
+ }
+ return mem;
+}
+
+static void
+rvfree(void *mem, unsigned long size)
+{
+ unsigned long adr, page;
+
+ if (mem) {
+ adr=(unsigned long) mem;
+ while (size > 0) {
+ page = kvirt_to_pa(adr);
+ mem_map_unreserve(virt_to_page(__va(page)));
+ adr+=PAGE_SIZE;
+ size-=PAGE_SIZE;
+ }
+ vfree(mem);
+ }
+}
+
+static pfm_context_t *
+pfm_context_alloc(void)
+{
+ pfm_context_t *pfc;
+
+ /* allocate context descriptor */
+ pfc = vmalloc(sizeof(*pfc));
+ if (pfc) memset(pfc, 0, sizeof(*pfc));
+
+ return pfc;
+}
+
+static void
+pfm_context_free(pfm_context_t *pfc)
+{
+ if (pfc) vfree(pfc);
+}
+
+static int
+pfm_remap_buffer(unsigned long buf, unsigned long addr, unsigned long size)
+{
+ unsigned long page;
+
+ while (size > 0) {
+ page = kvirt_to_pa(buf);
+
+ if (remap_page_range(addr, page, PAGE_SIZE, PAGE_SHARED)) return -ENOMEM;
+
+ addr += PAGE_SIZE;
+ buf += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+ return 0;
+}
+
+/*
+ * counts the number of PMDs to save per entry.
+ * This code is generic enough to accommodate more than 64 PMDs when they become available
+ */
+static unsigned long
+pfm_smpl_entry_size(unsigned long *which, unsigned long size)
+{
+ unsigned long res = 0;
+ int i;
+
+ for (i=0; i < size; i++, which++) res += hweight64(*which);
+
+ DBprintk((" res=%ld\n", res));
+
+ return res;
}
/*
- * could optimize to avoid cache line conflicts in SMP
+ * Allocates the sampling buffer and remaps it into caller's address space
*/
-static struct task_struct *pmu_owners[NR_CPUS];
+static int
+pfm_smpl_buffer_alloc(pfm_context_t *ctx, unsigned long which_pmds, unsigned long entries, void **user_addr)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ unsigned long addr, size, regcount;
+ void *smpl_buf;
+ pfm_smpl_buffer_desc_t *psb;
+
+ regcount = pfm_smpl_entry_size(&which_pmds, 1);
+ /*
+ * ask for a sampling buffer but nothing to record !
+ */
+ if (regcount == 0) {
+ DBprintk((" no pmds to record\n"));
+ return -EINVAL;
+ }
+ /*
+ * 1 buffer hdr and for each entry a header + regcount PMDs to save
+ */
+ size = PAGE_ALIGN( sizeof(perfmon_smpl_hdr_t)
+ + entries * (sizeof(perfmon_smpl_entry_t) + regcount*sizeof(u64)));
+ /*
+ * check requested size to avoid Denial-of-service attacks
+ * XXX: may have to refine this test
+ */
+ if (size > current->rlim[RLIMIT_MEMLOCK].rlim_cur) return -EAGAIN;
+
+ /* find some free area in address space */
+ addr = get_unmapped_area(0, size);
+ if (!addr) goto no_addr;
+
+ DBprintk((" entries=%ld aligned size=%ld, unmapped @0x%lx\n", entries, size, addr));
+
+ /* allocate vma */
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (!vma) goto no_vma;
+
+ /* XXX: see rvmalloc() for page alignment problem */
+ smpl_buf = rvmalloc(size);
+ if (smpl_buf == NULL) goto no_buffer;
+
+ DBprintk((" smpl_buf @%p\n", smpl_buf));
+
+ if (pfm_remap_buffer((unsigned long)smpl_buf, addr, size)) goto cant_remap;
+
+ /* allocate sampling buffer descriptor now */
+ psb = vmalloc(sizeof(*psb));
+ if (psb == NULL) goto no_buffer_desc;
+
+ /* start with something clean */
+ memset(smpl_buf, 0x0, size);
+
+ psb->psb_hdr = smpl_buf;
+ psb->psb_addr = (char *)smpl_buf+sizeof(perfmon_smpl_hdr_t); /* first entry */
+ psb->psb_size = size; /* aligned size */
+ psb->psb_index = 0;
+ psb->psb_entries = entries;
+
+ atomic_set(&psb->psb_refcnt, 1);
+
+ psb->psb_entry_size = sizeof(perfmon_smpl_entry_t) + regcount*sizeof(u64);
+
+ DBprintk((" psb @%p entry_size=%ld hdr=%p addr=%p\n", psb,psb->psb_entry_size, psb->psb_hdr, psb->psb_addr));
+
+ /* initialize some of the fields of header */
+ psb->psb_hdr->hdr_version = PFM_SMPL_HDR_VERSION;
+ psb->psb_hdr->hdr_entry_size = sizeof(perfmon_smpl_entry_t)+regcount*sizeof(u64);
+ psb->psb_hdr->hdr_pmds = which_pmds;
+
+ /* store which PMDS to record */
+ ctx->ctx_smpl_regs = which_pmds;
+
+ /* link to perfmon context */
+ ctx->ctx_smpl_buf = psb;
+
+ /*
+ * initialize the vma for the sampling buffer
+ */
+ vma->vm_mm = mm;
+ vma->vm_start = addr;
+ vma->vm_end = addr + size;
+ vma->vm_flags = VM_READ|VM_MAYREAD;
+ vma->vm_page_prot = PAGE_READONLY; /* XXX may need to change */
+ vma->vm_ops = NULL;
+ vma->vm_pgoff = 0;
+ vma->vm_file = NULL;
+ vma->vm_raend = 0;
+
+ vma->vm_private_data = ctx; /* link to pfm_context(not yet used) */
+
+ /*
+ * now insert the vma in the vm list for the process
+ */
+ insert_vm_struct(mm, vma);
+
+ mm->total_vm += size >> PAGE_SHIFT;
+
+ /*
+ * that's the address returned to the user
+ */
+ *user_addr = (void *)addr;
+
+ return 0;
+
+ /* outlined error handling */
+no_addr:
+ DBprintk(("Cannot find unmapped area for size %ld\n", size));
+ return -ENOMEM;
+no_vma:
+ DBprintk(("Cannot allocate vma\n"));
+ return -ENOMEM;
+cant_remap:
+ DBprintk(("Can't remap buffer\n"));
+ rvfree(smpl_buf, size);
+no_buffer:
+ DBprintk(("Can't allocate sampling buffer\n"));
+ kmem_cache_free(vm_area_cachep, vma);
+ return -ENOMEM;
+no_buffer_desc:
+ DBprintk(("Can't allocate sampling buffer descriptor\n"));
+ kmem_cache_free(vm_area_cachep, vma);
+ rvfree(smpl_buf, size);
+ return -ENOMEM;
+}
+
+static int
+pfx_is_sane(pfreq_context_t *pfx)
+{
+ /* valid signal */
+ if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return 0;
+
+ /* cannot send to process 1, 0 means do not notify */
+ if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return 0;
+
+ /* asked for sampling, but nothing to record ! */
+ if (pfx->smpl_entries > 0 && pfm_smpl_entry_size(&pfx->smpl_regs, 1) == 0) return 0;
+
+ /* probably more to add here */
+
+
+ return 1;
+}
+
+static int
+pfm_context_create(struct task_struct *task, int flags, perfmon_req_t *req)
+{
+ pfm_context_t *ctx;
+ perfmon_req_t tmp;
+ void *uaddr = NULL;
+ int ret = -EINVAL;
+ int ctx_flags;
+
+ /* to go away */
+ if (flags) {
+ printk("perfmon: use context flags instead of perfmon() flags. Obsolete API\n");
+ }
+
+ copy_from_user(&tmp, req, sizeof(tmp));
+
+ ctx_flags = tmp.pfr_ctx.flags;
+
+ /* not yet supported */
+ if (ctx_flags & PFM_FL_SYSTEMWIDE) return -EINVAL;
+
+ if (!pfx_is_sane(&tmp.pfr_ctx)) return -EINVAL;
+
+ ctx = pfm_context_alloc();
+ if (!ctx) return -ENOMEM;
+
+ /* record who the creator is (for debug) */
+ ctx->ctx_creator = task->pid;
+
+ ctx->ctx_notify_pid = tmp.pfr_ctx.notify_pid;
+ ctx->ctx_notify_sig = SIGPROF; /* siginfo imposes a fixed signal */
+
+ if (tmp.pfr_ctx.smpl_entries) {
+ DBprintk((" sampling entries=%ld\n",tmp.pfr_ctx.smpl_entries));
+ if ((ret=pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs, tmp.pfr_ctx.smpl_entries, &uaddr)) ) goto buffer_error;
+ tmp.pfr_ctx.smpl_vaddr = uaddr;
+ }
+ /* initialization of context's flags */
+ ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
+ ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
+ ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEMWIDE) ? 1: 0;
+ ctx->ctx_fl_frozen = 0;
+
+ sema_init(&ctx->ctx_restart_sem, 0); /* init this semaphore to locked */
+
+ /* XXX fixme take care of errors here */
+ copy_to_user(req, &tmp, sizeof(tmp));
+
+ DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
+ DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+
+ /* link with task */
+ task->thread.pfm_context = ctx;
+
+ return 0;
+
+buffer_error:
+ vfree(ctx);
+
+ return ret;
+}
+
+static void
+pfm_reset_regs(pfm_context_t *ctx)
+{
+ unsigned long mask = ctx->ctx_ovfl_regs;
+ int i, cnum;
+
+ DBprintk((" ovfl_regs=0x%lx\n", mask));
+ /*
+ * now restore reset value on sampling overflowed counters
+ */
+ for(i=0, cnum=PMU_FIRST_COUNTER; i < pmu_conf.max_counters; i++, cnum++, mask >>= 1) {
+ if (mask & 0x1) {
+ DBprintk((" resetting PMD[%d]=%lx\n", cnum, ctx->ctx_pmds[i].smpl_rval & pmu_conf.perf_ovfl_val));
+
+ /* upper part is ignored on rval */
+ ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
+ }
+ }
+}
+
+static int
+pfm_write_pmcs(struct task_struct *ta, perfmon_req_t *req, int count)
+{
+ struct thread_struct *th = &ta->thread;
+ pfm_context_t *ctx = th->pfm_context;
+ perfmon_req_t tmp;
+ unsigned long cnum;
+ int i;
+
+ /* XXX: ctx locking may be required here */
+
+ for (i = 0; i < count; i++, req++) {
+
+ copy_from_user(&tmp, req, sizeof(tmp));
+
+ cnum = tmp.pfr_reg.reg_num;
+
+ /* XXX needs to check validity of the data maybe */
+ if (!PMC_IS_IMPL(cnum)) {
+ DBprintk((" invalid pmc[%ld]\n", cnum));
+ return -EINVAL;
+ }
+
+ if (PMC_IS_COUNTER(cnum)) {
+
+ /*
+ * we keep track of EARS/BTB to speed up sampling later
+ */
+ if (PMC_IS_DEAR(&tmp.pfr_reg.reg_value)) {
+ ctx->ctx_dear_counter = cnum;
+ } else if (PMC_IS_IEAR(&tmp.pfr_reg.reg_value)) {
+ ctx->ctx_iear_counter = cnum;
+ } else if (PMC_IS_BTB(&tmp.pfr_reg.reg_value)) {
+ ctx->ctx_btb_counter = cnum;
+ }
+
+ if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
+ ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
+ }
+
+ ia64_set_pmc(cnum, tmp.pfr_reg.reg_value);
+ DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags));
+
+ }
+ /*
+ * we have to set this here even though we haven't necessarily started monitoring
+ * because we may be context switched out
+ */
+ th->flags |= IA64_THREAD_PM_VALID;
+
+ return 0;
+}
+
+static int
+pfm_write_pmds(struct task_struct *ta, perfmon_req_t *req, int count)
+{
+ struct thread_struct *th = &ta->thread;
+ pfm_context_t *ctx = th->pfm_context;
+ perfmon_req_t tmp;
+ unsigned long cnum;
+ int i;
+
+ /* XXX: ctx locking may be required here */
+
+ for (i = 0; i < count; i++, req++) {
+ int k;
+
+ copy_from_user(&tmp, req, sizeof(tmp));
+
+ cnum = tmp.pfr_reg.reg_num;
+
+ k = cnum - PMU_FIRST_COUNTER;
+
+ if (!PMD_IS_IMPL(cnum)) return -EINVAL;
+
+ /* update virtualized (64bits) counter */
+ if (PMD_IS_COUNTER(cnum)) {
+ ctx->ctx_pmds[k].ival = tmp.pfr_reg.reg_value;
+ ctx->ctx_pmds[k].val = tmp.pfr_reg.reg_value & ~pmu_conf.perf_ovfl_val;
+ ctx->ctx_pmds[k].smpl_rval = tmp.pfr_reg.reg_smpl_reset;
+ ctx->ctx_pmds[k].ovfl_rval = tmp.pfr_reg.reg_ovfl_reset;
+ }
+
+ /* writes to unimplemented part is ignored, so this is safe */
+ ia64_set_pmd(cnum, tmp.pfr_reg.reg_value);
+
+ /* to go away */
+ ia64_srlz_d();
+ DBprintk((" setting PMD[%ld]: pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx\n",
+ cnum,
+ ctx->ctx_pmds[k].val,
+ ctx->ctx_pmds[k].ovfl_rval,
+ ctx->ctx_pmds[k].smpl_rval,
+ ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+ }
+ /*
+ * we have to set this here even though we haven't necessarily started monitoring
+ * because we may be context switched out
+ */
+ th->flags |= IA64_THREAD_PM_VALID;
+
+ return 0;
+}
+
+static int
+pfm_read_pmds(struct task_struct *ta, perfmon_req_t *req, int count)
+{
+ struct thread_struct *th = &ta->thread;
+ pfm_context_t *ctx = th->pfm_context;
+ unsigned long val=0;
+ perfmon_req_t tmp;
+ int i;
+
+ /*
+ * XXX: MUST MAKE SURE WE DON'T HAVE ANY PENDING OVERFLOW BEFORE READING
+ * This is required when the monitoring has been stopped by user or kernel.
+ * If it is still going on, then that's fine because we are not guaranteed
+ * to return an accurate value in this case
+ */
+
+ /* XXX: ctx locking may be required here */
+
+ for (i = 0; i < count; i++, req++) {
+ int k;
+
+ copy_from_user(&tmp, req, sizeof(tmp));
+
+ if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL;
+
+ k = tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER;
+
+ if (PMD_IS_COUNTER(tmp.pfr_reg.reg_num)) {
+ if (ta == current) {
+ val = ia64_get_pmd(tmp.pfr_reg.reg_num);
+ } else {
+ val = th->pmd[k];
+ }
+ val &= pmu_conf.perf_ovfl_val;
+ /*
+ * lower part of .val may not be zero, so it must be an addition because of
+ * the residual count (see update_counters).
+ */
+ val += ctx->ctx_pmds[k].val;
+ } else {
+ /* for now */
+ if (ta != current) return -EINVAL;
+
+ val = ia64_get_pmd(tmp.pfr_reg.reg_num);
+ }
+ tmp.pfr_reg.reg_value = val;
+
+ DBprintk((" reading PMD[%ld]=0x%lx\n", tmp.pfr_reg.reg_num, val));
+
+ if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
+ }
+ return 0;
+}
+
+static int
+pfm_do_restart(struct task_struct *task)
+{
+ struct thread_struct *th = &task->thread;
+ pfm_context_t *ctx = th->pfm_context;
+ void *sem = &ctx->ctx_restart_sem;
+
+ if (task == current) {
+ DBprintk((" restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
+
+ pfm_reset_regs(ctx);
+
+ /*
+ * We ignore block/don't block because we never block
+ * for a self-monitoring process.
+ */
+ ctx->ctx_fl_frozen = 0;
+
+ if (CTX_HAS_SMPL(ctx)) {
+ ctx->ctx_smpl_buf->psb_hdr->hdr_count = 0;
+ ctx->ctx_smpl_buf->psb_index = 0;
+ }
+
+ /* pfm_reset_smpl_buffers(ctx,th->pfm_ovfl_regs);*/
+
+ /* simply unfreeze */
+ ia64_set_pmc(0, 0);
+ ia64_srlz_d();
+
+ return 0;
+ }
+
+ /* check if blocking */
+ if (CTX_OVFL_NOBLOCK(ctx) == 0) {
+ DBprintk((" unblocking %d \n", task->pid));
+ up(sem);
+ return 0;
+ }
+
+ /*
+ * in case of non-blocking mode, it's just a matter of
+ * resetting the sampling buffer (if any) index. The PMU
+ * is already active.
+ */
+
+ /*
+ * must reset the header count first
+ */
+ if (CTX_HAS_SMPL(ctx)) {
+ DBprintk((" resetting sampling indexes for %d \n", task->pid));
+ ctx->ctx_smpl_buf->psb_hdr->hdr_count = 0;
+ ctx->ctx_smpl_buf->psb_index = 0;
+ }
+
+ return 0;
+}
+
static int
do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs)
{
perfmon_req_t tmp;
- int i;
+ struct thread_struct *th = &task->thread;
+ pfm_context_t *ctx = th->pfm_context;
+
+ memset(&tmp, 0, sizeof(tmp));
switch (cmd) {
- case PFM_WRITE_PMCS:
- /* we don't quite support this right now */
+ case PFM_CREATE_CONTEXT:
+ /* a context has already been defined */
+ if (ctx) return -EBUSY;
+
+ /* may be a temporary limitation */
if (task != current) return -EINVAL;
+ if (req == NULL || count != 1) return -EINVAL;
+
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- for (i = 0; i < count; i++, req++) {
- copy_from_user(&tmp, req, sizeof(tmp));
+ return pfm_context_create(task, flags, req);
- /* XXX needs to check validity of the data maybe */
+ case PFM_WRITE_PMCS:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
- if (!PMC_IS_IMPL(tmp.pfr_reg_num)) {
- DBprintk((__FUNCTION__ " invalid pmc[%ld]\n", tmp.pfr_reg_num));
- return -EINVAL;
- }
-
- /* XXX: for counters, need to some checks */
- if (PMC_IS_COUNTER(tmp.pfr_reg_num)) {
- current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].sig = tmp.pfr_notify_sig;
- current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].pid = tmp.pfr_notify_pid;
-
- DBprintk((__FUNCTION__" setting PMC[%ld] send sig %d to %d\n",tmp.pfr_reg_num, tmp.pfr_notify_sig, tmp.pfr_notify_pid));
- }
- ia64_set_pmc(tmp.pfr_reg_num, tmp.pfr_reg_value);
+ if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- DBprintk((__FUNCTION__" setting PMC[%ld]=0x%lx\n", tmp.pfr_reg_num, tmp.pfr_reg_value));
+ if (!ctx) {
+ DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
+ return -EINVAL;
}
- /*
- * we have to set this here event hough we haven't necessarily started monitoring
- * because we may be context switched out
- */
- current->thread.flags |= IA64_THREAD_PM_VALID;
- break;
+ return pfm_write_pmcs(task, req, count);
case PFM_WRITE_PMDS:
/* we don't quite support this right now */
@@ -177,34 +929,22 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- for (i = 0; i < count; i++, req++) {
- copy_from_user(&tmp, req, sizeof(tmp));
-
- if (!PMD_IS_IMPL(tmp.pfr_reg_num)) return -EINVAL;
-
- /* update virtualized (64bits) counter */
- if (PMD_IS_COUNTER(tmp.pfr_reg_num)) {
- current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].val = tmp.pfr_reg_value & ~pmu_conf.perf_ovfl_val;
- current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].rval = tmp.pfr_reg_reset;
- }
- /* writes to unimplemented part is ignored, so this is safe */
- ia64_set_pmd(tmp.pfr_reg_num, tmp.pfr_reg_value);
- /* to go away */
- ia64_srlz_d();
- DBprintk((__FUNCTION__" setting PMD[%ld]: pmod.val=0x%lx pmd=0x%lx rval=0x%lx\n", tmp.pfr_reg_num, current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].val, ia64_get_pmd(tmp.pfr_reg_num),current->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].rval));
+ if (!ctx) {
+ DBprintk((" PFM_WRITE_PMDS: no context for task %d\n", task->pid));
+ return -EINVAL;
}
- /*
- * we have to set this here event hough we haven't necessarily started monitoring
- * because we may be context switched out
- */
- current->thread.flags |= IA64_THREAD_PM_VALID;
- break;
+ return pfm_write_pmds(task, req, count);
case PFM_START:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- pmu_owners[smp_processor_id()] = current;
+ if (!ctx) {
+ DBprintk((" PFM_START: no context for task %d\n", task->pid));
+ return -EINVAL;
+ }
+
+ SET_PMU_OWNER(current);
/* will start monitoring right after rfi */
ia64_psr(regs)->up = 1;
@@ -213,9 +953,10 @@
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- current->thread.flags |= IA64_THREAD_PM_VALID;
+ th->flags |= IA64_THREAD_PM_VALID;
ia64_set_pmc(0, 0);
+ ia64_srlz_d();
break;
@@ -223,23 +964,39 @@
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- pmu_owners[smp_processor_id()] = current;
+ if (!ctx) {
+ DBprintk((" PFM_ENABLE: no context for task %d\n", task->pid));
+ return -EINVAL;
+ }
+
+ /* reset all registers to stable quiet state */
+ ia64_reset_pmu();
+
+ /* make sure nothing starts */
+ ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
+
+ /* do it on the live register as well */
+ __asm__ __volatile__ ("rsm psr.up|psr.pp;;"::: "memory");
+
+ SET_PMU_OWNER(current);
/*
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- current->thread.flags |= IA64_THREAD_PM_VALID;
+ th->flags |= IA64_THREAD_PM_VALID;
/* simply unfreeze */
ia64_set_pmc(0, 0);
+ ia64_srlz_d();
break;
case PFM_DISABLE:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- /* simply unfreeze */
+ /* simply freeze */
ia64_set_pmc(0, 1);
ia64_srlz_d();
break;
@@ -248,121 +1005,89 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- /* This looks shady, but IMHO this will work fine. This is
- * the sequence that I could come up with to avoid races
- * with the interrupt handler. See explanation in the
- * following comment.
- */
-#if 0
-/* irrelevant with user monitors */
- local_irq_save(flags);
- __asm__ __volatile__("rsm psr.pp\n");
- dcr = ia64_get_dcr();
- dcr &= ~IA64_DCR_PP;
- ia64_set_dcr(dcr);
- local_irq_restore(flags);
-#endif
- /*
- * We cannot write to pmc[0] to stop counting here, as
- * that particular instruction might cause an overflow
- * and the mask in pmc[0] might get lost. I'm _not_
- * sure of the hardware behavior here. So we stop
- * counting by psr.pp = 0. And we reset dcr.pp to
- * prevent an interrupt from mucking up psr.pp in the
- * meanwhile. Perfmon interrupts are pended, hence the
- * above code should be ok if one of the above instructions
- * caused overflows, i.e the interrupt should get serviced
- * when we re-enabled interrupts. When I muck with dcr,
- * is the irq_save/restore needed?
- */
-
- for (i = 0; i < count; i++, req++) {
- unsigned long val=0;
-
- copy_from_user(&tmp, req, sizeof(tmp));
-
- if (!PMD_IS_IMPL(tmp.pfr_reg_num)) return -EINVAL;
-
- if (PMD_IS_COUNTER(tmp.pfr_reg_num)) {
- if (task == current){
- val = ia64_get_pmd(tmp.pfr_reg_num) & pmu_conf.perf_ovfl_val;
- } else {
- val = task->thread.pmd[tmp.pfr_reg_num - PMU_FIRST_COUNTER] & pmu_conf.perf_ovfl_val;
- }
- val += task->thread.pmu_counters[tmp.pfr_reg_num - PMU_FIRST_COUNTER].val;
- } else {
- /* for now */
- if (task != current) return -EINVAL;
-
- val = ia64_get_pmd(tmp.pfr_reg_num);
+ if (!ctx) {
+ DBprintk((" PFM_READ_PMDS: no context for task %d\n", task->pid));
+ return -EINVAL;
}
- tmp.pfr_reg_value = val;
-
-DBprintk((__FUNCTION__" reading PMD[%ld]=0x%lx\n", tmp.pfr_reg_num, val));
-
- if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
- }
-#if 0
-/* irrelevant with user monitors */
- local_irq_save(flags);
- __asm__ __volatile__("ssm psr.pp");
- dcr = ia64_get_dcr();
- dcr |= IA64_DCR_PP;
- ia64_set_dcr(dcr);
- local_irq_restore(flags);
-#endif
- break;
+ return pfm_read_pmds(task, req, count);
case PFM_STOP:
- /* we don't quite support this right now */
- if (task != current) return -EINVAL;
-
- ia64_set_pmc(0, 1);
- ia64_srlz_d();
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
- ia64_psr(regs)->up = 0;
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
- current->thread.flags &= ~IA64_THREAD_PM_VALID;
+ ia64_psr(regs)->up = 0;
- pmu_owners[smp_processor_id()] = NULL;
+ th->flags &= ~IA64_THREAD_PM_VALID;
-#if 0
-/* irrelevant with user monitors */
- local_irq_save(flags);
- dcr = ia64_get_dcr();
- dcr &= ~IA64_DCR_PP;
- ia64_set_dcr(dcr);
- local_irq_restore(flags);
- ia64_psr(regs)->up = 0;
-#endif
+ SET_PMU_OWNER(NULL);
- break;
+ /* we probably will need some more cleanup here */
+ break;
case PFM_DEBUG_ON:
- printk(__FUNCTION__" debuggin on\n");
+ printk(" debugging on\n");
pfm_debug = 1;
break;
case PFM_DEBUG_OFF:
- printk(__FUNCTION__" debuggin off\n");
+ printk(" debugging off\n");
pfm_debug = 0;
break;
+ case PFM_RESTART: /* temporary, will most likely end up as a PFM_ENABLE */
+
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
+ printk(" PFM_RESTART not monitoring\n");
+ return -EINVAL;
+ }
+ if (!ctx) {
+ printk(" PFM_RESTART no ctx for %d\n", task->pid);
+ return -EINVAL;
+ }
+ if (CTX_OVFL_NOBLOCK(ctx) == 0 && ctx->ctx_fl_frozen == 0) {
+ printk("task %d without pmu_frozen set\n", task->pid);
+ return -EINVAL;
+ }
+
+ return pfm_do_restart(task); /* we only look at first entry */
+
default:
- DBprintk((__FUNCTION__" UNknown command 0x%x\n", cmd));
- return -EINVAL;
- break;
+ DBprintk((" Unknown command 0x%x\n", cmd));
+ return -EINVAL;
}
return 0;
}
+/*
+ * XXX: do something better here
+ */
+static int
+perfmon_bad_permissions(struct task_struct *task)
+{
+ /* stolen from bad_signal() */
+ return (current->session != task->session)
+ && (current->euid ^ task->suid) && (current->euid ^ task->uid)
+ && (current->uid ^ task->suid) && (current->uid ^ task->uid);
+}
+
asmlinkage int
sys_perfmonctl (int pid, int cmd, int flags, perfmon_req_t *req, int count, long arg6, long arg7, long arg8, long stack)
{
struct pt_regs *regs = (struct pt_regs *) &stack;
struct task_struct *child = current;
- int ret;
+ int ret = -ESRCH;
+ /* sanity check:
+ *
+ * ensures that we don't do bad things in case the OS
+ * does not have enough storage to save/restore PMC/PMD
+ */
+ if (PERFMON_IS_DISABLED()) return -ENOSYS;
+
+ /* XXX: pid interface is going away in favor of pfm context */
if (pid != current->pid) {
read_lock(&tasklist_lock);
{
@@ -370,37 +1095,240 @@
if (child)
get_task_struct(child);
}
- if (!child) {
- read_unlock(&tasklist_lock);
- return -ESRCH;
- }
+
+ if (!child) goto abort_call;
+
+ ret = -EPERM;
+
+ if (perfmon_bad_permissions(child)) goto abort_call;
+
/*
* XXX: need to do more checking here
*/
- if (child->state != TASK_ZOMBIE) {
- DBprintk((__FUNCTION__" warning process %d not in stable state %ld\n", pid, child->state));
+ if (child->state != TASK_ZOMBIE && child->state != TASK_STOPPED) {
+ DBprintk((" warning process %d not in stable state %ld\n", pid, child->state));
}
}
ret = do_perfmonctl(child, cmd, flags, req, count, regs);
+abort_call:
if (child != current) read_unlock(&tasklist_lock);
return ret;
}
-static inline int
-update_counters (u64 pmc0)
+/*
+ * This function is invoked on the exit path of the kernel. Therefore it must make sure
+ * it does not modify the caller's input registers (in0-in7) in case of entry by system call
+ * which can be restarted. That's why it's declared as a system call and all 8 possible args
+ * are declared even though not used.
+ */
+void asmlinkage
+pfm_overflow_notify(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
{
- unsigned long mask, i, cnum;
- struct thread_struct *th;
- struct task_struct *ta;
+ struct task_struct *task;
+ struct thread_struct *th = &current->thread;
+ pfm_context_t *ctx = current->thread.pfm_context;
+ struct siginfo si;
+ int ret;
- if (pmu_owners[smp_processor_id()] == NULL) {
- DBprintk((__FUNCTION__" Spurious overflow interrupt: PMU not owned\n"));
- return 0;
+ /*
+ * do some sanity checks first
+ */
+ if (!ctx) {
+ printk("perfmon: process %d has no PFM context\n", current->pid);
+ return;
}
-
+ if (ctx->ctx_notify_pid < 2) {
+ printk("perfmon: process %d invalid notify_pid=%d\n", current->pid, ctx->ctx_notify_pid);
+ return;
+ }
+
+ DBprintk((" current=%d ctx=%p bv=0%lx\n", current->pid, ctx, ctx->ctx_ovfl_regs));
+ /*
+ * NO matter what notify_pid is,
+ * we clear overflow, won't notify again
+ */
+ th->pfm_pend_notify = 0;
+
+ /*
+ * When measuring in kernel mode and non-blocking fashion, it is possible to
+ * get an overflow while executing this code. Therefore the state of pend_notify
+ * and ovfl_regs can be altered. The important point is not to lose any notification.
+ * It is fine to get called for nothing. To make sure we do collect as much state as
+ * possible, update_counters() always uses |= to add bit to the ovfl_regs field.
+ *
+ * In certain cases, it is possible to come here, with ovfl_regs = 0;
+ *
+ * XXX: pend_notify and ovfl_regs could be merged maybe !
+ */
+ if (ctx->ctx_ovfl_regs == 0) {
+ printk("perfmon: spurious overflow notification from pid %d\n", current->pid);
+ return;
+ }
+ read_lock(&tasklist_lock);
+
+ task = find_task_by_pid(ctx->ctx_notify_pid);
+
+ if (task) {
+ si.si_signo = ctx->ctx_notify_sig;
+ si.si_errno = 0;
+ si.si_code = PROF_OVFL; /* goes to user */
+ si.si_addr = NULL;
+ si.si_pid = current->pid; /* who is sending */
+ si.si_pfm_ovfl = ctx->ctx_ovfl_regs;
+
+ DBprintk((" SIGPROF to %d @ %p\n", task->pid, task));
+
+ /* must be done with tasklist_lock locked */
+ ret = send_sig_info(ctx->ctx_notify_sig, &si, task);
+ if (ret != 0) {
+ DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_pid, ret));
+ task = NULL; /* will cause return */
+ }
+ } else {
+ printk("perfmon: notify_pid %d not found\n", ctx->ctx_notify_pid);
+ }
+
+ read_unlock(&tasklist_lock);
+
+ /* now that we have released the lock handle error condition */
+ if (!task || CTX_OVFL_NOBLOCK(ctx)) {
+ /* we clear all pending overflow bits in noblock mode */
+ ctx->ctx_ovfl_regs = 0;
+ return;
+ }
+ DBprintk((" CPU%d %d before sleep\n", smp_processor_id(), current->pid));
+
+ /*
+ * may go through without blocking on SMP systems
+ * if restart has been received already by the time we call down()
+ */
+ ret = down_interruptible(&ctx->ctx_restart_sem);
+
+ DBprintk((" CPU%d %d after sleep ret=%d\n", smp_processor_id(), current->pid, ret));
+
+ /*
+ * in case of interruption of down() we don't restart anything
+ */
+ if (ret >= 0) {
+ /* we reactivate on context switch */
+ ctx->ctx_fl_frozen = 0;
+ /*
+ * the ovfl_sem is cleared by the restart task and this is safe because we always
+ * use the local reference
+ */
+
+ pfm_reset_regs(ctx);
+
+ /* now we can clear this mask */
+ ctx->ctx_ovfl_regs = 0;
+
+ /*
+ * Unlock sampling buffer and reset index atomically
+ * XXX: not really needed when blocking
+ */
+ if (CTX_HAS_SMPL(ctx)) {
+ ctx->ctx_smpl_buf->psb_hdr->hdr_count = 0;
+ ctx->ctx_smpl_buf->psb_index = 0;
+ }
+
+ DBprintk((" CPU%d %d unfreeze PMU\n", smp_processor_id(), current->pid));
+
+ ia64_set_pmc(0, 0);
+ ia64_srlz_d();
+
+ /* state restored, can go back to work (user mode) */
+ }
+}
+
+static void
+perfmon_softint(unsigned long ignored)
+{
+ notification_info_t *info;
+ int my_cpu = smp_processor_id();
+ struct task_struct *task;
+ struct siginfo si;
+
+ info = notify_info+my_cpu;
+
+ DBprintk((" CPU%d current=%d to_pid=%d from_pid=%d bv=0x%lx\n", \
+ smp_processor_id(), current->pid, info->to_pid, info->from_pid, info->bitvect));
+
+ /* assumption check */
+ if (info->from_pid == info->to_pid) {
+ DBprintk((" Tasklet assumption error: from=%d to=%d\n", info->from_pid, info->to_pid));
+ return;
+ }
+
+ if (notification_is_invalid(info)) {
+ DBprintk((" invalid notification information\n"));
+ return;
+ }
+
+ /* sanity check */
+ if (info->to_pid == 1) {
+ DBprintk((" cannot notify init\n"));
+ return;
+ }
+ /*
+ * XXX: needs way more checks here to make sure we send to a task we have control over
+ */
+ read_lock(&tasklist_lock);
+
+ task = find_task_by_pid(info->to_pid);
+
+ DBprintk((" after find %p\n", task));
+
+ if (task) {
+ int ret;
+
+ si.si_signo = SIGPROF;
+ si.si_errno = 0;
+ si.si_code = PROF_OVFL; /* goes to user */
+ si.si_addr = NULL;
+ si.si_pid = info->from_pid; /* who is sending */
+ si.si_pfm_ovfl = info->bitvect;
+
+ DBprintk((" SIGPROF to %d @ %p\n", task->pid, task));
+
+ /* must be done with tasklist_lock locked */
+ ret = send_sig_info(SIGPROF, &si, task);
+ if (ret != 0)
+ DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", info->to_pid, ret));
+
+ /* invalidate notification */
+ info->to_pid = info->from_pid = 0;
+ info->bitvect = 0;
+ }
+
+ read_unlock(&tasklist_lock);
+
+ DBprintk((" after unlock %p\n", task));
+
+ if (!task) {
+ printk("perfmon: CPU%d cannot find process %d\n", smp_processor_id(), info->to_pid);
+ }
+}
+
+/*
+ * main overflow processing routine.
+ * it can be called from the interrupt path or explicitly during the context switch code
+ * Return:
+ * 0 : do not unfreeze the PMU
+ * 1 : PMU can be unfrozen
+ */
+static unsigned long
+update_counters (struct task_struct *ta, u64 pmc0, struct pt_regs *regs)
+{
+ unsigned long mask, i, cnum;
+ struct thread_struct *th;
+ pfm_context_t *ctx;
+ unsigned long bv = 0;
+ int my_cpu = smp_processor_id();
+ int ret = 1, buffer_is_full = 0;
+ int ovfl_is_smpl, can_notify, need_reset_pmd16=0;
/*
* It is never safe to access the task for which the overflow interrupt is destinated
* using the current variable as the interrupt may occur in the middle of a context switch
@@ -408,76 +1336,269 @@
*
* For monitoring, however, we do need to get access to the task which caused the overflow
* to account for overflow on the counters.
+ *
* We accomplish this by maintaining a current owner of the PMU per CPU. During context
* switch the ownership is changed in a way such that the reflected owner is always the
* valid one, i.e. the one that caused the interrupt.
*/
- ta = pmu_owners[smp_processor_id()];
- th = &pmu_owners[smp_processor_id()]->thread;
+
+ if (ta == NULL) {
+ DBprintk((" owners[%d]=NULL\n", my_cpu));
+ return 0x1;
+ }
+ th = &ta->thread;
+ ctx = th->pfm_context;
/*
- * Don't think this could happen given first test. Keep as sanity check
+ * XXX: debug test
+ * Don't think this could happen given upfront tests
*/
if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
- DBprintk((__FUNCTION__" Spurious overflow interrupt: process %d not using perfmon\n", ta->pid));
+ printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", ta->pid);
+ return 0x1;
+ }
+ if (!ctx) {
+ printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", ta->pid);
return 0;
}
/*
- * if PMU not frozen: spurious from previous context
- * if PMC[0] = 0x1 : frozen but no overflow reported: leftover from previous context
- *
- * in either case we don't touch the state upon return from handler
+ * sanity test. Should never happen
*/
- if ((pmc0 & 0x1) == 0 || pmc0 == 0x1) {
- DBprintk((__FUNCTION__" Spurious overflow interrupt: process %d freeze=0\n",ta->pid));
- return 0;
+ if ((pmc0 & 0x1) == 0) {
+ printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", ta->pid, pmc0);
+ return 0x0;
}
- mask = pmc0 >> 4;
+ mask = pmc0 >> PMU_FIRST_COUNTER;
- for (i = 0, cnum = PMU_FIRST_COUNTER; i < pmu_conf.max_counters; cnum++, i++, mask >>= 1) {
+ DBprintk(("pmc0=0x%lx pid=%d\n", pmc0, ta->pid));
- if (mask & 0x1) {
- DBprintk((__FUNCTION__ " PMD[%ld] overflowed pmd=0x%lx pmod.val=0x%lx\n", cnum, ia64_get_pmd(cnum), th->pmu_counters[i].val));
-
+ DBprintk(("ctx is in %s mode\n", CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK"));
+
+ if (CTX_HAS_SMPL(ctx)) {
+ pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
+ unsigned long *e, m, idx=0;
+ perfmon_smpl_entry_t *h;
+ int j;
+
+ idx = ia64_fetch_and_add(1, &psb->psb_index);
+ DBprintk((" trying to record index=%ld entries=%ld\n", idx, psb->psb_entries));
+
+ /*
+ * XXX: there is a small chance that we could wrap the index before resetting,
+ * but index is an unsigned long, so it will take some time.....
+ */
+ if (idx > psb->psb_entries) {
+ buffer_is_full = 1;
+ goto reload_pmds;
+ }
+
+ /* fetch_and_add returned the new value, so the first entry is really slot 0, not 1 */
+ idx--;
+
+ h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size));
+
+ h->pid = ta->pid;
+ h->cpu = my_cpu;
+ h->rate = 0;
+ h->ip = regs ? regs->cr_iip : 0x0; /* where the fault happened */
+ h->regs = mask; /* which registers overflowed */
+
+ /* guaranteed to monotonically increase on each cpu */
+ h->stamp = perfmon_get_stamp();
+
+ e = (unsigned long *)(h+1);
+ /*
+ * selectively store PMDs in increasing index number
+ */
+ for (j=0, m = ctx->ctx_smpl_regs; m; m >>=1, j++) {
+ if (m & 0x1) {
+ if (PMD_IS_COUNTER(j))
+ *e = ctx->ctx_pmds[j-PMU_FIRST_COUNTER].val
+ + (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val);
+ else
+ *e = ia64_get_pmd(j); /* slow */
+ DBprintk((" e=%p pmd%d =0x%lx\n", e, j, *e));
+ e++;
+ }
+ }
+ /* make the new entry visible to user, needs to be atomic */
+ ia64_fetch_and_add(1, &psb->psb_hdr->hdr_count);
+
+ DBprintk((" index=%ld entries=%ld hdr_count=%ld\n", idx, psb->psb_entries, psb->psb_hdr->hdr_count));
+
+ /* sampling buffer full ? */
+ if (idx == (psb->psb_entries-1)) {
+ bv = mask;
+ buffer_is_full = 1;
+
+ DBprintk((" sampling buffer full must notify bv=0x%lx\n", bv));
+
+ if (!CTX_OVFL_NOBLOCK(ctx)) goto buffer_full;
/*
- * Because we somtimes (EARS/BTB) reset to a specific value, we cannot simply use
- * val to count the number of times we overflowed. Otherwise we would loose the value
- * current in the PMD (which can be >0). So to make sure we don't loose
- * the residual counts we set val to contain full 64bits value of the counter.
+ * here, we have a full buffer but we are in non-blocking mode
+ * so we need to reloads overflowed PMDs with sampling reset values
+ * and restart
*/
- th->pmu_counters[i].val += 1+pmu_conf.perf_ovfl_val+(ia64_get_pmd(cnum) &pmu_conf.perf_ovfl_val);
+ }
+ }
+reload_pmds:
+ ovfl_is_smpl = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
+ can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_pid;
- /* writes to upper part are ignored, so this is safe */
- ia64_set_pmd(cnum, th->pmu_counters[i].rval);
+ for (i = 0, cnum = PMU_FIRST_COUNTER; mask ; cnum++, i++, mask >>= 1) {
+
+ if ((mask & 0x1) == 0) continue;
+
+ DBprintk((" PMD[%ld] overflowed pmd=0x%lx pmod.val=0x%lx\n", cnum, ia64_get_pmd(cnum), ctx->ctx_pmds[i].val));
+
+ /*
+ * Because we sometimes (EARS/BTB) reset to a specific value, we cannot simply use
+ * val to count the number of times we overflowed. Otherwise we would lose the current value
+ * in the PMD (which can be >0). So to make sure we don't lose
+ * the residual counts we set val to contain full 64bits value of the counter.
+ *
+ * XXX: is this needed for EARS/BTB ?
+ */
+ ctx->ctx_pmds[i].val += 1 + pmu_conf.perf_ovfl_val
+ + (ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val); /* slow */
+
+ DBprintk((" pmod[%ld].val=0x%lx pmd=0x%lx\n", i, ctx->ctx_pmds[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));
- DBprintk((__FUNCTION__ " pmod[%ld].val=0x%lx pmd=0x%lx\n", i, th->pmu_counters[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));
+ if (can_notify && PMD_OVFL_NOTIFY(ctx, i)) {
+ DBprintk((" CPU%d should notify process %d with signal %d\n", my_cpu, ctx->ctx_notify_pid, ctx->ctx_notify_sig));
+ bv |= 1 << i;
+ } else {
+ DBprintk((" CPU%d PMD[%ld] overflow, no notification\n", my_cpu, cnum));
+ /*
+ * In case no notification is requested, we reload the reset value right away
+ * otherwise we wait until the notify_pid process has been notified and
+ * has finished processing the data. Check out pfm_overflow_notify()
+ */
- if (th->pmu_counters[i].pid != 0 && th->pmu_counters[i].sig>0) {
- DBprintk((__FUNCTION__ " shouild notify process %d with signal %d\n",th->pmu_counters[i].pid, th->pmu_counters[i].sig));
+ /* writes to upper part are ignored, so this is safe */
+ if (ovfl_is_smpl) {
+ DBprintk((" CPU%d PMD[%ld] reloaded with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
+ } else {
+ DBprintk((" CPU%d PMD[%ld] reloaded with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].ovfl_rval));
+ ia64_set_pmd(cnum, ctx->ctx_pmds[i].ovfl_rval);
}
}
+ if (cnum == ctx->ctx_btb_counter) need_reset_pmd16=1;
}
- return 1;
+ /*
+ * In case of a BTB overflow,
+ * we need to reset the BTB index.
+ */
+ if (need_reset_pmd16) {
+ DBprintk(("reset PMD16\n"));
+ ia64_set_pmd(16, 0);
+ }
+buffer_full:
+ /* see pfm_overflow_notify() on details for why we use |= here */
+ ctx->ctx_ovfl_regs |= bv;
+
+ /* nobody to notify, return and unfreeze */
+ if (!bv) return 0x0;
+
+
+ if (ctx->ctx_notify_pid == ta->pid) {
+ struct siginfo si;
+
+ si.si_errno = 0;
+ si.si_addr = NULL;
+ si.si_pid = ta->pid; /* who is sending */
+
+
+ si.si_signo = ctx->ctx_notify_sig; /* is SIGPROF */
+ si.si_code = PROF_OVFL; /* goes to user */
+ si.si_pfm_ovfl = bv;
+
+
+ /*
+ * in this case, we don't stop the task, we let it go on. It will
+ * necessarily go to the signal handler (if any) when it goes back to
+ * user mode.
+ */
+ DBprintk((" sending %d notification to self %d\n", si.si_signo, ta->pid));
+
+
+ /* this call is safe in an interrupt handler */
+ ret = send_sig_info(ctx->ctx_notify_sig, &si, ta);
+ if (ret != 0)
+ printk(" send_sig_info(process %d, SIGPROF)=%d\n", ta->pid, ret);
+ /*
+ * no matter if we block or not, we keep PMU frozen and do not unfreeze on ctxsw
+ */
+ ctx->ctx_fl_frozen = 1;
+
+ } else {
+#if 0
+ /*
+ * The tasklet is guaranteed to be scheduled for this CPU only
+ */
+ notify_info[my_cpu].to_pid = ctx->notify_pid;
+ notify_info[my_cpu].from_pid = ta->pid; /* for debug only */
+ notify_info[my_cpu].bitvect = bv;
+ /* tasklet is inserted and active */
+ tasklet_schedule(&pfm_tasklet);
+#endif
+ /*
+ * store the vector of overflowed registers for use in notification
+ * mark that a notification/blocking is pending (arm the trap)
+ */
+ th->pfm_pend_notify = 1;
+
+ /*
+ * if we do block, then keep PMU frozen until restart
+ */
+ if (!CTX_OVFL_NOBLOCK(ctx)) ctx->ctx_fl_frozen = 1;
+
+ DBprintk((" process %d notify ovfl_regs=0x%lx\n", ta->pid, bv));
+ }
+ /*
+ * keep PMU frozen (and overflowed bits cleared) when we have to stop,
+ * otherwise return a resume 'value' for PMC[0]
+ *
+ * XXX: maybe that's enough to get rid of ctx_fl_frozen ?
+ */
+ DBprintk((" will return pmc0=0x%x\n",ctx->ctx_fl_frozen ? 0x1 : 0x0));
+ return ctx->ctx_fl_frozen ? 0x1 : 0x0;
}
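The lock-free slot claim at the top of the sampling path above can be sketched in plain C. This is a hypothetical model, not kernel code: `smpl_buf`, `claim_slot`, and the GCC builtin `__sync_add_and_fetch` stand in for `pfm_smpl_buffer_desc_t`, the open-coded logic, and `ia64_fetch_and_add` (which likewise returns the incremented value):

```c
#include <stdint.h>

/* Model of the buffer state touched by the overflow handler. */
typedef struct {
    uint64_t index;    /* models psb->psb_index */
    uint64_t entries;  /* models psb->psb_entries */
} smpl_buf;

/* Atomically claim the next sampling slot.
 * Returns the claimed slot number, or -1 when the buffer is full
 * (the kernel code then jumps to reload_pmds). */
static long claim_slot(smpl_buf *psb)
{
    uint64_t idx = __sync_add_and_fetch(&psb->index, 1);
    if (idx > psb->entries)
        return -1;
    return (long)(idx - 1);  /* first entry is slot 0, not 1 */
}
```

Because the increment is atomic, concurrent overflows on different CPUs can never hand out the same slot; only the fill-level test races, which the code tolerates by over-claiming and bailing out.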
static void
perfmon_interrupt (int irq, void *arg, struct pt_regs *regs)
{
- /* unfreeze if not spurious */
- if ( update_counters(ia64_get_pmc(0)) ) {
- ia64_set_pmc(0, 0);
+ u64 pmc0;
+ struct task_struct *ta;
+
+ pmc0 = ia64_get_pmc(0); /* slow */
+
+ /*
+ * if we have some pending bits set
+ * assumes: if any PMC[0].bit[63-1] is set, then PMC[0].fr = 1
+ */
+ if ((pmc0 & ~0x1) && (ta=PMU_OWNER())) {
+
+ /* assumes, PMC[0].fr = 1 at this point */
+ pmc0 = update_counters(ta, pmc0, regs);
+
+ /*
+ * if pmu_frozen = 0
+ * pmc0 = 0 and we resume monitoring right away
+ * else
+ * pmc0 = 0x1 frozen but all pending bits are cleared
+ */
+ ia64_set_pmc(0, pmc0);
ia64_srlz_d();
+ } else {
+ printk("perfmon: Spurious PMU overflow interrupt: pmc0=0x%lx owner=%p\n", pmc0, PMU_OWNER());
}
}
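The spurious-interrupt filter in perfmon_interrupt() boils down to one predicate: service only when an overflow bit (PMC[0] bits 63..1) is set and some task owns the PMU. A hedged standalone model (the names `should_service` and `dummy_task` are illustrative, not from the patch):

```c
#include <stdint.h>
#include <stddef.h>

static int dummy_task;  /* stands in for the PMU_OWNER() task pointer */

/* Bit 0 of PMC[0] is the freeze bit; bits 63..1 are per-counter
 * overflow bits. An interrupt is genuine only when at least one
 * overflow bit is set AND a task currently owns the PMU. */
static int should_service(uint64_t pmc0, const void *pmu_owner)
{
    return (pmc0 & ~(uint64_t)0x1) != 0 && pmu_owner != NULL;
}
```

Everything else, frozen-but-clean state, a stale interrupt after ownership was dropped in pfm_save_regs(), falls through to the "Spurious PMU overflow" printk.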
-static struct irqaction perfmon_irqaction = {
- handler: perfmon_interrupt,
- flags: SA_INTERRUPT,
- name: "perfmon"
-};
-
+/* for debug only */
static int
perfmon_proc_info(char *page)
{
@@ -488,11 +1609,12 @@
p += sprintf(p, "PMC[0]=%lx\nPerfmon debug: %s\n", pmc0, pfm_debug ? "On" : "Off");
for(i=0; i < NR_CPUS; i++) {
if (cpu_is_online(i))
- p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i] ? pmu_owners[i]->pid: -1);
+ p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i].owner ? pmu_owners[i].owner->pid: 0);
}
return p - page;
}
+/* for debug only */
static int
perfmon_read_entry(char *page, char **start, off_t off, int count, int *eof, void *data)
{
@@ -509,7 +1631,11 @@
return len;
}
-static struct proc_dir_entry *perfmon_dir;
+static struct irqaction perfmon_irqaction = {
+ handler: perfmon_interrupt,
+ flags: SA_INTERRUPT,
+ name: "perfmon"
+};
void __init
perfmon_init (void)
@@ -524,19 +1650,39 @@
ia64_set_pmv(PERFMON_IRQ);
ia64_srlz_d();
- printk("perfmon: Initialized vector to %u\n",PERFMON_IRQ);
+ pmu_conf.pfm_is_disabled = 1;
+
+ printk("perfmon: version %s\n", PFM_VERSION);
+ printk("perfmon: Interrupt vectored to %u\n", PERFMON_IRQ);
if ((status=ia64_pal_perf_mon_info(pmu_conf.impl_regs, &pm_info)) != 0) {
- printk(__FUNCTION__ " pal call failed (%ld)\n", status);
+ printk("perfmon: PAL call failed (%ld)\n", status);
return;
}
pmu_conf.perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
-
- /* XXX need to use PAL instead */
pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
+ pmu_conf.num_pmds = find_num_pm_regs(pmu_conf.impl_regs);
+ pmu_conf.num_pmcs = find_num_pm_regs(&pmu_conf.impl_regs[4]);
printk("perfmon: Counters are %d bits\n", pm_info.pal_perf_mon_info_s.width);
printk("perfmon: Maximum counter value 0x%lx\n", pmu_conf.perf_ovfl_val);
+ printk("perfmon: %ld PMC/PMD pairs\n", pmu_conf.max_counters);
+ printk("perfmon: %ld PMCs, %ld PMDs\n", pmu_conf.num_pmcs, pmu_conf.num_pmds);
+ printk("perfmon: Sampling format v%d\n", PFM_SMPL_HDR_VERSION);
+
+ /* sanity check */
+ if (pmu_conf.num_pmds >= IA64_NUM_PMD_REGS || pmu_conf.num_pmcs >= IA64_NUM_PMC_REGS) {
+ printk(KERN_ERR "perfmon: ERROR not enough PMC/PMD storage in kernel, perfmon is DISABLED\n");
+ return; /* no need to continue anyway */
+ }
+ /* we are all set */
+ pmu_conf.pfm_is_disabled = 0;
+
+ /*
+ * Insert the tasklet in the list.
+ * It is still disabled at this point, so it won't run
+ printk(__FUNCTION__" tasklet is %p state=%d, count=%d\n", &perfmon_tasklet, perfmon_tasklet.state, perfmon_tasklet.count);
+ */
/*
* for now here for debug purposes
@@ -555,14 +1701,19 @@
* XXX: for system wide this function MUST never be called
*/
void
-ia64_save_pm_regs (struct task_struct *ta)
+pfm_save_regs (struct task_struct *ta)
{
- struct thread_struct *t = &ta->thread;
+ struct task_struct *owner;
+ struct thread_struct *t;
u64 pmc0, psr;
- int i,j;
+ int i;
+ if (ta == NULL) {
+ panic(__FUNCTION__" task is NULL\n");
+ }
+ t = &ta->thread;
/*
- * We must maek sure that we don't loose any potential overflow
+ * We must make sure that we don't lose any potential overflow
* interrupt while saving PMU context. In this code, external
* interrupts are always enabled.
*/
@@ -575,94 +1726,102 @@
/*
* stop monitoring:
* This is the only way to stop monitoring without destroying overflow
- * information in PMC[0..3].
+ * information in PMC[0].
* This is the last instruction which can cause overflow when monitoring
* in kernel.
- * By now, we could still have an overflow interrupt in flight.
+ * By now, we could still have an overflow interrupt in-flight.
*/
- __asm__ __volatile__ ("rsm psr.up;;"::: "memory");
+ __asm__ __volatile__ ("rum psr.up;;"::: "memory");
/*
+ * Mark the PMU as not owned
+ * This will cause the interrupt handler to do nothing in case an overflow
+ * interrupt was in-flight
+ * This also guarantees that pmc0 will contain the final state
+ * It virtually gives us full control on overflow processing from that point
+ * on.
+ * It must be an atomic operation.
+ */
+ owner = PMU_OWNER();
+ SET_PMU_OWNER(NULL);
+
+ /*
* read current overflow status:
*
- * We may be reading stale information at this point, if we got interrupt
- * just before the read(pmc0) but that's all right. However, if we did
- * not get the interrupt before, this read reflects LAST state.
- *
+ * we are guaranteed to read the final stable state
*/
- pmc0 = ia64_get_pmc(0);
+ ia64_srlz_d();
+ pmc0 = ia64_get_pmc(0); /* slow */
/*
* freeze PMU:
*
* This destroys the overflow information. This is required to make sure
* next process does not start with monitoring on if not requested
- * (PSR.up may not be enough).
- *
- * We could still get an overflow interrupt by now. However the handler
- * will not do anything if is sees PMC[0].fr=1 but no overflow bits
- * are set. So PMU will stay in frozen state. This implies that pmc0
- * will still be holding the correct unprocessed information.
- *
*/
ia64_set_pmc(0, 1);
ia64_srlz_d();
/*
- * check for overflow bits set:
- *
- * If pmc0 reports PMU frozen, this means we have a pending overflow,
- * therefore we invoke the handler. Handler is reentrant with regards
- * to PMC[0] so it is safe to call it twice.
- *
- * IF pmc0 reports overflow, we need to reread current PMC[0] value
- * in case the handler was invoked right after the first pmc0 read.
- * it is was not invoked then pmc0=PMC[0], otherwise it's been invoked
- * and overflow information has been processed, so we don't need to call.
- *
- * Test breakdown:
- * - pmc0 & ~0x1: test if overflow happened
- * - second part: check if current register reflects this as well.
- *
- * NOTE: testing for pmc0 & 0x1 is not enough has it would trigger call
- * when PM_VALID and PMU.fr which is common when setting up registers
- * just before actually starting monitors.
+ * Check for overflow bits and proceed manually if needed
*
+ * It is safe to call the interrupt handler now because it does
+ * not try to block the task right away. Instead it will set a
+ * flag and let the task proceed. The blocking will only occur
+ * next time the task exits from the kernel.
*/
- if ((pmc0 & ~0x1) && ((pmc0=ia64_get_pmc(0)) &~0x1) ) {
- printk(__FUNCTION__" Warning: pmc[0]=0x%lx\n", pmc0);
- update_counters(pmc0);
- /*
- * XXX: not sure that's enough. the next task may still get the
- * interrupt.
- */
+ if (pmc0 & ~0x1) {
+ if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", owner, ta);
+ printk(__FUNCTION__" Warning: pmc[0]=0x%lx explicit call\n", pmc0);
+
+ pmc0 = update_counters(owner, pmc0, NULL);
+ /* we will save the updated version of pmc0 */
}
/*
* restore PSR for context switch to save
*/
- __asm__ __volatile__ ("mov psr.l=%0;;"::"r"(psr): "memory");
+ __asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory");
- /*
- * XXX: this will need to be extended beyong just counters
+
+ /*
+ * XXX needs further optimization.
+ * Also must take holes into account
*/
- for (i=0,j=4; i< IA64_NUM_PMD_COUNTERS; i++,j++) {
- t->pmd[i] = ia64_get_pmd(j);
- t->pmc[i] = ia64_get_pmc(j);
+ for (i=0; i< pmu_conf.num_pmds; i++) {
+ t->pmd[i] = ia64_get_pmd(i);
}
+
+ /* skip PMC[0], we handle it separately */
+ for (i=1; i< pmu_conf.num_pmcs; i++) {
+ t->pmc[i] = ia64_get_pmc(i);
+ }
+
/*
- * PMU is frozen, PMU context is saved: nobody owns the PMU on this CPU
- * At this point, we should not receive any pending interrupt from the
- * 'switched out' task
+ * Throughout this code we could have gotten an overflow interrupt. It is transformed
+ * into a spurious interrupt as soon as we give up pmu ownership.
*/
- pmu_owners[smp_processor_id()] = NULL;
}
void
-ia64_load_pm_regs (struct task_struct *ta)
+pfm_load_regs (struct task_struct *ta)
{
struct thread_struct *t = &ta->thread;
- int i,j;
+ pfm_context_t *ctx = ta->thread.pfm_context;
+ int i;
+
+ /*
+ * XXX needs further optimization.
+ * Also must take holes into account
+ */
+ for (i=0; i< pmu_conf.num_pmds; i++) {
+ ia64_set_pmd(i, t->pmd[i]);
+ }
+
+ /* skip PMC[0] to avoid side effects */
+ for (i=1; i< pmu_conf.num_pmcs; i++) {
+ ia64_set_pmc(i, t->pmc[i]);
+ }
/*
* we first restore ownership of the PMU to the 'soon to be current'
@@ -670,26 +1829,277 @@
* of this function, we get an interrupt, we attribute it to the correct
* task
*/
- pmu_owners[smp_processor_id()] = ta;
+ SET_PMU_OWNER(ta);
+
+#if 0
+ /*
+ * check if we had pending overflow before context switching out
+ * If so, we invoke the handler manually, i.e. simulate interrupt.
+ *
+ * XXX: given that we do not use the tasklet anymore to stop, we can
+ * move this back to the pfm_save_regs() routine.
+ */
+ if (t->pmc[0] & ~0x1) {
+ /* freeze set in pfm_save_regs() */
+ DBprintk((" pmc[0]=0x%lx manual interrupt\n",t->pmc[0]));
+ update_counters(ta, t->pmc[0], NULL);
+ }
+#endif
/*
- * XXX: this will need to be extended beyong just counters
+ * unfreeze only when possible
*/
- for (i=0,j=4; i< IA64_NUM_PMD_COUNTERS; i++,j++) {
- ia64_set_pmd(j, t->pmd[i]);
- ia64_set_pmc(j, t->pmc[i]);
+ if (ctx->ctx_fl_frozen == 0) {
+ ia64_set_pmc(0, 0);
+ ia64_srlz_d();
+ }
+}
+
+
+/*
+ * This function is called when a thread exits (from exit_thread()).
+ * This is a simplified pfm_save_regs() that simply flushes the current
+ * register state into the save area taking into account any pending
+ * overflow. This time no notification is sent because the task is dying
+ * anyway. The inline processing of overflows avoids losing some counts.
+ * The PMU is frozen on exit from this call and is to never be reenabled
+ * again for this task.
+ */
+void
+pfm_flush_regs (struct task_struct *ta)
+{
+ pfm_context_t *ctx;
+ u64 pmc0, psr, mask;
+ int i,j;
+
+ if (ta == NULL) {
+ panic(__FUNCTION__" task is NULL\n");
+ }
+ ctx = ta->thread.pfm_context;
+ if (ctx == NULL) {
+ panic(__FUNCTION__" PFM context is NULL\n");
}
/*
- * unfreeze PMU
+ * We must make sure that we don't lose any potential overflow
+ * interrupt while saving PMU context. In this code, external
+ * interrupts are always enabled.
+ */
+
+ /*
+ * save current PSR: needed because we modify it
+ */
+ __asm__ __volatile__ ("mov %0=psr;;": "=r"(psr) :: "memory");
+
+ /*
+ * stop monitoring:
+ * This is the only way to stop monitoring without destroying overflow
+ * information in PMC[0].
+ * This is the last instruction which can cause overflow when monitoring
+ * in kernel.
+ * By now, we could still have an overflow interrupt in-flight.
+ */
+ __asm__ __volatile__ ("rsm psr.up;;"::: "memory");
+
+ /*
+ * Mark the PMU as not owned
+ * This will cause the interrupt handler to do nothing in case an overflow
+ * interrupt was in-flight
+ * This also guarantees that pmc0 will contain the final state
+ * It virtually gives us full control on overflow processing from that point
+ * on.
+ * It must be an atomic operation.
+ */
+ SET_PMU_OWNER(NULL);
+
+ /*
+ * read current overflow status:
+ *
+ * we are guaranteed to read the final stable state
+ */
+ ia64_srlz_d();
+ pmc0 = ia64_get_pmc(0); /* slow */
+
+ /*
+ * freeze PMU:
+ *
+ * This destroys the overflow information. This is required to make sure
+ * next process does not start with monitoring on if not requested
+ */
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
+
+ /*
+ * restore PSR for context switch to save
+ */
+ __asm__ __volatile__ ("mov psr.l=%0;;"::"r"(psr): "memory");
+
+ /*
+ * This loop flushes the PMD into the PFM context.
+ * It also processes overflow inline.
+ *
+ * IMPORTANT: No notification is sent at this point as the process is dying.
+ * The implicit notification will come from a SIGCHILD or a return from a
+ * waitpid().
+ *
+ * XXX: must take holes into account
*/
- ia64_set_pmc(0, 0);
+ mask = pmc0 >> PMU_FIRST_COUNTER;
+ for (i=0,j=PMU_FIRST_COUNTER; i< pmu_conf.max_counters; i++,j++) {
+
+ /* collect latest results */
+ ctx->ctx_pmds[i].val += ia64_get_pmd(j) & pmu_conf.perf_ovfl_val;
+
+ /* take care of overflow inline */
+ if (mask & 0x1) {
+ ctx->ctx_pmds[i].val += 1 + pmu_conf.perf_ovfl_val;
+ DBprintk((" PMD[%d] overflowed pmd=0x%lx pmds.val=0x%lx\n",
+ j, ia64_get_pmd(j), ctx->ctx_pmds[i].val));
+ }
+ }
+}
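The inline overflow accounting used both here and in update_counters() follows one arithmetic rule: the hardware PMD holds only the low `width` bits, so the 64-bit software value accumulates the residual hardware count plus one full wrap (ovfl_val + 1) per recorded overflow. A minimal sketch, with hypothetical names (`accumulate` is not in the patch):

```c
#include <stdint.h>

/* soft_val : 64-bit software counter (ctx_pmds[i].val)
 * hw_pmd   : raw hardware PMD value
 * ovfl_val : pmu_conf.perf_ovfl_val, i.e. (1 << width) - 1
 * overflowed : whether this counter's bit was set in PMC[0] */
static uint64_t accumulate(uint64_t soft_val, uint64_t hw_pmd,
                           uint64_t ovfl_val, int overflowed)
{
    soft_val += hw_pmd & ovfl_val;  /* collect the latest residual count */
    if (overflowed)
        soft_val += 1 + ovfl_val;   /* account for one full wrap */
    return soft_val;
}
```

With, say, 8-bit counters (ovfl_val = 0xff), a wrap plus a residual reading of 5 yields 0x105, the counts that would otherwise be lost at exit.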
+
+/*
+ * XXX: this routine is not very portable for PMCs
+ * XXX: make this routine able to work with non current context
+ */
+static void
+ia64_reset_pmu(void)
+{
+ int i;
+
+ /* PMU is frozen, no pending overflow bits */
+ ia64_set_pmc(0,1);
+
+ /* extra overflow bits + counter configs cleared */
+ for(i=1; i< PMU_FIRST_COUNTER + pmu_conf.max_counters ; i++) {
+ ia64_set_pmc(i,0);
+ }
+
+ /* opcode matcher set to all 1s */
+ ia64_set_pmc(8,~0);
+ ia64_set_pmc(9,~0);
+
+ /* I-EAR config cleared, plm=0 */
+ ia64_set_pmc(10,0);
+
+ /* D-EAR config cleared, PMC[11].pt must be 1 */
+ ia64_set_pmc(11,1 << 28);
+
+ /* BTB config. plm=0 */
+ ia64_set_pmc(12,0);
+
+ /* Instruction address range, PMC[13].ta must be 1 */
+ ia64_set_pmc(13,1);
+
+ /* clears all PMD registers */
+ for(i=0;i< pmu_conf.num_pmds; i++) {
+ if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
+ }
ia64_srlz_d();
}
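The reset values programmed by ia64_reset_pmu() can be summarized as a table keyed by register index. This is a sketch derived from the comments in the patch (the function name `pmc_reset_val` is an assumption; the register roles are as the comments describe them):

```c
#include <stdint.h>

/* Default PMC values after ia64_reset_pmu():
 * PMC[0] frozen, counter configs cleared, opcode matchers wide open,
 * D-EAR needs PMC[11].pt = 1 (bit 28), address range needs PMC[13].ta = 1. */
static uint64_t pmc_reset_val(int i)
{
    switch (i) {
    case 0:  return 1;          /* frozen, no pending overflow bits */
    case 8:
    case 9:  return ~0UL;       /* opcode matchers set to all 1s */
    case 11: return 1UL << 28;  /* D-EAR cleared, pt bit kept set */
    case 13: return 1;          /* instruction range, ta bit set */
    default: return 0;          /* counter configs, I-EAR, BTB: cleared */
    }
}
```

Keeping the pt and ta bits set even in the "cleared" state is the notable detail: those registers are not all-zero safe.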
+/*
+ * task is the newly created task
+ */
+int
+pfm_inherit(struct task_struct *task)
+{
+ pfm_context_t *ctx = current->thread.pfm_context;
+ pfm_context_t *nctx;
+ struct thread_struct *th = &task->thread;
+ int i, cnum;
+
+ /*
+ * takes care of easiest case first
+ */
+ if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_NONE) {
+ DBprintk((" removing PFM context for %d\n", task->pid));
+ task->thread.pfm_context = NULL;
+ task->thread.pfm_pend_notify = 0;
+ /* copy_thread() clears IA64_THREAD_PM_VALID */
+ return 0;
+ }
+ nctx = pfm_context_alloc();
+ if (nctx == NULL) return -ENOMEM;
+
+ /* copy content */
+ *nctx = *ctx;
+
+ if (ctx->ctx_fl_inherit == PFM_FL_INHERIT_ONCE) {
+ nctx->ctx_fl_inherit = PFM_FL_INHERIT_NONE;
+ DBprintk((" downgrading to INHERIT_NONE for %d\n", task->pid));
+ }
+
+ /* initialize counters in new context */
+ for(i=0, cnum= PMU_FIRST_COUNTER; i < pmu_conf.max_counters; cnum++, i++) {
+ nctx->ctx_pmds[i].val = nctx->ctx_pmds[i].ival & ~pmu_conf.perf_ovfl_val;
+ th->pmd[cnum] = nctx->ctx_pmds[i].ival & pmu_conf.perf_ovfl_val;
+
+ }
+ /* clear BTB index register */
+ th->pmd[16] = 0;
+
+ /* if sampling then increment number of users of buffer */
+ if (nctx->ctx_smpl_buf) {
+ atomic_inc(&nctx->ctx_smpl_buf->psb_refcnt);
+ }
+
+ nctx->ctx_fl_frozen = 0;
+ nctx->ctx_ovfl_regs = 0;
+ sema_init(&nctx->ctx_restart_sem, 0); /* reset this semaphore to locked */
+
+ /* clear pending notification */
+ th->pfm_pend_notify = 0;
+
+ /* link with new task */
+ th->pfm_context = nctx;
+
+ DBprintk((" nctx=%p for process %d\n", nctx, task->pid));
+
+ /*
+ * the copy_thread routine automatically clears
+ * IA64_THREAD_PM_VALID, so we need to reenable it, if it was used by the caller
+ */
+ if (current->thread.flags & IA64_THREAD_PM_VALID) {
+ DBprintk((" setting PM_VALID for %d\n", task->pid));
+ th->flags |= IA64_THREAD_PM_VALID;
+ }
+
+ return 0;
+}
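The inheritance policy pfm_inherit() implements has three cases: NONE drops the context on fork, ONCE copies it but downgrades the child so grandchildren no longer inherit, and anything else copies as-is. A hedged model (the enum values and `child_inherit_mode` are illustrative; the kernel's PFM_FL_* constants may differ):

```c
/* Hypothetical stand-ins for the PFM_FL_INHERIT_* flags. */
enum { PFM_FL_INHERIT_NONE, PFM_FL_INHERIT_ONCE, PFM_FL_INHERIT_ALL };

/* Returns the child's inherit mode, or -1 when no context is inherited
 * at all (the NONE case, where pfm_inherit() clears thread.pfm_context). */
static int child_inherit_mode(int parent_mode)
{
    switch (parent_mode) {
    case PFM_FL_INHERIT_NONE: return -1;
    case PFM_FL_INHERIT_ONCE: return PFM_FL_INHERIT_NONE;  /* downgrade */
    default:                  return parent_mode;
    }
}
```

The ONCE downgrade is what bounds the propagation to a single generation, which is why the patch logs "downgrading to INHERIT_NONE".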
+
+/* called from exit_thread() */
+void
+pfm_context_exit(struct task_struct *task)
+{
+ pfm_context_t *ctx = task->thread.pfm_context;
+
+ if (!ctx) {
+ DBprintk((" invalid context for %d\n", task->pid));
+ return;
+ }
+
+ /* check if we have a sampling buffer attached */
+ if (ctx->ctx_smpl_buf) {
+ pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
+
+ /* if only user left, then remove */
+ DBprintk((" pid %d: task %d sampling psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
+
+ if (atomic_dec_and_test(&psb->psb_refcnt) ) {
+ rvfree(psb->psb_hdr, psb->psb_size);
+ vfree(psb);
+ DBprintk((" pid %d: cleaning task %d sampling buffer\n", current->pid, task->pid ));
+ }
+ }
+ DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, task->pid, ctx));
+ pfm_context_free(ctx);
+}
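The sampling-buffer teardown above is a plain refcount pattern: the buffer is shared across forked tasks (pfm_inherit() takes a reference) and freed only by the last exiting user, via atomic_dec_and_test(). A minimal model with hypothetical names (`smpl_desc`, `put_smpl_buf`; the `freed` flag stands in for the rvfree()/vfree() calls):

```c
/* Model of the shared sampling-buffer descriptor. */
typedef struct {
    int refcnt;  /* models psb_refcnt; one per task sharing the buffer */
    int freed;   /* set when the real code would rvfree()/vfree() */
} smpl_desc;

/* Drop one reference; the last user releases the buffer. */
static void put_smpl_buf(smpl_desc *psb)
{
    if (--psb->refcnt == 0)
        psb->freed = 1;
}
```

In the kernel the decrement-and-test must be atomic, since parent and child can exit concurrently on different CPUs; the model above elides that for clarity.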
+
#else /* !CONFIG_PERFMON */
-asmlinkage unsigned long
-sys_perfmonctl (int cmd, int count, void *ptr)
+asmlinkage int
+sys_perfmonctl (int pid, int cmd, int flags, perfmon_req_t *req, int count, long arg6, long arg7, long arg8, long stack)
{
return -ENOSYS;
}
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.0-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/process.c Mon Jan 8 23:41:03 2001
@@ -1,8 +1,8 @@
/*
* Architecture-specific setup.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#define __KERNEL_SYSCALLS__ /* see <asm/unistd.h> */
#include <linux/config.h>
@@ -20,6 +20,7 @@
#include <asm/delay.h>
#include <asm/efi.h>
+#include <asm/perfmon.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/sal.h>
@@ -147,7 +148,7 @@
ia64_save_debug_regs(&task->thread.dbr[0]);
#ifdef CONFIG_PERFMON
if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
- ia64_save_pm_regs(task);
+ pfm_save_regs(task);
#endif
if (IS_IA32_PROCESS(ia64_task_regs(task)))
ia32_save_state(&task->thread);
@@ -160,7 +161,7 @@
ia64_load_debug_regs(&task->thread.dbr[0]);
#ifdef CONFIG_PERFMON
if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
- ia64_load_pm_regs(task);
+ pfm_load_regs(task);
#endif
if (IS_IA32_PROCESS(ia64_task_regs(task)))
ia32_load_state(&task->thread);
@@ -210,6 +211,7 @@
struct switch_stack *child_stack, *stack;
extern char ia64_ret_from_clone;
struct pt_regs *child_ptregs;
+ int retval = 0;
#ifdef CONFIG_SMP
/*
@@ -290,7 +292,11 @@
if (IS_IA32_PROCESS(ia64_task_regs(current)))
ia32_save_state(&p->thread);
#endif
- return 0;
+#ifdef CONFIG_PERFMON
+ if (current->thread.pfm_context)
+ retval = pfm_inherit(p);
+#endif
+ return retval;
}
#ifdef CONFIG_IA64_NEW_UNWIND
@@ -523,6 +530,15 @@
#endif
}
+#ifdef CONFIG_PERFMON
+void
+release_thread (struct task_struct *task)
+{
+ if (task->thread.pfm_context)
+ pfm_context_exit(task);
+}
+#endif
+
/*
* Clean up state associated with current thread. This is called when
* the thread calls exit().
@@ -545,7 +561,7 @@
* we guarantee no race. with this call we also stop
* monitoring
*/
- ia64_save_pm_regs(current);
+ pfm_flush_regs(current);
/*
* make sure that switch_to() will not save context again
*/
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.0-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/setup.c Mon Jan 8 23:41:49 2001
@@ -1,8 +1,8 @@
/*
* Architecture-specific setup.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 2000, Rohit Seth <rohit.seth@intel.com>
* Copyright (C) 1999 VA Linux Systems
@@ -444,6 +431,15 @@
: "r" (((ulong) IA32_CR4 << 32) | IA32_CR0));
#endif
+ /* disable all local interrupt sources: */
+ ia64_set_itv(1 << 16);
+ ia64_set_lrr0(1 << 16);
+ ia64_set_lrr1(1 << 16);
+ ia64_set_pmv(1 << 16);
+ ia64_set_cmcv(1 << 16);
+
+ /* clear TPR & XTP to enable all interrupt classes: */
+ ia64_set_tpr(0);
#ifdef CONFIG_SMP
normal_xtp();
#endif
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.4.0-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/signal.c Mon Jan 8 23:53:05 2001
@@ -190,6 +190,11 @@
err |= __put_user(from->si_utime, &to->si_utime);
err |= __put_user(from->si_stime, &to->si_stime);
err |= __put_user(from->si_status, &to->si_status);
+ case __SI_PROF >> 16:
+ err |= __put_user(from->si_uid, &to->si_uid);
+ err |= __put_user(from->si_pid, &to->si_pid);
+ err |= __put_user(from->si_pfm_ovfl, &to->si_pfm_ovfl);
+ break;
default:
err |= __put_user(from->si_uid, &to->si_uid);
err |= __put_user(from->si_pid, &to->si_pid);
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/smp.c Mon Jan 8 23:42:26 2001
@@ -71,7 +79,7 @@
static volatile int smp_commenced;
static int max_cpus = -1; /* Command line */
-static unsigned long ipi_op[NR_CPUS];
+
struct smp_call_struct {
void (*func) (void *info);
void *info;
@@ -159,7 +172,7 @@
handle_IPI(int irq, void *dev_id, struct pt_regs *regs)
{
int this_cpu = smp_processor_id();
- unsigned long *pending_ipis = &ipi_op[this_cpu];
+ unsigned long *pending_ipis = &cpu_data[this_cpu].ipi_operation;
unsigned long ops;
/* Count this now; we may make a call that never returns. */
@@ -274,7 +293,7 @@
if (dest_cpu == -1)
return;
- set_bit(op, &ipi_op[dest_cpu]);
+ set_bit(op, &cpu_data[dest_cpu].ipi_operation);
platform_send_ipi(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
}
@@ -508,10 +526,6 @@
perfmon_init_percpu();
#endif
- /* Disable all local interrupts */
- ia64_set_lrr0(0, 1);
- ia64_set_lrr1(0, 1);
-
local_irq_enable(); /* Interrupts have been off until now */
calibrate_delay();
@@ -610,7 +624,6 @@
/* Take care of some initial bookkeeping. */
memset(&__cpu_physical_id, -1, sizeof(__cpu_physical_id));
- memset(&ipi_op, 0, sizeof(ipi_op));
/* Setup BP mappings */
__cpu_physical_id[0] = hard_smp_processor_id();
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.0-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/time.c Mon Jan 8 23:43:02 2001
@@ -226,7 +226,7 @@
#endif
/* arrange for the cycle counter to generate a timer interrupt: */
- ia64_set_itv(TIMER_IRQ, 0);
+ ia64_set_itv(TIMER_IRQ);
itm.next[smp_processor_id()].count = ia64_get_itc() + itm.delta;
ia64_set_itm(itm.next[smp_processor_id()].count);
}
diff -urN linux-davidm/arch/ia64/lib/Makefile linux-2.4.0-lia/arch/ia64/lib/Makefile
--- linux-davidm/arch/ia64/lib/Makefile Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/lib/Makefile Mon Jan 8 23:43:14 2001
@@ -7,18 +7,18 @@
L_TARGET = lib.a
+export-objs := io.o swiotlb.o
+
obj-y := __divsi3.o __udivsi3.o __modsi3.o __umodsi3.o \
__divdi3.o __udivdi3.o __moddi3.o __umoddi3.o \
checksum.o clear_page.o csum_partial_copy.o copy_page.o \
copy_user.o clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o \
- flush.o do_csum.o \
+ flush.o io.o do_csum.o \
swiotlb.o
ifneq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
obj-y += memcpy.o memset.o strlen.o
endif
-
-export-objs += io.o
IGNORE_FLAGS_OBJS = __divsi3.o __udivsi3.o __modsi3.o __umodsi3.o \
__divdi3.o __udivdi3.o __moddi3.o __umoddi3.o
diff -urN linux-davidm/arch/ia64/lib/swiotlb.c linux-2.4.0-lia/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/lib/swiotlb.c Mon Jan 8 23:43:36 2001
@@ -10,7 +10,10 @@
* unnecessary i-cache flushing.
*/
+#include <linux/config.h>
+
#include <linux/mm.h>
+#include <linux/module.h>
#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/string.h>
@@ -325,12 +328,8 @@
pg_addr = PAGE_ALIGN((unsigned long) addr);
end = (unsigned long) addr + size;
while (pg_addr + PAGE_SIZE <= end) {
-#if 0
- set_bit(PG_arch_1, virt_to_page(pg_addr));
-#else
- if (!VALID_PAGE(virt_to_page(pg_addr)))
- printk("Invalid addr %lx!!!\n", pg_addr);
-#endif
+ struct page *page = virt_to_page(pg_addr);
+ set_bit(PG_arch_1, &page->flags);
pg_addr += PAGE_SIZE;
}
}
@@ -454,3 +453,14 @@
{
return virt_to_phys(sg->address);
}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single);
+EXPORT_SYMBOL(swiotlb_sync_sg);
+EXPORT_SYMBOL(swiotlb_dma_address);
+EXPORT_SYMBOL(swiotlb_alloc_consistent);
+EXPORT_SYMBOL(swiotlb_free_consistent);
diff -urN linux-davidm/arch/ia64/tools/print_offsets.c linux-2.4.0-lia/arch/ia64/tools/print_offsets.c
--- linux-davidm/arch/ia64/tools/print_offsets.c Tue Jan 9 00:09:51 2001
+++ linux-2.4.0-lia/arch/ia64/tools/print_offsets.c Mon Jan 8 23:43:49 2001
@@ -1,8 +1,8 @@
/*
* Utility to generate asm-ia64/offsets.h.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
* Note that this file has dual use: when building the kernel
* natively, the file is translated into a binary and executed. When
@@ -57,6 +57,9 @@
{ "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) },
#ifdef CONFIG_IA32_SUPPORT
{ "IA64_TASK_THREAD_SIGMASK_OFFSET",offsetof (struct task_struct, thread.un.sigmask) },
+#endif
+#ifdef CONFIG_PERFMON
+ { "IA64_TASK_PFM_NOTIFY", offsetof(struct task_struct, thread.pfm_pend_notify) },
#endif
{ "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
{ "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) },
diff -urN linux-davidm/drivers/ide/ide-geometry.c linux-2.4.0-lia/drivers/ide/ide-geometry.c
--- linux-davidm/drivers/ide/ide-geometry.c Thu Jan 4 22:40:12 2001
+++ linux-2.4.0-lia/drivers/ide/ide-geometry.c Thu Jan 4 23:10:38 2001
@@ -3,8 +3,11 @@
*/
#include <linux/config.h>
#include <linux/ide.h>
-#include <linux/mc146818rtc.h>
#include <asm/io.h>
+
+#ifdef __i386__
+# include <linux/mc146818rtc.h>
+#endif
/*
* We query CMOS about hard disks : it could be that we have a SCSI/ESDI/etc
diff -urN linux-davidm/drivers/scsi/qla1280.c linux-2.4.0-lia/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Tue Jan 9 00:09:52 2001
+++ linux-2.4.0-lia/drivers/scsi/qla1280.c Mon Jan 8 23:45:09 2001
@@ -16,9 +16,21 @@
* General Public License for more details.
**
******************************************************************************/
-#define QLA1280_VERSION "3.19 Beta"
+#define QLA1280_VERSION "3.21 Beta"
/****************************************************************************
Revision History:
+ Rev 3.21 Beta January 4, 2001 BN Qlogic
+ - Changed criteria of 64/32 Bit mode of HBA
+ operation according to BITS_PER_LONG rather
+ than HBA's NVRAM setting of >4Gig memory bit;
+ so that the HBA auto-configures without the need
+ to setup each system individually.
+ Rev 3.20 Beta December 5, 2000 BN Qlogic
+ - Added priority handling to IA-64 onboard SCSI
+ ISP12160 chip for kernels greater than 2.3.18.
+ - Added irqrestore for qla1280_intr_handler.
+ - Enabled /proc/scsi/qla1280 interface.
+ - Clear /proc/scsi/qla1280 counters in detect().
Rev 3.19 Beta October 13, 2000 BN Qlogic
- Declare driver_template for new kernel
(2.4.0 and greater) scsi initialization scheme.
@@ -167,16 +179,9 @@
#define STOP_ON_ERROR 0 /* Stop on aborts and resets */
#define STOP_ON_RESET 0
#define STOP_ON_ABORT 0
-
+#define QLA1280_PROFILE 1 /* 3.20 */
#define DEBUG_QLA1280 0
-/*************** 64 BIT PCI DMA ******************************************/
-#define FORCE_64BIT_PCI_DMA 0 /* set to one for testing only */
-/* Applicable to 64 version of the Linux 2.4.x and above only */
-/* NVRAM bit nv->cntr_flags_1.enable_64bit_addressing should be used for */
-/* administrator control of PCI DMA width size per system configuration */
-/*************************************************************************/
-
#define BZERO(ptr, amt) memset(ptr, 0, amt)
#define BCOPY(src, dst, amt) memcpy(dst, src, amt)
#define KMALLOC(siz) kmalloc((siz), GFP_ATOMIC)
@@ -241,7 +246,7 @@
STATIC int qla1280_return_status( sts_entry_t *sts, Scsi_Cmnd *cp);
STATIC void qla1280_removeq(scsi_lu_t *q, srb_t *sp);
STATIC void qla1280_mem_free(scsi_qla_host_t *ha);
-static void qla1280_do_dpc(void *p);
+void qla1280_do_dpc(void *p);
static char *qla1280_get_token(char *, char *);
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0)
STATIC inline void mdelay(int);
@@ -429,7 +434,7 @@
static unsigned long qla1280_verbose = 1L;
static scsi_qla_host_t *qla1280_hostlist = NULL;
-#ifdef QLA1280_PROFILE
+#if QLA1280_PROFILE
static int qla1280_buffer_size = 0;
static char *qla1280_buffer = NULL;
#endif
@@ -521,7 +526,7 @@
uint32_t b, t, l;
host = NULL;
-
+
/* Find the host that was specified */
for( ha=qla1280_hostlist; (ha != NULL) && ha->host->host_no != hostno; ha=ha->next )
;
@@ -579,7 +584,7 @@
ha->request_dma,
ha->response_dma);
len += size;
- size = sprintf(PROC_BUF, "Request Queue count= 0x%lx, Response Queue count= 0x%lx\n",
+ size = sprintf(PROC_BUF, "Request Queue count= 0x%x, Response Queue count= 0x%x\n",
REQUEST_ENTRY_CNT,
RESPONSE_ENTRY_CNT);
len += size;
@@ -671,7 +676,7 @@
struct Scsi_Host *host;
scsi_qla_host_t *ha, *cur_ha;
struct _qlaboards *bdp;
- int i, j;
+ int i,j;
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,18)
unsigned short subsys;
#endif
@@ -747,14 +752,99 @@
#else
template->proc_name = "qla1280";
#endif
+
+ /* 3.20 */
+ /* present the on-board ISP12160 for IA-64 Lion systems
+ first to the OS; to preserve boot drive access in case another
+ QLA12160 is inserted in the PCI slots */
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
+ while ((pdev = pci_find_subsys(QLA1280_VENDOR_ID,
+ bdp->device_id, /* QLA12160 first in list */
+ PCI_ANY_ID,
+ PCI_ANY_ID,pdev))) {
+
+ /* only interested here on devices on PCI bus=1 slot=2 */
+ if ((pdev->bus->number != 1) ||
+ (PCI_SLOT(pdev->devfn) != 2)) continue;
+
+ if (pci_enable_device(pdev)) goto find_devices;
+ printk("qla1x160: Initializing IA-64 ISP12160\n");
+ host = scsi_register(template, sizeof(scsi_qla_host_t));
+ ha = (scsi_qla_host_t *) host->hostdata;
+ /* Clear our data area */
+ for( j =0, cp = (char *)ha; j < sizeof(scsi_qla_host_t); j++)
+ *cp++ = 0;
+ /* Sanitize the information from PCI BIOS. */
+ host->irq = pdev->irq;
+ host->io_port = pci_resource_start(pdev, 0);
+ ha->pci_bus = pdev->bus->number;
+ ha->pci_device_fn = pdev->devfn;
+ ha->pdev = pdev;
+ ha->device_id = bdp->device_id; /* QLA12160 first in list */
+
+ ha->devnum = 0; // This priority ISP12160 is always devnum zero
+ if( qla1280_mem_alloc(ha) ) {
+ printk(KERN_INFO "qla1x160: Failed to get memory\n");
+ }
+ ha->ports = bdp->numPorts;
+ /* following needed for all cases of OS versions */
+ host->io_port &= PCI_BASE_ADDRESS_IO_MASK;
+ ha->iobase = (device_reg_t *) host->io_port;
+ ha->host = host;
+ ha->host_no = host->host_no;
+ /* 3.20 zero out /proc/scsi/qla1280 counters */
+ ha->actthreads = 0;
+ ha->qthreads = 0;
+ ha->isr_count = 0;
+
+ /* load the F/W, read paramaters, and init the H/W */
+ ha->instance = num_hosts;
+ if (qla1280_initialize_adapter(ha))
+ {
+ printk(KERN_INFO "qla1x160: Failed to initialize onboard ISP12160 on IA-64 \n");
+ qla1280_mem_free(ha);
+ scsi_unregister(host);
+ goto find_devices;
+ }
+ host->max_channel = bdp->numPorts-1;
+ /* Register our resources with Linux */
+ if( qla1280_register_with_Linux(ha, bdp->numPorts-1) ) {
+ printk(KERN_INFO "qla1x160: Failed to register resources for onboard ISP12160 on IA-64\n");
+ qla1280_mem_free(ha);
+ scsi_unregister(host);
+ goto find_devices;
+ }
+ reg = ha->iobase;
+ /* Disable ISP interrupts. */
+ qla1280_disable_intrs(ha);
+ /* Insure mailbox registers are free. */
+ WRT_REG_WORD(&reg->semaphore, 0);
+ WRT_REG_WORD(&reg->host_cmd, HC_CLR_RISC_INT);
+ WRT_REG_WORD(&reg->host_cmd, HC_CLR_HOST_INT);
+
+ /* Enable chip interrupts. */
+ qla1280_enable_intrs(ha);
+ /* Insert new entry into the list of adapters */
+ ha->next = NULL;
+ /* this preferred device will always be the first one found */
+ cur_ha = qla1280_hostlist = ha;
+ num_hosts++;
+ }
+#endif
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
+ find_devices:
+#endif
+
+ pdev = NULL;
/* Try and find each different type of adapter we support */
- for( i=0; bdp->device_id != 0 && i < NUM_OF_ISP_DEVICES; i++, bdp++ ) {
+ for(i=0;bdp->device_id != 0 && i < NUM_OF_ISP_DEVICES;i++,bdp++) {
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,95)
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
/* PCI_SUBSYSTEM_IDS supported */
while ((pdev = pci_find_subsys(QLA1280_VENDOR_ID,
bdp->device_id, PCI_ANY_ID, PCI_ANY_ID, pdev) )) {
- if (pci_enable_device(pdev)) continue;
+ if (pci_enable_device(pdev)) continue;
#else
while ((pdev = pci_find_device(QLA1280_VENDOR_ID,
bdp->device_id, pdev ) )) {
@@ -766,24 +856,31 @@
#endif /* 2,1,95 */
/* found a adapter */
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
- printk("qla1280: detect() found an HBA\n");
- printk("qla1280: VID=%x DID=%x SSVID=%x SSDID=%x\n",
- pdev->vendor, pdev->device,
- pdev->subsystem_vendor, pdev->subsystem_device);
/* If it's an AMI SubSys Vendor ID adapter, skip it. */
if (pdev->subsystem_vendor = PCI_VENDOR_ID_AMI)
{
- printk("qla1280: Skip AMI SubSys Vendor ID Chip\n");
+ printk("qla1x160: Skip AMI SubSys Vendor ID Chip\n");
continue;
}
+
+ /* 3.20 skip IA-64 Lion on-board ISP12160 */
+ /* since we already initialized and presented it */
+ if ((pdev->bus->number = 1) &&
+ (PCI_SLOT(pdev->devfn) = 2)) continue;
+
+ printk("qla1x160: Supported Device Found VID=%x DID=%x SSVID=%x SSDID=%x\n",
+ pdev->vendor, pdev->device,
+ pdev->subsystem_vendor, pdev->subsystem_device);
+
#else
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,95)
+ printk("qla1x160: Supported Device Found\n");
pci_read_config_word(pdev, PCI_SUBSYSTEM_VENDOR_ID,
&subsys);
/* Bypass all AMI SUBSYS VENDOR IDs */
if (subsys = PCI_VENDOR_ID_AMI)
{
- printk("qla1280: Skip AMI SubSys Vendor ID Chip\n");
+ printk("qla1x160: Skip AMI SubSys Vendor ID Chip\n");
continue;
}
#endif /* 2,1,95 */
@@ -814,10 +911,10 @@
ha->pci_device_fn = pci_devfn;
#endif
ha->device_id = bdp->device_id;
-
- ha->devnum = i;
+ ha->devnum = i; // specifies microcode load address
+
if( qla1280_mem_alloc(ha) ) {
- printk(KERN_INFO "qla1280: Failed to get memory\n");
+ printk(KERN_INFO "qla1x160: Failed to get memory\n");
}
ha->ports = bdp->numPorts;
@@ -831,7 +928,7 @@
ha->instance = num_hosts;
if (qla1280_initialize_adapter(ha))
{
- printk(KERN_INFO "qla1280: Failed to initialize adapter\n");
+ printk(KERN_INFO "qla1x160:Failed to initialize adapter\n");
qla1280_mem_free(ha);
scsi_unregister(host);
continue;
@@ -840,7 +937,7 @@
host->max_channel = bdp->numPorts-1;
/* Register our resources with Linux */
if( qla1280_register_with_Linux(ha, bdp->numPorts-1) ) {
- printk(KERN_INFO "qla1280: Failed to register resources\n");
+ printk(KERN_INFO "qla1x160: Failed to register resources\n");
qla1280_mem_free(ha);
scsi_unregister(host);
continue;
@@ -1068,8 +1165,7 @@
{
CMD_RESULT(cmd) = (int) (DID_BUS_BUSY << 16);
qla1280_done_q_put(sp, &ha->done_q_first, &ha->done_q_last);
-
- schedule_task(&ha->run_qla_bh);
+ schedule_task(&ha->run_qla_bh);
ha->flags.dpc_sched = TRUE;
DRIVER_UNLOCK
return(0);
@@ -1507,6 +1603,7 @@
if(test_and_set_bit(QLA1280_IN_ISR_BIT, &ha->flags))
{
COMTRACE('X')
+ spin_unlock_irqrestore(&io_request_lock, cpu_flags);
return;
}
ha->isr_count++;
@@ -1534,6 +1631,7 @@
{
COMTRACE('X')
printk(KERN_INFO "scsi(%d): Already in interrupt - returning \n", (int)ha->host_no);
+ spin_unlock_irqrestore(&io_request_lock, cpu_flags);
return;
}
set_bit(QLA1280_IN_ISR_BIT, (int *)&ha->flags);
@@ -1565,7 +1663,7 @@
ha->run_qla_bh.routine = qla1280_do_dpc;
COMTRACE('P')
- schedule_task(&ha->run_qla_bh);
+ schedule_task(&ha->run_qla_bh);
ha->flags.dpc_sched = TRUE;
}
clear_bit(QLA1280_IN_ISR_BIT, (int *)&ha->flags);
@@ -1589,7 +1687,7 @@
* "host->can_queue". This can cause a panic if we were in our interrupt
* code .
**************************************************************************/
-static void qla1280_do_dpc(void *p)
+void qla1280_do_dpc(void *p)
{
scsi_qla_host_t *ha = (scsi_qla_host_t *) p;
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,95)
@@ -1773,10 +1871,10 @@
scsi_to_pci_dma_dir(cmd->sc_data_direction));
}
else if (cmd->request_bufflen) {
- DEBUG(sprintf(debug_buff,
+ /*DEBUG(sprintf(debug_buff,
"No S/G unmap_single cmd=%x saved_dma_handle=%lx\n\r",
cmd,sp->saved_dma_handle);)
- DEBUG(qla1280_print(debug_buff);)
+ DEBUG(qla1280_print(debug_buff);)*/
pci_unmap_single(ha->pdev,sp->saved_dma_handle,
cmd->request_bufflen,
@@ -3220,17 +3318,19 @@
ha->flags.disable_risc_code_load = nv->cntr_flags_1.disable_loading_risc_code;
- /* Enable 64bit addressing. */
- ha->flags.enable_64bit_addressing = nv->cntr_flags_1.enable_64bit_addressing;
-
-#if FORCE_64BIT_PCI_DMA
+#if BITS_PER_LONG > 32
+ /* Enable 64bit addressing for OS/System combination supporting it */
+ /* actual NVRAM bit is: nv->cntr_flags_1.enable_64bit_addressing */
+ /* but we will ignore it and use BITS_PER_LONG macro to setup for */
+ /* 64 or 32 bit access of host memory in all x86/ia-64/Alpha systems */
ha->flags.enable_64bit_addressing = 1;
+#else
+ ha->flags.enable_64bit_addressing = 0;
#endif
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,18)
if (ha->flags.enable_64bit_addressing) {
- printk("[[[ qla1x160: 64 Bit PCI Addressing Enabled ]]]\n");
+ printk("[[[ qla1x160: 64 Bit PCI Addressing Enabled ]]]\n");
#if BITS_PER_LONG > 32
/* Update our PCI device dma_mask for full 64 bit mask */
@@ -3979,7 +4079,7 @@
}
else if (cmd->request_bufflen) /* If data transfer. */
{
- DEBUG(printk("Single data transfer len=0x%x\n",cmd->request_bufflen));
+ /*DEBUG(printk("Single data transfer len=0x%x\n",cmd->request_bufflen));*/
seg_cnt = 1;
}
@@ -4169,9 +4269,9 @@
*dword_ptr++ = cpu_to_le32(pci_dma_lo32(dma_handle));
*dword_ptr++ = cpu_to_le32(pci_dma_hi32(dma_handle));
*dword_ptr = (uint32_t) cmd->request_bufflen;
- DEBUG(sprintf(debug_buff,
+ /*DEBUG(sprintf(debug_buff,
"No S/G map_single saved_dma_handle=%lx\n\r",dma_handle));
- DEBUG(qla1280_print(debug_buff));
+ DEBUG(qla1280_print(debug_buff));*/
#ifdef QL_DEBUG_LEVEL_5
qla1280_print(
"qla1280_64bit_start_scsi: No scatter/gather command packet data - c");
@@ -4215,6 +4315,10 @@
ha->request_ring_ptr++;
/* Set chip new ring index. */
+ DEBUG(qla1280_print("qla1280_64bit_start_scsi: Wakeup RISC for pending command\n\r"));
+ ha->qthreads--;
+ sp->flags |= SRB_SENT;
+ ha->actthreads++;
WRT_REG_WORD(&reg->mailbox4, ha->req_ring_index);
}
else
@@ -4557,9 +4661,9 @@
*dword_ptr++ = cpu_to_le32(pci_dma_lo32(dma_handle));
*dword_ptr = (uint32_t) cmd->request_bufflen;
- DEBUG(sprintf(debug_buff,
+ /*DEBUG(sprintf(debug_buff,
"No S/G map_single saved_dma_handle=%lx\n\r",dma_handle));
- DEBUG(qla1280_print(debug_buff));
+ DEBUG(qla1280_print(debug_buff));*/
#endif
}
}
@@ -4593,7 +4697,6 @@
ha->qthreads--;
sp->flags |= SRB_SENT;
ha->actthreads++;
- /* qla1280_output_number((uint32_t)ha->actthreads++, 16); */
WRT_REG_WORD(&reg->mailbox4, ha->req_ring_index);
}
else
diff -urN linux-davidm/drivers/scsi/qla1280.h linux-2.4.0-lia/drivers/scsi/qla1280.h
--- linux-davidm/drivers/scsi/qla1280.h Tue Jan 9 00:09:52 2001
+++ linux-2.4.0-lia/drivers/scsi/qla1280.h Mon Jan 8 23:47:49 2001
@@ -40,14 +40,14 @@
* Driver debug definitions.
*/
/* #define QL_DEBUG_LEVEL_1 */ /* Output register accesses to COM1 */
-/* #define QL_DEBUG_LEVEL_2 */ /* Output error msgs to COM1 */
+/* #define QL_DEBUG_LEVEL_2 */ /* Output error msgs to COM1 */
/* #define QL_DEBUG_LEVEL_3 */ /* Output function trace msgs to COM1 */
-/* #define QL_DEBUG_LEVEL_4 */ /* Output NVRAM trace msgs to COM1 */
+/* #define QL_DEBUG_LEVEL_4 */ /* Output NVRAM trace msgs to COM1 */
/* #define QL_DEBUG_LEVEL_5 */ /* Output ring trace msgs to COM1 */
/* #define QL_DEBUG_LEVEL_6 */ /* Output WATCHDOG timer trace to COM1 */
/* #define QL_DEBUG_LEVEL_7 */ /* Output RISC load trace msgs to COM1 */
- #define QL_DEBUG_CONSOLE /* Output to console instead of COM1 */
+#define QL_DEBUG_CONSOLE /* Output to console instead of COM1 */
/* comment this #define to get output of qla1280_print to COM1 */
/* if COM1 is not connected to a host system, the driver hangs system! */
diff -urN linux-davidm/drivers/sound/sound_firmware.c linux-2.4.0-lia/drivers/sound/sound_firmware.c
--- linux-davidm/drivers/sound/sound_firmware.c Tue Mar 14 17:54:42 2000
+++ linux-2.4.0-lia/drivers/sound/sound_firmware.c Mon Jan 8 23:48:00 2001
@@ -7,7 +7,6 @@
#include <linux/unistd.h>
#include <asm/uaccess.h>
-static int errno;
static int do_mod_firmware_load(const char *fn, char **fp)
{
int fd;
diff -urN linux-davidm/include/asm-ia64/cache.h linux-2.4.0-lia/include/asm-ia64/cache.h
--- linux-davidm/include/asm-ia64/cache.h Tue Jan 9 00:09:52 2001
+++ linux-2.4.0-lia/include/asm-ia64/cache.h Tue Jan 9 00:09:37 2001
@@ -9,7 +9,7 @@
*/
/* Bytes per L1 (data) cache line. */
-#define L1_CACHE_SHIFT 6
+#define L1_CACHE_SHIFT CONFIG_IA64_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#ifdef CONFIG_SMP
diff -urN linux-davidm/include/asm-ia64/delay.h linux-2.4.0-lia/include/asm-ia64/delay.h
--- linux-davidm/include/asm-ia64/delay.h Tue Jan 9 00:09:52 2001
+++ linux-2.4.0-lia/include/asm-ia64/delay.h Tue Jan 9 00:10:48 2001
@@ -34,13 +34,9 @@
}
static __inline__ void
-ia64_set_itv (unsigned char vector, unsigned char masked)
+ia64_set_itv (unsigned long val)
{
- if (masked > 1)
- masked = 1;
-
- __asm__ __volatile__("mov cr.itv=%0;; srlz.d;;"
- :: "r"((masked << 16) | vector) : "memory");
+ __asm__ __volatile__("mov cr.itv=%0;; srlz.d;;" :: "r"(val) : "memory");
}
static __inline__ void
diff -urN linux-davidm/include/asm-ia64/perfmon.h linux-2.4.0-lia/include/asm-ia64/perfmon.h
--- linux-davidm/include/asm-ia64/perfmon.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.0-lia/include/asm-ia64/perfmon.h Mon Jan 8 23:48:59 2001
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2001 Hewlett-Packard Co
+ * Copyright (C) 2001 Stephane Eranian <eranian@hpl.hp.com>
+ */
+
+#ifndef _ASM_IA64_PERFMON_H
+#define _ASM_IA64_PERFMON_H
+
+#include <linux/types.h>
+
+/*
+ * Structure used to define a context
+ */
+typedef struct {
+ unsigned long smpl_entries; /* how many entries in sampling buffer */
+ unsigned long smpl_regs; /* which pmds to record on overflow */
+ void *smpl_vaddr; /* returns address of BTB buffer */
+
+ pid_t notify_pid; /* which process to notify on overflow */
+ int notify_sig; /* XXX: not used anymore */
+
+ int flags; /* NOBLOCK/BLOCK/ INHERIT flags (will replace API flags) */
+} pfreq_context_t;
+
+/*
+ * Structure used to configure a PMC or PMD
+ */
+typedef struct {
+ unsigned long reg_num; /* which register */
+ unsigned long reg_value; /* configuration (PMC) or initial value (PMD) */
+ unsigned long reg_smpl_reset; /* reset of sampling buffer overflow (large) */
+ unsigned long reg_ovfl_reset; /* reset on counter overflow (small) */
+ int reg_flags; /* (PMD): notify/don't notify */
+} pfreq_reg_t;
+
+/*
+ * main request structure passed by user
+ */
+typedef union {
+ pfreq_context_t pfr_ctx; /* request to configure a context */
+ pfreq_reg_t pfr_reg; /* request to configure a PMD/PMC */
+} perfmon_req_t;
+
+extern void pfm_save_regs (struct task_struct *);
+extern void pfm_load_regs (struct task_struct *);
+
+extern int pfm_inherit (struct task_struct *);
+extern void pfm_context_exit (struct task_struct *);
+extern void pfm_flush_regs (struct task_struct *);
+
+#endif /* _ASM_IA64_PERFMON_H */
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Tue Jan 9 00:09:52 2001
+++ linux-2.4.0-lia/include/asm-ia64/processor.h Tue Jan 9 00:10:47 2001
@@ -2,9 +2,9 @@
#define _ASM_IA64_PROCESSOR_H
/*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1998-2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*
@@ -27,6 +27,9 @@
#define IA64_NUM_PMD_REGS 32
#define IA64_NUM_PMD_COUNTERS 4
+#define DEFAULT_MAP_BASE 0x2000000000000000
+#define DEFAULT_TASK_SIZE 0xa000000000000000
+
/*
* TASK_SIZE really is a mis-named. It really is the maximum user
* space address (plus one). On IA-64, there are five regions of 2TB
@@ -257,6 +260,7 @@
__u64 ipi_count;
__u64 prof_counter;
__u64 prof_multiplier;
+ __u64 ipi_operation;
#endif
};
@@ -294,13 +298,9 @@
#ifdef CONFIG_PERFMON
__u64 pmc[IA64_NUM_PMC_REGS];
__u64 pmd[IA64_NUM_PMD_REGS];
- struct {
- __u64 val; /* virtual 64bit counter */
- __u64 rval; /* reset value on overflow */
- int sig; /* signal used to notify */
- int pid; /* process to notify */
- } pmu_counters[IA64_NUM_PMD_COUNTERS];
-# define INIT_THREAD_PM {0, }, {0, }, {{ 0, 0, 0, 0}, },
+ unsigned long pfm_pend_notify; /* non-zero if we need to notify and block */
+ void *pfm_context; /* pointer to detailed PMU context */
+# define INIT_THREAD_PM {0, }, {0, }, 0, 0,
#else
# define INIT_THREAD_PM
#endif
@@ -338,8 +338,8 @@
{0, }, /* dbr */ \
{0, }, /* ibr */ \
INIT_THREAD_PM \
- 0x2000000000000000, /* map_base */ \
- 0xa000000000000000, /* task_size */ \
+ DEFAULT_MAP_BASE, /* map_base */ \
+ DEFAULT_TASK_SIZE, /* task_size */ \
INIT_THREAD_IA32 \
0 /* siginfo */ \
}
@@ -368,7 +368,11 @@
* parent of DEAD_TASK has collected the exist status of the task via
* wait(). This is a no-op on IA-64.
*/
-#define release_thread(dead_task)
+#ifdef CONFIG_PERFMON
+ extern void release_thread (struct task_struct *task);
+#else
+# define release_thread(dead_task)
+#endif
/*
* This is the mechanism for creating a new kernel thread.
@@ -619,24 +623,16 @@
}
static inline void
-ia64_set_lrr0 (__u8 vector, __u8 masked)
+ia64_set_lrr0 (unsigned long val)
{
- if (masked > 1)
- masked = 1;
-
- __asm__ __volatile__ ("mov cr.lrr0=%0;; srlz.d"
- :: "r"((masked << 16) | vector) : "memory");
+ __asm__ __volatile__ ("mov cr.lrr0=%0;; srlz.d" :: "r"(val) : "memory");
}
static inline void
-ia64_set_lrr1 (__u8 vector, __u8 masked)
+ia64_set_lrr1 (unsigned long val)
{
- if (masked > 1)
- masked = 1;
-
- __asm__ __volatile__ ("mov cr.lrr1=%0;; srlz.d"
- :: "r"((masked << 16) | vector) : "memory");
+ __asm__ __volatile__ ("mov cr.lrr1=%0;; srlz.d" :: "r"(val) : "memory");
}
static inline void
diff -urN linux-davidm/include/asm-ia64/sal.h linux-2.4.0-lia/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h Tue Jan 9 00:09:52 2001
+++ linux-2.4.0-lia/include/asm-ia64/sal.h Tue Jan 9 00:09:49 2001
@@ -28,15 +28,12 @@
#define __SAL_CALL(result,a0,a1,a2,a3,a4,a5,a6,a7) \
result = (*ia64_sal)(a0,a1,a2,a3,a4,a5,a6,a7)
-#ifdef CONFIG_SMP
-# define SAL_CALL(result,args...) do { \
- spin_lock(&sal_lock); \
- __SAL_CALL(result,args); \
- spin_unlock(&sal_lock); \
+# define SAL_CALL(result,args...) do { \
+ unsigned long flags; \
+ spin_lock_irqsave(&sal_lock, flags); \
+ __SAL_CALL(result,args); \
+ spin_unlock_irqrestore(&sal_lock, flags); \
} while (0)
-#else
-# define SAL_CALL(result,args...) __SAL_CALL(result,args)
-#endif
#define SAL_SET_VECTORS 0x01000000
#define SAL_GET_STATE_INFO 0x01000001
@@ -440,11 +437,10 @@
* machine state at the time of MCA's, INITs or CMCs
*/
static inline s64
-ia64_sal_clear_state_info (u64 sal_info_type, u64 sal_info_sub_type)
+ia64_sal_clear_state_info (u64 sal_info_type)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_CLEAR_STATE_INFO, sal_info_type, sal_info_sub_type,
- 0, 0, 0, 0, 0);
+ SAL_CALL(isrv, SAL_CLEAR_STATE_INFO, sal_info_type, 0, 0, 0, 0, 0, 0);
return isrv.status;
}
@@ -453,10 +449,10 @@
* state at the time of the MCAs, INITs or CMCs.
*/
static inline u64
-ia64_sal_get_state_info (u64 sal_info_type, u64 sal_info_sub_type, u64 *sal_info)
+ia64_sal_get_state_info (u64 sal_info_type, u64 *sal_info)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_GET_STATE_INFO, sal_info_type, sal_info_sub_type,
+ SAL_CALL(isrv, SAL_GET_STATE_INFO, sal_info_type, 0,
sal_info, 0, 0, 0, 0);
if (isrv.status)
return 0;
@@ -466,11 +462,10 @@
* state at the time of MCAs, INITs or CMCs
*/
static inline u64
-ia64_sal_get_state_info_size (u64 sal_info_type, u64 sal_info_sub_type)
+ia64_sal_get_state_info_size (u64 sal_info_type)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_GET_STATE_INFO_SIZE, sal_info_type, sal_info_sub_type,
- 0, 0, 0, 0, 0);
+ SAL_CALL(isrv, SAL_GET_STATE_INFO_SIZE, sal_info_type, 0, 0, 0, 0, 0, 0);
if (isrv.status)
return 0;
return isrv.v0;
@@ -492,11 +487,10 @@
* non-monarch processor at the end of machine check processing.
*/
static inline s64
-ia64_sal_mc_set_params (u64 param_type, u64 i_or_m, u64 i_or_m_val, u64 timeout)
+ia64_sal_mc_set_params (u64 param_type, u64 i_or_m, u64 i_or_m_val, u64 timeout, u64 rz_always)
{
struct ia64_sal_retval isrv;
- SAL_CALL(isrv, SAL_MC_SET_PARAMS, param_type, i_or_m, i_or_m_val, timeout,
- 0, 0, 0);
+ SAL_CALL(isrv, SAL_MC_SET_PARAMS, param_type, i_or_m, i_or_m_val, timeout, rz_always, 0, 0);
return isrv.status;
}
diff -urN linux-davidm/include/asm-ia64/siginfo.h linux-2.4.0-lia/include/asm-ia64/siginfo.h
--- linux-davidm/include/asm-ia64/siginfo.h Mon Oct 9 17:55:00 2000
+++ linux-2.4.0-lia/include/asm-ia64/siginfo.h Mon Jan 8 23:49:52 2001
@@ -2,8 +2,8 @@
#define _ASM_IA64_SIGINFO_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/types.h>
@@ -66,6 +66,12 @@
long _band; /* POLL_IN, POLL_OUT, POLL_MSG (XPG requires a "long") */
int _fd;
} _sigpoll;
+ /* SIGPROF */
+ struct {
+ pid_t _pid; /* which child */
+ uid_t _uid; /* sender's uid */
+ unsigned long _pfm_ovfl_counters; /* which PMU counter overflowed */
+ } _sigprof;
} _sifields;
} siginfo_t;
@@ -85,6 +91,7 @@
#define si_isr _sifields._sigfault._isr /* valid if si_code=FPE_FLTxxx */
#define si_band _sifields._sigpoll._band
#define si_fd _sifields._sigpoll._fd
+#define si_pfm_ovfl _sifields._sigprof._pfm_ovfl_counters
/*
* si_code values
@@ -98,6 +105,7 @@
#define __SI_FAULT (3 << 16)
#define __SI_CHLD (4 << 16)
#define __SI_RT (5 << 16)
+#define __SI_PROF (6 << 16)
#define __SI_CODE(T,N) ((T) << 16 | ((N) & 0xffff))
#else
#define __SI_KILL 0
@@ -199,6 +207,11 @@
#define POLL_PRI (__SI_POLL|5) /* high priority input available */
#define POLL_HUP (__SI_POLL|6) /* device disconnected */
#define NSIGPOLL 6
+
+/*
+ * SIGPROF si_codes
+ */
+#define PROF_OVFL (__SI_PROF|1) /* some counters overflowed */
/*
* sigevent definitions
diff -urN linux-davidm/kernel/ptrace.c linux-2.4.0-lia/kernel/ptrace.c
--- linux-davidm/kernel/ptrace.c Tue Jan 9 00:09:53 2001
+++ linux-2.4.0-lia/kernel/ptrace.c Wed Jan 3 23:17:46 2001
@@ -68,7 +68,7 @@
fault_in_page:
/* -1: out of memory. 0 - unmapped page */
- if (handle_mm_fault(mm, vma, addr, write) > 0)
+ if (handle_mm_fault(mm, vma, addr, write ? VM_WRITE : VM_READ) > 0)
goto repeat;
return 0;
diff -urN linux-davidm/lib/Makefile linux-2.4.0-lia/lib/Makefile
--- linux-davidm/lib/Makefile Tue Jan 9 00:09:53 2001
+++ linux-2.4.0-lia/lib/Makefile Wed Jan 3 23:17:56 2001
@@ -10,7 +10,7 @@
export-objs := cmdline.o
-obj-y := errno.o ctype.o string.o vsprintf.o brlock.o cmdline.o
+obj-y := errno.o ctype.o string.o vsprintf.o brlock.o cmdline.o crc32.o
ifneq ($(CONFIG_HAVE_DEC_LOCK),y)
obj-y += dec_and_lock.o
diff -urN linux-davidm/mm/memory.c linux-2.4.0-lia/mm/memory.c
--- linux-davidm/mm/memory.c Tue Jan 9 00:09:53 2001
+++ linux-2.4.0-lia/mm/memory.c Thu Jan 4 22:52:47 2001
@@ -1150,8 +1150,10 @@
*/
static inline int handle_pte_fault(struct mm_struct *mm,
struct vm_area_struct * vma, unsigned long address,
- int write_access, pte_t * pte)
+ int access_type, pte_t * pte)
{
+ int write_access = is_write_access(access_type);
+ int exec_access = is_exec_access(access_type);
pte_t entry;
/*
@@ -1178,6 +1180,8 @@
entry = pte_mkdirty(entry);
}
+ if (exec_access)
+ entry = pte_mkexec(entry);
entry = pte_mkyoung(entry);
establish_pte(vma, address, pte, entry);
spin_unlock(&mm->page_table_lock);
@@ -1188,7 +1192,7 @@
* By the time we get here, we already hold the mm semaphore
*/
int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct * vma,
- unsigned long address, int write_access)
+ unsigned long address, int access_type)
{
int ret = -1;
pgd_t *pgd;
@@ -1200,7 +1204,7 @@
if (pmd) {
pte_t * pte = pte_alloc(pmd, address);
if (pte)
- ret = handle_pte_fault(mm, vma, address, write_access, pte);
+ ret = handle_pte_fault(mm, vma, address, access_type, pte);
}
return ret;
}
^ permalink raw reply [flat|nested] 217+ messages in thread* RE: [Linux-ia64] kernel update (relative to 2.4.0)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (31 preceding siblings ...)
2001-01-09 9:48 ` [Linux-ia64] kernel update (relative to 2.4.0) David Mosberger
@ 2001-01-09 11:05 ` Sapariya Manish.j
2001-01-10 3:26 ` [Linux-ia64] kernel update (relative to 2.4.0) - copy_user fi Mallick, Asit K
` (182 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Sapariya Manish.j @ 2001-01-09 11:05 UTC (permalink / raw)
To: linux-ia64
Hi David,
I got the source of the tool chain, but I couldn't find how to build this
source to generate the compiler and linker binaries from it. I went through
the source tree but I couldn't find a document on how to build the binaries.
This is the first time I will be building the gcc source. Any link to the
docs would be of great help.
thanx,
Manish
-----Original Message-----
From: David Mosberger [mailto:davidm@hpl.hp.com]
Sent: Monday, January 08, 2001 10:16 PM
To: lia64-sim@napali.hpl.hp.com
Subject: RE: [lia64-sim] ski special interface BREAK instructions
>>>>> On Mon, 8 Jan 2001 13:15:49 +0530 , "Sapariya Manish.j"
<S.Manish@zensar.com> said:
Manish> Hi, Could anyone tell me, where could i get the gcc for Ia
Manish> 64.
The canonical location is:
ftp://ftp.cygnus.com/pub/ia64-linux/
--david
--
To unsubscribe: echo unsubscribe lia64-sim | mail majordomo@linux.hpl.hp.com
^ permalink raw reply [flat|nested] 217+ messages in thread* RE: [Linux-ia64] kernel update (relative to 2.4.0) - copy_user fi
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (32 preceding siblings ...)
2001-01-09 11:05 ` Sapariya Manish.j
@ 2001-01-10 3:26 ` Mallick, Asit K
2001-01-12 2:30 ` [Linux-ia64] kernel update (relative to 2.4.0) Jim Wilson
` (181 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Mallick, Asit K @ 2001-01-10 3:26 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 596 bytes --]
There is one return path in __copy_user that does not restore the ar.lc
register. This causes applications that use ar.lc to fail. Below is
a patch to fix it (the file is also attached).
Thanks,
Asit
--- linux-2.4.0/arch/ia64/lib/copy_user.S Thu Jan 4 12:50:17 2001
+++ linux/arch/ia64/lib/copy_user.S Tue Jan 9 05:35:36 2001
@@ -319,6 +319,7 @@
EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
+ mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs
br.ret.dptk.few rp
[-- Attachment #2: copyu.diff --]
[-- Type: application/octet-stream, Size: 343 bytes --]
--- linux-2.4.0/arch/ia64/lib/copy_user.S Thu Jan 4 12:50:17 2001
+++ linux/arch/ia64/lib/copy_user.S Tue Jan 9 05:35:36 2001
@@ -319,6 +319,7 @@
EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
+ mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs
br.ret.dptk.few rp
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.0)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (33 preceding siblings ...)
2001-01-10 3:26 ` [Linux-ia64] kernel update (relative to 2.4.0) - copy_user fi Mallick, Asit K
@ 2001-01-12 2:30 ` Jim Wilson
2001-01-26 4:53 ` David Mosberger
` (180 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Jim Wilson @ 2001-01-12 2:30 UTC (permalink / raw)
To: linux-ia64
>I got the source of the tool chain, but I could not find how to build this
>source to generate the
>compiler and linker binaries from it.
In general, for most FSF programs, the build process is the same.
Unpack the sources so that you have a src directory. Then something like
mkdir build
cd build
../src/configure
make
make install
This will build a native toolchain including everything.
If you don't want to install in /usr/local, you can change that directory
name with the configure option --prefix=<path>.
If you don't want the gdb gui, including tcl/tk/tix/itcl/etc, then you
can use the configure option --disable-gdbtk.
For a cross, there are a few extra steps. You need libraries and header files
from the target. You can grab /usr/lib and /usr/include from the target and
put them on your build machine. Then use the --with-libs=<path> and
--with-headers=<path> configure options to point at the target library and
header files respectively. You also need the --target=ia64-linux configure
option to indicate that this is a cross to ia64-linux.
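The native and cross-build steps described above can be sketched as a shell recipe. The prefix, target sysroot paths, and the `ia64-tools` directory name are illustrative assumptions, not from the original mail; only the configure options themselves (`--prefix`, `--target`, `--with-libs`, `--with-headers`, `--disable-gdbtk`) come from the text.

```shell
# Sketch of the configure invocation for a cross toolchain.
# All paths below are hypothetical; adjust for your setup.
SRC=../src                              # unpacked toolchain sources
TARGET=ia64-linux                       # cross target named in the mail
PREFIX=/opt/ia64-tools                  # assumed install prefix (default is /usr/local)
TARGET_LIBS=/opt/ia64-root/usr/lib      # /usr/lib copied from the target machine
TARGET_HDRS=/opt/ia64-root/usr/include  # /usr/include copied from the target machine

CONFIGURE="$SRC/configure --target=$TARGET --prefix=$PREFIX \
  --with-libs=$TARGET_LIBS --with-headers=$TARGET_HDRS --disable-gdbtk"

echo "$CONFIGURE"
# Then, from an empty build directory:
#   eval "$CONFIGURE" && make && make install
# For a native build, drop --target/--with-libs/--with-headers.
```

The separate build directory keeps the source tree pristine, which is the layout the FSF build machinery expects.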
There is documentation on this stuff, but it is scattered around a bit.
There is a toplevel README file that contains some info, with pointers
to tool specific README files like gcc/README, which in turn point at other
files. Red Hat has some nice printed documentation that we give to customers,
but it isn't in a convenient form for our purposes. It is done in FrameMaker
I believe. There is also some online stuff, for instance
http://gcc.gnu.org/install/index.html. This is gcc specific, but most of
this stuff is the same for all of the other parts of the toolchain.
Jim
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.0)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (34 preceding siblings ...)
2001-01-12 2:30 ` [Linux-ia64] kernel update (relative to 2.4.0) Jim Wilson
@ 2001-01-26 4:53 ` David Mosberger
2001-01-31 20:32 ` [Linux-ia64] kernel update (relative to 2.4.1) David Mosberger
` (179 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-01-26 4:53 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.0-ia64-010125.diff*
What changed since last time:
o Alex Williamson: Fix "implemented VA bits" calculation so it
doesn't fail when all virtual address bits are implemented.
o Erich Focht/YT: Fix llseek() in /proc/mem to make it work when
offset >=2^63.
o William Taber/YT: fix mm/memory.c so handle_mm_fault() gets called
with the right arguments.
o BJ Nima: Update qla1280 driver to version 3.23 beta.
o Keith Owens: Fix unw_access_gr() so it accesses stacked regs correctly.
o Don Dugger: Fix munmap, mprotect, & msync to do the right thing
when 4KB pages are in use. Various other IA-32 fixes.
o Asit Mallick:
+ Fix TLB handlers (this fix was posted to the list earlier on).
+ Add missing restore of ar.lc in copy_user()
o YT:
+ Make IA-64 timer data structure (itm) part of CPU-local data
+ "Randomize" the timer tick on different CPUs so it's less likely
for multiple CPUs to hit the timer interrupt simultaneously (shouldn't
really make much of a difference on today's machines, but it seemed
like the right thing to do and was easy to add)
+ Make unaligned handler work inside the kernel and fix
a nasty bug that would let a user program crash the system.
+ Turn on "alignment check" (psr.ac) inside the kernel so we can
reliably detect unaligned accesses
+ Clean up syscall path and improve code scheduling.
+ Make MAX_DMA_ADDRESS a variable so platform-specific code can
override the default value
+ Various IA-32 subsystem cleanups
- added ia32_execve() to get the right register frame when
execv'ing an IA-64 program
- drop IA-32 syscall register frame in the IA-32 specific
code, so we don't have to check and slow down the normal
kernel exit path
- added sys32_signal since Intel's cross compiler wants it
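The per-CPU timer-tick staggering mentioned above (and visible in the time.c hunk of the diff below) can be sketched with shell arithmetic. The delta value and CPU count here are illustrative assumptions; `hi` stands in for the highest power of two not exceeding the CPU number, which is what `ia64_fls` yields in the kernel code.

```shell
# Sketch: stagger each CPU's first timer tick so ticks don't align.
# delta = ITC cycles per tick (value below is made up for illustration).
delta=1000
for cpu in 0 1 2 3 4 5 6 7; do
    if [ "$cpu" -eq 0 ]; then
        shift_amt=0
    else
        # hi = largest power of two <= cpu (the ia64_fls(cpu) result)
        hi=1
        while [ $((hi * 2)) -le "$cpu" ]; do hi=$((hi * 2)); done
        # same formula as the patch: (2*(cpu - hi) + 1) * delta / hi / 2
        shift_amt=$(( (2 * (cpu - hi) + 1) * delta / hi / 2 ))
    fi
    echo "cpu=$cpu shift=$shift_amt"
done
```

With 8 CPUs and delta=1000 this spreads the offsets evenly (0, 500, 250, 750, 125, 375, 625, 875): each new power-of-two group of CPUs bisects the gaps left by the previous one.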
Below is a diff that shows the changes since the last IA-64 patch. As
usual, this is for your enjoyment only. If you want to get the real
sources, start with 2.4.0 and apply the above patch on top of it.
This patch has been tested on BigSur, Lion, and the HP Ski simulator
and found to be generally good to public health. As always, if you
should encounter some unexpected problems, please let us all know.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.0-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Thu Jan 25 19:17:24 2001
+++ linux-2.4.0-lia/Documentation/Configure.help Thu Jan 25 15:00:58 2001
@@ -17030,11 +17030,6 @@
Say Y here to enable hacks to make the kernel work on the Intel
SoftSDV simulator. Select N here if you're unsure.
-Enable AzusA hacks
-CONFIG_IA64_AZUSA_HACKS
- Say Y here to enable hacks to make the kernel work on the NEC
- AzusA platform. Select N here if you're unsure.
-
Force socket buffers below 4GB?
CONFIG_SKB_BELOW_4GB
Most of today's network interface cards (NICs) support DMA to
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.0-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Thu Jan 25 19:17:24 2001
+++ linux-2.4.0-lia/arch/ia64/config.in Thu Jan 25 18:44:45 2001
@@ -54,7 +54,6 @@
bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
- bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
bool ' Enable ACPI 2.0 with errata 1.3' CONFIG_ACPI20
bool ' ACPI kernel configuration manager (EXPERIMENTAL)' CONFIG_ACPI_KERNEL_CONFIG
diff -urN linux-davidm/arch/ia64/ia32/binfmt_elf32.c linux-2.4.0-lia/arch/ia64/ia32/binfmt_elf32.c
--- linux-davidm/arch/ia64/ia32/binfmt_elf32.c Thu Jan 25 19:17:24 2001
+++ linux-2.4.0-lia/arch/ia64/ia32/binfmt_elf32.c Thu Jan 25 15:06:16 2001
@@ -93,8 +93,8 @@
/* Do all the IA-32 setup here */
- current->thread.map_base = 0x40000000;
- current->thread.task_size = 0xc0000000; /* use what Linux/x86 uses... */
+ current->thread.map_base = 0x40000000;
+ current->thread.task_size = 0xc0000000; /* use what Linux/x86 uses... */
set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
/* setup ia32 state for ia32_load_state */
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.0-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/ia32/ia32_entry.S Thu Jan 25 15:06:29 2001
@@ -4,11 +4,36 @@
#include "../kernel/entry.h"
+ .text
+
+ /*
+ * execve() is special because in case of success, we need to
+ * setup a null register window frame (in case an IA-32 process
+ * is exec'ing an IA-64 program).
+ */
+ENTRY(ia32_execve)
+ UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(3))
+ alloc loc1=ar.pfs,3,2,4,0
+ mov loc0=rp
+ UNW(.body)
+ mov out0=in0 // filename
+ ;; // stop bit between alloc and call
+ mov out1=in1 // argv
+ mov out2=in2 // envp
+ add out3=16,sp // regs
+ br.call.sptk.few rp=sys32_execve
+1: cmp4.ge p6,p0=r8,r0
+ mov ar.pfs=loc1 // restore ar.pfs
+ ;;
+(p6) mov ar.pfs=r0 // clear ar.pfs in case of success
+ sxt4 r8=r8 // return 64-bit result
+ mov rp=loc0
+ br.ret.sptk.few rp
+END(ia32_execve)
+
//
// Get possibly unaligned sigmask argument into an aligned
// kernel buffer
- .text
-
GLOBAL_ENTRY(ia32_rt_sigsuspend)
// We'll cheat and not do an alloc here since we are ultimately
// going to do a simple branch to the IA64 sys_rt_sigsuspend.
@@ -46,8 +71,9 @@
cmp.ge p6,p7=r8,r0 // syscall executed successfully?
adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
;;
+ alloc r3=ar.pfs,0,0,0,0 // drop the syscall argument frame
st8 [r2]=r8 // store return value in slot for r8
- br.cond.sptk.few ia64_leave_kernel
+ br.cond.sptk.many ia64_leave_kernel
END(ia32_ret_from_syscall)
//
@@ -69,7 +95,8 @@
;;
st8.spill [r2]=r8 // store return value in slot for r8
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.ret2: br.cond.sptk.many ia64_leave_kernel // rp MUST be != ia64_leave_kernel!
+.ret2: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
+ br.cond.sptk.many ia64_leave_kernel // rp MUST be != ia64_leave_kernel!
END(ia32_trace_syscall)
GLOBAL_ENTRY(sys32_vfork)
@@ -116,7 +143,7 @@
data8 sys_creat
data8 sys_link
data8 sys_unlink /* 10 */
- data8 sys32_execve
+ data8 ia32_execve
data8 sys_chdir
data8 sys32_time
data8 sys_mknod
@@ -153,7 +180,7 @@
data8 sys_brk /* 45 */
data8 sys_setgid
data8 sys_getgid
- data8 sys32_ni_syscall
+ data8 sys32_signal
data8 sys_geteuid
data8 sys_getegid /* 50 */
data8 sys_acct
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.0-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Thu Jan 25 19:17:24 2001
+++ linux-2.4.0-lia/arch/ia64/ia32/sys_ia32.c Thu Jan 25 17:17:36 2001
@@ -64,7 +64,6 @@
#define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de)))
extern asmlinkage long sys_execve (char *, char **, char **, struct pt_regs *);
-extern asmlinkage long sys_munmap (unsigned long, size_t len);
extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long);
static int
@@ -134,10 +133,10 @@
/* oops, execve failed, switch back to old map base & task-size: */
current->thread.map_base = old_map_base;
current->thread.task_size = old_task_size;
+ set_fs(USER_DS); /* establish new task-size as the address-limit */
out:
kfree(av);
}
- set_fs(USER_DS); /* establish new task-size as the address-limit */
return r;
}
@@ -213,7 +212,6 @@
return ret;
}
-#define ALIGN4K(a) (((a) + 0xfff) & ~0xfff)
#define OFFSET4K(a) ((a) & 0xfff)
unsigned long
@@ -320,18 +318,46 @@
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
+ if (PAGE_ALIGN(a.len) == 0)
+ return a.addr;
+
if (!(a.flags & MAP_ANONYMOUS)) {
file = fget(a.fd);
if (!file)
return -EBADF;
}
+#ifdef CONFIG_IA64_PAGE_SIZE_4KB
+ if ((a.offset & ~PAGE_MASK) != 0)
+ return -EINVAL;
+
+ down(&current->mm->mmap_sem);
+ retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, a.offset >> PAGE_SHIFT);
+ up(&current->mm->mmap_sem);
+#else // CONFIG_IA64_PAGE_SIZE_4KB
retval = ia32_do_mmap(file, a.addr, a.len, a.prot, a.flags, a.fd, a.offset);
+#endif // CONFIG_IA64_PAGE_SIZE_4KB
if (file)
fput(file);
return retval;
}
asmlinkage long
+sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
+{
+
+#ifdef CONFIG_IA64_PAGE_SIZE_4KB
+ return(sys_mprotect(start, len, prot));
+#else // CONFIG_IA64_PAGE_SIZE_4KB
+ if (prot == 0)
+ return(0);
+ len += start & ~PAGE_MASK;
+ if ((start & ~PAGE_MASK) && (prot & PROT_WRITE))
+ prot |= PROT_EXEC;
+ return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
+#endif // CONFIG_IA64_PAGE_SIZE_4KB
+}
+
+asmlinkage long
sys32_pipe(int *fd)
{
int retval;
@@ -347,15 +373,17 @@
}
asmlinkage long
-sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
+sys32_signal (int sig, unsigned int handler)
{
+ struct k_sigaction new_sa, old_sa;
+ int ret;
- if (prot == 0)
- return(0);
- len += start & ~PAGE_MASK;
- if ((start & ~PAGE_MASK) && (prot & PROT_WRITE))
- prot |= PROT_EXEC;
- return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
+ new_sa.sa.sa_handler = (__sighandler_t) A(handler);
+ new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
+
+ ret = do_sigaction(sig, &new_sa, &old_sa);
+
+ return ret ? ret : (unsigned long)old_sa.sa.sa_handler;
}
asmlinkage long
@@ -526,7 +554,6 @@
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
(__get_user(o->tv_sec, &i->tv_sec) |
__get_user(o->tv_usec, &i->tv_usec)));
- return ENOSYS;
}
static inline long
@@ -545,18 +572,16 @@
__get_user(o->it_interval.tv_usec, &i->it_interval.tv_usec) |
__get_user(o->it_value.tv_sec, &i->it_value.tv_sec) |
__get_user(o->it_value.tv_usec, &i->it_value.tv_usec)));
- return ENOSYS;
}
static inline long
put_it32(struct itimerval32 *o, struct itimerval *i)
{
- return (!access_ok(VERIFY_WRITE, i, sizeof(*i)) ||
+ return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
(__put_user(i->it_interval.tv_sec, &o->it_interval.tv_sec) |
__put_user(i->it_interval.tv_usec, &o->it_interval.tv_usec) |
__put_user(i->it_value.tv_sec, &o->it_value.tv_sec) |
__put_user(i->it_value.tv_usec, &o->it_value.tv_usec)));
- return ENOSYS;
}
extern int do_getitimer(int which, struct itimerval *value);
@@ -2644,6 +2669,8 @@
}
return(ret);
}
+
+asmlinkage long sys_msync(unsigned long start, size_t len, int flags);
asmlinkage int
sys_pause (void)
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.0-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Thu Jan 25 19:17:25 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/entry.S Thu Jan 25 15:08:06 2001
@@ -40,7 +40,7 @@
#include <asm/asmmacro.h>
#include <asm/pgtable.h>
-#include "entry.h"
+#include "minstate.h"
.text
.psr abi64
@@ -147,7 +147,7 @@
mov r13=in0 // set "current" pointer
;;
(p6) ssm psr.i // renable psr.i AFTER the ic bit is serialized
- DO_LOAD_SWITCH_STACK( )
+ DO_LOAD_SWITCH_STACK
#ifdef CONFIG_SMP
sync.i // ensure "fc"s done by this CPU are visible on other CPUs
@@ -492,22 +492,6 @@
br.cond.sptk.few strace_save_retval
END(ia64_trace_syscall)
-/*
- * A couple of convenience macros to help implement/understand the state
- * restoration that happens at the end of ia64_ret_from_syscall.
- */
-#define rARPR r31
-#define rCRIFS r30
-#define rCRIPSR r29
-#define rCRIIP r28
-#define rARRSC r27
-#define rARPFS r26
-#define rARUNAT r25
-#define rARRNAT r24
-#define rARBSPSTORE r23
-#define rKRBS r22
-#define rB6 r21
-
GLOBAL_ENTRY(ia64_ret_from_clone)
PT_REGS_UNWIND_INFO(0)
#ifdef CONFIG_SMP
@@ -554,10 +538,11 @@
;;
add r3=r2,r3
#else
+ adds r16=IA64_PT_REGS_R8_OFFSET+16,r12
movl r3=irq_stat // softirq_active
#endif
;;
- ld8 r2=[r3] // r3 (softirq_active+softirq_mask) is guaranteed to be 8-byte aligned!
+ ld8 r2=[r3] // r3 (softirq_active+softirq_mask) is guaranteed to be 8-byte aligned!
;;
shr r3=r2,32
;;
@@ -692,12 +677,13 @@
;;
ld8 rCRIFS=[r16],16 // load cr.ifs
ld8 rARUNAT=[r17],16 // load ar.unat
+ cmp.eq p9,p0=r0,r0 // set p9 to indicate that we should restore cr.ifs
;;
ld8 rARPFS=[r16],16 // load ar.pfs
ld8 rARRSC=[r17],16 // load ar.rsc
;;
ld8 rARRNAT=[r16],16 // load ar.rnat (may be garbage)
- ld8 rARBSPSTORE=[r17],16 // load ar.bspstore (may be garbage)
+ ld8 rARBSPSTORE=[r17],16 // load ar.bspstore (may be garbage)
;;
ld8 rARPR=[r16],16 // load predicates
ld8 rB6=[r17],16 // load b6
@@ -707,15 +693,17 @@
;;
ld8.fill r2=[r16],16
ld8.fill r3=[r17],16
+ extr.u r19=rCRIPSR,32,2 // extract ps.cpl
;;
ld8.fill r12=[r16],16
ld8.fill r13=[r17],16
- extr.u r19=rCRIPSR,32,2 // extract ps.cpl
+(pSys) shr.u r18=r18,16 // get size of existing "dirty" partition
;;
- ld8.fill r14=[r16],16
- ld8.fill r15=[r17],16
+ ld8.fill r14=[r16]
+ ld8.fill r15=[r17]
cmp.eq p6,p7=r0,r19 // are we returning to kernel mode? (psr.cpl=0)
;;
+ mov r16=ar.bsp // get existing backing store pointer
mov b6=rB6
mov ar.pfs=rARPFS
(p6) br.cond.dpnt.few skip_rbs_switch
@@ -724,47 +712,34 @@
* Restore user backing store.
*
* NOTE: alloc, loadrs, and cover can't be predicated.
- *
- * XXX This needs some scheduling/tuning once we believe it
- * really does work as intended.
*/
- mov r16=ar.bsp // get existing backing store pointer
(pNonSys) br.cond.dpnt.few dont_preserve_current_frame
cover // add current frame into dirty partition
;;
- mov rCRIFS=cr.ifs // fetch the cr.ifs value that "cover" produced
mov r17=ar.bsp // get new backing store pointer
+ sub r18=r16,r18 // krbs = old bsp - size of dirty partition
+ cmp.ne p9,p0=r0,r0 // clear p9 to skip restore of cr.ifs
;;
- sub r16=r17,r16 // calculate number of bytes that were added to rbs
- ;;
- shl r16=r16,16 // shift additional frame size into position for loadrs
+ sub r18=r17,r18 // calculate number of bytes that were added to rbs
;;
- add r18=r16,r18 // adjust the loadrs value
+ shl r18=r18,16 // shift size of dirty partition into loadrs position
;;
dont_preserve_current_frame:
- alloc r16=ar.pfs,0,0,0,0 // drop the current call frame (noop for syscalls)
- ;;
+ alloc r17=ar.pfs,0,0,0,0 // drop the current call frame (noop for syscalls)
mov ar.rsc=r18 // load ar.rsc to be used for "loadrs"
-#ifdef CONFIG_IA32_SUPPORT
- tbit.nz p6,p0=rCRIPSR,IA64_PSR_IS_BIT
- ;;
-(p6) mov ar.rsc=r0 // returning to IA32 mode
-#endif
;;
loadrs
;;
- mov ar.bspstore=rARBSPSTORE
- ;;
- mov ar.rnat=rARRNAT // must happen with RSE in lazy mode
-
skip_rbs_switch:
+(p7) mov ar.bspstore=rARBSPSTORE
+(p9) mov cr.ifs=rCRIFS
+ mov cr.ipsr=rCRIPSR
+ mov cr.iip=rCRIIP
+ ;;
+(p7) mov ar.rnat=rARRNAT // must happen with RSE in lazy mode
mov ar.rsc=rARRSC
mov ar.unat=rARUNAT
- mov cr.ifs=rCRIFS // restore cr.ifs only if not a (synchronous) syscall
mov pr=rARPR,-1
- mov cr.iip=rCRIIP
- mov cr.ipsr=rCRIPSR
- ;;
rfi;; // must be last instruction in an insn group
END(ia64_leave_kernel)
@@ -904,7 +879,7 @@
(pNonSys) mov out2=0 // out2=0 => not a syscall
br.call.sptk.few rp=ia64_do_signal
.ret16: // restore the switch stack (ptrace may have modified it)
- DO_LOAD_SWITCH_STACK( )
+ DO_LOAD_SWITCH_STACK
br.ret.sptk.many rp
#endif /* !CONFIG_IA64_NEW_UNWIND */
END(handle_signal_delivery)
@@ -944,7 +919,7 @@
adds out2=16,sp // out2=&sigscratch
br.call.sptk.many rp=ia64_rt_sigsuspend
.ret18: // restore the switch stack (ptrace may have modified it)
- DO_LOAD_SWITCH_STACK( )
+ DO_LOAD_SWITCH_STACK
br.ret.sptk.many rp
#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigsuspend)
@@ -957,7 +932,7 @@
PT_REGS_SAVES(16)
adds sp=-16,sp
.body
- cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
+ cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
;;
adds out0=16,sp // out0 = &sigscratch
br.call.sptk.few rp=ia64_rt_sigreturn
@@ -979,7 +954,7 @@
UNW(.spillsp ar.unat, PT(AR_UNAT)+IA64_SWITCH_STACK_SIZE)
UNW(.spillsp pr, PT(PR)+IA64_SWITCH_STACK_SIZE)
adds sp=-IA64_SWITCH_STACK_SIZE,sp
- cmp.eq pNonSys,p0=r0,r0 // sigreturn isn't a normal syscall...
+ cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
;;
UNW(.body)
@@ -1002,13 +977,12 @@
// r16 = fake ar.pfs, we simply need to make sure
// privilege is still 0
//
- PT_REGS_UNWIND_INFO(0)
mov r16=r0
UNW(.prologue)
DO_SAVE_SWITCH_STACK
br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
.ret21: .body
- DO_LOAD_SWITCH_STACK(PT_REGS_UNWIND_INFO(0))
+ DO_LOAD_SWITCH_STACK
br.cond.sptk.many rp // goes to ia64_leave_kernel
END(ia64_prepare_handle_unaligned)
diff -urN linux-davidm/arch/ia64/kernel/entry.h linux-2.4.0-lia/arch/ia64/kernel/entry.h
--- linux-davidm/arch/ia64/kernel/entry.h Thu Jun 22 07:09:44 2000
+++ linux-2.4.0-lia/arch/ia64/kernel/entry.h Thu Jan 25 15:08:35 2001
@@ -55,11 +55,10 @@
br.cond.sptk.many save_switch_stack; \
1:
-#define DO_LOAD_SWITCH_STACK(extra) \
+#define DO_LOAD_SWITCH_STACK \
movl r28=1f; \
;; \
mov b7=r28; \
br.cond.sptk.many load_switch_stack; \
1: UNW(.restore sp); \
- extra; \
adds sp=IA64_SWITCH_STACK_SIZE,sp
diff -urN linux-davidm/arch/ia64/kernel/iosapic.c linux-2.4.0-lia/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/iosapic.c Thu Jan 25 15:40:04 2001
@@ -137,12 +137,6 @@
(dmode << IOSAPIC_DELIVERY_SHIFT) |
vector);
-#ifdef CONFIG_IA64_AZUSA_HACKS
- /* set Flush Disable bit */
- if (addr != (char *) 0xc0000000fec00000)
- low32 |= (1 << 17);
-#endif
-
/* dest contains both id and eid */
high32 = (dest << IOSAPIC_DEST_SHIFT);
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.0-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Thu Jan 25 19:17:25 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/ivt.S Thu Jan 25 15:41:48 2001
@@ -45,6 +45,12 @@
#include <asm/system.h>
#include <asm/unistd.h>
+#if 1
+# define PSR_DEFAULT_BITS psr.ac
+#else
+# define PSR_DEFAULT_BITS 0
+#endif
+
#define MINSTATE_VIRT /* needed by minstate.h */
#include "minstate.h"
@@ -184,15 +190,16 @@
* mode, walk the page table, and then re-execute the L3 PTE read
* and go on normally after that.
*/
-itlb_fault:
mov r16=cr.ifa // get virtual address
mov r29=b0 // save b0
mov r31=pr // save predicates
+itlb_fault:
mov r17=cr.iha // get virtual address of L3 PTE
movl r30=1f // load nested fault continuation point
;;
1: ld8 r18=[r17] // read L3 PTE
;;
+ mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
(p6) br.cond.spnt.many page_fault
;;
@@ -219,15 +226,16 @@
* mode, walk the page table, and then re-execute the L3 PTE read
* and go on normally after that.
*/
-dtlb_fault:
mov r16=cr.ifa // get virtual address
mov r29=b0 // save b0
mov r31=pr // save predicates
+dtlb_fault:
mov r17=cr.iha // get virtual address of L3 PTE
movl r30=1f // load nested fault continuation point
;;
1: ld8 r18=[r17] // read L3 PTE
;;
+ mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
(p6) br.cond.spnt.many page_fault
;;
@@ -261,6 +269,7 @@
(p8) thash r17=r16
;;
(p8) mov cr.iha=r17
+(p8) mov r29=b0 // save b0
(p8) br.cond.dptk.many itlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
@@ -296,6 +305,7 @@
(p8) thash r17=r16
;;
(p8) mov cr.iha=r17
+(p8) mov r29=b0 // save b0
(p8) br.cond.dptk.many dtlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
@@ -336,7 +346,7 @@
mov r9=cr.isr
adds r3=8,r2 // set up second base pointer
;;
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -399,7 +409,6 @@
shr.u r18=r16,PMD_SHIFT // shift L2 index into position
;;
ld8 r17=[r17] // fetch the L1 entry (may be 0)
- mov b0=r30
;;
(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
dep r17=r18,r17,3,(PAGE_SHIFT-3) // compute address of L2 page table entry
@@ -409,8 +418,8 @@
;;
(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
- ;;
(p6) br.cond.spnt.many page_fault
+ mov b0=r30
br.sptk.many b0 // return to continuation point
;;
@@ -613,8 +622,7 @@
SAVE_MIN // uses r31; defines r2:
- // turn interrupt collection back on:
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
@@ -701,11 +709,12 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
+interrupt:
mov r31=pr // prepare to save predicates
;;
SAVE_MIN_WITH_COVER // uses r31; defines r2 and r3
- ssm psr.ic // turn interrupt collection
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
adds r3=8,r2 // set up second base pointer for SAVE_REST
srlz.i // ensure everybody knows psr.ic is back on
@@ -755,7 +764,7 @@
// The "alloc" can cause a mandatory store which could lead to
// an "Alt DTLB" fault which we can handle only if psr.ic is on.
//
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -801,7 +810,7 @@
SAVE_MIN
;;
mov r14=cr.isr
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -887,8 +896,7 @@
mov r8=cr.iim // get break immediate (must be done while psr.ic is off)
adds r3=8,r2 // set up second base pointer for SAVE_REST
- // turn interrupt collection back on:
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -926,7 +934,7 @@
// wouldn't get the state to recover.
//
mov r15=cr.ifa
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
;;
@@ -974,7 +982,7 @@
mov r10=cr.iim
mov r11=cr.itir
;;
- ssm psr.ic
+ ssm psr.ic | PSR_DEFAULT_BITS
;;
srlz.i // guarantee that interrupt collection is enabled
;;
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.0-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/mca_asm.S Thu Jan 25 15:42:26 2001
@@ -8,6 +8,7 @@
// kstack, switch modes, jump to C INIT handler
//
#include <linux/config.h>
+
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
diff -urN linux-davidm/arch/ia64/kernel/minstate.h linux-2.4.0-lia/arch/ia64/kernel/minstate.h
--- linux-davidm/arch/ia64/kernel/minstate.h Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/minstate.h Thu Jan 25 15:42:35 2001
@@ -107,7 +107,7 @@
* Note that psr.ic is NOT turned on by this macro. This is so that
* we can pass interruption state as arguments to a handler.
*/
-#define DO_SAVE_MIN(COVER,EXTRA) \
+#define DO_SAVE_MIN(COVER,SAVE_IFS,EXTRA) \
mov rARRSC=ar.rsc; \
mov rARPFS=ar.pfs; \
mov rR1=r1; \
@@ -116,14 +116,15 @@
mov rB6=b6; /* rB6 = branch reg 6 */ \
mov rCRIIP=cr.iip; \
mov r1=ar.k6; /* r1 = current (physical) */ \
+ COVER; \
;; \
invala; \
extr.u r16=rCRIPSR,32,2; /* extract psr.cpl */ \
;; \
cmp.eq pKern,p7=r0,r16; /* are we in kernel mode already? (psr.cpl=0) */ \
/* switch from user to kernel RBS: */ \
- COVER; \
;; \
+ SAVE_IFS; \
MINSTATE_START_SAVE_MIN \
;; \
mov r16=r1; /* initialize first base pointer */ \
@@ -240,6 +241,6 @@
# define STOPS
#endif
-#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs,) STOPS
-#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs, mov r15=r19) STOPS
-#define SAVE_MIN DO_SAVE_MIN(mov rCRIFS=r0,) STOPS
+#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover, mov rCRIFS=cr.ifs,) STOPS
+#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover, mov rCRIFS=cr.ifs, mov r15=r19) STOPS
+#define SAVE_MIN DO_SAVE_MIN( , mov rCRIFS=r0, ) STOPS
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.0-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Thu Jan 25 19:17:25 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/smp.c Thu Jan 25 17:19:30 2001
@@ -64,7 +64,6 @@
volatile int __cpu_physical_id[NR_CPUS] = { -1, }; /* Logical ID -> SAPIC ID */
int smp_num_cpus = 1;
volatile int smp_threads_ready; /* Set when the idlers are all forked */
-cycles_t cacheflush_time;
unsigned long ap_wakeup_vector = -1; /* External Int to use to wakeup AP's */
static volatile unsigned long cpu_callin_map;
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.0-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Thu Jan 25 19:17:25 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/time.c Thu Jan 25 17:21:22 2001
@@ -1,9 +1,9 @@
/*
* linux/arch/ia64/kernel/time.c
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
* Copyright (C) 1998-2000 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 David Mosberger <davidm@hpl.hp.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
* Copyright (C) 1999-2000 VA Linux Systems
* Copyright (C) 1999-2000 Walt Drummond <drummond@valinux.com>
@@ -32,14 +32,6 @@
#endif
-static struct {
- unsigned long delta;
- union {
- unsigned long count;
- unsigned char pad[SMP_CACHE_BYTES];
- } next[NR_CPUS];
-} itm;
-
static void
do_profile (unsigned long ip)
{
@@ -82,7 +74,7 @@
unsigned long now = ia64_get_itc(), last_tick;
unsigned long elapsed_cycles, lost = jiffies - wall_jiffies;
- last_tick = (itm.next[smp_processor_id()].count - (lost+1)*itm.delta);
+ last_tick = (my_cpu_data.itm_next - (lost+1)*my_cpu_data.itm_delta);
# if 1
if ((long) (now - last_tick) < 0) {
printk("Yikes: now < last_tick (now=0x%lx,last_tick=%lx)! No can do.\n",
@@ -153,7 +145,7 @@
int cpu = smp_processor_id();
unsigned long new_itm;
- new_itm = itm.next[cpu].count;
+ new_itm = cpu_data[cpu].itm_next;
if (!time_after(ia64_get_itc(), new_itm))
printk("Oops: timer tick before it's due (itc=%lx,itm=%lx)\n",
@@ -183,8 +175,8 @@
write_unlock(&xtime_lock);
}
- new_itm += itm.delta;
- itm.next[cpu].count = new_itm;
+ new_itm += cpu_data[cpu].itm_delta;
+ cpu_data[cpu].itm_next = new_itm;
if (time_after(new_itm, ia64_get_itc()))
break;
}
@@ -197,8 +189,8 @@
* turn would let our clock run too fast (with the potentially
* devastating effect of losing monotony of time).
*/
- while (!time_after(new_itm, ia64_get_itc() + itm.delta/2))
- new_itm += itm.delta;
+ while (!time_after(new_itm, ia64_get_itc() + cpu_data[cpu].itm_delta/2))
+ new_itm += cpu_data[cpu].itm_delta;
ia64_set_itm(new_itm);
}
@@ -219,16 +211,29 @@
* Encapsulate access to the itm structure for SMP.
*/
void __init
-ia64_cpu_local_tick(void)
+ia64_cpu_local_tick (void)
{
+ int cpu = smp_processor_id();
+ unsigned long shift = 0, delta;
+
#ifdef CONFIG_IA64_SOFTSDV_HACKS
ia64_set_itc(0);
#endif
/* arrange for the cycle counter to generate a timer interrupt: */
ia64_set_itv(TIMER_IRQ);
- itm.next[smp_processor_id()].count = ia64_get_itc() + itm.delta;
- ia64_set_itm(itm.next[smp_processor_id()].count);
+
+ delta = cpu_data[cpu].itm_delta;
+ /*
+ * Stagger the timer tick for each CPU so they don't occur all at (almost) the
+ * same time:
+ */
+ if (cpu) {
+ unsigned long hi = 1UL << ia64_fls(cpu);
+ shift = (2*(cpu - hi) + 1) * delta/hi/2;
+ }
+ cpu_data[cpu].itm_next = ia64_get_itc() + delta + shift;
+ ia64_set_itm(cpu_data[cpu].itm_next);
}
void __init
@@ -236,6 +241,7 @@
{
unsigned long platform_base_freq, itc_freq, drift;
struct pal_freq_ratio itc_ratio, proc_ratio;
+ int cpu = smp_processor_id();
long status;
/*
@@ -275,16 +281,16 @@
itc_ratio.num = 1; /* avoid division by zero */
itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den;
- itm.delta = itc_freq / HZ;
+ cpu_data[cpu].itm_delta = (itc_freq + HZ/2) / HZ;
printk("CPU %d: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
smp_processor_id(),
platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000);
- my_cpu_data.proc_freq = (platform_base_freq*proc_ratio.num)/proc_ratio.den;
- my_cpu_data.itc_freq = itc_freq;
- my_cpu_data.cyc_per_usec = itc_freq / 1000000;
- my_cpu_data.usec_per_cyc = (1000000UL << IA64_USEC_PER_CYC_SHIFT) / itc_freq;
+ cpu_data[cpu].proc_freq = (platform_base_freq*proc_ratio.num)/proc_ratio.den;
+ cpu_data[cpu].itc_freq = itc_freq;
+ cpu_data[cpu].cyc_per_usec = (itc_freq + 500000) / 1000000;
+ cpu_data[cpu].usec_per_cyc = ((1000000UL<<IA64_USEC_PER_CYC_SHIFT) + itc_freq/2)/itc_freq;
/* Setup the CPU local timer tick */
ia64_cpu_local_tick();
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c linux-2.4.0-lia/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/unaligned.c Thu Jan 25 17:21:49 2001
@@ -1,8 +1,11 @@
/*
* Architecture-specific unaligned trap handling.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
* Copyright (C) 1999-2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 2001/01/17 Add support emulation of unaligned kernel accesses.
*/
#include <linux/kernel.h>
#include <linux/sched.h>
@@ -17,14 +20,28 @@
#undef DEBUG_UNALIGNED_TRAP
#ifdef DEBUG_UNALIGNED_TRAP
-#define DPRINT(a) { printk("%s, line %d: ", __FUNCTION__, __LINE__); printk a;}
+# define DPRINT(a...) do { printk("%s.%u: ", __FUNCTION__, __LINE__); printk (a); } while (0)
+# define DDUMP(str,vp,len) dump(str, vp, len)
+
+static void
+dump (const char *str, void *vp, size_t len)
+{
+ unsigned char *cp = vp;
+ int i;
+
+ printk("%s", str);
+ for (i = 0; i < len; ++i)
+ printk (" %02x", *cp++);
+ printk("\n");
+}
#else
-#define DPRINT(a)
+# define DPRINT(a...)
+# define DDUMP(str,vp,len)
#endif
#define IA64_FIRST_STACKED_GR 32
#define IA64_FIRST_ROTATING_FR 32
-#define SIGN_EXT9 __IA64_UL(0xffffffffffffff00)
+#define SIGN_EXT9 0xffffffffffffff00ul
/*
* For M-unit:
@@ -40,7 +57,8 @@
* mask ([40:32]) using 9 bits. The 'e' comes from the fact that we defer
* checking the m-bit until later in the load/store emulation.
*/
-#define IA64_OPCODE_MASK 0x1ef00000000
+#define IA64_OPCODE_MASK 0x1ef
+#define IA64_OPCODE_SHIFT 32
/*
* Table C-28 Integer Load/Store
@@ -50,18 +68,18 @@
* ld8.fill, st8.fill MUST be aligned because the RNATs are based on
* the address (bits [8:3]), so we must failed.
*/
-#define LD_OP 0x08000000000
-#define LDS_OP 0x08100000000
-#define LDA_OP 0x08200000000
-#define LDSA_OP 0x08300000000
-#define LDBIAS_OP 0x08400000000
-#define LDACQ_OP 0x08500000000
+#define LD_OP 0x080
+#define LDS_OP 0x081
+#define LDA_OP 0x082
+#define LDSA_OP 0x083
+#define LDBIAS_OP 0x084
+#define LDACQ_OP 0x085
/* 0x086, 0x087 are not relevant */
-#define LDCCLR_OP 0x08800000000
-#define LDCNC_OP 0x08900000000
-#define LDCCLRACQ_OP 0x08a00000000
-#define ST_OP 0x08c00000000
-#define STREL_OP 0x08d00000000
+#define LDCCLR_OP 0x088
+#define LDCNC_OP 0x089
+#define LDCCLRACQ_OP 0x08a
+#define ST_OP 0x08c
+#define STREL_OP 0x08d
/* 0x08e,0x8f are not relevant */
/*
@@ -79,32 +97,32 @@
* ld8.fill, st8.fill must be aligned because the Nat register are based on
* the address, so we must fail and the program must be fixed.
*/
-#define LD_IMM_OP 0x0a000000000
-#define LDS_IMM_OP 0x0a100000000
-#define LDA_IMM_OP 0x0a200000000
-#define LDSA_IMM_OP 0x0a300000000
-#define LDBIAS_IMM_OP 0x0a400000000
-#define LDACQ_IMM_OP 0x0a500000000
+#define LD_IMM_OP 0x0a0
+#define LDS_IMM_OP 0x0a1
+#define LDA_IMM_OP 0x0a2
+#define LDSA_IMM_OP 0x0a3
+#define LDBIAS_IMM_OP 0x0a4
+#define LDACQ_IMM_OP 0x0a5
/* 0x0a6, 0xa7 are not relevant */
-#define LDCCLR_IMM_OP 0x0a800000000
-#define LDCNC_IMM_OP 0x0a900000000
-#define LDCCLRACQ_IMM_OP 0x0aa00000000
-#define ST_IMM_OP 0x0ac00000000
-#define STREL_IMM_OP 0x0ad00000000
+#define LDCCLR_IMM_OP 0x0a8
+#define LDCNC_IMM_OP 0x0a9
+#define LDCCLRACQ_IMM_OP 0x0aa
+#define ST_IMM_OP 0x0ac
+#define STREL_IMM_OP 0x0ad
/* 0x0ae,0xaf are not relevant */
/*
* Table C-32 Floating-point Load/Store
*/
-#define LDF_OP 0x0c000000000
-#define LDFS_OP 0x0c100000000
-#define LDFA_OP 0x0c200000000
-#define LDFSA_OP 0x0c300000000
+#define LDF_OP 0x0c0
+#define LDFS_OP 0x0c1
+#define LDFA_OP 0x0c2
+#define LDFSA_OP 0x0c3
/* 0x0c6 is irrelevant */
-#define LDFCCLR_OP 0x0c800000000
-#define LDFCNC_OP 0x0c900000000
+#define LDFCCLR_OP 0x0c8
+#define LDFCNC_OP 0x0c9
/* 0x0cb is irrelevant */
-#define STF_OP 0x0cc00000000
+#define STF_OP 0x0cc
/*
* Table C-33 Floating-point Load +Reg
@@ -116,14 +134,14 @@
/*
* Table C-34 Floating-point Load/Store +Imm
*/
-#define LDF_IMM_OP 0x0e000000000
-#define LDFS_IMM_OP 0x0e100000000
-#define LDFA_IMM_OP 0x0e200000000
-#define LDFSA_IMM_OP 0x0e300000000
+#define LDF_IMM_OP 0x0e0
+#define LDFS_IMM_OP 0x0e1
+#define LDFA_IMM_OP 0x0e2
+#define LDFSA_IMM_OP 0x0e3
/* 0x0e6 is irrelevant */
-#define LDFCCLR_IMM_OP 0x0e800000000
-#define LDFCNC_IMM_OP 0x0e900000000
-#define STF_IMM_OP 0x0ec00000000
+#define LDFCCLR_IMM_OP 0x0e8
+#define LDFCNC_IMM_OP 0x0e9
+#define STF_IMM_OP 0x0ec
typedef struct {
unsigned long qp:6; /* [0:5] */
@@ -255,150 +273,148 @@
}
static void
-set_rse_reg(struct pt_regs *regs, unsigned long r1, unsigned long val, int nat)
+set_rse_reg (struct pt_regs *regs, unsigned long r1, unsigned long val, int nat)
{
- struct switch_stack *sw = (struct switch_stack *)regs - 1;
- unsigned long *kbs = ((unsigned long *)current) + IA64_RBS_OFFSET/8;
+ struct switch_stack *sw = (struct switch_stack *) regs - 1;
+ unsigned long *bsp, *bspstore, *addr, *rnat_addr, *ubs_end;
+ unsigned long *kbs = (void *) current + IA64_RBS_OFFSET;
+ unsigned long rnats, nat_mask;
unsigned long on_kbs;
- unsigned long *bsp, *bspstore, *addr, *ubs_end, *slot;
- unsigned long rnats;
- long nlocals;
+ long sof = (regs->cr_ifs) & 0x7f;
- /*
- * cr_ifs=[rv:ifm], ifm=[....:sof(6)]
- * nlocal=number of locals (in+loc) register of the faulting function
- */
- nlocals = (regs->cr_ifs) & 0x7f;
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
- DPRINT(("sw.bsptore=%lx pt.bspstore=%lx\n", sw->ar_bspstore, regs->ar_bspstore));
- DPRINT(("cr.ifs=%lx sof=%ld sol=%ld\n",
- regs->cr_ifs, regs->cr_ifs &0x7f, (regs->cr_ifs>>7)&0x7f));
+ if ((r1 - 32) >= sof) {
+ /* this should never happen, as the "rsvd register fault" has higher priority */
+ DPRINT("ignoring write to r%lu; only %lu registers are allocated!\n", r1, sof);
+ return;
+ }
- on_kbs = ia64_rse_num_regs(kbs, (unsigned long *)sw->ar_bspstore);
- bspstore = (unsigned long *)regs->ar_bspstore;
+ on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ if (addr >= kbs) {
+ /* the register is on the kernel backing store: easy... */
+ rnat_addr = ia64_rse_rnat_addr(addr);
+ if ((unsigned long) rnat_addr >= sw->ar_bspstore)
+ rnat_addr = &sw->ar_rnat;
+ nat_mask = 1UL << ia64_rse_slot_num(addr);
- DPRINT(("rse_slot_num=0x%lx\n",ia64_rse_slot_num((unsigned long *)sw->ar_bspstore)));
- DPRINT(("kbs=%p nlocals=%ld\n", (void *) kbs, nlocals));
- DPRINT(("bspstore next rnat slot %p\n",
- (void *) ia64_rse_rnat_addr((unsigned long *)sw->ar_bspstore)));
- DPRINT(("on_kbs=%ld rnats=%ld\n",
- on_kbs, ((sw->ar_bspstore-(unsigned long)kbs)>>3) - on_kbs));
+ *addr = val;
+ if (nat)
+ *rnat_addr |= nat_mask;
+ else
+ *rnat_addr &= ~nat_mask;
+ return;
+ }
/*
- * See get_rse_reg() for an explanation on the following instructions
+ * Avoid using user_mode() here: with "epc", we cannot use the privilege level to
+ * infer whether the interrupt task was running on the kernel backing store.
*/
+ if (regs->r12 >= TASK_SIZE) {
+ DPRINT("ignoring kernel write to r%lu; register isn't on the RBS!", r1);
+ return;
+ }
+
+ bspstore = (unsigned long *) regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
- bsp = ia64_rse_skip_regs(ubs_end, -nlocals);
- addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
+ bsp = ia64_rse_skip_regs(ubs_end, -sof);
+ addr = ia64_rse_skip_regs(bsp, r1 - 32);
- DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
- (void *) ubs_end, (void *) bsp, (void *) addr, ia64_rse_slot_num(addr)));
+ DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
- ia64_poke(regs, current, (unsigned long)addr, val);
+ ia64_poke(regs, current, (unsigned long) addr, val);
- /*
- * addr will now contain the address of the RNAT for the register
- */
- addr = ia64_rse_rnat_addr(addr);
+ rnat_addr = ia64_rse_rnat_addr(addr);
- ia64_peek(regs, current, (unsigned long)addr, &rnats);
- DPRINT(("rnat @%p = 0x%lx nat=%d rnatval=%lx\n",
- (void *) addr, rnats, nat, rnats &ia64_rse_slot_num(slot)));
-
- if (nat) {
- rnats |= __IA64_UL(1) << ia64_rse_slot_num(slot);
- } else {
- rnats &= ~(__IA64_UL(1) << ia64_rse_slot_num(slot));
- }
- ia64_poke(regs, current, (unsigned long)addr, rnats);
+ ia64_peek(regs, current, (unsigned long) rnat_addr, &rnats);
+ DPRINT("rnat @%p = 0x%lx nat=%d old nat=%ld\n",
+ (void *) rnat_addr, rnats, nat, (rnats >> ia64_rse_slot_num(addr)) & 1);
+
+ nat_mask = 1UL << ia64_rse_slot_num(addr);
+ if (nat)
+ rnats |= nat_mask;
+ else
+ rnats &= ~nat_mask;
+ ia64_poke(regs, current, (unsigned long) rnat_addr, rnats);
- DPRINT(("rnat changed to @%p = 0x%lx\n", (void *) addr, rnats));
+ DPRINT("rnat changed to @%p = 0x%lx\n", (void *) rnat_addr, rnats);
}
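[Editor's note: the RNAT bookkeeping above leans on the RSE address arithmetic from `asm/rse.h`: bits [8:3] of a backing-store address select one of 64 slots, and slot 63 of each 512-byte span holds the RNAT collection. A minimal user-space sketch of those two helpers follows; the names mirror `ia64_rse_slot_num()`/`ia64_rse_rnat_addr()`, but this is an illustration, not the kernel's inlines.]

```c
#include <stdint.h>

/* Sketch of the RSE address arithmetic the patch relies on: bits [8:3]
 * of a backing-store address give the slot number, and slot 63 of each
 * 512-byte span is where the RSE spills the RNAT collection.  The
 * per-register NaT bit is then selected with 1UL << slot number. */
static unsigned long rse_slot_num (unsigned long *addr)
{
	return ((unsigned long) addr >> 3) & 0x3f;
}

static unsigned long *rse_rnat_addr (unsigned long *slot_addr)
{
	/* round up to the RNAT slot of the surrounding 512-byte span */
	return (unsigned long *) ((unsigned long) slot_addr | (63UL << 3));
}
```

[With these, the patch's `nat_mask` is just `1UL << rse_slot_num(addr)`, set or cleared in `*rse_rnat_addr(addr)`.]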
static void
-get_rse_reg(struct pt_regs *regs, unsigned long r1, unsigned long *val, int *nat)
+get_rse_reg (struct pt_regs *regs, unsigned long r1, unsigned long *val, int *nat)
{
- struct switch_stack *sw = (struct switch_stack *)regs - 1;
- unsigned long *kbs = (unsigned long *)current + IA64_RBS_OFFSET/8;
+ struct switch_stack *sw = (struct switch_stack *) regs - 1;
+ unsigned long *bsp, *addr, *rnat_addr, *ubs_end, *bspstore;
+ unsigned long *kbs = (void *) current + IA64_RBS_OFFSET;
+ unsigned long rnats, nat_mask;
unsigned long on_kbs;
- long nlocals;
- unsigned long *bsp, *addr, *ubs_end, *slot, *bspstore;
- unsigned long rnats;
+ long sof = (regs->cr_ifs) & 0x7f;
- /*
- * cr_ifs=[rv:ifm], ifm=[....:sof(6)]
- * nlocals=number of local registers in the faulting function
- */
- nlocals = (regs->cr_ifs) & 0x7f;
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
- /*
- * save_switch_stack does a flushrs and saves bspstore.
- * on_kbs = actual number of registers saved on kernel backing store
- * (taking into accound potential RNATs)
- *
- * Note that this number can be greater than nlocals if the dirty
- * parititions included more than one stack frame at the time we
- * switched to KBS
- */
- on_kbs = ia64_rse_num_regs(kbs, (unsigned long *)sw->ar_bspstore);
- bspstore = (unsigned long *)regs->ar_bspstore;
+ if ((r1 - 32) >= sof) {
+ /* this should never happen, as the "rsvd register fault" has higher priority */
+ DPRINT("ignoring read from r%lu; only %lu registers are allocated!\n", r1, sof);
+ return;
+ }
+
+ on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ if (addr >= kbs) {
+ /* the register is on the kernel backing store: easy... */
+ *val = *addr;
+ if (nat) {
+ rnat_addr = ia64_rse_rnat_addr(addr);
+ if ((unsigned long) rnat_addr >= sw->ar_bspstore)
+ rnat_addr = &sw->ar_rnat;
+ nat_mask = 1UL << ia64_rse_slot_num(addr);
+ *nat = (*rnat_addr & nat_mask) != 0;
+ }
+ return;
+ }
/*
- * To simplify the logic, we calculate everything as if there was only
- * one backing store i.e., the user one (UBS). We let it to peek/poke
- * to figure out whether the register we're looking for really is
- * on the UBS or on KBS.
- *
- * regs->ar_bsptore = address of last register saved on UBS (before switch)
- *
- * ubs_end = virtual end of the UBS (if everything had been spilled there)
- *
- * We know that ubs_end is the point where the last register on the
- * stack frame we're interested in as been saved. So we need to walk
- * our way backward to figure out what the BSP "was" for that frame,
- * this will give us the location of r32.
- *
- * bsp = "virtual UBS" address of r32 for our frame
- *
- * Finally, get compute the address of the register we're looking for
- * using bsp as our base (move up again).
- *
- * Please note that in our case, we know that the register is necessarily
- * on the KBS because we are only interested in the current frame at the moment
- * we got the exception i.e., bsp is not changed until we switch to KBS.
+ * Avoid using user_mode() here: with "epc", we cannot use the privilege level to
+ * infer whether the interrupt task was running on the kernel backing store.
*/
+ if (regs->r12 >= TASK_SIZE) {
+ DPRINT("ignoring kernel read of r%lu; register isn't on the RBS!", r1);
+ return;
+ }
+
+ bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
- bsp = ia64_rse_skip_regs(ubs_end, -nlocals);
- addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
+ bsp = ia64_rse_skip_regs(ubs_end, -sof);
+ addr = ia64_rse_skip_regs(bsp, r1 - 32);
- DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
- (void *) ubs_end, (void *) bsp, (void *) addr, ia64_rse_slot_num(addr)));
+ DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
- ia64_peek(regs, current, (unsigned long)addr, val);
+ ia64_peek(regs, current, (unsigned long) addr, val);
- /*
- * addr will now contain the address of the RNAT for the register
- */
- addr = ia64_rse_rnat_addr(addr);
+ if (nat) {
+ rnat_addr = ia64_rse_rnat_addr(addr);
+ nat_mask = 1UL << ia64_rse_slot_num(addr);
- ia64_peek(regs, current, (unsigned long)addr, &rnats);
- DPRINT(("rnat @%p = 0x%lx\n", (void *) addr, rnats));
-
- if (nat)
- *nat = rnats >> ia64_rse_slot_num(slot) & 0x1;
+ DPRINT("rnat @%p = 0x%lx\n", (void *) rnat_addr, rnats);
+
+ ia64_peek(regs, current, (unsigned long) rnat_addr, &rnats);
+ *nat = (rnats & nat_mask) != 0;
+ }
}
static void
-setreg(unsigned long regnum, unsigned long val, int nat, struct pt_regs *regs)
+setreg (unsigned long regnum, unsigned long val, int nat, struct pt_regs *regs)
{
- struct switch_stack *sw = (struct switch_stack *)regs -1;
+ struct switch_stack *sw = (struct switch_stack *) regs - 1;
unsigned long addr;
unsigned long bitmask;
unsigned long *unat;
-
/*
* First takes care of stacked registers
*/
@@ -422,8 +438,8 @@
addr = (unsigned long)regs;
unat = &sw->caller_unat;
}
- DPRINT(("tmp_base=%lx switch_stack=%s offset=%d\n",
- addr, unat==&sw->ar_unat ? "yes":"no", GR_OFFS(regnum)));
+ DPRINT("tmp_base=%lx switch_stack=%s offset=%d\n",
+ addr, unat==&sw->ar_unat ? "yes":"no", GR_OFFS(regnum));
/*
* add offset from base of struct
* and do it !
@@ -436,20 +452,20 @@
* We need to clear the corresponding UNAT bit to fully emulate the load
* UNAT bit_pos = GR[r3]{8:3} form EAS-2.4
*/
- bitmask = __IA64_UL(1) << (addr >> 3 & 0x3f);
- DPRINT(("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, (void *) unat, *unat));
+ bitmask = 1UL << (addr >> 3 & 0x3f);
+ DPRINT("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, (void *) unat, *unat);
if (nat) {
*unat |= bitmask;
} else {
*unat &= ~bitmask;
}
- DPRINT(("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, (void *) unat,*unat));
+ DPRINT("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, (void *) unat,*unat);
}
#define IA64_FPH_OFFS(r) (r - IA64_FIRST_ROTATING_FR)
static void
-setfpreg(unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
+setfpreg (unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
{
struct switch_stack *sw = (struct switch_stack *)regs - 1;
unsigned long addr;
@@ -478,7 +494,7 @@
addr = (unsigned long)regs;
}
- DPRINT(("tmp_base=%lx offset=%d\n", addr, FR_OFFS(regnum)));
+ DPRINT("tmp_base=%lx offset=%d\n", addr, FR_OFFS(regnum));
addr += FR_OFFS(regnum);
*(struct ia64_fpreg *)addr = *fpval;
@@ -498,21 +514,21 @@
* registers which can be used with stfX
*/
static inline void
-float_spill_f0(struct ia64_fpreg *final)
+float_spill_f0 (struct ia64_fpreg *final)
{
__asm__ __volatile__ ("stf.spill [%0]=f0" :: "r"(final) : "memory");
}
static inline void
-float_spill_f1(struct ia64_fpreg *final)
+float_spill_f1 (struct ia64_fpreg *final)
{
__asm__ __volatile__ ("stf.spill [%0]=f1" :: "r"(final) : "memory");
}
static void
-getfpreg(unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
+getfpreg (unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
{
- struct switch_stack *sw = (struct switch_stack *)regs -1;
+ struct switch_stack *sw = (struct switch_stack *) regs - 1;
unsigned long addr;
/*
@@ -546,8 +562,8 @@
addr = FR_IN_SW(regnum) ? (unsigned long)sw
: (unsigned long)regs;
- DPRINT(("is_sw=%d tmp_base=%lx offset=0x%x\n",
- FR_IN_SW(regnum), addr, FR_OFFS(regnum)));
+ DPRINT("is_sw=%d tmp_base=%lx offset=0x%x\n",
+ FR_IN_SW(regnum), addr, FR_OFFS(regnum));
addr += FR_OFFS(regnum);
*fpval = *(struct ia64_fpreg *)addr;
@@ -557,9 +573,9 @@
static void
-getreg(unsigned long regnum, unsigned long *val, int *nat, struct pt_regs *regs)
+getreg (unsigned long regnum, unsigned long *val, int *nat, struct pt_regs *regs)
{
- struct switch_stack *sw = (struct switch_stack *)regs -1;
+ struct switch_stack *sw = (struct switch_stack *) regs - 1;
unsigned long addr, *unat;
if (regnum >= IA64_FIRST_STACKED_GR) {
@@ -588,7 +604,7 @@
unat = &sw->caller_unat;
}
- DPRINT(("addr_base=%lx offset=0x%x\n", addr, GR_OFFS(regnum)));
+ DPRINT("addr_base=%lx offset=0x%x\n", addr, GR_OFFS(regnum));
addr += GR_OFFS(regnum);
@@ -602,7 +618,7 @@
}
static void
-emulate_load_updates(update_t type, load_store_t *ld, struct pt_regs *regs, unsigned long ifa)
+emulate_load_updates (update_t type, load_store_t ld, struct pt_regs *regs, unsigned long ifa)
{
/*
* IMPORTANT:
@@ -610,7 +626,7 @@
* not get to this point in the code but we keep this sanity check,
* just in case.
*/
- if (ld->x6_op == 1 || ld->x6_op == 3) {
+ if (ld.x6_op == 1 || ld.x6_op == 3) {
printk(KERN_ERR __FUNCTION__": register update on speculative load, error\n");
die_if_kernel("unaligned reference on specualtive load with register update\n",
regs, 30);
@@ -630,12 +646,12 @@
*
* form imm9: [13:19] contain the first 7 bits
*/
- imm = ld->x << 7 | ld->imm;
+ imm = ld.x << 7 | ld.imm;
/*
* sign extend (1+8bits) if m set
*/
- if (ld->m) imm |= SIGN_EXT9;
+ if (ld.m) imm |= SIGN_EXT9;
/*
* ifa = r3 and we know that the NaT bit on r3 was clear so
@@ -643,11 +659,11 @@
*/
ifa += imm;
- setreg(ld->r3, ifa, 0, regs);
+ setreg(ld.r3, ifa, 0, regs);
- DPRINT(("ld.x=%d ld.m=%d imm=%ld r3=0x%lx\n", ld->x, ld->m, imm, ifa));
+ DPRINT("ld.x=%d ld.m=%d imm=%ld r3=0x%lx\n", ld.x, ld.m, imm, ifa);
- } else if (ld->m) {
+ } else if (ld.m) {
unsigned long r2;
int nat_r2;
@@ -667,39 +683,24 @@
* never reach this code when trying to do a ldX.s.
* If we ever make it to here on an ldfX.s then
*/
- getreg(ld->imm, &r2, &nat_r2, regs);
+ getreg(ld.imm, &r2, &nat_r2, regs);
ifa += r2;
/*
* propagate Nat r2 -> r3
*/
- setreg(ld->r3, ifa, nat_r2, regs);
+ setreg(ld.r3, ifa, nat_r2, regs);
- DPRINT(("imm=%d r2=%ld r3=0x%lx nat_r2=%d\n",ld->imm, r2, ifa, nat_r2));
+ DPRINT("imm=%d r2=%ld r3=0x%lx nat_r2=%d\n",ld.imm, r2, ifa, nat_r2);
}
}
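[Editor's note: the imm9 formation used for the post-increment update can be checked in isolation. A hedged sketch: `decode_imm9()` is a made-up name, but the field split matches the patch's `ld.x << 7 | ld.imm` for loads, with the `m` bit driving sign extension via the same `SIGN_EXT9` constant.]

```c
#include <stdint.h>

#define SIGN_EXT9 0xffffffffffffff00ul	/* same constant as the patch */

/* imm9 for ldX [r3]=..,imm9: x contributes bit 7, imm the low 7 bits,
 * and m doubles as the sign bit, extended over the full 64-bit value. */
static uint64_t decode_imm9 (uint64_t x, uint64_t imm7, int m)
{
	uint64_t imm = x << 7 | imm7;

	if (m)
		imm |= SIGN_EXT9;
	return imm;
}
```

[The store path forms its immediate the same way, only from `ld.x << 7 | ld.r1`.]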
static int
-emulate_load_int(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
{
- unsigned long val;
- unsigned int len = 1<< ld->x6_sz;
-
- /*
- * the macro supposes sequential access (which is the case)
- * if the first byte is an invalid address we return here. Otherwise
- * there is a guard page at the top of the user's address page and
- * the first access would generate a NaT consumption fault and return
- * with a SIGSEGV, which is what we want.
- *
- * Note: the first argument is ignored
- */
- if (access_ok(VERIFY_READ, (void *)ifa, len) < 0) {
- DPRINT(("verify area failed on %lx\n", ifa));
- return -1;
- }
+ unsigned int len = 1 << ld.x6_sz;
/*
* r0, as target, doesn't need to be checked because Illegal Instruction
@@ -710,42 +711,27 @@
*/
/*
- * ldX.a we don't try to emulate anything but we must
- * invalidate the ALAT entry.
+ * ldX.a we don't try to emulate anything but we must invalidate the ALAT entry.
* See comment below for explanation on how we handle ldX.a
*/
- if (ld->x6_op != 0x2) {
- /*
- * we rely on the macros in unaligned.h for now i.e.,
- * we let the compiler figure out how to read memory gracefully.
- *
- * We need this switch/case because the way the inline function
- * works. The code is optimized by the compiler and looks like
- * a single switch/case.
- */
- switch(len) {
- case 2:
- val = ia64_get_unaligned((void *)ifa, 2);
- break;
- case 4:
- val = ia64_get_unaligned((void *)ifa, 4);
- break;
- case 8:
- val = ia64_get_unaligned((void *)ifa, 8);
- break;
- default:
- DPRINT(("unknown size: x6=%d\n", ld->x6_sz));
- return -1;
- }
+ if (ld.x6_op != 0x2) {
+ unsigned long val = 0;
- setreg(ld->r1, val, 0, regs);
+ if (len != 2 && len != 4 && len != 8) {
+ DPRINT("unknown size: x6=%d\n", ld.x6_sz);
+ return -1;
+ }
+ /* this assumes little-endian byte-order: */
+ if (copy_from_user(&val, (void *) ifa, len))
+ return -1;
+ setreg(ld.r1, val, 0, regs);
}
/*
* check for updates on any kind of loads
*/
- if (ld->op == 0x5 || ld->m)
- emulate_load_updates(ld->op == 0x5 ? UPD_IMMEDIATE: UPD_REG, ld, regs, ifa);
+ if (ld.op == 0x5 || ld.m)
+ emulate_load_updates(ld.op == 0x5 ? UPD_IMMEDIATE: UPD_REG, ld, regs, ifa);
/*
* handling of various loads (based on EAS2.4):
@@ -753,7 +739,6 @@
* ldX.acq (ordered load):
* - acquire semantics would have been used, so force fence instead.
*
- *
* ldX.c.clr (check load and clear):
* - if we get to this handler, it's because the entry was not in the ALAT.
* Therefore the operation reverts to a normal load
@@ -831,45 +816,31 @@
* when the load has the .acq completer then
* use ordering fence.
*/
- if (ld->x6_op == 0x5 || ld->x6_op == 0xa)
+ if (ld.x6_op == 0x5 || ld.x6_op == 0xa)
mb();
/*
* invalidate ALAT entry in case of advanced load
*/
- if (ld->x6_op == 0x2)
- invala_gr(ld->r1);
+ if (ld.x6_op == 0x2)
+ invala_gr(ld.r1);
return 0;
}
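[Editor's note: the switch over `ia64_get_unaligned()` sizes collapses, in the new code, to a zero-fill plus a byte copy. A user-space approximation follows, with `memcpy()` standing in for `copy_from_user()` and a little-endian host assumed, as the patch's own comment notes.]

```c
#include <stdint.h>
#include <string.h>

/* Approximation of the patched emulate_load_int() data path: validate
 * the access size, zero the destination so short loads leave the high
 * bytes clear, then copy the raw bytes (little-endian layout assumed). */
static int load_int_sketch (const void *ifa, unsigned int len, uint64_t *val)
{
	if (len != 2 && len != 4 && len != 8)
		return -1;		/* bogus x6_sz encoding */
	*val = 0;
	memcpy(val, ifa, len);		/* kernel uses copy_from_user() */
	return 0;
}
```
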
static int
-emulate_store_int(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
{
unsigned long r2;
- unsigned int len = 1<< ld->x6_sz;
+ unsigned int len = 1 << ld.x6_sz;
/*
- * the macro supposes sequential access (which is the case)
- * if the first byte is an invalid address we return here. Otherwise
- * there is a guard page at the top of the user's address page and
- * the first access would generate a NaT consumption fault and return
- * with a SIGSEGV, which is what we want.
- *
- * Note: the first argument is ignored
- */
- if (access_ok(VERIFY_WRITE, (void *)ifa, len) < 0) {
- DPRINT(("verify area failed on %lx\n",ifa));
- return -1;
- }
-
- /*
* if we get to this handler, Nat bits on both r3 and r2 have already
* been checked. so we don't need to do it
*
* extract the value to be stored
*/
- getreg(ld->imm, &r2, 0, regs);
+ getreg(ld.imm, &r2, 0, regs);
/*
* we rely on the macros in unaligned.h for now i.e.,
@@ -879,48 +850,43 @@
* works. The code is optimized by the compiler and looks like
* a single switch/case.
*/
- DPRINT(("st%d [%lx]=%lx\n", len, ifa, r2));
+ DPRINT("st%d [%lx]=%lx\n", len, ifa, r2);
- switch(len) {
- case 2:
- ia64_put_unaligned(r2, (void *)ifa, 2);
- break;
- case 4:
- ia64_put_unaligned(r2, (void *)ifa, 4);
- break;
- case 8:
- ia64_put_unaligned(r2, (void *)ifa, 8);
- break;
- default:
- DPRINT(("unknown size: x6=%d\n", ld->x6_sz));
- return -1;
+ if (len != 2 && len != 4 && len != 8) {
+ DPRINT("unknown size: x6=%d\n", ld.x6_sz);
+ return -1;
}
+
+ /* this assumes little-endian byte-order: */
+ if (copy_to_user((void *) ifa, &r2, len))
+ return -1;
+
/*
* stX [r3]=r2,imm(9)
*
* NOTE:
- * ld->r3 can never be r0, because r0 would not generate an
+ * ld.r3 can never be r0, because r0 would not generate an
* unaligned access.
*/
- if (ld->op == 0x5) {
+ if (ld.op == 0x5) {
unsigned long imm;
/*
* form imm9: [12:6] contain first 7bits
*/
- imm = ld->x << 7 | ld->r1;
+ imm = ld.x << 7 | ld.r1;
/*
* sign extend (8bits) if m set
*/
- if (ld->m) imm |= SIGN_EXT9;
+ if (ld.m) imm |= SIGN_EXT9;
/*
* ifa = r3 (NaT is necessarily cleared)
*/
ifa += imm;
- DPRINT(("imm=%lx r3=%lx\n", imm, ifa));
+ DPRINT("imm=%lx r3=%lx\n", imm, ifa);
- setreg(ld->r3, ifa, 0, regs);
+ setreg(ld.r3, ifa, 0, regs);
}
/*
* we don't have alat_invalidate_multiple() so we need
@@ -931,7 +897,7 @@
/*
* stX.rel: use fence instead of release
*/
- if (ld->x6_op == 0xd)
+ if (ld.x6_op == 0xd)
mb();
return 0;
@@ -940,7 +906,7 @@
/*
* floating point operations sizes in bytes
*/
-static const unsigned short float_fsz[4]={
+static const unsigned char float_fsz[4]={
16, /* extended precision (e) */
8, /* integer (8) */
4, /* single precision (s) */
@@ -948,72 +914,68 @@
};
static inline void
-mem2float_extended(struct ia64_fpreg *init, struct ia64_fpreg *final)
+mem2float_extended (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldfe f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-mem2float_integer(struct ia64_fpreg *init, struct ia64_fpreg *final)
+mem2float_integer (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf8 f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-mem2float_single(struct ia64_fpreg *init, struct ia64_fpreg *final)
+mem2float_single (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldfs f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-mem2float_double(struct ia64_fpreg *init, struct ia64_fpreg *final)
+mem2float_double (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldfd f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-float2mem_extended(struct ia64_fpreg *init, struct ia64_fpreg *final)
+float2mem_extended (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stfe [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-float2mem_integer(struct ia64_fpreg *init, struct ia64_fpreg *final)
+float2mem_integer (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stf8 [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-float2mem_single(struct ia64_fpreg *init, struct ia64_fpreg *final)
+float2mem_single (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stfs [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static inline void
-float2mem_double(struct ia64_fpreg *init, struct ia64_fpreg *final)
+float2mem_double (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stfd [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
}
static int
-emulate_load_floatpair(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
{
struct ia64_fpreg fpr_init[2];
struct ia64_fpreg fpr_final[2];
- unsigned long len = float_fsz[ld->x6_sz];
+ unsigned long len = float_fsz[ld.x6_sz];
- if (access_ok(VERIFY_READ, (void *)ifa, len<<1) < 0) {
- DPRINT(("verify area failed on %lx\n", ifa));
- return -1;
- }
/*
* fr0 & fr1 don't need to be checked because Illegal Instruction
* faults have higher priority than unaligned faults.
@@ -1025,35 +987,27 @@
/*
* make sure we get clean buffers
*/
- memset(&fpr_init,0, sizeof(fpr_init));
- memset(&fpr_final,0, sizeof(fpr_final));
+ memset(&fpr_init, 0, sizeof(fpr_init));
+ memset(&fpr_final, 0, sizeof(fpr_final));
/*
* ldfpX.a: we don't try to emulate anything but we must
* invalidate the ALAT entry and execute updates, if any.
*/
- if (ld->x6_op != 0x2) {
- /*
- * does the unaligned access
- */
- memcpy(&fpr_init[0], (void *)ifa, len);
- memcpy(&fpr_init[1], (void *)(ifa+len), len);
+ if (ld.x6_op != 0x2) {
+ /* this assumes little-endian byte-order: */
- DPRINT(("ld.r1=%d ld.imm=%d x6_sz=%d\n", ld->r1, ld->imm, ld->x6_sz));
-#ifdef DEBUG_UNALIGNED_TRAP
- { int i; char *c = (char *)&fpr_init;
- printk("fpr_init= ");
- for(i=0; i < len<<1; i++ ) {
- printk("%02x ", c[i]&0xff);
- }
- printk("\n");
- }
-#endif
+ if (copy_from_user(&fpr_init[0], (void *) ifa, len)
+ || copy_from_user(&fpr_init[1], (void *) (ifa + len), len))
+ return -1;
+
+ DPRINT("ld.r1=%d ld.imm=%d x6_sz=%d\n", ld.r1, ld.imm, ld.x6_sz);
+ DDUMP("frp_init =", &fpr_init, 2*len);
/*
* XXX fixme
* Could optimize inlines by using ldfpX & 2 spills
*/
- switch( ld->x6_sz ) {
+ switch( ld.x6_sz ) {
case 0:
mem2float_extended(&fpr_init[0], &fpr_final[0]);
mem2float_extended(&fpr_init[1], &fpr_final[1]);
@@ -1071,15 +1025,7 @@
mem2float_double(&fpr_init[1], &fpr_final[1]);
break;
}
-#ifdef DEBUG_UNALIGNED_TRAP
- { int i; char *c = (char *)&fpr_final;
- printk("fpr_final= ");
- for(i=0; i < len<<1; i++ ) {
- printk("%02x ", c[i]&0xff);
- }
- printk("\n");
- }
-#endif
+ DDUMP("fpr_final =", &fpr_final, 2*len);
/*
* XXX fixme
*
@@ -1087,16 +1033,15 @@
* use the storage from the saved context i.e., the actual final
* destination (pt_regs, switch_stack or thread structure).
*/
- setfpreg(ld->r1, &fpr_final[0], regs);
- setfpreg(ld->imm, &fpr_final[1], regs);
+ setfpreg(ld.r1, &fpr_final[0], regs);
+ setfpreg(ld.imm, &fpr_final[1], regs);
}
/*
* Check for updates: only immediate updates are available for this
* instruction.
*/
- if (ld->m) {
-
+ if (ld.m) {
/*
* the immediate is implicit given the ldsz of the operation:
* single: 8 (2x4) and for all others it's 16 (2x8)
@@ -1109,43 +1054,32 @@
* as long as we don't come here with a ldfpX.s.
* For this reason we keep this sanity check
*/
- if (ld->x6_op == 1 || ld->x6_op == 3) {
- printk(KERN_ERR "%s: register update on speculative load pair, error\n",
- __FUNCTION__);
- }
-
+ if (ld.x6_op == 1 || ld.x6_op == 3)
+ printk(KERN_ERR __FUNCTION__": register update on speculative load pair, "
+ "error\n");
- setreg(ld->r3, ifa, 0, regs);
+ setreg(ld.r3, ifa, 0, regs);
}
/*
* Invalidate ALAT entries, if any, for both registers.
*/
- if (ld->x6_op == 0x2) {
- invala_fr(ld->r1);
- invala_fr(ld->imm);
+ if (ld.x6_op == 0x2) {
+ invala_fr(ld.r1);
+ invala_fr(ld.imm);
}
return 0;
}
static int
-emulate_load_float(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
{
struct ia64_fpreg fpr_init;
struct ia64_fpreg fpr_final;
- unsigned long len = float_fsz[ld->x6_sz];
+ unsigned long len = float_fsz[ld.x6_sz];
/*
- * check for load pair because our masking scheme is not fine grain enough
- if (ld->x == 1) return emulate_load_floatpair(ifa,ld,regs);
- */
-
- if (access_ok(VERIFY_READ, (void *)ifa, len) < 0) {
- DPRINT(("verify area failed on %lx\n", ifa));
- return -1;
- }
- /*
* fr0 & fr1 don't need to be checked because Illegal Instruction
* faults have higher priority than unaligned faults.
*
@@ -1153,7 +1087,6 @@
* unaligned reference.
*/
-
/*
* make sure we get clean buffers
*/
@@ -1165,27 +1098,16 @@
* invalidate the ALAT entry.
* See comments in ldX for descriptions on how the various loads are handled.
*/
- if (ld->x6_op != 0x2) {
-
- /*
- * does the unaligned access
- */
- memcpy(&fpr_init, (void *)ifa, len);
+ if (ld.x6_op != 0x2) {
+ if (copy_from_user(&fpr_init, (void *) ifa, len))
+ return -1;
- DPRINT(("ld.r1=%d x6_sz=%d\n", ld->r1, ld->x6_sz));
-#ifdef DEBUG_UNALIGNED_TRAP
- { int i; char *c = (char *)&fpr_init;
- printk("fpr_init= ");
- for(i=0; i < len; i++ ) {
- printk("%02x ", c[i]&0xff);
- }
- printk("\n");
- }
-#endif
+ DPRINT("ld.r1=%d x6_sz=%d\n", ld.r1, ld.x6_sz);
+ DDUMP("fpr_init =", &fpr_init, len);
/*
* we only do something for x6_op={0,8,9}
*/
- switch( ld->x6_sz ) {
+ switch( ld.x6_sz ) {
case 0:
mem2float_extended(&fpr_init, &fpr_final);
break;
@@ -1199,15 +1121,7 @@
mem2float_double(&fpr_init, &fpr_final);
break;
}
-#ifdef DEBUG_UNALIGNED_TRAP
- { int i; char *c = (char *)&fpr_final;
- printk("fpr_final= ");
- for(i=0; i < len; i++ ) {
- printk("%02x ", c[i]&0xff);
- }
- printk("\n");
- }
-#endif
+ DDUMP("fpr_final =", &fpr_final, len);
/*
* XXX fixme
*
@@ -1215,66 +1129,51 @@
* use the storage from the saved context i.e., the actual final
* destination (pt_regs, switch_stack or thread structure).
*/
- setfpreg(ld->r1, &fpr_final, regs);
+ setfpreg(ld.r1, &fpr_final, regs);
}
/*
* check for updates on any loads
*/
- if (ld->op == 0x7 || ld->m)
- emulate_load_updates(ld->op == 0x7 ? UPD_IMMEDIATE: UPD_REG, ld, regs, ifa);
+ if (ld.op == 0x7 || ld.m)
+ emulate_load_updates(ld.op == 0x7 ? UPD_IMMEDIATE: UPD_REG, ld, regs, ifa);
/*
* invalidate ALAT entry in case of advanced floating point loads
*/
- if (ld->x6_op == 0x2)
- invala_fr(ld->r1);
+ if (ld.x6_op == 0x2)
+ invala_fr(ld.r1);
return 0;
}
static int
-emulate_store_float(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
{
struct ia64_fpreg fpr_init;
struct ia64_fpreg fpr_final;
- unsigned long len = float_fsz[ld->x6_sz];
+ unsigned long len = float_fsz[ld.x6_sz];
- /*
- * the macro supposes sequential access (which is the case)
- * if the first byte is an invalid address we return here. Otherwise
- * there is a guard page at the top of the user's address page and
- * the first access would generate a NaT consumption fault and return
- * with a SIGSEGV, which is what we want.
- *
- * Note: the first argument is ignored
- */
- if (access_ok(VERIFY_WRITE, (void *)ifa, len) < 0) {
- DPRINT(("verify area failed on %lx\n",ifa));
- return -1;
- }
-
/*
* make sure we get clean buffers
*/
memset(&fpr_init,0, sizeof(fpr_init));
memset(&fpr_final,0, sizeof(fpr_final));
-
/*
* if we get to this handler, Nat bits on both r3 and r2 have already
* been checked. so we don't need to do it
*
* extract the value to be stored
*/
- getfpreg(ld->imm, &fpr_init, regs);
+ getfpreg(ld.imm, &fpr_init, regs);
/*
* during this step, we extract the spilled registers from the saved
* context i.e., we refill. Then we store (no spill) to temporary
* aligned location
*/
- switch( ld->x6_sz ) {
+ switch( ld.x6_sz ) {
case 0:
float2mem_extended(&fpr_init, &fpr_final);
break;
@@ -1288,56 +1187,40 @@
float2mem_double(&fpr_init, &fpr_final);
break;
}
- DPRINT(("ld.r1=%d x6_sz=%d\n", ld->r1, ld->x6_sz));
-#ifdef DEBUG_UNALIGNED_TRAP
- { int i; char *c = (char *)&fpr_init;
- printk("fpr_init= ");
- for(i=0; i < len; i++ ) {
- printk("%02x ", c[i]&0xff);
- }
- printk("\n");
- }
- { int i; char *c = (char *)&fpr_final;
- printk("fpr_final= ");
- for(i=0; i < len; i++ ) {
- printk("%02x ", c[i]&0xff);
- }
- printk("\n");
- }
-#endif
+ DPRINT("ld.r1=%d x6_sz=%d\n", ld.r1, ld.x6_sz);
+ DDUMP("fpr_init =", &fpr_init, len);
+ DDUMP("fpr_final =", &fpr_final, len);
- /*
- * does the unaligned store
- */
- memcpy((void *)ifa, &fpr_final, len);
+ if (copy_to_user((void *) ifa, &fpr_final, len))
+ return -1;
/*
* stfX [r3]=r2,imm(9)
*
* NOTE:
- * ld->r3 can never be r0, because r0 would not generate an
+ * ld.r3 can never be r0, because r0 would not generate an
* unaligned access.
*/
- if (ld->op == 0x7) {
+ if (ld.op == 0x7) {
unsigned long imm;
/*
* form imm9: [12:6] contain first 7bits
*/
- imm = ld->x << 7 | ld->r1;
+ imm = ld.x << 7 | ld.r1;
/*
* sign extend (8bits) if m set
*/
- if (ld->m)
+ if (ld.m)
imm |= SIGN_EXT9;
/*
* ifa = r3 (NaT is necessarily cleared)
*/
ifa += imm;
- DPRINT(("imm=%lx r3=%lx\n", imm, ifa));
+ DPRINT("imm=%lx r3=%lx\n", imm, ifa);
- setreg(ld->r3, ifa, 0, regs);
+ setreg(ld.r3, ifa, 0, regs);
}
/*
* we don't have alat_invalidate_multiple() so we need
@@ -1348,129 +1231,96 @@
return 0;
}
-void
-ia64_handle_unaligned(unsigned long ifa, struct pt_regs *regs)
+/*
+ * Make sure we log the unaligned access, so that user/sysadmin can notice it and
+ * eventually fix the program. However, we don't want to do that for every access so we
+ * pace it with jiffies. This isn't really MP-safe, but it doesn't really have to be
+ * either...
+ */
+static int
+within_logging_rate_limit (void)
{
- static unsigned long unalign_count;
- static long last_time;
+ static unsigned long count, last_time;
+
+ if (count > 5 && jiffies - last_time > 5*HZ)
+ count = 0;
+ if (++count < 5) {
+ last_time = jiffies;
+ return 1;
+ }
+ return 0;
+}
+
+void
+ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
+{
+ const struct exception_table_entry *fix = NULL;
struct ia64_psr *ipsr = ia64_psr(regs);
- unsigned long *bundle_addr;
+ mm_segment_t old_fs = get_fs();
+ unsigned long bundle[2];
unsigned long opcode;
- unsigned long op;
- load_store_t *insn;
+ struct siginfo si;
+ union {
+ unsigned long l;
+ load_store_t insn;
+ } u;
int ret = -1;
- /*
- * Unaligned references in the kernel could come from unaligned
- * arguments to system calls. We fault the user process in
- * these cases and panic the kernel otherwise (the kernel should
- * be fixed to not make unaligned accesses).
- */
- if (!user_mode(regs)) {
- const struct exception_table_entry *fix;
-
- fix = search_exception_table(regs->cr_iip);
- if (fix) {
- regs->r8 = -EFAULT;
- if (fix->skip & 1) {
- regs->r9 = 0;
- }
- regs->cr_iip += ((long) fix->skip) & ~15;
- regs->cr_ipsr &= ~IA64_PSR_RI; /* clear exception slot number */
- return;
- }
- die_if_kernel("Unaligned reference while in kernel\n", regs, 30);
- /* NOT_REACHED */
- }
- /*
- * For now, we don't support user processes running big-endian
- * which do unaligned accesses
- */
if (ia64_psr(regs)->be) {
- struct siginfo si;
-
- printk(KERN_ERR "%s(%d): big-endian unaligned access %016lx (ip=%016lx) not "
- "yet supported\n",
- current->comm, current->pid, ifa, regs->cr_iip + ipsr->ri);
-
- si.si_signo = SIGBUS;
- si.si_errno = 0;
- si.si_code = BUS_ADRALN;
- si.si_addr = (void *) ifa;
- force_sig_info(SIGBUS, &si, current);
- return;
+ /* we don't support big-endian accesses */
+ die_if_kernel("big-endian unaligned accesses are not supported", regs, 0);
+ goto force_sigbus;
}
- if (current->thread.flags & IA64_THREAD_UAC_SIGBUS) {
- struct siginfo si;
-
- si.si_signo = SIGBUS;
- si.si_errno = 0;
- si.si_code = BUS_ADRALN;
- si.si_addr = (void *) ifa;
- force_sig_info(SIGBUS, &si, current);
- return;
- }
+ /*
+ * Treat kernel accesses for which there is an exception handler entry the same as
+ * user-level unaligned accesses. Otherwise, a clever user program could
+ * trick this handler into reading arbitrary kernel addresses...
+ */
+ if (user_mode(regs) || (fix = search_exception_table(regs->cr_iip))) {
+ if ((current->thread.flags & IA64_THREAD_UAC_SIGBUS) != 0)
+ goto force_sigbus;
- if (!(current->thread.flags & IA64_THREAD_UAC_NOPRINT)) {
- /*
- * Make sure we log the unaligned access, so that
- * user/sysadmin can notice it and eventually fix the
- * program.
- *
- * We don't want to do that for every access so we
- * pace it with jiffies.
- */
- if (unalign_count > 5 && jiffies - last_time > 5*HZ)
- unalign_count = 0;
- if (++unalign_count < 5) {
+ if (!(current->thread.flags & IA64_THREAD_UAC_NOPRINT)
+ && within_logging_rate_limit())
+ {
char buf[200]; /* comm[] is at most 16 bytes... */
size_t len;
- last_time = jiffies;
- len = sprintf(buf, "%s(%d): unaligned access to 0x%016lx, ip=0x%016lx\n\r",
- current->comm, current->pid, ifa, regs->cr_iip + ipsr->ri);
+ len = sprintf(buf, "%s(%d): unaligned access to 0x%016lx, "
+ "ip=0x%016lx\n\r", current->comm, current->pid,
+ ifa, regs->cr_iip + ipsr->ri);
tty_write_message(current->tty, buf);
buf[len-1] = '\0'; /* drop '\r' */
- printk("%s", buf); /* guard against command names containing %s!! */
+ printk("%s", buf); /* watch for command names containing %s */
}
+ } else {
+ if (within_logging_rate_limit())
+ printk("kernel unaligned access to 0x%016lx, ip=0x%016lx\n",
+ ifa, regs->cr_iip + ipsr->ri);
+ set_fs(KERNEL_DS);
}
- DPRINT(("iip=%lx ifa=%lx isr=%lx\n", regs->cr_iip, ifa, regs->cr_ipsr));
- DPRINT(("ISR.ei=%d ISR.sp=%d\n", ipsr->ri, ipsr->it));
+ DPRINT("iip=%lx ifa=%lx isr=%lx (ei=%d, sp=%d)\n",
+ regs->cr_iip, ifa, regs->cr_ipsr, ipsr->ri, ipsr->it);
- bundle_addr = (unsigned long *)(regs->cr_iip);
+ if (__copy_from_user(bundle, (void *) regs->cr_iip, 16))
+ goto failure;
/*
* extract the instruction from the bundle given the slot number
*/
- switch ( ipsr->ri ) {
- case 0: op = *bundle_addr >> 5;
- break;
-
- case 1: op = *bundle_addr >> 46 | (*(bundle_addr+1) & 0x7fffff)<<18;
- break;
-
- case 2: op = *(bundle_addr+1) >> 23;
- break;
- }
-
- insn = (load_store_t *)&op;
- opcode = op & IA64_OPCODE_MASK;
-
- DPRINT(("opcode=%lx ld.qp=%d ld.r1=%d ld.imm=%d ld.r3=%d ld.x=%d ld.hint=%d "
- "ld.x6=0x%x ld.m=%d ld.op=%d\n",
- opcode,
- insn->qp,
- insn->r1,
- insn->imm,
- insn->r3,
- insn->x,
- insn->hint,
- insn->x6_sz,
- insn->m,
- insn->op));
+ switch (ipsr->ri) {
+ case 0: u.l = (bundle[0] >> 5); break;
+ case 1: u.l = (bundle[0] >> 46) | (bundle[1] << 18); break;
+ case 2: u.l = (bundle[1] >> 23); break;
+ }
+ opcode = (u.l >> IA64_OPCODE_SHIFT) & IA64_OPCODE_MASK;
+
+ DPRINT("opcode=%lx ld.qp=%d ld.r1=%d ld.imm=%d ld.r3=%d ld.x=%d ld.hint=%d "
+ "ld.x6=0x%x ld.m=%d ld.op=%d\n", opcode, u.insn.qp, u.insn.r1, u.insn.imm,
+ u.insn.r3, u.insn.x, u.insn.hint, u.insn.x6_sz, u.insn.m, u.insn.op);
/*
* IMPORTANT:
@@ -1502,85 +1352,109 @@
* I would like to get rid of this switch case and do something
* more elegant.
*/
- switch(opcode) {
- case LDS_OP:
- case LDSA_OP:
- case LDS_IMM_OP:
- case LDSA_IMM_OP:
- case LDFS_OP:
- case LDFSA_OP:
- case LDFS_IMM_OP:
- /*
- * The instruction will be retried with defered exceptions
- * turned on, and we should get Nat bit installed
- *
- * IMPORTANT:
- * When PSR_ED is set, the register & immediate update
- * forms are actually executed even though the operation
- * failed. So we don't need to take care of this.
- */
- DPRINT(("forcing PSR_ED\n"));
- regs->cr_ipsr |= IA64_PSR_ED;
- return;
-
- case LD_OP:
- case LDA_OP:
- case LDBIAS_OP:
- case LDACQ_OP:
- case LDCCLR_OP:
- case LDCNC_OP:
- case LDCCLRACQ_OP:
- case LD_IMM_OP:
- case LDA_IMM_OP:
- case LDBIAS_IMM_OP:
- case LDACQ_IMM_OP:
- case LDCCLR_IMM_OP:
- case LDCNC_IMM_OP:
- case LDCCLRACQ_IMM_OP:
- ret = emulate_load_int(ifa, insn, regs);
- break;
- case ST_OP:
- case STREL_OP:
- case ST_IMM_OP:
- case STREL_IMM_OP:
- ret = emulate_store_int(ifa, insn, regs);
- break;
- case LDF_OP:
- case LDFA_OP:
- case LDFCCLR_OP:
- case LDFCNC_OP:
- case LDF_IMM_OP:
- case LDFA_IMM_OP:
- case LDFCCLR_IMM_OP:
- case LDFCNC_IMM_OP:
- ret = insn->x ?
- emulate_load_floatpair(ifa, insn, regs):
- emulate_load_float(ifa, insn, regs);
- break;
- case STF_OP:
- case STF_IMM_OP:
- ret = emulate_store_float(ifa, insn, regs);
- }
-
- DPRINT(("ret=%d\n", ret));
- if (ret) {
- struct siginfo si;
-
- si.si_signo = SIGBUS;
- si.si_errno = 0;
- si.si_code = BUS_ADRALN;
- si.si_addr = (void *) ifa;
- force_sig_info(SIGBUS, &si, current);
- } else {
+ switch (opcode) {
+ case LDS_OP:
+ case LDSA_OP:
+ case LDS_IMM_OP:
+ case LDSA_IMM_OP:
+ case LDFS_OP:
+ case LDFSA_OP:
+ case LDFS_IMM_OP:
/*
- * given today's architecture this case is not likely to happen
- * because a memory access instruction (M) can never be in the
- * last slot of a bundle. But let's keep it for now.
- */
- if (ipsr->ri == 2)
- regs->cr_iip += 16;
- ipsr->ri = ++ipsr->ri & 3;
- }
+ * The instruction will be retried with deferred exceptions turned on, and
+ * we should get Nat bit installed
+ *
+ * IMPORTANT: When PSR_ED is set, the register & immediate update forms
+ * are actually executed even though the operation failed. So we don't
+ * need to take care of this.
+ */
+ DPRINT("forcing PSR_ED\n");
+ regs->cr_ipsr |= IA64_PSR_ED;
+ goto done;
+
+ case LD_OP:
+ case LDA_OP:
+ case LDBIAS_OP:
+ case LDACQ_OP:
+ case LDCCLR_OP:
+ case LDCNC_OP:
+ case LDCCLRACQ_OP:
+ case LD_IMM_OP:
+ case LDA_IMM_OP:
+ case LDBIAS_IMM_OP:
+ case LDACQ_IMM_OP:
+ case LDCCLR_IMM_OP:
+ case LDCNC_IMM_OP:
+ case LDCCLRACQ_IMM_OP:
+ ret = emulate_load_int(ifa, u.insn, regs);
+ break;
+
+ case ST_OP:
+ case STREL_OP:
+ case ST_IMM_OP:
+ case STREL_IMM_OP:
+ ret = emulate_store_int(ifa, u.insn, regs);
+ break;
+
+ case LDF_OP:
+ case LDFA_OP:
+ case LDFCCLR_OP:
+ case LDFCNC_OP:
+ case LDF_IMM_OP:
+ case LDFA_IMM_OP:
+ case LDFCCLR_IMM_OP:
+ case LDFCNC_IMM_OP:
+ if (u.insn.x)
+ ret = emulate_load_floatpair(ifa, u.insn, regs);
+ else
+ ret = emulate_load_float(ifa, u.insn, regs);
+ break;
+
+ case STF_OP:
+ case STF_IMM_OP:
+ ret = emulate_store_float(ifa, u.insn, regs);
+ break;
+
+ default:
+ goto failure;
+ }
+ DPRINT("ret=%d\n", ret);
+ if (ret)
+ goto failure;
+
+ if (ipsr->ri == 2)
+ /*
+ * given today's architecture this case is not likely to happen because a
+ * memory access instruction (M) can never be in the last slot of a
+ * bundle. But let's keep it for now.
+ */
+ regs->cr_iip += 16;
+ ipsr->ri = (ipsr->ri + 1) & 0x3;
+
+ DPRINT("ipsr->ri=%d iip=%lx\n", ipsr->ri, regs->cr_iip);
+ done:
+ set_fs(old_fs); /* restore original address limit */
+ return;
- DPRINT(("ipsr->ri=%d iip=%lx\n", ipsr->ri, regs->cr_iip));
+ failure:
+ /* something went wrong... */
+ if (!user_mode(regs)) {
+ if (fix) {
+ regs->r8 = -EFAULT;
+ if (fix->skip & 1)
+ regs->r9 = 0;
+ regs->cr_iip += ((long) fix->skip) & ~15;
+ regs->cr_ipsr &= ~IA64_PSR_RI; /* clear exception slot number */
+ goto done;
+ }
+ die_if_kernel("error during unaligned kernel access\n", regs, ret);
+ /* NOT_REACHED */
+ }
+ force_sigbus:
+ si.si_signo = SIGBUS;
+ si.si_errno = 0;
+ si.si_code = BUS_ADRALN;
+ si.si_addr = (void *) ifa;
+ force_sig_info(SIGBUS, &si, current);
+ goto done;
}
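[Editor's note: the slot-extraction switch in the new ia64_handle_unaligned() above can be exercised in plain user-space C. The sketch below is illustrative only — the helper name bundle_slot is invented here — and assumes the bundle is held as two little-endian 64-bit words with the 5-bit template in bits 0-4, matching the shifts used in the handler.]

```c
#include <assert.h>
#include <stdint.h>

/* Extract the 41-bit instruction in slot 0..2 of a 128-bit IA-64 bundle.
 * Slot 0 sits in bits 5..45 of the low word, slot 1 straddles both words
 * (18 bits from the low word, 23 from the high word), and slot 2 sits in
 * bits 23..63 of the high word.  This mirrors the switch on ipsr->ri. */
static uint64_t bundle_slot(const uint64_t b[2], int slot)
{
	const uint64_t mask41 = (1ULL << 41) - 1;

	switch (slot) {
	case 0: return (b[0] >> 5) & mask41;
	case 1: return ((b[0] >> 46) | (b[1] << 18)) & mask41;
	case 2: return (b[1] >> 23) & mask41;
	default: return 0;
	}
}
```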
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.0-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/kernel/unwind.c Thu Jan 25 17:22:06 2001
@@ -306,7 +306,7 @@
}
} else {
/* access a stacked register */
- addr = ia64_rse_skip_regs((unsigned long *) info->bsp, regnum);
+ addr = ia64_rse_skip_regs((unsigned long *) info->bsp, regnum - 32);
nat_addr = ia64_rse_rnat_addr(addr);
if ((unsigned long) addr < info->regstk.limit
|| (unsigned long) addr >= info->regstk.top)
diff -urN linux-davidm/arch/ia64/lib/copy_user.S linux-2.4.0-lia/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/lib/copy_user.S Thu Jan 25 17:22:15 2001
@@ -319,6 +319,7 @@
EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
+ mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs
br.ret.dptk.few rp
diff -urN linux-davidm/arch/ia64/lib/swiotlb.c linux-2.4.0-lia/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Thu Jan 25 19:17:25 2001
+++ linux-2.4.0-lia/arch/ia64/lib/swiotlb.c Thu Jan 25 17:22:25 2001
@@ -10,8 +10,6 @@
* unnecessary i-cache flushing.
*/
-#include <linux/config.h>
-
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/pci.h>
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.0-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/mm/init.c Thu Jan 25 17:22:35 2001
@@ -23,6 +23,7 @@
#include <asm/pgalloc.h>
#include <asm/sal.h>
#include <asm/system.h>
+#include <asm/uaccess.h>
/* References to section boundaries: */
extern char _stext, _etext, _edata, __init_begin, __init_end;
@@ -37,6 +38,8 @@
extern void ia64_tlb_init (void);
+unsigned long MAX_DMA_ADDRESS = PAGE_OFFSET + 0x100000000UL;
+
static unsigned long totalram_pages;
/*
@@ -356,7 +359,7 @@
# define vmlpt_bits (impl_va_bits - PAGE_SHIFT + pte_bits)
# define POW2(n) (1ULL << (n))
- impl_va_bits = ffz(~my_cpu_data.unimpl_va_mask);
+ impl_va_bits = ffz(~(my_cpu_data.unimpl_va_mask | (7UL << 61)));
if (impl_va_bits < 51 || impl_va_bits > 61)
panic("CPU has bogus IMPL_VA_MSB value of %lu!\n", impl_va_bits - 1);
@@ -390,7 +393,7 @@
memset(zones_size, 0, sizeof(zones_size));
- max_dma = (PAGE_ALIGN(MAX_DMA_ADDRESS) >> PAGE_SHIFT);
+ max_dma = virt_to_phys(MAX_DMA_ADDRESS) >> PAGE_SHIFT;
if (max_low_pfn < max_dma)
zones_size[ZONE_DMA] = max_low_pfn;
else {
diff -urN linux-davidm/arch/ia64/sn/fprom/fw-emu.c linux-2.4.0-lia/arch/ia64/sn/fprom/fw-emu.c
--- linux-davidm/arch/ia64/sn/fprom/fw-emu.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.0-lia/arch/ia64/sn/fprom/fw-emu.c Wed Dec 13 18:59:33 2000
@@ -8,6 +8,8 @@
* Copyright (C) 2000 Silicon Graphics, Inc.
* Copyright (C) 2000 by Jack Steiner (steiner@sgi.com)
*/
+#include <linux/config.h>
+
#include <asm/efi.h>
#include <asm/pal.h>
#include <asm/sal.h>
diff -urN linux-davidm/drivers/char/mem.c linux-2.4.0-lia/drivers/char/mem.c
--- linux-davidm/drivers/char/mem.c Thu Jan 25 19:17:25 2001
+++ linux-2.4.0-lia/drivers/char/mem.c Thu Jan 25 17:43:34 2001
@@ -484,13 +484,15 @@
switch (orig) {
case 0:
file->f_pos = offset;
- return file->f_pos;
+ break;
case 1:
file->f_pos += offset;
- return file->f_pos;
+ break;
default:
return -EINVAL;
}
+ force_successful_syscall_return();
+ return file->f_pos;
}
static int open_port(struct inode * inode, struct file * filp)
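[Editor's note: the whence handling established by the hunk above — replace the position for SEEK_SET, adjust it for SEEK_CUR, reject everything else, and only report success at the end — can be sketched in user-space C. The function name and the bare -22 errno constant are stand-ins, not kernel API.]

```c
#include <assert.h>

/* Minimal sketch of the lseek() switch above.  The breaks matter:
 * without them, every case falls through to the -EINVAL return. */
static long sketch_lseek(long *pos, long offset, int whence)
{
	switch (whence) {
	case 0:			/* SEEK_SET */
		*pos = offset;
		break;
	case 1:			/* SEEK_CUR */
		*pos += offset;
		break;
	default:
		return -22;	/* -EINVAL */
	}
	return *pos;
}
```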
diff -urN linux-davidm/drivers/scsi/qla1280.c linux-2.4.0-lia/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Thu Jan 25 19:17:26 2001
+++ linux-2.4.0-lia/drivers/scsi/qla1280.c Thu Jan 25 17:43:56 2001
@@ -16,9 +16,23 @@
* General Public License for more details.
**
******************************************************************************/
-#define QLA1280_VERSION "3.21 Beta"
+#define QLA1280_VERSION "3.23 Beta"
/****************************************************************************
Revision History:
+ Rev 3.23 Beta January 11, 2001 BN Qlogic
+ - Added check of device_id when handling non
+ QLA12160s during detect().
+ Rev 3.22 Beta January 5, 2001 BN Qlogic
+ - Changed queue_task() to schedule_task()
+ for kernels 2.4.0 and higher.
+ Note: 2.4.0-testxx kernels released prior to
+ the actual 2.4.0 kernel release on January 2001
+ will get compile/link errors with schedule_task().
+ Please update your kernel to released 2.4.0 level,
+ or comment lines in this file flagged with 3.22
+ to resolve compile/link error of schedule_task().
+ - Added -DCONFIG_SMP in addition to -D__SMP__
+ in Makefile for 2.4.0 builds of driver as module.
Rev 3.21 Beta January 4, 2001 BN Qlogic
- Changed criteria of 64/32 Bit mode of HBA
operation according to BITS_PER_LONG rather
@@ -521,10 +535,10 @@
scsi_qla_host_t *ha;
int size = 0;
scsi_lu_t *up;
- int len = 0;
- qla_boards_t *bdp;
- uint32_t b, t, l;
-
+ int len = 0;
+ qla_boards_t *bdp;
+ uint32_t b, t, l;
+ uint8_t *temp;
host = NULL;
/* Find the host that was specified */
@@ -570,6 +584,10 @@
/* save the size of our buffer */
qla1280_buffer_size = size;
+ /* 3.20 clear the buffer we use for proc display */
+ temp = qla1280_buffer;
+ for (b=0 ; b < size; b++) *(temp+b) = 0;
+
/* start building the print buffer */
bdp = &QL1280BoardTbl[ha->devnum];
size = sprintf(PROC_BUF,
@@ -580,7 +598,7 @@
size = sprintf(PROC_BUF, "SCSI Host Adapter Information: %s\n", bdp->bdName);
len += size;
- size = sprintf(PROC_BUF, "Request Queue = 0x%lx, Response Queue = 0x%lx\n",
+ size = sprintf(PROC_BUF, "Request Queue = 0x%x, Response Queue = 0x%x\n",
ha->request_dma,
ha->response_dma);
len += size;
@@ -754,21 +772,19 @@
#endif
/* 3.20 */
- /* present the on-board ISP12160 for IA-64 Lion systems
- first to the OS; to preserve boot drive access in case another
- QLA12160 is inserted in the PCI slots */
+ /* First Initialize QLA12160 on PCI Bus 1 Dev 2 */
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,18)
while ((pdev = pci_find_subsys(QLA1280_VENDOR_ID,
bdp->device_id, /* QLA12160 first in list */
PCI_ANY_ID,
PCI_ANY_ID,pdev))) {
- /* only interested here on devices on PCI bus=1 slot=2 */
+ /* find QLA12160 device on PCI bus=1 slot=2 */
if ((pdev->bus->number != 1) ||
(PCI_SLOT(pdev->devfn) != 2)) continue;
if (pci_enable_device(pdev)) goto find_devices;
- printk("qla1x160: Initializing IA-64 ISP12160\n");
+ printk("qla1x160: Initializing ISP12160 on PCI Bus 1, Dev 2\n");
host = scsi_register(template, sizeof(scsi_qla_host_t));
ha = (scsi_qla_host_t *) host->hostdata;
/* Clear our data area */
@@ -801,7 +817,7 @@
ha->instance = num_hosts;
if (qla1280_initialize_adapter(ha))
{
- printk(KERN_INFO "qla1x160: Failed to initialize onboard ISP12160 on IA-64 \n");
+ printk(KERN_INFO "qla1x160: Failed to initialize QLA12160 on PCI Bus 1 Dev 2 \n");
qla1280_mem_free(ha);
scsi_unregister(host);
goto find_devices;
@@ -809,7 +825,7 @@
host->max_channel = bdp->numPorts-1;
/* Register our resources with Linux */
if( qla1280_register_with_Linux(ha, bdp->numPorts-1) ) {
- printk(KERN_INFO "qla1x160: Failed to register resources for onboard ISP12160 on IA-64\n");
+ printk(KERN_INFO "qla1x160: Failed to register resources for QLA12160 on PCI Bus 1 Dev 2\n");
qla1280_mem_free(ha);
scsi_unregister(host);
goto find_devices;
@@ -863,9 +879,11 @@
continue;
}
- /* 3.20 skip IA-64 Lion on-board ISP12160 */
+ /* 3.20 and 3.23 */
+ /* skip QLA12160 already initialized on PCI Bus 1 Dev 2 */
/* since we already initialized and presented it */
- if ((pdev->bus->number == 1) &&
+ if ((bdp->device_id == QLA12160_DEVICE_ID) &&
+ (pdev->bus->number == 1) &&
(PCI_SLOT(pdev->devfn) == 2)) continue;
printk("qla1x160: Supported Device Found VID=%x DID=%x SSVID=%x SSDID=%x\n",
@@ -1165,7 +1183,12 @@
{
CMD_RESULT(cmd) = (int) (DID_BUS_BUSY << 16);
qla1280_done_q_put(sp, &ha->done_q_first, &ha->done_q_last);
- schedule_task(&ha->run_qla_bh);
+/* 3.22 */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,0) /* 3.22 */
+ queue_task(&ha->run_qla_bh,&tq_scheduler);
+#else /* 3.22 */
+ schedule_task(&ha->run_qla_bh); /* 3.22 */
+#endif /* 3.22 */
ha->flags.dpc_sched = TRUE;
DRIVER_UNLOCK
return(0);
@@ -1663,7 +1686,7 @@
ha->run_qla_bh.routine = qla1280_do_dpc;
COMTRACE('P')
- schedule_task(&ha->run_qla_bh);
+ queue_task_irq(&ha->run_qla_bh,&tq_scheduler);
ha->flags.dpc_sched = TRUE;
}
clear_bit(QLA1280_IN_ISR_BIT, (int *)&ha->flags);
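[Editor's note: the LINUX_VERSION_CODE conditionals in the qla1280 hunks above rely on the kernel's KERNEL_VERSION encoding, which packs major/minor/patch into one directly comparable integer. A minimal sketch of that encoding:]

```c
#include <assert.h>

/* KERNEL_VERSION packs (major, minor, patch) into a single integer,
 * (major << 16) + (minor << 8) + patch, so version checks reduce to
 * plain integer comparisons.  This is why the driver's
 * "LINUX_VERSION_CODE < KERNEL_VERSION(2,4,0)" test selects the old
 * queue_task() interface on pre-2.4.0 kernels. */
#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))
```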
diff -urN linux-davidm/fs/proc/base.c linux-2.4.0-lia/fs/proc/base.c
--- linux-davidm/fs/proc/base.c Thu Nov 16 13:18:26 2000
+++ linux-2.4.0-lia/fs/proc/base.c Thu Jan 25 17:45:12 2001
@@ -395,7 +395,24 @@
}
#endif
+static loff_t mem_lseek(struct file * file, loff_t offset, int orig)
+{
+ switch (orig) {
+ case 0:
+ file->f_pos = offset;
+ break;
+ case 1:
+ file->f_pos += offset;
+ break;
+ default:
+ return -EINVAL;
+ }
+ force_successful_syscall_return();
+ return file->f_pos;
+}
+
static struct file_operations proc_mem_operations = {
+ llseek: mem_lseek,
read: mem_read,
write: mem_write,
};
diff -urN linux-davidm/include/asm-ia64/acpikcfg.h linux-2.4.0-lia/include/asm-ia64/acpikcfg.h
--- linux-davidm/include/asm-ia64/acpikcfg.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.0-lia/include/asm-ia64/acpikcfg.h Thu Jan 25 17:47:22 2001
@@ -1,4 +1,5 @@
#include <linux/config.h>
+
#ifdef CONFIG_ACPI_KERNEL_CONFIG
/*
* acpikcfg.h - ACPI based Kernel Configuration Manager External Interfaces
diff -urN linux-davidm/include/asm-ia64/bitops.h linux-2.4.0-lia/include/asm-ia64/bitops.h
--- linux-davidm/include/asm-ia64/bitops.h Mon Oct 9 17:54:57 2000
+++ linux-2.4.0-lia/include/asm-ia64/bitops.h Thu Jan 25 17:51:43 2001
@@ -158,6 +158,7 @@
__asm__ ("getf.exp %0=%1" : "=r"(exp) : "f"(d));
return exp - 0xffff;
}
+
/*
* ffs: find first bit set. This is defined the same way as
* the libc and compiler builtin ffs routines, therefore
diff -urN linux-davidm/include/asm-ia64/dma.h linux-2.4.0-lia/include/asm-ia64/dma.h
--- linux-davidm/include/asm-ia64/dma.h Thu Jun 22 07:09:45 2000
+++ linux-2.4.0-lia/include/asm-ia64/dma.h Thu Jan 25 17:51:51 2001
@@ -2,35 +2,20 @@
#define _ASM_IA64_DMA_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
-#include <linux/spinlock.h> /* And spinlocks */
-#include <linux/delay.h>
#include <asm/io.h> /* need byte IO */
-#ifdef HAVE_REALLY_SLOW_DMA_CONTROLLER
-#define dma_outb outb_p
-#else
-#define dma_outb outb
-#endif
-
-#define dma_inb inb
-
-#define MAX_DMA_CHANNELS 8
-#define MAX_DMA_ADDRESS 0xffffffffUL
-
-extern spinlock_t dma_spin_lock;
-
-/* From PCI */
+extern unsigned long MAX_DMA_ADDRESS;
#ifdef CONFIG_PCI
-extern int isa_dma_bridge_buggy;
+ extern int isa_dma_bridge_buggy;
#else
-#define isa_dma_bridge_buggy (0)
+# define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_IA64_DMA_H */
diff -urN linux-davidm/include/asm-ia64/ia32.h linux-2.4.0-lia/include/asm-ia64/ia32.h
--- linux-davidm/include/asm-ia64/ia32.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.0-lia/include/asm-ia64/ia32.h Thu Jan 25 17:52:10 2001
@@ -378,9 +378,10 @@
ia64_psr(regs)->ri = 0; /* clear return slot number */ \
ia64_psr(regs)->is = 1; /* IA-32 instruction set */ \
regs->cr_iip = new_ip; \
- regs->r12 = new_sp; \
+ regs->ar_rsc = 0xc; /* enforced lazy mode, priv. level 3 */ \
regs->ar_rnat = 0; \
regs->loadrs = 0; \
+ regs->r12 = new_sp; \
} while (0)
extern void ia32_gdt_init (void);
diff -urN linux-davidm/include/asm-ia64/mmu_context.h linux-2.4.0-lia/include/asm-ia64/mmu_context.h
--- linux-davidm/include/asm-ia64/mmu_context.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.0-lia/include/asm-ia64/mmu_context.h Thu Jan 25 17:59:55 2001
@@ -12,21 +12,14 @@
#include <asm/processor.h>
/*
- * Routines to manage the allocation of task context numbers. Task
- * context numbers are used to reduce or eliminate the need to perform
- * TLB flushes due to context switches. Context numbers are
- * implemented using ia-64 region ids. Since ia-64 TLBs do not
- * guarantee that the region number is checked when performing a TLB
- * lookup, we need to assign a unique region id to each region in a
- * process. We use the least significant three bits in a region id
- * for this purpose. On processors where the region number is checked
- * in TLB lookups, we can get back those two bits by defining
- * CONFIG_IA64_TLB_CHECKS_REGION_NUMBER. The macro
- * IA64_REGION_ID_BITS gives the number of bits in a region id. The
- * architecture manual guarantees this number to be in the range
- * 18-24.
+ * Routines to manage the allocation of task context numbers. Task context numbers are
+ * used to reduce or eliminate the need to perform TLB flushes due to context switches.
+ * Context numbers are implemented using ia-64 region ids. Since the IA-64 TLB does not
+ * consider the region number when performing a TLB lookup, we need to assign a unique
+ * region id to each region in a process. We use the least significant three bits in a
+ * region id for this purpose.
*
- * Copyright (C) 1998 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#define IA64_REGION_ID_KERNEL 0 /* the kernel's region id (tlb.c depends on this being 0) */
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.0-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Thu Jan 25 19:17:27 2001
+++ linux-2.4.0-lia/include/asm-ia64/offsets.h Thu Jan 25 18:00:03 2001
@@ -1,14 +1,17 @@
#ifndef _ASM_IA64_OFFSETS_H
#define _ASM_IA64_OFFSETS_H
+
/*
* DO NOT MODIFY
*
- * This file was generated by arch/ia64/tools/print_offsets.awk.
+ * This file was generated by arch/ia64/tools/print_offsets.
*
*/
-#define PT_PTRACED_BIT 0
-#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 3408 /* 0xd50 */
+
+#define PT_PTRACED_BIT 0
+#define PT_TRACESYS_BIT 1
+
+#define IA64_TASK_SIZE 3376 /* 0xd30 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -18,10 +21,9 @@
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 960 /* 0x3c0 */
-#define IA64_TASK_THREAD_KSP_OFFSET 960 /* 0x3c0 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 3256 /* 0xcb8 */
-#define IA64_TASK_PFM_NOTIFY 3152 /* 0xc50 */
+#define IA64_TASK_THREAD_OFFSET 1456 /* 0x5b0 */
+#define IA64_TASK_THREAD_KSP_OFFSET 1456 /* 0x5b0 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 3224 /* 0xc98 */
#define IA64_TASK_PID_OFFSET 196 /* 0xc4 */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
@@ -71,7 +73,7 @@
#define IA64_PT_REGS_F8_OFFSET 368 /* 0x170 */
#define IA64_PT_REGS_F9_OFFSET 384 /* 0x180 */
#define IA64_SWITCH_STACK_CALLER_UNAT_OFFSET 0 /* 0x0 */
-#define IA64_SWITCH_STACK_AR_FPSR_OFFSET 8 /* 0x8 */
+#define IA64_SWITCH_STACK_AR_FPSR_OFFSET 8 /* 0x8 */
#define IA64_SWITCH_STACK_F2_OFFSET 16 /* 0x10 */
#define IA64_SWITCH_STACK_F3_OFFSET 32 /* 0x20 */
#define IA64_SWITCH_STACK_F4_OFFSET 48 /* 0x30 */
@@ -110,8 +112,8 @@
#define IA64_SWITCH_STACK_B5_OFFSET 504 /* 0x1f8 */
#define IA64_SWITCH_STACK_AR_PFS_OFFSET 512 /* 0x200 */
#define IA64_SWITCH_STACK_AR_LC_OFFSET 520 /* 0x208 */
-#define IA64_SWITCH_STACK_AR_UNAT_OFFSET 528 /* 0x210 */
-#define IA64_SWITCH_STACK_AR_RNAT_OFFSET 536 /* 0x218 */
+#define IA64_SWITCH_STACK_AR_UNAT_OFFSET 528 /* 0x210 */
+#define IA64_SWITCH_STACK_AR_RNAT_OFFSET 536 /* 0x218 */
#define IA64_SWITCH_STACK_AR_BSPSTORE_OFFSET 544 /* 0x220 */
#define IA64_SWITCH_STACK_PR_OFFSET 552 /* 0x228 */
#define IA64_SIGCONTEXT_AR_BSP_OFFSET 72 /* 0x48 */
@@ -119,7 +121,7 @@
#define IA64_SIGCONTEXT_FLAGS_OFFSET 0 /* 0x0 */
#define IA64_SIGCONTEXT_CFM_OFFSET 48 /* 0x30 */
#define IA64_SIGCONTEXT_FR6_OFFSET 560 /* 0x230 */
-#define IA64_CLONE_VFORK 16384 /* 0x4000 */
+#define IA64_CLONE_VFORK 16384 /* 0x4000 */
#define IA64_CLONE_VM 256 /* 0x100 */
#endif /* _ASM_IA64_OFFSETS_H */
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.4.0-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Thu Jan 25 19:17:27 2001
+++ linux-2.4.0-lia/include/asm-ia64/pgtable.h Thu Jan 25 18:00:19 2001
@@ -8,11 +8,12 @@
* This hopefully works with any (fixed) IA-64 page-size, as defined
* in <asm/page.h> (currently 8192).
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
+
#include <asm/mman.h>
#include <asm/page.h>
#include <asm/processor.h>
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.0-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Thu Jan 25 19:17:27 2001
+++ linux-2.4.0-lia/include/asm-ia64/processor.h Thu Jan 25 18:00:30 2001
@@ -230,9 +230,12 @@
#define IA64_USEC_PER_CYC_SHIFT 41
/*
- * CPU type, hardware bug flags, and per-CPU state.
+ * CPU type, hardware bug flags, and per-CPU state. Frequently used
+ * state comes earlier:
*/
struct cpuinfo_ia64 {
+ __u64 itm_delta; /* # of clock cycles between clock ticks */
+ __u64 itm_next; /* interval timer mask value to use for next clock tick */
__u64 *pgd_quick;
__u64 *pmd_quick;
__u64 *pte_quick;
@@ -353,10 +356,10 @@
ia64_psr(regs)->is = 0; /* IA-64 instruction set */ \
regs->cr_iip = new_ip; \
regs->ar_rsc = 0xf; /* eager mode, privilege level 3 */ \
- regs->r12 = new_sp - 16; /* allocate 16 byte scratch area */ \
- regs->ar_bspstore = IA64_RBS_BOT; \
regs->ar_rnat = 0; \
+ regs->ar_bspstore = IA64_RBS_BOT; \
regs->loadrs = 0; \
+ regs->r12 = new_sp - 16; /* allocate 16 byte scratch area */ \
} while (0)
/* Forward declarations, a strange C thing... */
diff -urN linux-davidm/include/asm-ia64/segment.h linux-2.4.0-lia/include/asm-ia64/segment.h
--- linux-davidm/include/asm-ia64/segment.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.0-lia/include/asm-ia64/segment.h Thu Jan 25 18:00:39 2001
@@ -3,4 +3,4 @@
/* Only here because we have some old header files that expect it.. */
-#endif /* __ALPHA_SEGMENT_H */
+#endif /* _ASM_IA64_SEGMENT_H */
diff -urN linux-davidm/include/asm-ia64/sn/addrs.h linux-2.4.0-lia/include/asm-ia64/sn/addrs.h
--- linux-davidm/include/asm-ia64/sn/addrs.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.0-lia/include/asm-ia64/sn/addrs.h Thu Jan 25 18:00:55 2001
@@ -11,6 +11,7 @@
#define _ASM_SN_ADDRS_H
#include <linux/config.h>
+
#if _LANGUAGE_C
#include <linux/types.h>
#endif /* _LANGUAGE_C */
diff -urN linux-davidm/include/asm-ia64/sn/agent.h linux-2.4.0-lia/include/asm-ia64/sn/agent.h
--- linux-davidm/include/asm-ia64/sn/agent.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.0-lia/include/asm-ia64/sn/agent.h Thu Jan 25 18:01:03 2001
@@ -13,6 +13,7 @@
#define _ASM_SGI_SN_AGENT_H
#include <linux/config.h>
+
#include <asm/sn/addrs.h>
#include <asm/sn/arch.h>
//#include <asm/sn/io.h>
diff -urN linux-davidm/include/asm-ia64/sn/mmzone_sn1.h linux-2.4.0-lia/include/asm-ia64/sn/mmzone_sn1.h
--- linux-davidm/include/asm-ia64/sn/mmzone_sn1.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.0-lia/include/asm-ia64/sn/mmzone_sn1.h Thu Jan 25 18:19:46 2001
@@ -1,11 +1,11 @@
#ifndef _ASM_IA64_MMZONE_SN1_H
#define _ASM_IA64_MMZONE_SN1_H
+#include <linux/config.h>
+
/*
* Copyright, 2000, Silicon Graphics, sprasad@engr.sgi.com
*/
-
-#include <linux/config.h>
/* Maximum configuration supported by SNIA hardware. There are other
* restrictions that may limit us to a smaller max configuration.
diff -urN linux-davidm/include/asm-ia64/timex.h linux-2.4.0-lia/include/asm-ia64/timex.h
--- linux-davidm/include/asm-ia64/timex.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.0-lia/include/asm-ia64/timex.h Thu Jan 25 18:21:38 2001
@@ -2,14 +2,15 @@
#define _ASM_IA64_TIMEX_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+/*
+ * 2001/01/18 davidm Removed CLOCK_TICK_RATE. It makes no sense on IA-64.
+ * Also removed cacheflush_time as it's entirely unused.
*/
-
-#define CLOCK_TICK_RATE 1193180 /* Underlying HZ XXX fix me! */
typedef unsigned long cycles_t;
-extern cycles_t cacheflush_time;
static inline cycles_t
get_cycles (void)
diff -urN linux-davidm/kernel/sched.c linux-2.4.0-lia/kernel/sched.c
--- linux-davidm/kernel/sched.c Thu Jan 4 22:40:21 2001
+++ linux-2.4.0-lia/kernel/sched.c Thu Jan 25 18:23:04 2001
@@ -260,7 +260,7 @@
target_tsk = tsk;
}
} else {
- if (oldest_idle == -1ULL) {
+ if (oldest_idle == (cycles_t) -1) {
int prio = preemption_goodness(tsk, p, cpu);
if (prio > max_prio) {
@@ -272,7 +272,7 @@
}
tsk = target_tsk;
if (tsk) {
- if (oldest_idle != -1ULL) {
+ if (oldest_idle != (cycles_t) -1) {
best_cpu = tsk->processor;
goto send_now_idle;
}
diff -urN linux-davidm/mm/memory.c linux-2.4.0-lia/mm/memory.c
--- linux-davidm/mm/memory.c Thu Jan 25 19:17:27 2001
+++ linux-2.4.0-lia/mm/memory.c Thu Jan 25 18:23:11 2001
@@ -473,7 +473,7 @@
goto out_unlock;
}
}
- if (handle_mm_fault(current->mm, vma, ptr, datain) <= 0)
+ if (handle_mm_fault(current->mm, vma, ptr, datain ? VM_READ : VM_WRITE) <= 0)
goto out_unlock;
spin_lock(&mm->page_table_lock);
map = follow_page(ptr);
@@ -1223,7 +1223,7 @@
if (addr >= end)
BUG();
do {
- if (handle_mm_fault(mm, vma, addr, write) < 0)
+ if (handle_mm_fault(mm, vma, addr, write ? VM_WRITE : VM_READ) < 0)
return -1;
addr += PAGE_SIZE;
} while (addr < end);
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.4.1)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (35 preceding siblings ...)
2001-01-26 4:53 ` David Mosberger
@ 2001-01-31 20:32 ` David Mosberger
2001-03-01 7:12 ` [Linux-ia64] kernel update (relative to 2.4.2) David Mosberger
` (178 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-01-31 20:32 UTC (permalink / raw)
To: linux-ia64
Here is a quick update of last week's patch that brings us in sync
with 2.4.1. As usual, the patch is available at ftp.*.kernel.org in
/pub/linux/kernel/ports/ia64//linux-2.4.1-ia64-010131.diff*.
Summary of changes:
- add Don's IA-32 patch that I missed last week
- add stop bit in integer division that Chuck found to be
missing; as far as I know, this didn't cause any actual
problems on Itanium, but it was clearly wrong
- schedule save_switch_stack/load_switch_stack for Itanium
- Gerrit: fix config.in so it works with "make xconfig" again
- print fpswa fault message as KERN_WARNING; use "dmesg -n 4"
to get rid of it
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.1-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Jan 31 11:46:51 2001
+++ linux-2.4.1-lia/arch/ia64/config.in Wed Jan 31 10:17:50 2001
@@ -37,22 +37,32 @@
16KB CONFIG_IA64_PAGE_SIZE_16KB \
64KB CONFIG_IA64_PAGE_SIZE_64KB" 16KB
-if [ "$CONFIG_IA64_DIG" = "y" ]; then
+if [ "$CONFIG_IA64_DIG" = "y" -o "$CONFIG_IA64_SGI_SN1" = "y" ]; then
define_bool CONFIG_ITANIUM y
define_bool CONFIG_IA64_BRL_EMU y
+fi
+
+if [ "$CONFIG_ITANIUM" = "y" ]; then
bool ' Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
+ fi
+ if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIFIC
+ fi
+ if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIFIC
fi
bool ' Enable Itanium C-step specific code' CONFIG_ITANIUM_CSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
fi
- bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
+fi
+
+if [ "$CONFIG_IA64_DIG" = "y" ]; then
+ bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
bool ' Enable ACPI 2.0 with errata 1.3' CONFIG_ACPI20
@@ -65,11 +75,6 @@
fi
if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
- bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
- bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
- fi
bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM n
define_bool CONFIG_DEVFS_DEBUG y
define_bool CONFIG_DEVFS_FS y
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.1-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Wed Jan 31 11:46:51 2001
+++ linux-2.4.1-lia/arch/ia64/ia32/ia32_entry.S Wed Jan 31 10:18:02 2001
@@ -160,8 +160,8 @@
data8 sys32_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
data8 sys32_ptrace
data8 sys32_alarm
- data8 sys_pause
data8 sys32_ni_syscall
+ data8 sys_pause
data8 ia32_utime /* 30 */
data8 sys32_ni_syscall /* old stty syscall holder */
data8 sys32_ni_syscall /* old gtty syscall holder */
@@ -276,7 +276,7 @@
data8 sys32_getdents
data8 sys32_select
data8 sys_flock
- data8 sys_msync
+ data8 sys32_msync
data8 sys32_readv /* 145 */
data8 sys32_writev
data8 sys_getsid
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.1-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Wed Jan 31 11:46:51 2001
+++ linux-2.4.1-lia/arch/ia64/ia32/sys_ia32.c Wed Jan 31 10:18:14 2001
@@ -655,7 +655,7 @@
ia32_utime(char * filename, struct utimbuf_32 *times32)
{
mm_segment_t old_fs = get_fs();
- struct timeval tv[2];
+ struct timeval tv[2], *tvp;
long ret;
if (times32) {
@@ -664,15 +664,10 @@
get_user(tv[1].tv_sec, &times32->mtime);
tv[1].tv_usec = 0;
set_fs (KERNEL_DS);
- } else {
- set_fs (KERNEL_DS);
- ret = sys_gettimeofday(&tv[0], 0);
- if (ret < 0)
- goto out;
- tv[1] = tv[0];
- }
- ret = sys_utimes(filename, tv);
- out:
+ tvp = tv;
+ } else
+ tvp = NULL;
+ ret = sys_utimes(filename, tvp);
set_fs (old_fs);
return ret;
}
@@ -2497,7 +2492,7 @@
case F_GETLK:
case F_SETLK:
case F_SETLKW:
- if(cmd != F_GETLK && get_flock32(&f, (struct flock32 *)((long)arg)))
+ if(get_flock32(&f, (struct flock32 *)((long)arg)))
return -EFAULT;
old_fs = get_fs();
set_fs(KERNEL_DS);
@@ -2667,14 +2662,25 @@
return(ret);
}
-asmlinkage long sys_msync(unsigned long start, size_t len, int flags);
-
asmlinkage int
sys_pause (void)
{
current->state = TASK_INTERRUPTIBLE;
schedule();
return -ERESTARTNOHAND;
+}
+
+asmlinkage long sys_msync(unsigned long start, size_t len, int flags);
+
+asmlinkage int
+sys32_msync(unsigned int start, unsigned int len, int flags)
+{
+ unsigned int addr;
+
+ if (OFFSET4K(start))
+ return -EINVAL;
+ addr = start & PAGE_MASK;
+ return(sys_msync(addr, len + (start - addr), flags));
}
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
diff -urN linux-davidm/arch/ia64/kernel/efi_stub.S linux-2.4.1-lia/arch/ia64/kernel/efi_stub.S
--- linux-davidm/arch/ia64/kernel/efi_stub.S Fri Jul 14 16:08:11 2000
+++ linux-2.4.1-lia/arch/ia64/kernel/efi_stub.S Wed Jan 31 10:18:45 2001
@@ -58,7 +58,7 @@
;;
mov loc2=gp // save global pointer
mov loc4=ar.rsc // save RSE configuration
- mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov ar.rsc=0 // put RSE in enforced lazy, LE mode
;;
ld8 gp=[in0] // load EFI function's global pointer
@@ -80,7 +80,7 @@
mov out5=in6
mov out6=in7
br.call.sptk.few rp=b6 // call the EFI function
-.ret1: mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+.ret1: mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3
br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
.ret2: mov ar.rsc=loc4 // restore RSE configuration
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.1-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Wed Jan 31 11:46:51 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/entry.S Wed Jan 31 10:19:12 2001
@@ -208,92 +208,102 @@
UNW(.prologue)
UNW(.altrp b7)
flushrs // flush dirty regs to backing store (must be first in insn group)
+ UNW(.save @priunat,r17)
mov r17=ar.unat // preserve caller's
- adds r2=16,sp // r2 = &sw->caller_unat
+ UNW(.body)
+#if !defined(CONFIG_ITANIUM_ASTEP_SPECIFIC)
+ adds r3=80,sp
;;
- mov r18=ar.fpsr // preserve fpsr
- mov ar.rsc=r0 // put RSE in mode: enforced lazy, little endian, pl 0
+ lfetch.fault.excl.nt1 [r3],128
+#endif
+ mov ar.rsc=0 // put RSE in mode: enforced lazy, little endian, pl 0
+#if !defined(CONFIG_ITANIUM_ASTEP_SPECIFIC)
+ adds r2=16+128,sp
;;
- mov r19=ar.rnat
- adds r3=24,sp // r3 = &sw->ar_fpsr
+ lfetch.fault.excl.nt1 [r2],128
+ lfetch.fault.excl.nt1 [r3],128
+#endif
+ adds r14=SW(R4)+16,sp
+#if !defined(CONFIG_ITANIUM_ASTEP_SPECIFIC)
+ ;;
+ lfetch.fault.excl [r2]
+ lfetch.fault.excl [r3]
+#endif
+ adds r15=SW(R5)+16,sp
;;
- .savesp ar.unat,SW(CALLER_UNAT)
- st8 [r2]=r17,16
- .savesp ar.fpsr,SW(AR_FPSR)
- st8 [r3]=r18,24
+ mov r18=ar.fpsr // preserve fpsr
+ mov r19=ar.rnat
+ add r2=SW(F2)+16,sp // r2 = &sw->f2
+.mem.offset 0,0; st8.spill [r14]=r4,16 // spill r4
+.mem.offset 8,0; st8.spill [r15]=r5,16 // spill r5
+ add r3=SW(F3)+16,sp // r3 = &sw->f3
;;
- UNW(.body)
stf.spill [r2]=f2,32
stf.spill [r3]=f3,32
mov r21=b0
+.mem.offset 0,0; st8.spill [r14]=r6,16 // spill r6
+.mem.offset 8,0; st8.spill [r15]=r7,16 // spill r7
+ mov r22=b1
;;
+ // since we're done with the spills, read and save ar.unat:
+ mov r29=ar.unat // M-unit
+ mov r20=ar.bspstore // M-unit
+ mov r23=b2
stf.spill [r2]=f4,32
stf.spill [r3]=f5,32
+ mov r24=b3
;;
+ st8 [r14]=r21,16 // save b0
+ st8 [r15]=r22,16 // save b1
+ mov r25=b4
stf.spill [r2]=f10,32
stf.spill [r3]=f11,32
- mov r22=b1
+ mov r26=b5
;;
+ st8 [r14]=r23,16 // save b2
+ st8 [r15]=r24,16 // save b3
+ mov r21=ar.lc // I-unit
stf.spill [r2]=f12,32
stf.spill [r3]=f13,32
- mov r23=b2
;;
+ st8 [r14]=r25,16 // save b4
+ st8 [r15]=r26,16 // save b5
stf.spill [r2]=f14,32
stf.spill [r3]=f15,32
- mov r24=b3
;;
+ st8 [r14]=r16 // save ar.pfs
+ st8 [r15]=r21 // save ar.lc
stf.spill [r2]=f16,32
stf.spill [r3]=f17,32
- mov r25=b4
;;
stf.spill [r2]=f18,32
stf.spill [r3]=f19,32
- mov r26=b5
;;
stf.spill [r2]=f20,32
stf.spill [r3]=f21,32
- mov r17=ar.lc // I-unit
;;
stf.spill [r2]=f22,32
stf.spill [r3]=f23,32
;;
stf.spill [r2]=f24,32
stf.spill [r3]=f25,32
+ add r14=SW(CALLER_UNAT)+16,sp
;;
stf.spill [r2]=f26,32
stf.spill [r3]=f27,32
+ add r15=SW(AR_FPSR)+16,sp
;;
stf.spill [r2]=f28,32
stf.spill [r3]=f29,32
- ;;
- stf.spill [r2]=f30,32
- stf.spill [r3]=f31,24
- ;;
-.mem.offset 0,0; st8.spill [r2]=r4,16
-.mem.offset 8,0; st8.spill [r3]=r5,16
- ;;
-.mem.offset 0,0; st8.spill [r2]=r6,16
-.mem.offset 8,0; st8.spill [r3]=r7,16
- ;;
- st8 [r2]=r21,16 // save b0
- st8 [r3]=r22,16 // save b1
- /* since we're done with the spills, read and save ar.unat: */
- mov r18=ar.unat // M-unit
- mov r20=ar.bspstore // M-unit
- ;;
- st8 [r2]=r23,16 // save b2
- st8 [r3]=r24,16 // save b3
- ;;
- st8 [r2]=r25,16 // save b4
- st8 [r3]=r26,16 // save b5
- ;;
- st8 [r2]=r16,16 // save ar.pfs
- st8 [r3]=r17,16 // save ar.lc
+ st8 [r14]=r17 // save caller_unat
+ st8 [r15]=r18 // save fpsr
mov r21=pr
;;
- st8 [r2]=r18,16 // save ar.unat
+ stf.spill [r2]=f30,(SW(AR_UNAT)-SW(F30))
+ stf.spill [r3]=f31,(SW(AR_RNAT)-SW(F31))
+ ;;
+ st8 [r2]=r29,16 // save ar.unat
st8 [r3]=r19,16 // save ar.rnat
- mov b7=r28
;;
st8 [r2]=r20 // save ar.bspstore
st8 [r3]=r21 // save predicate registers
@@ -303,16 +313,25 @@
/*
* load_switch_stack:
+ * - "invala" MUST be done at call site (normally in DO_LOAD_SWITCH_STACK)
* - b7 holds address to return to
+ * - must not touch r8-r11
*/
ENTRY(load_switch_stack)
UNW(.prologue)
UNW(.altrp b7)
- invala // invalidate ALAT
UNW(.body)
- adds r2=IA64_SWITCH_STACK_B0_OFFSET+16,sp // get pointer to switch_stack.b0
- mov ar.rsc=r0 // put RSE into enforced lazy mode
- adds r3=IA64_SWITCH_STACK_B0_OFFSET+24,sp // get pointer to switch_stack.b1
+#if !defined(CONFIG_ITANIUM_ASTEP_SPECIFIC)
+ lfetch.fault.nt1 [sp]
+#endif
+ adds r2=SW(AR_BSPSTORE)+16,sp
+ adds r3=SW(AR_UNAT)+16,sp
+ mov ar.rsc=0 // put RSE into enforced lazy mode
+ adds r14=SW(CALLER_UNAT)+16,sp
+ adds r15=SW(AR_FPSR)+16,sp
+ ;;
+ ld8 r27=[r2],(SW(B0)-SW(AR_BSPSTORE)) // bspstore
+ ld8 r29=[r3],(SW(B1)-SW(AR_UNAT)) // unat
;;
ld8 r21=[r2],16 // restore b0
ld8 r22=[r3],16 // restore b1
@@ -323,84 +342,77 @@
ld8 r25=[r2],16 // restore b4
ld8 r26=[r3],16 // restore b5
;;
- ld8 r16=[r2],16 // restore ar.pfs
- ld8 r17=[r3],16 // restore ar.lc
+ ld8 r16=[r2],(SW(PR)-SW(AR_PFS)) // ar.pfs
+ ld8 r17=[r3],(SW(AR_RNAT)-SW(AR_LC)) // ar.lc
;;
- ld8 r18=[r2],16 // restore ar.unat
- ld8 r19=[r3],16 // restore ar.rnat
- mov b0=r21
+ ld8 r28=[r2] // restore pr
+ ld8 r30=[r3] // restore rnat
;;
- ld8 r20=[r2] // restore ar.bspstore
- ld8 r21=[r3] // restore predicate registers
- mov ar.pfs=r16
+ ld8 r18=[r14],16 // restore caller's unat
+ ld8 r19=[r15],24 // restore fpsr
;;
- mov ar.bspstore=r20
+ ldf.fill f2=[r14],32
+ ldf.fill f3=[r15],32
;;
- loadrs // invalidate stacked regs outside current frame
- adds r2=16-IA64_SWITCH_STACK_SIZE,r2 // get pointer to switch_stack.caller_unat
- ;; // stop bit for rnat dependency
- mov ar.rnat=r19
- mov ar.unat=r18 // establish unat holding the NaT bits for r4-r7
- adds r3=16-IA64_SWITCH_STACK_SIZE,r3 // get pointer to switch_stack.ar_fpsr
+ ldf.fill f4=[r14],32
+ ldf.fill f5=[r15],32
;;
- ld8 r18=[r2],16 // restore caller's unat
- ld8 r19=[r3],24 // restore fpsr
- mov ar.lc=r17
+ ldf.fill f10=[r14],32
+ ldf.fill f11=[r15],32
;;
- ldf.fill f2=[r2],32
- ldf.fill f3=[r3],32
- mov pr=r21,-1
+ ldf.fill f12=[r14],32
+ ldf.fill f13=[r15],32
;;
- ldf.fill f4=[r2],32
- ldf.fill f5=[r3],32
+ ldf.fill f14=[r14],32
+ ldf.fill f15=[r15],32
;;
- ldf.fill f10=[r2],32
- ldf.fill f11=[r3],32
+ ldf.fill f16=[r14],32
+ ldf.fill f17=[r15],32
+ ;;
+ ldf.fill f18=[r14],32
+ ldf.fill f19=[r15],32
+ mov b0=r21
+ ;;
+ ldf.fill f20=[r14],32
+ ldf.fill f21=[r15],32
mov b1=r22
;;
- ldf.fill f12=[r2],32
- ldf.fill f13=[r3],32
+ ldf.fill f22=[r14],32
+ ldf.fill f23=[r15],32
mov b2=r23
;;
- ldf.fill f14=[r2],32
- ldf.fill f15=[r3],32
+ mov ar.bspstore=r27
+ mov ar.unat=r29 // establish unat holding the NaT bits for r4-r7
mov b3=r24
;;
- ldf.fill f16=[r2],32
- ldf.fill f17=[r3],32
+ ldf.fill f24=[r14],32
+ ldf.fill f25=[r15],32
mov b4=r25
;;
- ldf.fill f18=[r2],32
- ldf.fill f19=[r3],32
+ ldf.fill f26=[r14],32
+ ldf.fill f27=[r15],32
mov b5=r26
;;
- ldf.fill f20=[r2],32
- ldf.fill f21=[r3],32
- ;;
- ldf.fill f22=[r2],32
- ldf.fill f23=[r3],32
- ;;
- ldf.fill f24=[r2],32
- ldf.fill f25=[r3],32
- ;;
- ldf.fill f26=[r2],32
- ldf.fill f27=[r3],32
- ;;
- ldf.fill f28=[r2],32
- ldf.fill f29=[r3],32
+ ldf.fill f28=[r14],32
+ ldf.fill f29=[r15],32
+ mov ar.pfs=r16
;;
- ldf.fill f30=[r2],32
- ldf.fill f31=[r3],24
+ ldf.fill f30=[r14],32
+ ldf.fill f31=[r15],24
+ mov ar.lc=r17
;;
- ld8.fill r4=[r2],16
- ld8.fill r5=[r3],16
+ ld8.fill r4=[r14],16
+ ld8.fill r5=[r15],16
+ mov pr=r28,-1
;;
- ld8.fill r6=[r2],16
- ld8.fill r7=[r3],16
+ ld8.fill r6=[r14],16
+ ld8.fill r7=[r15],16
+
mov ar.unat=r18 // restore caller's unat
+ mov ar.rnat=r30 // must restore after bspstore but before rsc!
mov ar.fpsr=r19 // restore fpsr
mov ar.rsc=3 // put RSE back into eager mode, pl 0
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
END(load_switch_stack)
GLOBAL_ENTRY(__ia64_syscall)
@@ -468,8 +480,8 @@
.ret6: br.call.sptk.few rp=b6 // do the syscall
strace_check_retval:
cmp.lt p6,p0=r8,r0 // syscall failed?
- adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
- adds r3=IA64_PT_REGS_R8_OFFSET+32,sp // r3 = &pt_regs.r10
+ adds r2=PT(R8)+16,sp // r2 = &pt_regs.r8
+ adds r3=PT(R10)+16,sp // r3 = &pt_regs.r10
mov r10=0
(p6) br.cond.sptk.few strace_error // syscall failed ->
;; // avoid RAW on r10
@@ -514,8 +526,8 @@
GLOBAL_ENTRY(ia64_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
cmp.ge p6,p7=r8,r0 // syscall executed successfully?
- adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
- adds r3=IA64_PT_REGS_R8_OFFSET+32,sp // r3 = &pt_regs.r10
+ adds r2=PT(R8)+16,sp // r2 = &pt_regs.r8
+ adds r3=PT(R10)+16,sp // r3 = &pt_regs.r10
;;
.mem.offset 0,0
(p6) st8.spill [r2]=r8 // store return value in slot for r8 and set unat bit
@@ -538,7 +550,6 @@
;;
add r3=r2,r3
#else
- adds r16=IA64_PT_REGS_R8_OFFSET+16,r12
movl r3=irq_stat // softirq_active
#endif
;;
@@ -599,13 +610,13 @@
.ret10:
;;
ssm psr.i
-#endif
+#endif
restore_all:
// start restoring the state saved on the kernel stack (struct pt_regs):
- adds r2=IA64_PT_REGS_R8_OFFSET+16,r12
- adds r3=IA64_PT_REGS_R8_OFFSET+24,r12
+ adds r2=PT(R8)+16,r12
+ adds r3=PT(R9)+16,r12
;;
ld8.fill r8=[r2],16
ld8.fill r9=[r3],16
@@ -936,11 +947,11 @@
;;
adds out0=16,sp // out0 = &sigscratch
br.call.sptk.few rp=ia64_rt_sigreturn
-.ret19: adds sp=16,sp // doesn't drop pt_regs, so don't mark it as restoring sp!
- PT_REGS_UNWIND_INFO(0) // instead, create a new body section with the smaller frame
+.ret19: .restore sp
+ adds sp=16,sp
;;
ld8 r9=[sp] // load new ar.unat
- mov b7=r8
+ MOVBR(.sptk,b7,r8,ia64_leave_kernel)
;;
mov ar.unat=r9
br b7
@@ -949,10 +960,10 @@
PT_REGS_UNWIND_INFO(0)
UNW(.prologue)
UNW(.fframe IA64_PT_REGS_SIZE+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp rp, PT(CR_IIP)+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp ar.pfs, PT(CR_IFS)+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp ar.unat, PT(AR_UNAT)+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp pr, PT(PR)+IA64_SWITCH_STACK_SIZE)
+ UNW(.spillsp rp, PT(CR_IIP)+16+IA64_SWITCH_STACK_SIZE)
+ UNW(.spillsp ar.pfs, PT(CR_IFS)+16+IA64_SWITCH_STACK_SIZE)
+ UNW(.spillsp ar.unat, PT(AR_UNAT)+16+IA64_SWITCH_STACK_SIZE)
+ UNW(.spillsp pr, PT(PR)+16+IA64_SWITCH_STACK_SIZE)
adds sp=-IA64_SWITCH_STACK_SIZE,sp
cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
;;
@@ -960,10 +971,10 @@
adds out0=16,sp // out0 = &sigscratch
br.call.sptk.few rp=ia64_rt_sigreturn
-.ret20: adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+.ret20: adds r3=SW(CALLER_UNAT)+16,sp
;;
ld8 r9=[r3] // load new ar.unat
- mov b7=r8
+ MOVBR(.sptk,b7,r8,ia64_leave_kernel)
;;
PT_REGS_UNWIND_INFO(0)
adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch-stack frame
diff -urN linux-davidm/arch/ia64/kernel/entry.h linux-2.4.1-lia/arch/ia64/kernel/entry.h
--- linux-davidm/arch/ia64/kernel/entry.h Wed Jan 31 11:46:51 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/entry.h Wed Jan 31 10:20:40 2001
@@ -1,3 +1,12 @@
+#include <linux/config.h>
+
+/* XXX fixme */
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
+# define MOVBR(type,br,gr,lbl) mov br=gr
+#else
+# define MOVBR(type,br,gr,lbl) mov##type br=gr,lbl
+#endif
+
/*
* Preserved registers that are shared between code in ivt.S and entry.S. Be
* careful not to step on these!
@@ -6,51 +15,53 @@
#define pSys p4 /* are we processing a (synchronous) system call? */
#define pNonSys p5 /* complement of pSys */
-#define PT(f) (IA64_PT_REGS_##f##_OFFSET + 16)
-#define SW(f) (IA64_SWITCH_STACK_##f##_OFFSET + 16)
+#define PT(f) (IA64_PT_REGS_##f##_OFFSET)
+#define SW(f) (IA64_SWITCH_STACK_##f##_OFFSET)
#define PT_REGS_SAVES(off) \
UNW(.unwabi @svr4, 'i'); \
UNW(.fframe IA64_PT_REGS_SIZE+16+(off)); \
- UNW(.spillsp rp, PT(CR_IIP)+(off)); \
- UNW(.spillsp ar.pfs, PT(CR_IFS)+(off)); \
- UNW(.spillsp ar.unat, PT(AR_UNAT)+(off)); \
- UNW(.spillsp ar.fpsr, PT(AR_FPSR)+(off)); \
- UNW(.spillsp pr, PT(PR)+(off));
+ UNW(.spillsp rp, PT(CR_IIP)+16+(off)); \
+ UNW(.spillsp ar.pfs, PT(CR_IFS)+16+(off)); \
+ UNW(.spillsp ar.unat, PT(AR_UNAT)+16+(off)); \
+ UNW(.spillsp ar.fpsr, PT(AR_FPSR)+16+(off)); \
+ UNW(.spillsp pr, PT(PR)+16+(off));
#define PT_REGS_UNWIND_INFO(off) \
UNW(.prologue); \
PT_REGS_SAVES(off); \
UNW(.body)
-#define SWITCH_STACK_SAVES(off) \
- UNW(.savesp ar.unat,SW(CALLER_UNAT)+(off)); UNW(.savesp ar.fpsr,SW(AR_FPSR)+(off)); \
- UNW(.spillsp f2,SW(F2)+(off)); UNW(.spillsp f3,SW(F3)+(off)); \
- UNW(.spillsp f4,SW(F4)+(off)); UNW(.spillsp f5,SW(F5)+(off)); \
- UNW(.spillsp f16,SW(F16)+(off)); UNW(.spillsp f17,SW(F17)+(off)); \
- UNW(.spillsp f18,SW(F18)+(off)); UNW(.spillsp f19,SW(F19)+(off)); \
- UNW(.spillsp f20,SW(F20)+(off)); UNW(.spillsp f21,SW(F21)+(off)); \
- UNW(.spillsp f22,SW(F22)+(off)); UNW(.spillsp f23,SW(F23)+(off)); \
- UNW(.spillsp f24,SW(F24)+(off)); UNW(.spillsp f25,SW(F25)+(off)); \
- UNW(.spillsp f26,SW(F26)+(off)); UNW(.spillsp f27,SW(F27)+(off)); \
- UNW(.spillsp f28,SW(F28)+(off)); UNW(.spillsp f29,SW(F29)+(off)); \
- UNW(.spillsp f30,SW(F30)+(off)); UNW(.spillsp f31,SW(F31)+(off)); \
- UNW(.spillsp r4,SW(R4)+(off)); UNW(.spillsp r5,SW(R5)+(off)); \
- UNW(.spillsp r6,SW(R6)+(off)); UNW(.spillsp r7,SW(R7)+(off)); \
- UNW(.spillsp b0,SW(B0)+(off)); UNW(.spillsp b1,SW(B1)+(off)); \
- UNW(.spillsp b2,SW(B2)+(off)); UNW(.spillsp b3,SW(B3)+(off)); \
- UNW(.spillsp b4,SW(B4)+(off)); UNW(.spillsp b5,SW(B5)+(off)); \
- UNW(.spillsp ar.pfs,SW(AR_PFS)+(off)); UNW(.spillsp ar.lc,SW(AR_LC)+(off)); \
- UNW(.spillsp @priunat,SW(AR_UNAT)+(off)); \
- UNW(.spillsp ar.rnat,SW(AR_RNAT)+(off)); UNW(.spillsp ar.bspstore,SW(AR_BSPSTORE)+(off)); \
- UNW(.spillsp pr,SW(PR)+(off))
+#define SWITCH_STACK_SAVES(off) \
+ UNW(.savesp ar.unat,SW(CALLER_UNAT)+16+(off)); \
+ UNW(.savesp ar.fpsr,SW(AR_FPSR)+16+(off)); \
+ UNW(.spillsp f2,SW(F2)+16+(off)); UNW(.spillsp f3,SW(F3)+16+(off)); \
+ UNW(.spillsp f4,SW(F4)+16+(off)); UNW(.spillsp f5,SW(F5)+16+(off)); \
+ UNW(.spillsp f16,SW(F16)+16+(off)); UNW(.spillsp f17,SW(F17)+16+(off)); \
+ UNW(.spillsp f18,SW(F18)+16+(off)); UNW(.spillsp f19,SW(F19)+16+(off)); \
+ UNW(.spillsp f20,SW(F20)+16+(off)); UNW(.spillsp f21,SW(F21)+16+(off)); \
+ UNW(.spillsp f22,SW(F22)+16+(off)); UNW(.spillsp f23,SW(F23)+16+(off)); \
+ UNW(.spillsp f24,SW(F24)+16+(off)); UNW(.spillsp f25,SW(F25)+16+(off)); \
+ UNW(.spillsp f26,SW(F26)+16+(off)); UNW(.spillsp f27,SW(F27)+16+(off)); \
+ UNW(.spillsp f28,SW(F28)+16+(off)); UNW(.spillsp f29,SW(F29)+16+(off)); \
+ UNW(.spillsp f30,SW(F30)+16+(off)); UNW(.spillsp f31,SW(F31)+16+(off)); \
+ UNW(.spillsp r4,SW(R4)+16+(off)); UNW(.spillsp r5,SW(R5)+16+(off)); \
+ UNW(.spillsp r6,SW(R6)+16+(off)); UNW(.spillsp r7,SW(R7)+16+(off)); \
+ UNW(.spillsp b0,SW(B0)+16+(off)); UNW(.spillsp b1,SW(B1)+16+(off)); \
+ UNW(.spillsp b2,SW(B2)+16+(off)); UNW(.spillsp b3,SW(B3)+16+(off)); \
+ UNW(.spillsp b4,SW(B4)+16+(off)); UNW(.spillsp b5,SW(B5)+16+(off)); \
+ UNW(.spillsp ar.pfs,SW(AR_PFS)+16+(off)); UNW(.spillsp ar.lc,SW(AR_LC)+16+(off)); \
+ UNW(.spillsp @priunat,SW(AR_UNAT)+16+(off)); \
+ UNW(.spillsp ar.rnat,SW(AR_RNAT)+16+(off)); \
+ UNW(.spillsp ar.bspstore,SW(AR_BSPSTORE)+16+(off)); \
+ UNW(.spillsp pr,SW(PR)+16+(off))
#define DO_SAVE_SWITCH_STACK \
movl r28=1f; \
;; \
.fframe IA64_SWITCH_STACK_SIZE; \
adds sp=-IA64_SWITCH_STACK_SIZE,sp; \
- mov b7=r28; \
+ MOVBR(.ret.sptk,b7,r28,1f); \
SWITCH_STACK_SAVES(0); \
br.cond.sptk.many save_switch_stack; \
1:
@@ -58,7 +69,8 @@
#define DO_LOAD_SWITCH_STACK \
movl r28=1f; \
;; \
- mov b7=r28; \
+ invala; \
+ MOVBR(.ret.sptk,b7,r28,1f); \
br.cond.sptk.many load_switch_stack; \
1: UNW(.restore sp); \
adds sp=IA64_SWITCH_STACK_SIZE,sp
diff -urN linux-davidm/arch/ia64/kernel/gate.S linux-2.4.1-lia/arch/ia64/kernel/gate.S
--- linux-davidm/arch/ia64/kernel/gate.S Fri Jul 14 16:08:11 2000
+++ linux-2.4.1-lia/arch/ia64/kernel/gate.S Wed Jan 31 10:21:10 2001
@@ -153,7 +153,7 @@
ENTRY(setup_rbs)
flushrs // must be first in insn
- mov ar.rsc=r0 // put RSE into enforced lazy mode
+ mov ar.rsc=0 // put RSE into enforced lazy mode
adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
mov r14=ar.rnat // get rnat as updated by flushrs
@@ -167,7 +167,7 @@
ENTRY(restore_rbs)
flushrs
- mov ar.rsc=r0 // put RSE into enforced lazy mode
+ mov ar.rsc=0 // put RSE into enforced lazy mode
adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
ld8 r14=[r16] // get new rnat
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.4.1-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/head.S Wed Jan 31 10:21:20 2001
@@ -113,7 +113,7 @@
*/
addl r12=IA64_STK_OFFSET-IA64_PT_REGS_SIZE-16,r2
addl r2=IA64_RBS_OFFSET,r2 // initialize the RSE
- mov ar.rsc=r0 // place RSE in enforced lazy mode
+ mov ar.rsc=0 // place RSE in enforced lazy mode
;;
mov ar.bspstore=r2 // establish the new RSE stack
;;
diff -urN linux-davidm/arch/ia64/kernel/minstate.h linux-2.4.1-lia/arch/ia64/kernel/minstate.h
--- linux-davidm/arch/ia64/kernel/minstate.h Wed Jan 31 11:46:52 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/minstate.h Wed Jan 31 10:21:41 2001
@@ -29,7 +29,7 @@
*/
#define MINSTATE_START_SAVE_MIN_VIRT \
dep r1=-1,r1,61,3; /* r1 = current (virtual) */ \
-(p7) mov ar.rsc=r0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
+(p7) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
;; \
(p7) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of RBS */ \
(p7) mov rARRNAT=ar.rnat; \
@@ -55,7 +55,7 @@
*/
#define MINSTATE_START_SAVE_MIN_PHYS \
(pKern) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
-(p7) mov ar.rsc=r0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
+(p7) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
(p7) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
;; \
(p7) mov rARRNAT=ar.rnat; \
diff -urN linux-davidm/arch/ia64/kernel/pal.S linux-2.4.1-lia/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/pal.S Wed Jan 31 10:21:52 2001
@@ -171,7 +171,7 @@
dep.z r8=r8,0,61 // convert rp to physical
;;
mov b7 = loc2 // install target to branch reg
- mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov ar.rsc=0 // put RSE in enforced lazy, LE mode
movl r16=PAL_PSR_BITS_TO_CLEAR
movl r17=PAL_PSR_BITS_TO_SET
;;
@@ -182,7 +182,7 @@
.ret1: mov rp = r8 // install return address (physical)
br.cond.sptk.few b7
1:
- mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
.ret2:
@@ -224,7 +224,7 @@
mov loc4=ar.rsc // save RSE configuration
dep.z loc2=loc2,0,61 // convert pal entry point to physical
;;
- mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov ar.rsc=0 // put RSE in enforced lazy, LE mode
movl r16=PAL_PSR_BITS_TO_CLEAR
movl r17=PAL_PSR_BITS_TO_SET
;;
@@ -236,7 +236,7 @@
.ret6:
br.call.sptk.many rp=b7 // now make the call
.ret7:
- mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.1-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed Jan 31 11:46:52 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/process.c Mon Jan 8 23:41:03 2001
@@ -347,6 +347,7 @@
unw_get_gr(info, i, &dst[i], &nat);
if (nat)
nat_bits |= mask;
+printk("r%u = %c%016lx\n", i, nat ? '*' : ' ', dst[i]);
mask <<= 1;
}
dst[32] = nat_bits;
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.4.1-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Wed Jan 31 11:46:52 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/signal.c Wed Jan 31 10:22:01 2001
@@ -52,7 +52,6 @@
struct sigcontext sc;
};
-extern long sys_wait4 (int, int *, int, struct rusage *);
extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* forward decl */
long
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.1-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.1-lia/arch/ia64/kernel/traps.c Wed Jan 31 10:26:01 2001
@@ -319,15 +319,14 @@
if (copy_from_user(bundle, (void *) fault_ip, sizeof(bundle)))
return -1;
-#ifdef FPSWA_DEBUG
if (fpu_swa_count > 5 && jiffies - last_time > 5*HZ)
fpu_swa_count = 0;
if (++fpu_swa_count < 5) {
last_time = jiffies;
- printk("%s(%d): floating-point assist fault at ip %016lx\n",
+ printk(KERN_WARNING "%s(%d): floating-point assist fault at ip %016lx\n",
current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri);
}
-#endif
+
exception = fp_emulate(fp_fault, bundle, &regs->cr_ipsr, &regs->ar_fpsr, &isr, &regs->pr,
&regs->cr_ifs, regs);
if (fp_fault) {
diff -urN linux-davidm/arch/ia64/lib/idiv64.S linux-2.4.1-lia/arch/ia64/lib/idiv64.S
--- linux-davidm/arch/ia64/lib/idiv64.S Mon Oct 9 17:54:56 2000
+++ linux-2.4.1-lia/arch/ia64/lib/idiv64.S Wed Jan 31 10:22:48 2001
@@ -42,6 +42,7 @@
// Transfer inputs to FP registers.
setf.sig f8 = in0
setf.sig f9 = in1
+ ;;
UNW(.fframe 16)
UNW(.save.f 0x20)
stf.spill [sp] = f17,-16
diff -urN linux-davidm/drivers/acpi/acpiconf.c linux-2.4.1-lia/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Wed Jan 31 11:46:52 2001
+++ linux-2.4.1-lia/drivers/acpi/acpiconf.c Wed Jan 31 10:24:18 2001
@@ -234,8 +234,8 @@
ext_obj = (ACPI_OBJECT *) ret_buf.pointer;
switch (ext_obj->type) {
- case ACPI_TYPE_NUMBER:
- busnum = (NATIVE_UINT) ext_obj->number.value;
+ case ACPI_TYPE_INTEGER:
+ busnum = (NATIVE_UINT) ext_obj->integer.value;
next_busnum = busnum + 1;
dprintk(("Acpi cfg:_BBN busnum is %ld\n ", busnum));
break;
@@ -266,8 +266,8 @@
ext_obj = (ACPI_OBJECT *) ret_buf.pointer;
switch (ext_obj->type) {
- case ACPI_TYPE_NUMBER:
- if((NATIVE_UINT) ext_obj->number.value & ACPI_STA_DEVICE_PRESENT) {
+ case ACPI_TYPE_INTEGER:
+ if((NATIVE_UINT) ext_obj->integer.value & ACPI_STA_DEVICE_PRESENT) {
dprintk(("Acpi cfg:_STA: pci bus %ld exist\n", busnum));
} else {
printk("Acpi cfg:_STA: pci bus %ld not exist. Discarding the _PRT\n", busnum);
diff -urN linux-davidm/fs/binfmt_elf.c linux-2.4.1-lia/fs/binfmt_elf.c
--- linux-davidm/fs/binfmt_elf.c Wed Jan 31 11:46:55 2001
+++ linux-2.4.1-lia/fs/binfmt_elf.c Wed Jan 31 10:26:35 2001
@@ -484,20 +484,6 @@
if (strcmp(elf_interpreter,"/usr/lib/libc.so.1") == 0 ||
strcmp(elf_interpreter,"/usr/lib/ld.so.1") == 0)
ibcs2_interpreter = 1;
-#if defined(__ia64__) && !defined(CONFIG_BINFMT_ELF32)
- /*
- * XXX temporary gross hack until all IA-64 Linux binaries
- * use /lib/ld-linux-ia64.so.1 as the linker name.
- */
-#define INTRP64 "/lib/ld-linux-ia64.so.1"
- if (strcmp(elf_interpreter,"/lib/ld-linux.so.2") == 0) {
- kfree(elf_interpreter);
- elf_interpreter=(char *)kmalloc(sizeof(INTRP64), GFP_KERNEL);
- if (!elf_interpreter)
- goto out_free_file;
- strcpy(elf_interpreter, INTRP64);
- }
-#endif /* defined(__ia64__) && !defined(CONFIG_BINFMT_ELF32) */
#if 0
printk("Using ELF interpreter %s\n", elf_interpreter);
#endif
diff -urN linux-davidm/include/asm-ia64/byteorder.h linux-2.4.1-lia/include/asm-ia64/byteorder.h
--- linux-davidm/include/asm-ia64/byteorder.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.1-lia/include/asm-ia64/byteorder.h Wed Jan 31 10:28:45 2001
@@ -20,18 +20,18 @@
static __inline__ __const__ __u32
__ia64_swab32 (__u32 x)
{
- return __ia64_swab64 (x) >> 32;
+ return __ia64_swab64(x) >> 32;
}
static __inline__ __const__ __u16
__ia64_swab16(__u16 x)
{
- return __ia64_swab64 (x) >> 48;
+ return __ia64_swab64(x) >> 48;
}
-#define __arch__swab64(x) __ia64_swab64 (x)
-#define __arch__swab32(x) __ia64_swab32 (x)
-#define __arch__swab16(x) __ia64_swab16 (x)
+#define __arch__swab64(x) __ia64_swab64(x)
+#define __arch__swab32(x) __ia64_swab32(x)
+#define __arch__swab16(x) __ia64_swab16(x)
#define __BYTEORDER_HAS_U64__
diff -urN linux-davidm/include/asm-ia64/errno.h linux-2.4.1-lia/include/asm-ia64/errno.h
--- linux-davidm/include/asm-ia64/errno.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.1-lia/include/asm-ia64/errno.h Wed Jan 31 10:29:01 2001
@@ -135,5 +135,5 @@
#define ENOMEDIUM 123 /* No medium found */
#define EMEDIUMTYPE 124 /* Wrong medium type */
-
+#define EHASHCOLLISION 125 /* Number of hash collisions exceeds max. generation value */
#endif /* _ASM_IA64_ERRNO_H */
diff -urN linux-davidm/include/asm-ia64/mca_asm.h linux-2.4.1-lia/include/asm-ia64/mca_asm.h
--- linux-davidm/include/asm-ia64/mca_asm.h Fri Apr 21 15:21:24 2000
+++ linux-2.4.1-lia/include/asm-ia64/mca_asm.h Wed Jan 31 10:29:15 2001
@@ -72,7 +72,7 @@
;; \
dep old_psr = 0, old_psr, 32, 32; \
\
- mov ar.rsc = r0 ; \
+ mov ar.rsc = 0 ; \
;; \
mov temp2 = ar.bspstore; \
;; \
@@ -148,7 +148,7 @@
dep temp2 = 0, temp2, PSR_IC, 2; \
;; \
mov psr.l = temp2; \
- mov ar.rsc = r0; \
+ mov ar.rsc = 0; \
;; \
srlz.d; \
mov temp2 = ar.bspstore; \
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.4.1-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Thu Jan 4 22:40:21 2001
+++ linux-2.4.1-lia/include/asm-ia64/system.h Wed Jan 31 10:29:42 2001
@@ -350,7 +350,7 @@
case 2: _o_ = (__u16) (long) (old); break; \
case 4: _o_ = (__u32) (long) (old); break; \
case 8: _o_ = (__u64) (long) (old); break; \
- default: \
+ default: break; \
} \
__asm__ __volatile__ ("mov ar.ccv=%0;;" :: "rO"(_o_)); \
switch (size) { \
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.4.2)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (36 preceding siblings ...)
2001-01-31 20:32 ` [Linux-ia64] kernel update (relative to 2.4.1) David Mosberger
@ 2001-03-01 7:12 ` David Mosberger
2001-03-01 10:17 ` Andreas Schwab
` (177 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-03-01 7:12 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.2-ia64-010228.diff*
What's in this patch:
- everything that was in the previous test patch, including:
o cpu-state clearing on execve()
o virtually mapped per-CPU data
o clearing of "invalid partition" when lowering privilege level
- sync with kernel v2.4.2
- CONFIG_ITANIUM_PTCG is now turned on automatically for CPUs
newer than B2
- lots of SN1 updates from Kanoj
- add support for multiple interrupt domains (i.e., systems
where the interrupt vectors are not necessarily shared by
all CPUs)
- perfmon update from Stephane
- add Jack's ptc.g workaround fix and increase the ptc.g timeout
value
Note: since there were non-trivial changes to the kernel exit path, I
still consider this patch somewhat experimental. It has worked very
well for me so far, but as always, ymmv and you may want to give it a
good workout before burning it on a cd or similar...
This kernel has been tested on 4-way Lion, 2-way Big Sur, and 1-way
Ski simulator.
Enjoy,
--david
diff -urN --ignore-all-space linux-davidm/arch/ia64/config.in linux-2.4.2-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Feb 28 22:20:34 2001
+++ linux-2.4.2-lia/arch/ia64/config.in Wed Feb 28 14:43:27 2001
@@ -18,7 +18,6 @@
comment 'General setup'
define_bool CONFIG_IA64 y
-define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data structure to 64 bytes
define_bool CONFIG_ISA n
define_bool CONFIG_EISA n
@@ -58,7 +57,12 @@
if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
fi
- bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
+ if [ "$CONFIG_ITANIUM_ASTEP_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B0_SPECIFIC" = "y"
+ -o "$CONFIG_ITANIUM_B1_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B2_SPECIFIC" = "y" ]; then
+ define_bool CONFIG_ITANIUM_PTCG n
+ else
+ define_bool CONFIG_ITANIUM_PTCG y
+ fi
fi
if [ "$CONFIG_IA64_DIG" = "y" ]; then
@@ -70,15 +74,11 @@
define_bool CONFIG_PM y
define_bool CONFIG_ACPI y
define_bool CONFIG_ACPI_INTERPRETER y
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data structure to 64 bytes
fi
fi
if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
- bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
- bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
- fi
bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM
define_bool CONFIG_DEVFS_DEBUG y
define_bool CONFIG_DEVFS_FS y
@@ -90,6 +90,7 @@
define_int CONFIG_CACHE_LINE_SHIFT 7
bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM
bool ' Enable NUMA support' CONFIG_NUMA
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data structure to 128 bytes
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
@@ -242,6 +243,7 @@
if [ "$CONFIG_SCSI" != "n" ]; then
bool 'Simulated SCSI disk' CONFIG_SCSI_SIM
fi
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data structure to 64 bytes
endmenu
fi
diff -urN --ignore-all-space linux-davidm/arch/ia64/hp/hpsim_irq.c linux-2.4.2-lia/arch/ia64/hp/hpsim_irq.c
--- linux-davidm/arch/ia64/hp/hpsim_irq.c Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/hp/hpsim_irq.c Wed Feb 28 14:43:45 2001
@@ -1,8 +1,8 @@
/*
* Platform dependent support for HP simulator.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/init.h>
@@ -35,10 +35,12 @@
void __init
hpsim_irq_init (void)
{
+ irq_desc_t *idesc;
int i;
- for (i = IA64_MIN_VECTORED_IRQ; i <= IA64_MAX_VECTORED_IRQ; ++i) {
- if (irq_desc[i].handler == &no_irq_type)
- irq_desc[i].handler = &irq_type_hp_sim;
+ for (i = 0; i < NR_IRQS; ++i) {
+ idesc = irq_desc(i);
+ if (idesc->handler == &no_irq_type)
+ idesc->handler = &irq_type_hp_sim;
}
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/brl_emu.c linux-2.4.2-lia/arch/ia64/kernel/brl_emu.c
--- linux-davidm/arch/ia64/kernel/brl_emu.c Wed Feb 28 22:20:35 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/brl_emu.c Wed Feb 28 14:44:20 2001
@@ -24,8 +24,8 @@
* or all 1's for the address to be valid.
*/
#define unimplemented_virtual_address(va) ( \
- ((va) & current_cpu_data->unimpl_va_mask) != 0 && \
- ((va) & current_cpu_data->unimpl_va_mask) != current_cpu_data->unimpl_va_mask \
+ ((va) & local_cpu_data->unimpl_va_mask) != 0 && \
+ ((va) & local_cpu_data->unimpl_va_mask) != local_cpu_data->unimpl_va_mask \
)
/*
@@ -35,7 +35,7 @@
* address to be valid.
*/
#define unimplemented_physical_address(pa) ( \
- ((pa) & current_cpu_data->unimpl_pa_mask) != 0 \
+ ((pa) & local_cpu_data->unimpl_pa_mask) != 0 \
)
/*
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.2-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Wed Feb 28 22:20:35 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/ia64_ksyms.c Wed Feb 28 14:44:41 2001
@@ -24,8 +24,11 @@
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strtok);
-#include <asm/hw_irq.h>
+#include <linux/irq.h>
EXPORT_SYMBOL(isa_irq_to_vector_map);
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(disable_irq_nosync);
#include <linux/in6.h>
#include <asm/checksum.h>
@@ -39,11 +42,6 @@
EXPORT_SYMBOL(__ia64_memcpy_fromio);
EXPORT_SYMBOL(__ia64_memcpy_toio);
EXPORT_SYMBOL(__ia64_memset_c_io);
-
-#include <asm/irq.h>
-EXPORT_SYMBOL(enable_irq);
-EXPORT_SYMBOL(disable_irq);
-EXPORT_SYMBOL(disable_irq_nosync);
#include <asm/semaphore.h>
EXPORT_SYMBOL_NOVERS(__down);
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/iosapic.c linux-2.4.2-lia/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Wed Feb 28 22:20:35 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/iosapic.c Wed Feb 28 14:44:51 2001
@@ -83,7 +83,7 @@
unsigned char dmode : 3; /* delivery mode (see iosapic.h) */
unsigned char polarity : 1; /* interrupt polarity (see iosapic.h) */
unsigned char trigger : 1; /* trigger mode (see iosapic.h) */
-} iosapic_irq[NR_IRQS];
+} iosapic_irq[IA64_NUM_VECTORS];
/*
* Translate IOSAPIC irq number to the corresponding IA-64 interrupt vector. If no
@@ -94,7 +94,7 @@
{
int vector;
- for (vector = 0; vector < NR_IRQS; ++vector)
+ for (vector = 0; vector < IA64_NUM_VECTORS; ++vector)
if (iosapic_irq[vector].base_irq + iosapic_irq[vector].pin == irq)
return vector;
return -1;
@@ -153,15 +153,16 @@
}
static void
-mask_irq (unsigned int vector)
+mask_irq (unsigned int irq)
{
unsigned long flags;
char *addr;
u32 low32;
int pin;
+ ia64_vector vec = irq_to_vector(irq);
- addr = iosapic_irq[vector].addr;
- pin = iosapic_irq[vector].pin;
+ addr = iosapic_irq[vec].addr;
+ pin = iosapic_irq[vec].pin;
if (pin < 0)
return; /* not an IOSAPIC interrupt! */
@@ -178,15 +179,16 @@
}
static void
-unmask_irq (unsigned int vector)
+unmask_irq (unsigned int irq)
{
unsigned long flags;
char *addr;
u32 low32;
int pin;
+ ia64_vector vec = irq_to_vector(irq);
- addr = iosapic_irq[vector].addr;
- pin = iosapic_irq[vector].pin;
+ addr = iosapic_irq[vec].addr;
+ pin = iosapic_irq[vec].pin;
if (pin < 0)
return; /* not an IOSAPIC interrupt! */
@@ -203,7 +205,7 @@
static void
-iosapic_set_affinity (unsigned int vector, unsigned long mask)
+iosapic_set_affinity (unsigned int irq, unsigned long mask)
{
printk("iosapic_set_affinity: not implemented yet\n");
}
@@ -213,16 +215,18 @@
*/
static unsigned int
-iosapic_startup_level_irq (unsigned int vector)
+iosapic_startup_level_irq (unsigned int irq)
{
- unmask_irq(vector);
+ unmask_irq(irq);
return 0;
}
static void
-iosapic_end_level_irq (unsigned int vector)
+iosapic_end_level_irq (unsigned int irq)
{
- writel(vector, iosapic_irq[vector].addr + IOSAPIC_EOI);
+ ia64_vector vec = irq_to_vector(irq);
+
+ writel(vec, iosapic_irq[vec].addr + IOSAPIC_EOI);
}
#define iosapic_shutdown_level_irq mask_irq
@@ -246,9 +250,9 @@
*/
static unsigned int
-iosapic_startup_edge_irq (unsigned int vector)
+iosapic_startup_edge_irq (unsigned int irq)
{
- unmask_irq(vector);
+ unmask_irq(irq);
/*
* IOSAPIC simply drops interrupts pended while the
* corresponding pin was masked, so we can't know if an
@@ -258,15 +262,16 @@
}
static void
-iosapic_ack_edge_irq (unsigned int vector)
+iosapic_ack_edge_irq (unsigned int irq)
{
+ irq_desc_t *idesc = irq_desc(irq);
/*
* Once we have recorded IRQ_PENDING already, we can mask the
* interrupt for real. This prevents IRQ storms from unhandled
* devices.
*/
- if ((irq_desc[vector].status & (IRQ_PENDING|IRQ_DISABLED)) == (IRQ_PENDING|IRQ_DISABLED))
- mask_irq(vector);
+ if ((idesc->status & (IRQ_PENDING|IRQ_DISABLED)) == (IRQ_PENDING|IRQ_DISABLED))
+ mask_irq(irq);
}
#define iosapic_enable_edge_irq unmask_irq
@@ -329,6 +334,7 @@
{
struct hw_interrupt_type *irq_type;
int i, irq, max_pin, vector;
+ irq_desc_t *idesc;
unsigned int ver;
char *addr;
static int first_time = 1;
@@ -336,7 +342,7 @@
if (first_time) {
first_time = 0;
- for (vector = 0; vector < NR_IRQS; ++vector)
+ for (vector = 0; vector < IA64_NUM_VECTORS; ++vector)
iosapic_irq[vector].pin = -1; /* mark as unused */
/*
@@ -380,12 +386,13 @@
vector);
#endif
irq_type = &irq_type_iosapic_edge;
- if (irq_desc[vector].handler != irq_type) {
- if (irq_desc[vector].handler != &no_irq_type)
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type) {
+ if (idesc->handler != &no_irq_type)
printk("iosapic_init: changing vector 0x%02x from %s to "
- "%s\n", irq, irq_desc[vector].handler->typename,
+ "%s\n", irq, idesc->handler->typename,
irq_type->typename);
- irq_desc[vector].handler = irq_type;
+ idesc->handler = irq_type;
}
/* program the IOSAPIC routing table: */
@@ -421,12 +428,12 @@
iosapic_irq[vector].base_irq + iosapic_irq[vector].pin, vector);
# endif
irq_type = &irq_type_iosapic_level;
- if (irq_desc[vector].handler != irq_type){
- if (irq_desc[vector].handler != &no_irq_type)
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type){
+ if (idesc->handler != &no_irq_type)
printk("iosapic_init: changing vector 0x%02x from %s to %s\n",
- vector, irq_desc[vector].handler->typename,
- irq_type->typename);
- irq_desc[vector].handler = irq_type;
+ vector, idesc->handler->typename, irq_type->typename);
+ idesc->handler = irq_type;
}
/* program the IOSAPIC routing table: */
@@ -484,7 +491,7 @@
* Nothing to fixup
* Fix out-of-range IRQ numbers
*/
- if (dev->irq >= NR_IRQS)
+ if (dev->irq >= IA64_NUM_VECTORS)
dev->irq = 15; /* Spurious interrupts */
}
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/irq.c linux-2.4.2-lia/arch/ia64/kernel/irq.c
--- linux-davidm/arch/ia64/kernel/irq.c Wed Feb 28 12:57:33 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/irq.c Wed Feb 28 14:45:11 2001
@@ -63,7 +63,7 @@
/*
* Controller mappings for all interrupt sources:
*/
-irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned =
+irq_desc_t _irq_desc[NR_IRQS] __cacheline_aligned =
 { [0 ... NR_IRQS-1] = { IRQ_DISABLED, &no_irq_type, NULL, 0, SPIN_LOCK_UNLOCKED}};
static void register_irq_proc (unsigned int irq);
@@ -131,6 +131,7 @@
{
int i, j;
struct irqaction * action;
+ irq_desc_t *idesc;
char *p = buf;
p += sprintf(p, " ");
@@ -139,7 +140,8 @@
*p++ = '\n';
for (i = 0 ; i < NR_IRQS ; i++) {
- action = irq_desc[i].action;
+ idesc = irq_desc(i);
+ action = idesc->action;
if (!action)
continue;
p += sprintf(p, "%3d: ",i);
@@ -150,7 +152,7 @@
p += sprintf(p, "%10u ",
kstat.irqs[cpu_logical_map(j)][i]);
#endif
- p += sprintf(p, " %14s", irq_desc[i].handler->typename);
+ p += sprintf(p, " %14s", idesc->handler->typename);
p += sprintf(p, " %s", action->name);
for (action = action->next; action; action = action->next)
@@ -193,10 +195,10 @@
printk("\n%s, CPU %d:\n", str, cpu);
printk("irq: %d [",irqs_running());
for(i=0;i < smp_num_cpus;i++)
- printk(" %d",local_irq_count(i));
+ printk(" %d",irq_count(i));
printk(" ]\nbh: %d [",spin_is_locked(&global_bh_lock) ? 1 : 0);
for(i=0;i < smp_num_cpus;i++)
- printk(" %d",local_bh_count(i));
+ printk(" %d",bh_count(i));
printk(" ]\nStack dumps:");
#if defined(__ia64__)
@@ -266,7 +268,7 @@
# endif
#endif
-static inline void wait_on_irq(int cpu)
+static inline void wait_on_irq(void)
{
int count = MAXCOUNT;
@@ -278,7 +280,7 @@
* already executing in one..
*/
if (!irqs_running())
- if (local_bh_count(cpu) || !spin_is_locked(&global_bh_lock))
+ if (local_bh_count() || !spin_is_locked(&global_bh_lock))
break;
/* Duh, we have to loop. Release the lock to avoid deadlocks */
@@ -290,13 +292,13 @@
count = ~0;
}
__sti();
- SYNC_OTHER_CORES(cpu);
+ SYNC_OTHER_CORES(smp_processor_id());
__cli();
if (irqs_running())
continue;
if (global_irq_lock)
continue;
- if (!local_bh_count(cpu) && spin_is_locked(&global_bh_lock))
+ if (!local_bh_count() && spin_is_locked(&global_bh_lock))
continue;
if (!test_and_set_bit(0,&global_irq_lock))
break;
@@ -320,11 +322,11 @@
}
}
-static inline void get_irqlock(int cpu)
+static inline void get_irqlock(void)
{
if (test_and_set_bit(0,&global_irq_lock)) {
/* do we already hold the lock? */
- if (cpu == global_irq_holder)
+ if (smp_processor_id() == global_irq_holder)
return;
/* Uhhuh.. Somebody else got it. Wait.. */
do {
@@ -336,12 +338,12 @@
* We also to make sure that nobody else is running
* in an interrupt context.
*/
- wait_on_irq(cpu);
+ wait_on_irq();
/*
* Ok, finally..
*/
- global_irq_holder = cpu;
+ global_irq_holder = smp_processor_id();
}
#define EFLAGS_IF_SHIFT 9
@@ -365,28 +367,24 @@
#ifdef __ia64__
__save_flags(flags);
if (flags & IA64_PSR_I) {
- int cpu = smp_processor_id();
__cli();
- if (!local_irq_count(cpu))
- get_irqlock(cpu);
+ if (!local_irq_count())
+ get_irqlock();
}
#else
__save_flags(flags);
if (flags & (1 << EFLAGS_IF_SHIFT)) {
- int cpu = smp_processor_id();
__cli();
- if (!local_irq_count(cpu))
- get_irqlock(cpu);
+ if (!local_irq_count())
+ get_irqlock();
}
#endif
}
void __global_sti(void)
{
- int cpu = smp_processor_id();
-
- if (!local_irq_count(cpu))
- release_irqlock(cpu);
+ if (!local_irq_count())
+ release_irqlock(smp_processor_id());
__sti();
}
@@ -414,7 +412,7 @@
retval = 2 + local_enabled;
/* check for global flags if we're not in an interrupt */
- if (!local_irq_count(cpu)) {
+ if (!local_irq_count()) {
if (local_enabled)
retval = 1;
if (global_irq_holder == cpu)
@@ -456,9 +454,8 @@
int handle_IRQ_event(unsigned int irq, struct pt_regs * regs, struct irqaction * action)
{
int status;
- int cpu = smp_processor_id();
- irq_enter(cpu, irq);
+ local_irq_enter(irq);
status = 1; /* Force the "do bottom halves" bit */
@@ -474,7 +471,7 @@
add_interrupt_randomness(irq);
__cli();
- irq_exit(cpu, irq);
+ local_irq_exit(irq);
return status;
}
@@ -487,7 +484,7 @@
*/
void inline disable_irq_nosync(unsigned int irq)
{
- irq_desc_t *desc = irq_desc + irq;
+ irq_desc_t *desc = irq_desc(irq);
unsigned long flags;
spin_lock_irqsave(&desc->lock, flags);
@@ -507,17 +504,17 @@
disable_irq_nosync(irq);
#ifdef CONFIG_SMP
- if (!local_irq_count(smp_processor_id())) {
+ if (!local_irq_count()) {
do {
barrier();
- } while (irq_desc[irq].status & IRQ_INPROGRESS);
+ } while (irq_desc(irq)->status & IRQ_INPROGRESS);
}
#endif
}
void enable_irq(unsigned int irq)
{
- irq_desc_t *desc = irq_desc + irq;
+ irq_desc_t *desc = irq_desc(irq);
unsigned long flags;
spin_lock_irqsave(&desc->lock, flags);
@@ -541,18 +538,6 @@
spin_unlock_irqrestore(&desc->lock, flags);
}
-void do_IRQ_per_cpu(unsigned long irq, struct pt_regs *regs)
-{
- irq_desc_t *desc = irq_desc + irq;
- int cpu = smp_processor_id();
-
- kstat.irqs[cpu][irq]++;
-
- desc->handler->ack(irq);
- handle_IRQ_event(irq, regs, desc->action);
- desc->handler->end(irq);
-}
-
/*
* do_IRQ handles all normal device IRQ's (the special
* SMP cross-CPU interrupts have their own specific
@@ -571,16 +556,23 @@
* handled by some other CPU. (or is disabled)
*/
int cpu = smp_processor_id();
- irq_desc_t *desc = irq_desc + irq;
+ irq_desc_t *desc = irq_desc(irq);
struct irqaction * action;
unsigned int status;
kstat.irqs[cpu][irq]++;
+
+ if (desc->status & IRQ_PER_CPU) {
+ /* no locking required for CPU-local interrupts: */
+ desc->handler->ack(irq);
+ handle_IRQ_event(irq, regs, desc->action);
+ desc->handler->end(irq);
+ } else {
spin_lock(&desc->lock);
desc->handler->ack(irq);
/*
- REPLAY is when Linux resends an IRQ that was dropped earlier
- WAITING is used by probe to mark irqs that are being tested
+ * REPLAY is when Linux resends an IRQ that was dropped earlier
+ * WAITING is used by probe to mark irqs that are being tested
*/
status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING);
status |= IRQ_PENDING; /* we _want_ to handle it */
@@ -633,7 +625,7 @@
*/
desc->handler->end(irq);
spin_unlock(&desc->lock);
-
+ }
return 1;
}
@@ -655,7 +647,7 @@
*/
if (irqflags & SA_SHIRQ) {
if (!dev_id)
- printk("Bad boy: %s (at 0x%x) called us without a dev_id!\n", devname, (&irq)[-1]);
+ printk("Bad boy: %s called us without a dev_id!\n", devname);
}
#endif
@@ -691,7 +683,7 @@
if (irq >= NR_IRQS)
return;
- desc = irq_desc + irq;
+ desc = irq_desc(irq);
spin_lock_irqsave(&desc->lock,flags);
p = &desc->action;
for (;;) {
@@ -744,11 +736,11 @@
* flush such a longstanding irq before considering it as spurious.
*/
for (i = NR_IRQS-1; i > 0; i--) {
- desc = irq_desc + i;
+ desc = irq_desc(i);
spin_lock_irq(&desc->lock);
- if (!irq_desc[i].action)
- irq_desc[i].handler->startup(i);
+ if (!desc->action)
+ desc->handler->startup(i);
spin_unlock_irq(&desc->lock);
}
@@ -762,7 +754,7 @@
* happened in the previous stage, it may have masked itself)
*/
for (i = NR_IRQS-1; i > 0; i--) {
- desc = irq_desc + i;
+ desc = irq_desc(i);
spin_lock_irq(&desc->lock);
if (!desc->action) {
@@ -784,7 +776,7 @@
*/
val = 0;
for (i = 0; i < NR_IRQS; i++) {
- irq_desc_t *desc = irq_desc + i;
+ irq_desc_t *desc = irq_desc(i);
unsigned int status;
spin_lock_irq(&desc->lock);
@@ -816,7 +808,7 @@
mask = 0;
for (i = 0; i < 16; i++) {
- irq_desc_t *desc = irq_desc + i;
+ irq_desc_t *desc = irq_desc(i);
unsigned int status;
spin_lock_irq(&desc->lock);
@@ -846,7 +838,7 @@
nr_irqs = 0;
irq_found = 0;
for (i = 0; i < NR_IRQS; i++) {
- irq_desc_t *desc = irq_desc + i;
+ irq_desc_t *desc = irq_desc(i);
unsigned int status;
spin_lock_irq(&desc->lock);
@@ -869,13 +861,12 @@
return irq_found;
}
-/* this was setup_x86_irq but it seems pretty generic */
int setup_irq(unsigned int irq, struct irqaction * new)
{
int shared = 0;
unsigned long flags;
struct irqaction *old, **p;
- irq_desc_t *desc = irq_desc + irq;
+ irq_desc_t *desc = irq_desc(irq);
/*
* Some drivers like serial.c use request_irq() heavily,
@@ -986,7 +977,7 @@
int irq = (long) data, full_count = count, err;
unsigned long new_value;
- if (!irq_desc[irq].handler->set_affinity)
+ if (!irq_desc(irq)->handler->set_affinity)
return -EIO;
err = parse_hex_value(buffer, count, &new_value);
@@ -1002,7 +993,7 @@
#endif
irq_affinity[irq] = new_value;
- irq_desc[irq].handler->set_affinity(irq, new_value);
+ irq_desc(irq)->handler->set_affinity(irq, new_value);
return full_count;
}
@@ -1037,7 +1028,7 @@
struct proc_dir_entry *entry;
char name [MAX_NAMELEN];
- if (!root_irq_dir || (irq_desc[irq].handler == &no_irq_type))
+ if (!root_irq_dir || (irq_desc(irq)->handler == &no_irq_type))
return;
memset(name, 0, MAX_NAMELEN);
@@ -1079,9 +1070,8 @@
* Create entries for all existing IRQs.
*/
for (i = 0; i < NR_IRQS; i++) {
- if (irq_desc[i].handler == &no_irq_type)
+ if (irq_desc(i)->handler == &no_irq_type)
continue;
register_irq_proc(i);
}
}
-
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.4.2-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Wed Feb 28 22:20:35 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/irq_ia64.c Wed Feb 28 14:45:47 2001
@@ -39,7 +39,7 @@
#define IRQ_DEBUG 0
/* default base addr of IPI table */
-unsigned long ipi_base_addr = (__IA64_UNCACHED_OFFSET | IPI_DEFAULT_BASE_ADDR);
+unsigned long ipi_base_addr = (__IA64_UNCACHED_OFFSET | IA64_IPI_DEFAULT_BASE_ADDR);
/*
* Legacy IRQ to IA-64 vector translation table.
@@ -53,9 +53,9 @@
int
ia64_alloc_irq (void)
{
- static int next_irq = FIRST_DEVICE_IRQ;
+ static int next_irq = IA64_FIRST_DEVICE_VECTOR;
- if (next_irq > LAST_DEVICE_IRQ)
+ if (next_irq > IA64_LAST_DEVICE_VECTOR)
/* XXX could look for sharable vectors instead of panic'ing... */
panic("ia64_alloc_irq: out of interrupt vectors!");
return next_irq++;
@@ -67,7 +67,7 @@
* function ptr.
*/
void
-ia64_handle_irq (unsigned long vector, struct pt_regs *regs)
+ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
{
unsigned long saved_tpr;
@@ -109,19 +109,10 @@
saved_tpr = ia64_get_tpr();
ia64_srlz_d();
do {
- if (vector >= NR_IRQS) {
- printk("handle_irq: invalid vector %lu\n", vector);
- ia64_set_tpr(saved_tpr);
- ia64_srlz_d();
- return;
- }
ia64_set_tpr(vector);
ia64_srlz_d();
- if ((irq_desc[vector].status & IRQ_PER_CPU) != 0)
- do_IRQ_per_cpu(vector, regs);
- else
- do_IRQ(vector, regs);
+ do_IRQ(local_vector_to_irq(vector), regs);
/*
* Disable interrupts and send EOI:
@@ -130,7 +121,7 @@
ia64_set_tpr(saved_tpr);
ia64_eoi();
vector = ia64_get_ivr();
- } while (vector != IA64_SPURIOUS_INT);
+ } while (vector != IA64_SPURIOUS_INT_VECTOR);
}
#ifdef CONFIG_SMP
@@ -144,18 +135,27 @@
};
#endif
+void
+register_percpu_irq (ia64_vector vec, struct irqaction *action)
+{
+ irq_desc_t *desc;
+ unsigned int irq;
+
+ for (irq = 0; irq < NR_IRQS; ++irq)
+ if (irq_to_vector(irq) == vec) {
+ desc = irq_desc(irq);
+ desc->status |= IRQ_PER_CPU;
+ desc->handler = &irq_type_ia64_sapic;
+ if (action)
+ setup_irq(irq, action);
+ }
+}
+
void __init
init_IRQ (void)
{
- irq_desc[IA64_SPURIOUS_INT].handler = &irq_type_ia64_sapic;
-#ifdef CONFIG_SMP
- /*
- * Configure the IPI vector and handler
- */
- irq_desc[IPI_IRQ].status |= IRQ_PER_CPU;
- irq_desc[IPI_IRQ].handler = &irq_type_ia64_sapic;
- setup_irq(IPI_IRQ, &ipi_irqaction);
-#endif
+ register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
+ register_percpu_irq(IA64_IPI_VECTOR, &ipi_irqaction);
platform_irq_init();
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/mca.c linux-2.4.2-lia/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Wed Feb 28 22:20:35 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/mca.c Wed Feb 28 14:46:08 2001
@@ -235,7 +235,7 @@
/* Register the rendezvous interrupt vector with SAL */
if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
SAL_MC_PARAM_MECHANISM_INT,
- IA64_MCA_RENDEZ_INT_VECTOR,
+ IA64_MCA_RENDEZ_VECTOR,
IA64_MCA_RENDEZ_TIMEOUT,
0))
return;
@@ -243,7 +243,7 @@
/* Register the wakeup interrupt vector with SAL */
if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
SAL_MC_PARAM_MECHANISM_INT,
- IA64_MCA_WAKEUP_INT_VECTOR,
+ IA64_MCA_WAKEUP_VECTOR,
0,
0))
return;
@@ -252,8 +252,7 @@
/*
* Setup the correctable machine check vector
*/
- ia64_mca_cmc_vector_setup(IA64_CMC_INT_ENABLE,
- IA64_MCA_CMC_INT_VECTOR);
+ ia64_mca_cmc_vector_setup(IA64_CMC_INT_ENABLE, IA64_CMC_VECTOR);
IA64_MCA_DEBUG("ia64_mca_init : correctable mca vector setup done\n");
@@ -334,8 +333,8 @@
void
ia64_mca_wakeup_ipi_wait(void)
{
- int irr_num = (IA64_MCA_WAKEUP_INT_VECTOR >> 6);
- int irr_bit = (IA64_MCA_WAKEUP_INT_VECTOR & 0x3f);
+ int irr_num = (IA64_MCA_WAKEUP_VECTOR >> 6);
+ int irr_bit = (IA64_MCA_WAKEUP_VECTOR & 0x3f);
u64 irr = 0;
do {
@@ -368,9 +367,8 @@
void
ia64_mca_wakeup(int cpu)
{
- platform_send_ipi(cpu, IA64_MCA_WAKEUP_INT_VECTOR, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(cpu, IA64_MCA_WAKEUP_VECTOR, IA64_IPI_DM_INT, 0);
ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
-
}
/*
* ia64_mca_wakeup_all
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.2-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/perfmon.c Wed Feb 28 14:46:32 2001
@@ -7,7 +7,7 @@
* Modifications by Stephane Eranian, Hewlett-Packard Co.
* Copyright (C) 1999 Ganesh Venkitachalam <venkitac@us.ibm.com>
* Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Stephane Eranian <eranian@hpl.hp.com>
*/
#include <linux/config.h>
@@ -34,6 +34,7 @@
#include <asm/system.h>
#include <asm/system.h>
#include <asm/uaccess.h>
+#include <asm/delay.h> /* for ia64_get_itc() */
#ifdef CONFIG_PERFMON
@@ -299,12 +300,10 @@
static inline unsigned long
perfmon_get_stamp(void)
{
- unsigned long tmp;
-
- /* XXX: need more to adjust for Itanium itc bug */
- __asm__ __volatile__("mov %0=ar.itc" : "=r"(tmp) :: "memory");
-
- return tmp;
+ /*
+ * XXX: maybe find something more efficient
+ */
+ return ia64_get_itc();
}
/* Given PGD from the address space's page table, return the kernel
@@ -340,16 +339,12 @@
static inline unsigned long
kvirt_to_pa(unsigned long adr)
{
- unsigned long va, kva, ret;
-
- va = VMALLOC_VMADDR(adr);
- kva = uvirt_to_kva(pgd_offset_k(va), va);
- ret = __pa(kva);
- DBprintk(("kv2pa(%lx-->%lx)\n", adr, ret));
- return ret;
+ __u64 pa;
+ __asm__ __volatile__ ("tpa %0 = %1" : "=r"(pa) : "r"(adr) : "memory");
+ DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa));
+ return pa;
}
-
static void *
rvmalloc(unsigned long size)
{
@@ -456,13 +451,11 @@
pfm_smpl_buffer_desc_t *psb;
regcount = pfm_smpl_entry_size(&which_pmds, 1);
- /*
- * ask for a sampling buffer but nothing to record !
+
+ /* note that regcount might be 0, in this case only the header for each
+ * entry will be recorded.
*/
- if (regcount == 0) {
- DBprintk((" no pmds to record\n"));
- return -EINVAL;
- }
+
/*
* 1 buffer hdr and for each entry a header + regcount PMDs to save
*/
@@ -509,7 +502,7 @@
psb->psb_entry_size = sizeof(perfmon_smpl_entry_t) + regcount*sizeof(u64);
- DBprintk((" psb @%p entry_size=%ld hdr=%p addr=%p\n", psb,psb->psb_entry_size, psb->psb_hdr, psb->psb_addr));
+ DBprintk((" psb @%p entry_size=%ld hdr=%p addr=%p\n", (void *)psb,psb->psb_entry_size, (void *)psb->psb_hdr, (void *)psb->psb_addr));
/* initialize some of the fields of header */
psb->psb_hdr->hdr_version = PFM_SMPL_HDR_VERSION;
@@ -596,7 +589,7 @@
pfm_context_t *ctx;
perfmon_req_t tmp;
void *uaddr = NULL;
- int ret = -EINVAL;
+ int ret = -EFAULT;
int ctx_flags;
/* to go away */
@@ -604,7 +597,7 @@
printk("perfmon: use context flags instead of perfmon() flags. Obsoleted API\n");
}
- copy_from_user(&tmp, req, sizeof(tmp));
+ if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
ctx_flags = tmp.pfr_ctx.flags;
@@ -635,11 +628,10 @@
sema_init(&ctx->ctx_restart_sem, 0); /* init this semaphore to locked */
- /* XXX fixme take care of errors here */
- copy_to_user(req, &tmp, sizeof(tmp));
+ if (copy_to_user(req, &tmp, sizeof(tmp))) goto buffer_error;
- DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
- DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+ DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",(void *)ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
+ DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
/* link with task */
task->thread.pfm_context = ctx;
@@ -685,7 +677,7 @@
for (i = 0; i < count; i++, req++) {
- copy_from_user(&tmp, req, sizeof(tmp));
+ if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
cnum = tmp.pfr_reg.reg_num;
@@ -739,7 +731,7 @@
for (i = 0; i < count; i++, req++) {
int k;
- copy_from_user(&tmp, req, sizeof(tmp));
+ if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
cnum = tmp.pfr_reg.reg_num;
@@ -797,7 +789,7 @@
for (i = 0; i < count; i++, req++) {
int k;
- copy_from_user(&tmp, req, sizeof(tmp));
+ if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL;
@@ -1124,8 +1116,13 @@
* which can be restarted. That's why it's declared as a system call and all 8 possible args
* are declared even though not used.
*/
+#if __GNUC__ >= 3
+void asmlinkage
+pfm_overflow_notify(void)
+#else
void asmlinkage
pfm_overflow_notify(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
+#endif
{
struct task_struct *task;
struct thread_struct *th = &current->thread;
@@ -1145,7 +1142,7 @@
return;
}
- DBprintk((" current=%d ctx=%p bv=0%lx\n", current->pid, ctx, ctx->ctx_ovfl_regs));
+ DBprintk((" current=%d ctx=%p bv=0%lx\n", current->pid, (void *)ctx, ctx->ctx_ovfl_regs));
/*
* NO matter what notify_pid is,
* we clear overflow, won't notify again
@@ -1179,7 +1176,7 @@
si.si_pid = current->pid; /* who is sending */
si.si_pfm_ovfl = ctx->ctx_ovfl_regs;
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, task));
+ DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
/* must be done with tasklist_lock locked */
ret = send_sig_info(ctx->ctx_notify_sig, &si, task);
@@ -1279,7 +1276,7 @@
task = find_task_by_pid(info->to_pid);
- DBprintk((" after find %p\n", task));
+ DBprintk((" after find %p\n", (void *)task));
if (task) {
int ret;
@@ -1291,7 +1288,7 @@
si.si_pid = info->from_pid; /* who is sending */
si.si_pfm_ovfl = info->bitvect;
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, task));
+ DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
/* must be done with tasklist_lock locked */
ret = send_sig_info(SIGPROF, &si, task);
@@ -1305,7 +1302,7 @@
read_unlock(&tasklist_lock);
- DBprintk((" after unlock %p\n", task));
+ DBprintk((" after unlock %p\n", (void *)task));
if (!task) {
printk("perfmon: CPU%d cannot find process %d\n", smp_processor_id(), info->to_pid);
@@ -1419,7 +1416,7 @@
+ (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val);
else
*e = ia64_get_pmd(j); /* slow */
- DBprintk((" e=%p pmd%d =0x%lx\n", e, j, *e));
+ DBprintk((" e=%p pmd%d =0x%lx\n", (void *)e, j, *e));
e++;
}
}
@@ -1594,7 +1591,7 @@
ia64_set_pmc(0, pmc0);
ia64_srlz_d();
} else {
- printk("perfmon: Spurious PMU overflow interrupt: pmc0=0x%lx owner=%p\n", pmc0, PMU_OWNER());
+ printk("perfmon: Spurious PMU overflow interrupt: pmc0=0x%lx owner=%p\n", pmc0, (void *)PMU_OWNER());
}
}
@@ -1643,17 +1640,15 @@
pal_perf_mon_info_u_t pm_info;
s64 status;
- irq_desc[PERFMON_IRQ].status |= IRQ_PER_CPU;
- irq_desc[PERFMON_IRQ].handler = &irq_type_ia64_sapic;
- setup_irq(PERFMON_IRQ, &perfmon_irqaction);
+ register_percpu_irq(IA64_PERFMON_VECTOR, &perfmon_irqaction);
- ia64_set_pmv(PERFMON_IRQ);
+ ia64_set_pmv(IA64_PERFMON_VECTOR);
ia64_srlz_d();
pmu_conf.pfm_is_disabled = 1;
printk("perfmon: version %s\n", PFM_VERSION);
- printk("perfmon: Interrupt vectored to %u\n", PERFMON_IRQ);
+ printk("perfmon: Interrupt vectored to %u\n", IA64_PERFMON_VECTOR);
if ((status=ia64_pal_perf_mon_info(pmu_conf.impl_regs, &pm_info)) != 0) {
printk("perfmon: PAL call failed (%ld)\n", status);
@@ -1693,7 +1688,7 @@
void
perfmon_init_percpu (void)
{
- ia64_set_pmv(PERFMON_IRQ);
+ ia64_set_pmv(IA64_PERFMON_VECTOR);
ia64_srlz_d();
}
@@ -1738,7 +1733,7 @@
* This will cause the interrupt handler to do nothing in case an overflow
* interrupt was in-flight
* This also guarantees that pmc0 will contain the final state
- * It virtually gives us full control on overflow processing from that point
+ * It virtually gives us full control over overflow processing from that point
* on.
* It must be an atomic operation.
*/
@@ -1771,7 +1766,7 @@
* next time the task exits from the kernel.
*/
if (pmc0 & ~0x1) {
- if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", owner, ta);
+ if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", (void *)owner, (void *)ta);
printk(__FUNCTION__" Warning: pmc[0]=0x%lx explicit call\n", pmc0);
pmc0 = update_counters(owner, pmc0, NULL);
@@ -1931,7 +1926,7 @@
/*
* restore PSR for context switch to save
*/
- __asm__ __volatile__ ("mov psr.l=%0;;"::"r"(psr): "memory");
+ __asm__ __volatile__ ("mov psr.l=%0;;srlz.i;"::"r"(psr): "memory");
/*
* This loop flushes the PMD into the PFM context.
@@ -2054,7 +2049,7 @@
/* link with new task */
th->pfm_context = nctx;
- DBprintk((" nctx=%p for process %d\n", nctx, task->pid));
+ DBprintk((" nctx=%p for process %d\n", (void *)nctx, task->pid));
/*
* the copy_thread routine automatically clears
@@ -2092,7 +2087,7 @@
DBprintk((" pid %d: cleaning task %d sampling buffer\n", current->pid, task->pid ));
}
}
- DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, task->pid, ctx));
+ DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, task->pid, (void *)ctx));
pfm_context_free(ctx);
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/setup.c linux-2.4.2-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/setup.c Wed Feb 28 14:46:42 2001
@@ -263,8 +263,8 @@
ia64_mca_init();
#endif
- paging_init();
platform_setup(cmdline_p);
+ paging_init();
}
/*
@@ -417,7 +417,7 @@
ia64_mmu_init();
- identify_cpu(current_cpu_data);
+ identify_cpu(local_cpu_data);
#ifdef CONFIG_IA32_SUPPORT
/* initialize global ia32 state - CR0 and CR4 */
@@ -456,5 +456,5 @@
printk ("cpu_init: PAL RSE info failed, assuming 96 physical stacked regs\n");
num_phys_stacked = 96;
}
- current_cpu_data->phys_stacked_size_p8 = num_phys_stacked*8 + 8;
+ local_cpu_data->phys_stacked_size_p8 = num_phys_stacked*8 + 8;
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/smp.c linux-2.4.2-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/smp.c Wed Feb 28 14:47:26 2001
@@ -159,11 +159,11 @@
void
handle_IPI (int irq, void *dev_id, struct pt_regs *regs)
{
- unsigned long *pending_ipis = &current_cpu_data->ipi_operation;
+ unsigned long *pending_ipis = &local_cpu_data->ipi_operation;
unsigned long ops;
/* Count this now; we may make a call that never returns. */
- current_cpu_data->ipi_count++;
+ local_cpu_data->ipi_count++;
mb(); /* Order interrupt and bit testing. */
while ((ops = xchg(pending_ipis, 0)) != 0) {
@@ -278,7 +278,7 @@
return;
set_bit(op, &cpu_data[dest_cpu].ipi_operation);
- platform_send_ipi(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(dest_cpu, IA64_IPI_VECTOR, IA64_IPI_DM_INT, 0);
}
static inline void
@@ -320,11 +320,23 @@
}
#ifndef CONFIG_ITANIUM_PTCG
+
void
smp_send_flush_tlb (void)
{
send_IPI_allbutself(IPI_FLUSH_TLB);
}
+
+void
+smp_resend_flush_tlb(void)
+{
+ /*
+ * Really need a null IPI but since this rarely should happen & since this code
+ * will go away, lets not add one.
+ */
+ send_IPI_allbutself(IPI_RESCHEDULE);
+}
+
#endif /* !CONFIG_ITANIUM_PTCG */
/*
@@ -427,7 +439,7 @@
int i;
for (i = 0; i < smp_num_cpus; i++) {
if (i != smp_processor_id())
- platform_send_ipi(i, IPI_IRQ, IA64_IPI_DM_INT, 0);
+ platform_send_ipi(i, IA64_IPI_VECTOR, IA64_IPI_DM_INT, 0);
}
goto retry;
#else
@@ -460,8 +472,8 @@
static inline void __init
smp_setup_percpu_timer(void)
{
- current_cpu_data->prof_counter = 1;
- current_cpu_data->prof_multiplier = 1;
+ local_cpu_data->prof_counter = 1;
+ local_cpu_data->prof_multiplier = 1;
}
void
@@ -469,8 +481,8 @@
{
int user = user_mode(regs);
- if (--current_cpu_data->prof_counter <= 0) {
- current_cpu_data->prof_counter = current_cpu_data->prof_multiplier;
+ if (--local_cpu_data->prof_counter <= 0) {
+ local_cpu_data->prof_counter = local_cpu_data->prof_multiplier;
update_process_times(user);
}
}
@@ -506,11 +518,10 @@
#ifdef CONFIG_PERFMON
perfmon_init_percpu();
#endif
-
local_irq_enable(); /* Interrupts have been off until now */
calibrate_delay();
- current_cpu_data->loops_per_jiffy = loops_per_jiffy;
+ local_cpu_data->loops_per_jiffy = loops_per_jiffy;
/* allow the master to continue */
set_bit(cpu, &cpu_callin_map);
@@ -602,7 +613,7 @@
unsigned long bogosum;
/* on the BP, the kernel already called calibrate_delay_loop() in init/main.c */
- current_cpu_data->loops_per_jiffy = loops_per_jiffy;
+ local_cpu_data->loops_per_jiffy = loops_per_jiffy;
#if 0
smp_tune_scheduling();
#endif
@@ -680,7 +691,7 @@
* Assume that CPU's have been discovered by some platform-dependant
* interface. For SoftSDV/Lion, that would be ACPI.
*
- * Setup of the IPI irq handler is done in irq.c:init_IRQ_SMP().
+ * Setup of the IPI irq handler is done in irq.c:init_IRQ().
*
* This also registers the AP OS_MC_REDVEZ address with SAL.
*/
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/time.c linux-2.4.2-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/time.c Wed Feb 28 14:47:38 2001
@@ -74,7 +74,7 @@
unsigned long now = ia64_get_itc(), last_tick;
unsigned long elapsed_cycles, lost = jiffies - wall_jiffies;
- last_tick = (current_cpu_data->itm_next - (lost+1)*current_cpu_data->itm_delta);
+ last_tick = (local_cpu_data->itm_next - (lost+1)*local_cpu_data->itm_delta);
# if 1
if ((long) (now - last_tick) < 0) {
printk("Yikes: now < last_tick (now=0x%lx,last_tick=%lx)! No can do.\n",
@@ -83,7 +83,7 @@
}
# endif
elapsed_cycles = now - last_tick;
- return (elapsed_cycles*current_cpu_data->usec_per_cyc) >> IA64_USEC_PER_CYC_SHIFT;
+ return (elapsed_cycles*local_cpu_data->usec_per_cyc) >> IA64_USEC_PER_CYC_SHIFT;
#endif
}
@@ -144,7 +144,7 @@
{
unsigned long new_itm;
- new_itm = current_cpu_data->itm_next;
+ new_itm = local_cpu_data->itm_next;
if (!time_after(ia64_get_itc(), new_itm))
printk("Oops: timer tick before it's due (itc=%lx,itm=%lx)\n",
@@ -174,8 +174,8 @@
write_unlock(&xtime_lock);
}
- new_itm += current_cpu_data->itm_delta;
- current_cpu_data->itm_next = new_itm;
+ new_itm += local_cpu_data->itm_delta;
+ local_cpu_data->itm_next = new_itm;
if (time_after(new_itm, ia64_get_itc()))
break;
}
@@ -188,8 +188,8 @@
* let our clock run too fast (with the potentially devastating effect of
* losing monotony of time).
*/
- while (!time_after(new_itm, ia64_get_itc() + current_cpu_data->itm_delta/2))
- new_itm += current_cpu_data->itm_delta;
+ while (!time_after(new_itm, ia64_get_itc() + local_cpu_data->itm_delta/2))
+ new_itm += local_cpu_data->itm_delta;
ia64_set_itm(new_itm);
/* double check, in case we got hit by a (slow) PMI: */
} while (time_after_eq(ia64_get_itc(), new_itm));
@@ -205,9 +205,9 @@
unsigned long shift = 0, delta;
/* arrange for the cycle counter to generate a timer interrupt: */
- ia64_set_itv(TIMER_IRQ);
+ ia64_set_itv(IA64_TIMER_VECTOR);
- delta = current_cpu_data->itm_delta;
+ delta = local_cpu_data->itm_delta;
/*
* Stagger the timer tick for each CPU so they don't occur all at (almost) the
* same time:
@@ -216,8 +216,8 @@
unsigned long hi = 1UL << ia64_fls(cpu);
shift = (2*(cpu - hi) + 1) * delta/hi/2;
}
- current_cpu_data->itm_next = ia64_get_itc() + delta + shift;
- ia64_set_itm(current_cpu_data->itm_next);
+ local_cpu_data->itm_next = ia64_get_itc() + delta + shift;
+ ia64_set_itm(local_cpu_data->itm_next);
}
void __init
@@ -258,16 +258,16 @@
itc_ratio.den = 1; /* avoid division by zero */
itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den;
- current_cpu_data->itm_delta = (itc_freq + HZ/2) / HZ;
+ local_cpu_data->itm_delta = (itc_freq + HZ/2) / HZ;
printk("CPU %d: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
smp_processor_id(),
platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000);
- current_cpu_data->proc_freq = (platform_base_freq*proc_ratio.num)/proc_ratio.den;
- current_cpu_data->itc_freq = itc_freq;
- current_cpu_data->cyc_per_usec = (itc_freq + 500000) / 1000000;
- current_cpu_data->usec_per_cyc = ((1000000UL<<IA64_USEC_PER_CYC_SHIFT)
+ local_cpu_data->proc_freq = (platform_base_freq*proc_ratio.num)/proc_ratio.den;
+ local_cpu_data->itc_freq = itc_freq;
+ local_cpu_data->cyc_per_usec = (itc_freq + 500000) / 1000000;
+ local_cpu_data->usec_per_cyc = ((1000000UL<<IA64_USEC_PER_CYC_SHIFT)
+ itc_freq/2)/itc_freq;
/* Setup the CPU local timer tick */
@@ -283,11 +283,7 @@
void __init
time_init (void)
{
- /* we can't do request_irq() here because the kmalloc() would fail... */
- irq_desc[TIMER_IRQ].status |= IRQ_PER_CPU;
- irq_desc[TIMER_IRQ].handler = &irq_type_ia64_sapic;
- setup_irq(TIMER_IRQ, &timer_irqaction);
-
+ register_percpu_irq(IA64_TIMER_VECTOR, &timer_irqaction);
efi_gettimeofday(&xtime);
ia64_init_itm();
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.2-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/unwind.c Wed Feb 28 14:48:03 2001
@@ -1613,7 +1613,7 @@
case UNW_INSN_LOAD:
#if UNW_DEBUG
- if ((s[val] & (current_cpu_data->unimpl_va_mask | 0x7)) != 0
+ if ((s[val] & (local_cpu_data->unimpl_va_mask | 0x7)) != 0
|| s[val] < TASK_SIZE)
{
debug(1, "unwind: rejecting bad psp=0x%lx\n", s[val]);
@@ -1647,7 +1647,7 @@
int have_write_lock = 0;
struct unw_script *scr;
- if ((info->ip & (current_cpu_data->unimpl_va_mask | 0xf)) || info->ip < TASK_SIZE) {
+ if ((info->ip & (local_cpu_data->unimpl_va_mask | 0xf)) || info->ip < TASK_SIZE) {
/* don't let obviously bad addresses pollute the cache */
debug(1, "unwind: rejecting bad ip=0x%lx\n", info->ip);
info->rp_loc = 0;
@@ -1945,7 +1945,7 @@
return -1;
info->ip = read_reg(info, sol - 2, &is_nat);
- if (is_nat || (info->ip & (current_cpu_data->unimpl_va_mask | 0xf)))
+ if (is_nat || (info->ip & (local_cpu_data->unimpl_va_mask | 0xf)))
/* reject let obviously bad addresses */
return -1;
diff -urN --ignore-all-space linux-davidm/arch/ia64/mm/init.c linux-2.4.2-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/mm/init.c Wed Feb 28 14:48:13 2001
@@ -1,8 +1,8 @@
/*
* Initialize MMU support.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
#include <linux/kernel.h>
@@ -368,7 +368,7 @@
# define vmlpt_bits (impl_va_bits - PAGE_SHIFT + pte_bits)
# define POW2(n) (1ULL << (n))
- impl_va_bits = ffz(~(current_cpu_data->unimpl_va_mask | (7UL << 61)));
+ impl_va_bits = ffz(~(local_cpu_data->unimpl_va_mask | (7UL << 61)));
if (impl_va_bits < 51 || impl_va_bits > 61)
panic("CPU has bogus IMPL_VA_MSB value of %lu!\n", impl_va_bits - 1);
diff -urN --ignore-all-space linux-davidm/arch/ia64/mm/tlb.c linux-2.4.2-lia/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/arch/ia64/mm/tlb.c Wed Feb 28 14:48:23 2001
@@ -71,7 +71,7 @@
if (!(flags & IA64_PSR_I)) {
saved_tpr = ia64_get_tpr();
ia64_srlz_d();
- ia64_set_tpr(IPI_IRQ - 16);
+ ia64_set_tpr(IA64_IPI_VECTOR - 16);
ia64_srlz_d();
local_irq_enable();
}
@@ -97,13 +97,14 @@
/*
* Wait for other CPUs to finish purging entries.
*/
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
{
+ extern void smp_resend_flush_tlb (void);
unsigned long start = ia64_get_itc();
+
while (atomic_read(&flush_cpu_count) > 0) {
- if ((ia64_get_itc() - start) > 40000UL) {
- atomic_set(&flush_cpu_count, smp_num_cpus - 1);
- smp_send_flush_tlb();
+ if ((ia64_get_itc() - start) > 400000UL) {
+ smp_resend_flush_tlb();
start = ia64_get_itc();
}
}
@@ -166,11 +167,11 @@
{
unsigned long i, j, flags, count0, count1, stride0, stride1, addr;
- addr = current_cpu_data->ptce_base;
- count0 = current_cpu_data->ptce_count[0];
- count1 = current_cpu_data->ptce_count[1];
- stride0 = current_cpu_data->ptce_stride[0];
- stride1 = current_cpu_data->ptce_stride[1];
+ addr = local_cpu_data->ptce_base;
+ count0 = local_cpu_data->ptce_count[0];
+ count1 = local_cpu_data->ptce_count[1];
+ stride0 = local_cpu_data->ptce_stride[0];
+ stride1 = local_cpu_data->ptce_stride[1];
local_irq_save(flags);
for (i = 0; i < count0; ++i) {
@@ -249,11 +250,11 @@
ia64_ptce_info_t ptce_info;
ia64_get_ptce(&ptce_info);
- current_cpu_data->ptce_base = ptce_info.base;
- current_cpu_data->ptce_count[0] = ptce_info.count[0];
- current_cpu_data->ptce_count[1] = ptce_info.count[1];
- current_cpu_data->ptce_stride[0] = ptce_info.stride[0];
- current_cpu_data->ptce_stride[1] = ptce_info.stride[1];
+ local_cpu_data->ptce_base = ptce_info.base;
+ local_cpu_data->ptce_count[0] = ptce_info.count[0];
+ local_cpu_data->ptce_count[1] = ptce_info.count[1];
+ local_cpu_data->ptce_stride[0] = ptce_info.stride[0];
+ local_cpu_data->ptce_stride[1] = ptce_info.stride[1];
__flush_tlb_all(); /* nuke left overs from bootstrapping... */
}
diff -urN --ignore-all-space linux-davidm/drivers/acpi/acpiconf.c linux-2.4.2-lia/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/drivers/acpi/acpiconf.c Wed Feb 28 20:54:48 2001
@@ -353,9 +353,9 @@
if (prt) {
for ( ; prt->length > 0; pvec++) {
pvec->bus = (UINT16)i;
- pvec->pci_id = prt->data.address;
- pvec->pin = (UINT8)prt->data.pin;
- pvec->irq = (UINT8)prt->data.source_index;
+ pvec->pci_id = prt->address;
+ pvec->pin = (UINT8)prt->pin;
+ pvec->irq = (UINT8)prt->source_index;
prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
//prt = ROUND_PTR_UP_TO_4(prt, PCI_ROUTING_TABLE);
diff -urN --ignore-all-space linux-davidm/drivers/acpi/os.c linux-2.4.2-lia/drivers/acpi/os.c
--- linux-davidm/drivers/acpi/os.c Wed Feb 28 22:20:36 2001
+++ linux-2.4.2-lia/drivers/acpi/os.c Wed Feb 28 20:54:33 2001
@@ -244,7 +244,7 @@
acpi_irq_handler = NULL;
- desc = irq_desc + irq;
+ desc = irq_desc(irq);
spin_lock_irqsave(&desc->lock,flags);
p = &desc->action;
for (;;) {
diff -urN --ignore-all-space linux-davidm/drivers/char/simserial.c linux-2.4.2-lia/drivers/char/simserial.c
--- linux-davidm/drivers/char/simserial.c Wed Feb 28 22:20:37 2001
+++ linux-2.4.2-lia/drivers/char/simserial.c Wed Feb 28 20:54:05 2001
@@ -153,6 +153,7 @@
} else if ( seen_esc == 2 ) {
if ( ch == 'P' ) show_state(); /* F1 key */
if ( ch == 'Q' ) show_buffers(); /* F2 key */
+
seen_esc = 0;
continue;
}
diff -urN --ignore-all-space linux-davidm/fs/proc/base.c linux-2.4.2-lia/fs/proc/base.c
--- linux-davidm/fs/proc/base.c Wed Feb 28 22:20:39 2001
+++ linux-2.4.2-lia/fs/proc/base.c Wed Feb 28 20:52:18 2001
@@ -400,8 +400,10 @@
switch (orig) {
case 0:
file->f_pos = offset;
+ break;
case 1:
file->f_pos += offset;
+ break;
default:
return -EINVAL;
}
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/delay.h linux-2.4.2-lia/include/asm-ia64/delay.h
--- linux-davidm/include/asm-ia64/delay.h Wed Feb 28 22:20:39 2001
+++ linux-2.4.2-lia/include/asm-ia64/delay.h Wed Feb 28 20:52:04 2001
@@ -76,7 +76,7 @@
udelay (unsigned long usecs)
{
unsigned long start = ia64_get_itc();
- unsigned long cycles = usecs*current_cpu_data->cyc_per_usec;
+ unsigned long cycles = usecs*local_cpu_data->cyc_per_usec;
while (ia64_get_itc() - start < cycles)
/* skip */;
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/hardirq.h linux-2.4.2-lia/include/asm-ia64/hardirq.h
--- linux-davidm/include/asm-ia64/hardirq.h Wed Feb 28 22:20:39 2001
+++ linux-2.4.2-lia/include/asm-ia64/hardirq.h Wed Feb 28 20:51:49 2001
@@ -18,24 +18,31 @@
*/
#define softirq_active(cpu) (cpu_data[cpu].softirq.active)
#define softirq_mask(cpu) (cpu_data[cpu].softirq.mask)
-#define local_irq_count(cpu) (cpu_data[cpu].irq_stat.f.irq_count)
-#define local_bh_count(cpu) (cpu_data[cpu].irq_stat.f.bh_count)
+#define irq_count(cpu) (cpu_data[cpu].irq_stat.f.irq_count)
+#define bh_count(cpu) (cpu_data[cpu].irq_stat.f.bh_count)
#define syscall_count(cpu) /* unused on IA-64 */
#define nmi_count(cpu) 0
+#define local_softirq_active() (local_cpu_data->softirq.active)
+#define local_softirq_mask() (local_cpu_data->softirq.mask)
+#define local_irq_count() (local_cpu_data->irq_stat.f.irq_count)
+#define local_bh_count() (local_cpu_data->irq_stat.f.bh_count)
+#define local_syscall_count() /* unused on IA-64 */
+#define local_nmi_count() 0
+
/*
* Are we in an interrupt context? Either doing bottom half or hardware interrupt
* processing?
*/
-#define in_interrupt() (current_cpu_data->irq_stat.irq_and_bh_counts != 0)
-#define in_irq() (current_cpu_data->irq_stat.f.irq_count != 0)
+#define in_interrupt() (local_cpu_data->irq_stat.irq_and_bh_counts != 0)
+#define in_irq() (local_cpu_data->irq_stat.f.irq_count != 0)
#ifndef CONFIG_SMP
-# define hardirq_trylock(cpu) (local_irq_count(cpu) == 0)
-# define hardirq_endlock(cpu) do { } while (0)
+# define local_hardirq_trylock() (local_irq_count() == 0)
+# define local_hardirq_endlock() do { } while (0)
-# define irq_enter(cpu, irq) (local_irq_count(cpu)++)
-# define irq_exit(cpu, irq) (local_irq_count(cpu)--)
+# define local_irq_enter(irq) (local_irq_count(cpu)++)
+# define local_irq_exit(irq) (local_irq_count(cpu)--)
# define synchronize_irq() barrier()
#else
@@ -52,7 +59,7 @@
int i;
for (i = 0; i < smp_num_cpus; i++)
- if (local_irq_count(i))
+ if (irq_count(i))
return 1;
return 0;
}
@@ -68,9 +75,9 @@
}
static inline void
-irq_enter (int cpu, int irq)
+local_irq_enter (int irq)
{
- local_irq_count(cpu)++;
+ local_irq_count()++;
while (test_bit(0,&global_irq_lock)) {
/* nothing */;
@@ -78,18 +85,18 @@
}
static inline void
-irq_exit (int cpu, int irq)
+local_irq_exit (int irq)
{
- local_irq_count(cpu)--;
+ local_irq_count()--;
}
static inline int
-hardirq_trylock (int cpu)
+local_hardirq_trylock (void)
{
- return !local_irq_count(cpu) && !test_bit(0,&global_irq_lock);
+ return !local_irq_count() && !test_bit(0,&global_irq_lock);
}
-#define hardirq_endlock(cpu) do { } while (0)
+#define local_hardirq_endlock() do { } while (0)
extern void synchronize_irq (void);
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/hw_irq.h linux-2.4.2-lia/include/asm-ia64/hw_irq.h
--- linux-davidm/include/asm-ia64/hw_irq.h Wed Feb 28 22:20:39 2001
+++ linux-2.4.2-lia/include/asm-ia64/hw_irq.h Wed Feb 28 20:51:32 2001
@@ -13,6 +13,8 @@
#include <asm/ptrace.h>
#include <asm/smp.h>
+typedef u8 ia64_vector;
+
/*
* 0 special
*
@@ -28,29 +30,30 @@
*/
#define IA64_MIN_VECTORED_IRQ 16
#define IA64_MAX_VECTORED_IRQ 255
+#define IA64_NUM_VECTORS 256
-#define IA64_SPURIOUS_INT 0x0f
+#define IA64_SPURIOUS_INT_VECTOR 0x0f
/*
* Vectors 0x10-0x1f are used for low priority interrupts, e.g. CMCI.
*/
-#define PCE_IRQ 0x1e /* platform corrected error interrupt vector */
-#define CMC_IRQ 0x1f /* correctable machine-check interrupt vector */
+#define IA64_PCE_VECTOR 0x1e /* platform corrected error interrupt vector */
+#define IA64_CMC_VECTOR 0x1f /* correctable machine-check interrupt vector */
/*
* Vectors 0x20-0x2f are reserved for legacy ISA IRQs.
*/
-#define FIRST_DEVICE_IRQ 0x30
-#define LAST_DEVICE_IRQ 0xe7
+#define IA64_FIRST_DEVICE_VECTOR 0x30
+#define IA64_LAST_DEVICE_VECTOR 0xe7
-#define MCA_RENDEZ_IRQ 0xe8 /* MCA rendez interrupt */
-#define PERFMON_IRQ 0xee /* performanc monitor interrupt vector */
-#define TIMER_IRQ 0xef /* use highest-prio group 15 interrupt for timer */
-#define MCA_WAKEUP_IRQ 0xf0 /* MCA wakeup interrupt (must be >MCA_RENDEZ_IRQ) */
-#define IPI_IRQ 0xfe /* inter-processor interrupt vector */
+#define IA64_MCA_RENDEZ_VECTOR 0xe8 /* MCA rendez interrupt */
+#define IA64_PERFMON_VECTOR 0xee /* performanc monitor interrupt vector */
+#define IA64_TIMER_VECTOR 0xef /* use highest-prio group 15 interrupt for timer */
+#define IA64_MCA_WAKEUP_VECTOR 0xf0 /* MCA wakeup (must be >MCA_RENDEZ_VECTOR) */
+#define IA64_IPI_VECTOR 0xfe /* inter-processor interrupt vector */
/* IA64 inter-cpu interrupt related definitions */
-#define IPI_DEFAULT_BASE_ADDR 0xfee00000
+#define IA64_IPI_DEFAULT_BASE_ADDR 0xfee00000
/* Delivery modes for inter-cpu interrupts */
enum {
@@ -70,11 +73,71 @@
extern int ia64_alloc_irq (void); /* allocate a free irq */
extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
+extern void register_percpu_irq (ia64_vector vec, struct irqaction *action);
static inline void
hw_resend_irq (struct hw_interrupt_type *h, unsigned int vector)
{
platform_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
+}
+
+/*
+ * Default implementations for the irq-descriptor API:
+ */
+
+extern struct irq_desc _irq_desc[NR_IRQS];
+
+static inline struct irq_desc *
+__ia64_irq_desc (unsigned int irq)
+{
+ return _irq_desc + irq;
+}
+
+static inline ia64_vector
+__ia64_irq_to_vector (unsigned int irq)
+{
+ return (ia64_vector) irq;
+}
+
+static inline unsigned int
+__ia64_local_vector_to_irq (ia64_vector vec)
+{
+ return (unsigned int) vec;
+}
+
+/*
+ * Next follows the irq descriptor interface. On IA-64, each CPU supports 256 interrupt
+ * vectors. On smaller systems, there is a one-to-one correspondence between interrupt
+ * vectors and the Linux irq numbers. However, larger systems may have multiple interrupt
+ * domains meaning that the translation from vector number to irq number depends on the
+ * interrupt domain that a CPU belongs to. This API abstracts such platform-dependent
+ * differences and provides a uniform means to translate between vector and irq numbers
+ * and to obtain the irq descriptor for a given irq number.
+ */
+
+/* Return a pointer to the irq descriptor for IRQ. */
+static inline struct irq_desc *
+irq_desc (int irq)
+{
+ return platform_irq_desc(irq);
+}
+
+/* Extract the IA-64 vector that corresponds to IRQ. */
+static inline ia64_vector
+irq_to_vector (int irq)
+{
+ return platform_irq_to_vector(irq);
+}
+
+/*
+ * Convert the local IA-64 vector to the corresponding irq number. This translation is
+ * done in the context of the interrupt domain that the currently executing CPU belongs
+ * to.
+ */
+static inline unsigned int
+local_vector_to_irq (ia64_vector vec)
+{
+ return platform_local_vector_to_irq(vec);
}
#endif /* _ASM_IA64_HW_IRQ_H */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/machvec.h linux-2.4.2-lia/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.2-lia/include/asm-ia64/machvec.h Wed Feb 28 20:51:15 2001
@@ -4,8 +4,8 @@
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) Srinivasa Thirumalachar <sprasad@engr.sgi.com>
* Copyright (C) Vijay Chander <vijay@engr.sgi.com>
- * Copyright (C) 1999-2000 Hewlett-Packard Co.
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co.
+ * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#ifndef _ASM_IA64_MACHVEC_H
#define _ASM_IA64_MACHVEC_H
@@ -17,6 +17,7 @@
struct pci_dev;
struct pt_regs;
struct scatterlist;
+struct irq_desc;
typedef void ia64_mv_setup_t (char **);
typedef void ia64_mv_irq_init_t (void);
@@ -27,6 +28,9 @@
typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
typedef void ia64_mv_log_print_t (void);
typedef void ia64_mv_send_ipi_t (int, int, int, int);
+typedef struct irq_desc *ia64_mv_irq_desc (unsigned int);
+typedef u8 ia64_mv_irq_to_vector (u8);
+typedef unsigned int ia64_mv_local_vector_to_irq (u8 vector);
/* PCI-DMA interface: */
typedef void ia64_mv_pci_dma_init (void);
@@ -88,6 +92,9 @@
# define platform_pci_dma_sync_single ia64_mv.sync_single
# define platform_pci_dma_sync_sg ia64_mv.sync_sg
# define platform_pci_dma_address ia64_mv.dma_address
+# define platform_irq_desc ia64_mv.irq_desc
+# define platform_irq_to_vector ia64_mv.irq_to_vector
+# define platform_local_vector_to_irq ia64_mv.local_vector_to_irq
# define platform_inb ia64_mv.inb
# define platform_inw ia64_mv.inw
# define platform_inl ia64_mv.inl
@@ -117,6 +124,9 @@
ia64_mv_pci_dma_sync_single *sync_single;
ia64_mv_pci_dma_sync_sg *sync_sg;
ia64_mv_pci_dma_address *dma_address;
+ ia64_mv_irq_desc *irq_desc;
+ ia64_mv_irq_to_vector *irq_to_vector;
+ ia64_mv_local_vector_to_irq *local_vector_to_irq;
ia64_mv_inb_t *inb;
ia64_mv_inw_t *inw;
ia64_mv_inl_t *inl;
@@ -147,6 +157,9 @@
platform_pci_dma_sync_single, \
platform_pci_dma_sync_sg, \
platform_pci_dma_address, \
+ platform_irq_desc, \
+ platform_irq_to_vector, \
+ platform_local_vector_to_irq, \
platform_inb, \
platform_inw, \
platform_inl, \
@@ -233,6 +246,15 @@
#endif
#ifndef platform_pci_dma_address
# define platform_pci_dma_address swiotlb_dma_address
+#endif
+#ifndef platform_irq_desc
+# define platform_irq_desc __ia64_irq_desc
+#endif
+#ifndef platform_irq_to_vector
+# define platform_irq_to_vector __ia64_irq_to_vector
+#endif
+#ifndef platform_local_vector_to_irq
+# define platform_local_vector_to_irq __ia64_local_vector_to_irq
#endif
#ifndef platform_inb
# define platform_inb __ia64_inb
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/machvec_sn1.h linux-2.4.2-lia/include/asm-ia64/machvec_sn1.h
--- linux-davidm/include/asm-ia64/machvec_sn1.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.2-lia/include/asm-ia64/machvec_sn1.h Wed Feb 28 20:50:55 2001
@@ -41,6 +41,7 @@
#define platform_outb sn1_outb
#define platform_outw sn1_outw
#define platform_outl sn1_outl
+#define platform_pci_dma_init machvec_noop
#define platform_pci_alloc_consistent sn1_pci_alloc_consistent
#define platform_pci_free_consistent sn1_pci_free_consistent
#define platform_pci_map_single sn1_pci_map_single
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/mca.h linux-2.4.2-lia/include/asm-ia64/mca.h
--- linux-davidm/include/asm-ia64/mca.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.2-lia/include/asm-ia64/mca.h Wed Feb 28 20:50:44 2001
@@ -18,7 +18,6 @@
#include <asm/param.h>
#include <asm/sal.h>
#include <asm/processor.h>
-#include <asm/hw_irq.h>
/* These are the return codes from all the IA64_MCA specific interfaces */
typedef int ia64_mca_return_code_t;
@@ -30,11 +29,6 @@
#define IA64_MCA_RENDEZ_TIMEOUT (100 * HZ) /* 1000 milliseconds */
-/* Interrupt vectors reserved for MC handling. */
-#define IA64_MCA_RENDEZ_INT_VECTOR MCA_RENDEZ_IRQ /* Rendez interrupt */
-#define IA64_MCA_WAKEUP_INT_VECTOR MCA_WAKEUP_IRQ /* Wakeup interrupt */
-#define IA64_MCA_CMC_INT_VECTOR CMC_IRQ /* Correctable machine check interrupt */
-
#define IA64_CMC_INT_DISABLE 0
#define IA64_CMC_INT_ENABLE 1
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/offsets.h linux-2.4.2-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Wed Feb 28 22:20:39 2001
+++ linux-2.4.2-lia/include/asm-ia64/offsets.h Wed Feb 28 20:50:30 2001
@@ -11,7 +11,7 @@
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 3376 /* 0xd30 */
+#define IA64_TASK_SIZE 3904 /* 0xf40 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -24,7 +24,8 @@
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
#define IA64_TASK_THREAD_OFFSET 1456 /* 0x5b0 */
#define IA64_TASK_THREAD_KSP_OFFSET 1456 /* 0x5b0 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 3224 /* 0xc98 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 3752 /* 0xea8 */
+#define IA64_TASK_PFM_NOTIFY_OFFSET 3648 /* 0xe40 */
#define IA64_TASK_PID_OFFSET 196 /* 0xc4 */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/pgalloc.h linux-2.4.2-lia/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Wed Feb 28 22:20:40 2001
+++ linux-2.4.2-lia/include/asm-ia64/pgalloc.h Wed Feb 28 20:50:15 2001
@@ -8,8 +8,8 @@
* This hopefully works with any (fixed) ia-64 page-size, as defined
* in <asm/page.h> (currently 8192).
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 2000, Goutham Rao <goutham.rao@intel.com>
*/
@@ -28,10 +28,10 @@
* a lot of work and caused unnecessary memory traffic. How broken...
* We fix this by caching them.
*/
-#define pgd_quicklist (current_cpu_data->pgd_quick)
-#define pmd_quicklist (current_cpu_data->pmd_quick)
-#define pte_quicklist (current_cpu_data->pte_quick)
-#define pgtable_cache_size (current_cpu_data->pgtable_cache_sz)
+#define pgd_quicklist (local_cpu_data->pgd_quick)
+#define pmd_quicklist (local_cpu_data->pmd_quick)
+#define pte_quicklist (local_cpu_data->pte_quick)
+#define pgtable_cache_size (local_cpu_data->pgtable_cache_sz)
static __inline__ pgd_t*
get_pgd_slow (void)
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/pgtable.h linux-2.4.2-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Wed Feb 28 22:20:40 2001
+++ linux-2.4.2-lia/include/asm-ia64/pgtable.h Wed Feb 28 20:49:58 2001
@@ -175,7 +175,7 @@
static inline long
ia64_phys_addr_valid (unsigned long addr)
{
- return (addr & (current_cpu_data->unimpl_pa_mask)) == 0;
+ return (addr & (local_cpu_data->unimpl_pa_mask)) == 0;
}
/*
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/processor.h linux-2.4.2-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Wed Feb 28 22:20:40 2001
+++ linux-2.4.2-lia/include/asm-ia64/processor.h Wed Feb 28 20:49:47 2001
@@ -283,7 +283,11 @@
#endif
} __attribute__ ((aligned (PAGE_SIZE))) ;
-#define current_cpu_data ((struct cpuinfo_ia64 *) PERCPU_ADDR)
+/*
+ * The "local" data pointer. It points to the per-CPU data of the currently executing
+ * CPU, much like "current" points to the per-task data of the currently executing task.
+ */
+#define local_cpu_data ((struct cpuinfo_ia64 *) PERCPU_ADDR)
extern struct cpuinfo_ia64 cpu_data[NR_CPUS];
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/softirq.h linux-2.4.2-lia/include/asm-ia64/softirq.h
--- linux-davidm/include/asm-ia64/softirq.h Fri Mar 10 15:24:02 2000
+++ linux-2.4.2-lia/include/asm-ia64/softirq.h Wed Feb 28 20:47:06 2001
@@ -2,17 +2,14 @@
#define _ASM_IA64_SOFTIRQ_H
/*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/hardirq.h>
-#define cpu_bh_disable(cpu) do { local_bh_count(cpu)++; barrier(); } while (0)
-#define cpu_bh_enable(cpu) do { barrier(); local_bh_count(cpu)--; } while (0)
+#define local_bh_disable() do { local_bh_count()++; barrier(); } while (0)
+#define local_bh_enable() do { barrier(); local_bh_count()--; } while (0)
-#define local_bh_disable() cpu_bh_disable(smp_processor_id())
-#define local_bh_enable() cpu_bh_enable(smp_processor_id())
-
-#define in_softirq() (local_bh_count(smp_processor_id()) != 0)
+#define in_softirq() (local_bh_count() != 0)
#endif /* _ASM_IA64_SOFTIRQ_H */
diff -urN --ignore-all-space linux-davidm/include/linux/irq.h linux-2.4.2-lia/include/linux/irq.h
--- linux-davidm/include/linux/irq.h Wed Feb 28 22:20:40 2001
+++ linux-2.4.2-lia/include/linux/irq.h Wed Feb 28 20:46:15 2001
@@ -44,15 +44,13 @@
*
* Pad this out to 32 bytes for cache and indexing reasons.
*/
-typedef struct {
+typedef struct irq_desc {
unsigned int status; /* IRQ status */
hw_irq_controller *handler;
struct irqaction *action; /* IRQ action list */
unsigned int depth; /* nested irq disables */
spinlock_t lock;
} ____cacheline_aligned irq_desc_t;
-
-extern irq_desc_t irq_desc [NR_IRQS];
#include <asm/hw_irq.h> /* the arch dependent stuff */
diff -urN --ignore-all-space linux-davidm/include/linux/irq_cpustat.h linux-2.4.2-lia/include/linux/irq_cpustat.h
--- linux-davidm/include/linux/irq_cpustat.h Sun Dec 31 11:10:16 2000
+++ linux-2.4.2-lia/include/linux/irq_cpustat.h Wed Feb 28 20:46:00 2001
@@ -28,8 +28,8 @@
/* arch independent irq_stat fields */
#define softirq_active(cpu) __IRQ_STAT((cpu), __softirq_active)
#define softirq_mask(cpu) __IRQ_STAT((cpu), __softirq_mask)
-#define local_irq_count(cpu) __IRQ_STAT((cpu), __local_irq_count)
-#define local_bh_count(cpu) __IRQ_STAT((cpu), __local_bh_count)
+#define irq_count(cpu) __IRQ_STAT((cpu), __irq_count)
+#define bh_count(cpu) __IRQ_STAT((cpu), __bh_count)
#define syscall_count(cpu) __IRQ_STAT((cpu), __syscall_count)
/* arch dependent irq_stat fields */
#define nmi_count(cpu) __IRQ_STAT((cpu), __nmi_count) /* i386, ia64 */
diff -urN --ignore-all-space linux-davidm/kernel/softirq.c linux-2.4.2-lia/kernel/softirq.c
--- linux-davidm/kernel/softirq.c Wed Feb 28 22:20:48 2001
+++ linux-2.4.2-lia/kernel/softirq.c Wed Feb 28 20:45:09 2001
@@ -245,18 +245,16 @@
static void bh_action(unsigned long nr)
{
- int cpu = smp_processor_id();
-
if (!spin_trylock(&global_bh_lock))
goto resched;
- if (!hardirq_trylock(cpu))
+ if (!local_hardirq_trylock())
goto resched_unlock;
if (bh_base[nr])
bh_base[nr]();
- hardirq_endlock(cpu);
+ local_hardirq_endlock();
spin_unlock(&global_bh_lock);
return;
diff -urN --ignore-all-space linux-davidm/kernel/timer.c linux-2.4.2-lia/kernel/timer.c
--- linux-davidm/kernel/timer.c Wed Feb 28 22:20:48 2001
+++ linux-2.4.2-lia/kernel/timer.c Wed Feb 28 20:44:50 2001
@@ -592,7 +592,7 @@
else
kstat.per_cpu_user[cpu] += user_tick;
kstat.per_cpu_system[cpu] += system;
- } else if (local_bh_count(cpu) || local_irq_count(cpu) > 1)
+ } else if (local_bh_count() || local_irq_count() > 1)
kstat.per_cpu_system[cpu] += system;
}
* Re: [Linux-ia64] kernel update (relative to 2.4.2)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (37 preceding siblings ...)
2001-03-01 7:12 ` [Linux-ia64] kernel update (relative to 2.4.2) David Mosberger
@ 2001-03-01 10:17 ` Andreas Schwab
2001-03-01 10:27 ` Andreas Schwab
` (176 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-03-01 10:17 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> The latest IA-64 patch is now available at:
|>
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
|>
|> in file linux-2.4.2-ia64-010228.diff*
This doesn't compile:
irq.c:458:28: too many arguments in invocation of macro "local_irq_count"
irq.c:474:27: too many arguments in invocation of macro "local_irq_count"
irq.c: In function `handle_IRQ_event':
irq.c:458: `local_irq_count' undeclared (first use in this function)
irq.c:458: (Each undeclared identifier is reported only once
irq.c:458: for each function it appears in.)
Andreas.
--
Andreas Schwab "And now for something
SuSE Labs completely different."
Andreas.Schwab@suse.de
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
* Re: [Linux-ia64] kernel update (relative to 2.4.2)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (38 preceding siblings ...)
2001-03-01 10:17 ` Andreas Schwab
@ 2001-03-01 10:27 ` Andreas Schwab
2001-03-01 15:29 ` David Mosberger
` (175 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-03-01 10:27 UTC (permalink / raw)
To: linux-ia64
Andreas Schwab <schwab@suse.de> writes:
|> David Mosberger <davidm@hpl.hp.com> writes:
|>
|> |> The latest IA-64 patch is now available at:
|> |>
|> |> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
|> |>
|> |> in file linux-2.4.2-ia64-010228.diff*
|>
|> This doesn't compile:
|>
|> irq.c:458:28: too many arguments in invocation of macro "local_irq_count"
|> irq.c:474:27: too many arguments in invocation of macro "local_irq_count"
|> irq.c: In function `handle_IRQ_event':
|> irq.c:458: `local_irq_count' undeclared (first use in this function)
|> irq.c:458: (Each undeclared identifier is reported only once
|> irq.c:458: for each function it appears in.)
Another bug:
irq_ia64.c: In function `init_IRQ':
irq_ia64.c:158: `ipi_irqaction' undeclared (first use in this function)
irq_ia64.c:158: (Each undeclared identifier is reported only once
irq_ia64.c:158: for each function it appears in.)
Andreas.
--
Andreas Schwab "And now for something
SuSE Labs completely different."
Andreas.Schwab@suse.de
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
* Re: [Linux-ia64] kernel update (relative to 2.4.2)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (39 preceding siblings ...)
2001-03-01 10:27 ` Andreas Schwab
@ 2001-03-01 15:29 ` David Mosberger
2001-03-02 12:26 ` Keith Owens
` (174 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-03-01 15:29 UTC (permalink / raw)
To: linux-ia64
OK, turns out there were two silly typos which I didn't catch because
the simulator configuration was accidentally set for CONFIG_SMP...
Here are the fixes:
--- arch/ia64/kernel/irq_ia64.c~ Wed Feb 28 12:55:54 2001
+++ arch/ia64/kernel/irq_ia64.c Thu Mar 1 07:08:01 2001
@@ -155,7 +155,9 @@
init_IRQ (void)
{
register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
+#ifdef CONFIG_SMP
register_percpu_irq(IA64_IPI_VECTOR, &ipi_irqaction);
+#endif
platform_irq_init();
}
--- include/asm-ia64/hardirq.h~ Wed Feb 28 22:35:39 2001
+++ include/asm-ia64/hardirq.h Thu Mar 1 07:07:18 2001
@@ -41,8 +41,8 @@
# define local_hardirq_trylock() (local_irq_count() == 0)
# define local_hardirq_endlock() do { } while (0)
-# define local_irq_enter(irq) (local_irq_count(cpu)++)
-# define local_irq_exit(irq) (local_irq_count(cpu)--)
+# define local_irq_enter(irq) (local_irq_count()++)
+# define local_irq_exit(irq) (local_irq_count()--)
# define synchronize_irq() barrier()
#else
I also updated the patch on linux.kernel.org. The fixed version
has a size of 456023 bytes.
--david
>>>>> On 01 Mar 2001 11:27:02 +0100, Andreas Schwab <schwab@suse.de> said:
Andreas> Andreas Schwab <schwab@suse.de> writes:
Andreas> |> David Mosberger <davidm@hpl.hp.com> writes:
Andreas> |> |> The latest IA-64 patch is now available at:
Andreas> |> |>     ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
Andreas> |> |> in file linux-2.4.2-ia64-010228.diff*
Andreas> |> This doesn't compile:
Andreas> |> irq.c:458:28: too many arguments in invocation of macro "local_irq_count"
Andreas> |> irq.c:474:27: too many arguments in invocation of macro "local_irq_count"
Andreas> |> irq.c: In function `handle_IRQ_event':
Andreas> |> irq.c:458: `local_irq_count' undeclared (first use in this function)
Andreas> |> irq.c:458: (Each undeclared identifier is reported only once
Andreas> |> irq.c:458: for each function it appears in.)
Andreas> Another bug:
Andreas> irq_ia64.c: In function `init_IRQ':
Andreas> irq_ia64.c:158: `ipi_irqaction' undeclared (first use in this function)
Andreas> irq_ia64.c:158: (Each undeclared identifier is reported only once
Andreas> irq_ia64.c:158: for each function it appears in.)
Andreas> Andreas.
* Re: [Linux-ia64] kernel update (relative to 2.4.2)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (40 preceding siblings ...)
2001-03-01 15:29 ` David Mosberger
@ 2001-03-02 12:26 ` Keith Owens
2001-05-09 4:52 ` [Linux-ia64] kernel update (relative to 2.4.4) Keith Owens
` (173 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-03-02 12:26 UTC (permalink / raw)
To: linux-ia64
On Wed, 28 Feb 2001 23:12:35 -0800,
David Mosberger <davidm@hpl.hp.com> wrote:
>The latest IA-64 patch is now available at:
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>in file linux-2.4.2-ia64-010228.diff*
The latest IA-64 kdb patch is now available at:
ftp://oss.sgi.com/projects/kdb/download/ia64
in file kdb-v1.8-2.4.2-ia64-010228.gz.
* Re: [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (41 preceding siblings ...)
2001-03-02 12:26 ` Keith Owens
@ 2001-05-09 4:52 ` Keith Owens
2001-05-09 5:07 ` David Mosberger
` (172 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-05-09 4:52 UTC (permalink / raw)
To: linux-ia64
On Tue, 8 May 2001 22:07:41 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>The latest IA-64 patch is (finally) available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>
>in file linux-2.4.4-ia64-010508.diff*.
kdb is available
ftp://oss.sgi.com/projects/kdb/download/ia64/kdb-v1.8-2.4.4-ia64-010508.gz
* [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (42 preceding siblings ...)
2001-05-09 4:52 ` [Linux-ia64] kernel update (relative to 2.4.4) Keith Owens
@ 2001-05-09 5:07 ` David Mosberger
2001-05-09 11:45 ` Keith Owens
` (171 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-05-09 5:07 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is (finally) available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.4-ia64-010508.diff*. Thanks to Johannes Erdfelt,
USB is working again and I hope this kernel will be quite stable for
everybody (famous last words...).
What's changed:
- clean up boot parameter passing between elilo and kernel
(Stephane Eranian)
- optimized do_csum() some more (Jun Nakajima)
- AGP GART update (Chris Ahna)
- fix pci-pool routines to work on 64-bit platforms (Johannes
Erdfelt)
- more IA-32 updates (Don Dugger, me)
- make machine_halt() do nothing instead of rebooting the
machine; this has the desirable effect that "shutdown -h"
will really halt instead of rebooting the machine; the
machine does *not* power off, however (I don't think the
current firmware/hw supports this)
- use KERNEL_PG_* constants instead of hardcoding the page
size used in the kernel's identity mapped regions (Tony Luck)
- added getunwind() system call as a means to get unwind info
for the gate page (signal handler trampoline for now;
light-weight syscall stubs in the future) and add unwind info
to signal trampoline
- move EFI vars from /proc/efi to /proc/efi/vars/ (Matt Domsch)
- for MP machines, sync cycle counter register (ar.itc) of
each CPU with the time keeper CPU on boot up & exploit this
to provide fine-grained time in gettimeofday()
- clean up & fix coredump creation
- clean up debug messages in cs4281 drivers to avoid compiler
warnings
- make the register backing store of a newly execve()'d
process start at a nicely aligned address again
- fix spin_unlock() to do barrier() in correct place (Jack Steiner)
- add a couple of spare longs to sigcontext structure to
provide room for future extensibility
Some notes: I didn't apply Jes's qla1280 clean up patch yet. I'd very
much like to do that, but since qlogic is maintaining this driver, I'm
waiting until they've had a chance to review it.
If you look at the cycle counter sync'ing code, you'll see that it's
very different from what has been proposed in the past. There is a
good reason for this: the new code allows arbitrary pairs of CPUs to
sync at any point in time. At this point, the only time the syncing
is actually done is at boot time, but in the future we'll want to
support online replacement of CPUs and with a pair-wise syncing
approach, this is trivial to support. Also, the new code uses a closed
loop to sync the counters. This ensures that there are no
unaccounted-for error sources. The syncing tends to work quite well
in practice: usually, we seem to get a pair of CPUs to within <10
cycles of each other (assuming that the interconnect between the two
CPUs has symmetric latencies). However, I'm also quite sure that
someone with a better memory of control theory could optimize the
current code for faster convergence (and perhaps more stable behavior
in the presence of large amounts of noise). Please feel free to take
this as a challenge! ;-)
BIG CAVEAT: even though the kernel currently is syncing the cycle
counters, application code MUST NOT rely on this. Instead, if an
application needs fine-grained and light-weight timestamps, it should
use the POSIX *clock* routines that Uli added recently to libc. I can
almost guarantee you that future machines will NOT have synchronized
clocks (and we'll also want to support systems where different CPUs
have different clock speeds) and because of this, the *clock* routines
and gettimeofday() are the recommended and RELIABLE way for obtaining
fine-grained timestamps.
Enjoy,
--david
* Re: [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (43 preceding siblings ...)
2001-05-09 5:07 ` David Mosberger
@ 2001-05-09 11:45 ` Keith Owens
2001-05-09 13:38 ` Jack Steiner
` (170 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-05-09 11:45 UTC (permalink / raw)
To: linux-ia64
On Tue, 8 May 2001 22:07:41 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>The latest IA-64 patch is (finally) available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>
>in file linux-2.4.4-ia64-010508.diff*.
There were two versions of linux-2.4.4-ia64-010508.diff.bz2 on
ports/ia64, about 45 minutes apart. The kdb patch was against the
first version of ia64-010508 and gets a compile error on E1_PAUSE. I
have uploaded a new copy of kdb-v1.8-2.4.4-ia64-010508.gz to
oss.sgi.com, this one is timestamped May 9 11:37 UTC and is against the
copy of ia64-010508 with timestamp May 9 04:51 UTC.
* Re: [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (44 preceding siblings ...)
2001-05-09 11:45 ` Keith Owens
@ 2001-05-09 13:38 ` Jack Steiner
2001-05-09 14:06 ` David Mosberger
` (169 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Jack Steiner @ 2001-05-09 13:38 UTC (permalink / raw)
To: linux-ia64
>> BIG CAVEAT: even though the kernel currently is syncing the cycle
>> counters, application code MUST NOT rely on this. Instead, if an
>> application needs fine-grained and light-weight timestamps, it should
>> use the POSIX *clock* routines that Uli added recently to libc. I can
>> almost guarantee you that future machines will NOT have synchronized
>> clocks (and we'll also want to support systems where different CPUs
>> have different clock speeds) and because of this, the *clock* routines
>> and gettimeofday() are the recommended and RELIABLE way for obtaining
>> fine-grained timestamps.
FYI, The IA64 system being built by SGI has multiple nodes. The clocks on
different nodes are not synchronized.
--
Thanks
Jack Steiner (651-683-5302) (vnet 233-5302) steiner@sgi.com
* Re: [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (45 preceding siblings ...)
2001-05-09 13:38 ` Jack Steiner
@ 2001-05-09 14:06 ` David Mosberger
2001-05-09 14:21 ` Jack Steiner
` (168 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-05-09 14:06 UTC (permalink / raw)
To: linux-ia64
>>>>> On Wed, 9 May 2001 08:38:42 -0500 (CDT), Jack Steiner <steiner@sgi.com> said:
Jack> FYI, The IA64 system being built by SGI has multiple
Jack> nodes. The clocks on different nodes are not synchronized.
This means that (a) you'll have to disable gettimeofday()
interpolation in the kernel and (b) implement an alternate version of
the *clock* routines in libc. Does SN1 provide some other means to
obtain a system-wide fine-grained timestamp?
--david
* Re: [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (46 preceding siblings ...)
2001-05-09 14:06 ` David Mosberger
@ 2001-05-09 14:21 ` Jack Steiner
2001-05-10 4:14 ` David Mosberger
` (167 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Jack Steiner @ 2001-05-09 14:21 UTC (permalink / raw)
To: linux-ia64
>
> >>>>> On Wed, 9 May 2001 08:38:42 -0500 (CDT), Jack Steiner <steiner@sgi.com> said:
>
> Jack> FYI, The IA64 system being built by SGI has multiple
> Jack> nodes. The clocks on different nodes are not synchronized.
>
> This means that (a) you'll have to disable gettimeofday()
> interpolation in the kernel and (b) implement an alternate version of
> the *clock* routines in libc. Does SN1 provide some other means to
> obtain a system-wide fine-grained timestamp?
OK, thanks. I'll add that to our (never ending) worklist.
Up to this point, we were not aware that we would have to make any
changes in any library routines. I have not looked at the libc clock
routine, but it sounds like I should take a look...
BTW, The SN platform does have a hi-res synchronized clock that can be
mmapped & read from user space. The clock is located in the "chipset"
and is synchronized across all nodes/FSBs.
--
Thanks
Jack Steiner (651-683-5302) (vnet 233-5302) steiner@sgi.com
* Re: [Linux-ia64] kernel update (relative to 2.4.4)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (47 preceding siblings ...)
2001-05-09 14:21 ` Jack Steiner
@ 2001-05-10 4:14 ` David Mosberger
2001-05-31 7:37 ` [Linux-ia64] kernel update (relative to 2.4.5) David Mosberger
` (166 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-05-10 4:14 UTC (permalink / raw)
To: linux-ia64
>>>>> On Wed, 9 May 2001 09:21:49 -0500 (CDT), Jack Steiner <steiner@sgi.com> said:
Jack> BTW, The SN platform does have a hi-res synchronized clock that can be
Jack> mmapped & read from user space. The clock is located in the "chipset"
Jack> and is synchronized across all nodes/FSBs.
Good. This was indeed the working assumption: on machines where ITCs
aren't synchronous, there needs to be some other mechanism to obtain
fine-grained time stamps. So, I think what you'll need is a device
driver for your chipset which will allow mapping the hi-res clock into
user space where libc can use it to implement the *clock* routines.
In the kernel, we can make gettimeoffset() a platform-specific
function, so SN1 and other platforms with non-synchronous clocks can
use an external hi-res timer to implement it.
The only challenge is to decide when to use ITCs and when to use other
hardware. Unfortunately, I don't believe SAL provides info on whether
the ITC clocks are synchronized (I think it should!). For now, we
could implement it such that an external hi-res timer is used in favor
of ITCs whenever such hardware is present.
--david
* [Linux-ia64] kernel update (relative to 2.4.5)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (48 preceding siblings ...)
2001-05-10 4:14 ` David Mosberger
@ 2001-05-31 7:37 ` David Mosberger
2001-06-27 7:09 ` David Mosberger
` (165 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-05-31 7:37 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.5-ia64-010530.diff*.
What's changed:
- updates for 2.4.5
- fix exception table lookup so it really works for modules (Tony Luck)
- fix perfmon bug (Sumit Roy & Stephane Eranian)
- clean up smp boot code and structure it like on x86 (Rohit Seth)
- set siginfo.si_code to SI_FAULT for SIGFPE (Asit Mallick)
- fix ACPI code to handle _PRT for secondary buses (Jung-Ik Lee)
- more cleanup for gcc3.0
- fix some dependency-violations in MCA code (note: this doesn't
fix the MCA INIT handler yet; that patch is yet to come...)
- fix IA-32 context switch typo
- fix ptrace() so "strace -f" works again
- don't call tty_write_message() when an unaligned fault occurs
inside the kernel (could deadlock otherwise)
- tune the VM system to allow page table cache to grow bigger
- reorg thread_struct for better cache-usage
- initialize archdata for the module structure representing the kernel
The only really important fix here is the one for the exception table
lookup. It turns out that there was a typo in that code which
prevented the module-specific code from being compiled into the kernel
when modules were enabled. Thanks to Tony Luck for finding this!
Keith, you may want to look at my comment in kernel/module.c for a
suggestion on how to clean up kernel module initialization at some
point in the future.
As usual, the patch below is fyi only. The patch has been tested on
Big Sur (SMP) and on HP Ski simulator (UP).
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/Makefile linux-2.4.5-lia/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/Makefile Wed May 30 22:41:05 2001
@@ -23,7 +23,7 @@
GCC_VERSION=$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
ifneq ($(GCC_VERSION),2)
- CFLAGS += -frename-registers -fno-optimize-sibling-calls
+ CFLAGS += -frename-registers
endif
ifeq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
diff -urN linux-davidm/arch/ia64/boot/bootloader.c linux-2.4.5-lia/arch/ia64/boot/bootloader.c
--- linux-davidm/arch/ia64/boot/bootloader.c Sun Apr 29 15:49:24 2001
+++ linux-2.4.5-lia/arch/ia64/boot/bootloader.c Wed May 30 22:41:23 2001
@@ -87,9 +87,6 @@
asm volatile ("movl gp=__gp;;" ::: "memory");
asm volatile ("mov sp=%0" :: "r"(stack) : "memory");
asm volatile ("bsw.1;;");
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
- asm volative ("nop 0;; nop 0;; nop 0;;");
-#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
ssc(0, 0, 0, 0, SSC_CONSOLE_INIT);
diff -urN linux-davidm/arch/ia64/hp/hpsim_setup.c linux-2.4.5-lia/arch/ia64/hp/hpsim_setup.c
--- linux-davidm/arch/ia64/hp/hpsim_setup.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.5-lia/arch/ia64/hp/hpsim_setup.c Wed May 30 22:41:37 2001
@@ -27,28 +27,15 @@
/*
* Simulator system call.
*/
-inline long
-ia64_ssc (long arg0, long arg1, long arg2, long arg3, int nr)
-{
-#ifdef __GCC_DOESNT_KNOW_IN_REGS__
- register long in0 asm ("r32") = arg0;
- register long in1 asm ("r33") = arg1;
- register long in2 asm ("r34") = arg2;
- register long in3 asm ("r35") = arg3;
-#else
- register long in0 asm ("in0") = arg0;
- register long in1 asm ("in1") = arg1;
- register long in2 asm ("in2") = arg2;
- register long in3 asm ("in3") = arg3;
-#endif
- register long r8 asm ("r8");
- register long r15 asm ("r15") = nr;
-
- asm volatile ("break 0x80001"
- : "=r"(r8)
- : "r"(r15), "r"(in0), "r"(in1), "r"(in2), "r"(in3));
- return r8;
-}
+asm (".text\n"
+ ".align 32\n"
+ ".global ia64_ssc\n"
+ ".proc ia64_ssc\n"
+ "ia64_ssc:\n"
+ "mov r15=r36\n"
+ "break 0x80001\n"
+ "br.ret.sptk.many rp\n"
+ ".endp\n");
void
ia64_ssc_connect_irq (long intr, long irq)
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.4.5-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/head.S Thu May 31 00:06:01 2001
@@ -123,11 +123,12 @@
#define isAP p2 // are we an Application Processor?
#define isBP p3 // are we the Bootstrap Processor?
+#ifdef CONFIG_SMP
/*
* Find the init_task for the currently booting CPU. At poweron, and in
- * UP mode, cpu_now_booting is 0.
+ * UP mode, cpucount is 0.
*/
- movl r3=cpu_now_booting
+ movl r3=cpucount
;;
ld4 r3=[r3] // r3 <- smp_processor_id()
movl r2=init_tasks
@@ -135,6 +136,11 @@
shladd r2=r3,3,r2
;;
ld8 r2=[r2]
+#else
+ mov r3=0
+ movl r2=init_task_union
+ ;;
+#endif
cmp4.ne isAP,isBP=r3,r0
;; // RAW on r2
extr r3=r2,0,61 // r3 = phys addr of task struct
@@ -182,7 +188,7 @@
#endif /* CONFIG_IA64_EARLY_PRINTK */
#ifdef CONFIG_SMP
-(isAP) br.call.sptk.few rp=smp_callin
+(isAP) br.call.sptk.few rp=start_secondary
.ret0:
(isAP) br.cond.sptk.few self
#endif
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.5-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/ia64_ksyms.c Wed May 30 22:42:41 2001
@@ -7,6 +7,7 @@
#include <linux/string.h>
EXPORT_SYMBOL_NOVERS(memset);
+EXPORT_SYMBOL(memchr);
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL_NOVERS(memcpy);
EXPORT_SYMBOL(memmove);
@@ -75,6 +76,7 @@
EXPORT_SYMBOL(smp_call_function);
EXPORT_SYMBOL(smp_call_function_single);
EXPORT_SYMBOL(cpu_online_map);
+EXPORT_SYMBOL(ia64_cpu_to_sapicid);
#include <linux/smp.h>
EXPORT_SYMBOL(smp_num_cpus);
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.5-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Sun Apr 29 15:49:25 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/mca_asm.S Wed May 30 22:42:51 2001
@@ -672,7 +672,7 @@
//
mov r17=cr.lid
// XXX fix me: this is wrong: hard_smp_processor_id() is a pair of lid/eid
- movl r18=__cpu_physical_id
+ movl r18=ia64_cpu_to_sapicid
;;
dep r18=0,r18,61,3 // convert to physical address
;;
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.5-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/perfmon.c Wed May 30 22:43:01 2001
@@ -787,26 +787,22 @@
/* XXX: ctx locking may be required here */
for (i = 0; i < count; i++, req++) {
- int k;
-
if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL;
- k = tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER;
-
if (PMD_IS_COUNTER(tmp.pfr_reg.reg_num)) {
if (ta == current) {
val = ia64_get_pmd(tmp.pfr_reg.reg_num);
} else {
- val = th->pmd[k];
+ val = th->pmd[tmp.pfr_reg.reg_num];
}
val &= pmu_conf.perf_ovfl_val;
/*
* lower part of .val may not be zero, so we must be an addition because of
* residual count (see update_counters).
*/
- val += ctx->ctx_pmds[k].val;
+ val += ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
} else {
/* for now */
if (ta != current) return -EINVAL;
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.5-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/process.c Wed May 30 22:43:16 2001
@@ -282,10 +282,11 @@
* state from the current task to the new task
*/
if (IS_IA32_PROCESS(ia64_task_regs(current)))
- ia32_save_state(&p->thread);
+ ia32_save_state(p);
#endif
#ifdef CONFIG_PERFMON
- if (current->thread.pfm_context)
+ p->thread.pfm_pend_notify = 0;
+ if (p->thread.pfm_context)
retval = pfm_inherit(p);
#endif
return retval;
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.5-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/ptrace.c Wed May 30 22:43:28 2001
@@ -438,6 +438,25 @@
}
/*
+ * Simulate user-level "flushrs". Note: we can't just add pt->loadrs>>16 to
+ * pt->ar_bspstore because the kernel backing store and the user-level backing store may
+ * have different alignments (and therefore a different number of intervening rnat slots).
+ */
+static void
+user_flushrs (struct task_struct *task, struct pt_regs *pt)
+{
+ unsigned long *krbs;
+ long ndirty;
+
+ krbs = (unsigned long *) task + IA64_RBS_OFFSET/8;
+ ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 19));
+
+ pt->ar_bspstore = (unsigned long) ia64_rse_skip_regs((unsigned long *) pt->ar_bspstore,
+ ndirty);
+ pt->loadrs = 0;
+}
+
+/*
* Synchronize the RSE backing store of CHILD and all tasks that share the address space
* with it. CHILD_URBS_END is the address of the end of the register backing store of
* CHILD. If MAKE_WRITABLE is set, a user-level "flushrs" is simulated such that the VM
@@ -467,11 +486,8 @@
sw = (struct switch_stack *) (child->thread.ksp + 16);
pt = ia64_task_regs(child);
ia64_sync_user_rbs(child, sw, pt->ar_bspstore, child_urbs_end);
- if (make_writable) {
- /* simulate a user-level "flushrs": */
- pt->loadrs = 0;
- pt->ar_bspstore = child_urbs_end;
- }
+ if (make_writable)
+ user_flushrs(child, pt);
} else {
read_lock(&tasklist_lock);
{
@@ -481,11 +497,8 @@
pt = ia64_task_regs(p);
urbs_end = ia64_get_user_rbs_end(p, pt, NULL);
ia64_sync_user_rbs(p, sw, pt->ar_bspstore, urbs_end);
- if (make_writable) {
- /* simulate a user-level "flushrs": */
- pt->loadrs = 0;
- pt->ar_bspstore = urbs_end;
- }
+ if (make_writable)
+ user_flushrs(p, pt);
}
}
}
@@ -781,8 +794,8 @@
long arg4, long arg5, long arg6, long arg7, long stack)
{
struct pt_regs *pt, *regs = (struct pt_regs *) &stack;
- struct task_struct *child;
unsigned long flags, urbs_end;
+ struct task_struct *child;
struct switch_stack *sw;
long ret;
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.5-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/smp.c Wed May 30 22:44:12 2001
@@ -6,12 +6,16 @@
*
* Lots of stuff stolen from arch/alpha/kernel/smp.c
*
- * 00/09/11 David Mosberger <davidm@hpl.hp.com> Do loops_per_jiffy calibration on each CPU.
- * 00/08/23 Asit Mallick <asit.k.mallick@intel.com> fixed logical processor id
- * 00/03/31 Rohit Seth <rohit.seth@intel.com> Fixes for Bootstrap Processor & cpu_online_map
- * now gets done here (instead of setup.c)
- * 99/10/05 davidm Update to bring it in sync with new command-line processing scheme.
- * 10/13/00 Goutham Rao <goutham.rao@intel.com> Updated smp_call_function and
+ * 01/05/16 Rohit Seth <rohit.seth@intel.com> IA64-SMP functions. Reorganized
+ * the existing code (on the lines of x86 port).
+ * 00/09/11 David Mosberger <davidm@hpl.hp.com> Do loops_per_jiffy
+ * calibration on each CPU.
+ * 00/08/23 Asit Mallick <asit.k.mallick@intel.com> fixed logical processor id
+ * 00/03/31 Rohit Seth <rohit.seth@intel.com> Fixes for Bootstrap Processor
+ * & cpu_online_map now gets done here (instead of setup.c)
+ * 99/10/05 davidm Update to bring it in sync with new command-line processing
+ * scheme.
+ * 10/13/00 Goutham Rao <goutham.rao@intel.com> Updated smp_call_function and
* smp_call_function_single to resend IPI on timeouts
*/
#define __KERNEL_SYSCALLS__
@@ -45,120 +49,48 @@
#include <asm/system.h>
#include <asm/unistd.h>
-extern void __init calibrate_delay(void);
-extern int cpu_idle(void * unused);
-extern void machine_halt(void);
-extern void start_ap(void);
-
-extern int cpu_now_booting; /* used by head.S to find idle task */
-extern volatile unsigned long cpu_online_map; /* bitmap of available cpu's */
-
-struct smp_boot_data smp_boot_data __initdata;
-
+/* The 'big kernel lock' */
spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
-char __initdata no_int_routing;
-
-/* don't make this a CPU-local variable: it's used for IPIs, mostly... */
-int __cpu_physical_id[NR_CPUS]; /* logical ID -> physical CPU ID map */
-
-unsigned char smp_int_redirect; /* are INT and IPI redirectable by the chipset? */
-int smp_num_cpus = 1;
-volatile int smp_threads_ready; /* set when the idlers are all forked */
-unsigned long ap_wakeup_vector; /* external Int to use to wakeup AP's */
-
-static volatile unsigned long cpu_callin_map;
-static volatile int smp_commenced;
-
-static int max_cpus = -1; /* command line */
+/*
+ * Structure and data for smp_call_function(). This is designed to minimise static memory
+ * requirements. It also looks cleaner.
+ */
+static spinlock_t call_lock = SPIN_LOCK_UNLOCKED;
-struct smp_call_struct {
+struct call_data_struct {
void (*func) (void *info);
void *info;
long wait;
- atomic_t unstarted_count;
- atomic_t unfinished_count;
+ atomic_t started;
+ atomic_t finished;
};
-static volatile struct smp_call_struct *smp_call_function_data;
-#define IPI_RESCHEDULE 0
-#define IPI_CALL_FUNC 1
-#define IPI_CPU_STOP 2
+static volatile struct call_data_struct *call_data;
+
+#define IPI_RESCHEDULE 0
+#define IPI_CALL_FUNC 1
+#define IPI_CPU_STOP 2
#ifndef CONFIG_ITANIUM_PTCG
# define IPI_FLUSH_TLB 3
-#endif /*!CONFIG_ITANIUM_PTCG */
-
-/*
- * Setup routine for controlling SMP activation
- *
- * Command-line option of "nosmp" or "maxcpus=0" will disable SMP
- * activation entirely (the MPS table probe still happens, though).
- *
- * Command-line option of "maxcpus=<NUM>", where <NUM> is an integer
- * greater than 0, limits the maximum number of CPUs activated in
- * SMP mode to <NUM>.
- */
+#endif /*!CONFIG_ITANIUM_PTCG */
-static int __init
-nosmp (char *str)
-{
- max_cpus = 0;
- return 1;
-}
-
-__setup("nosmp", nosmp);
-
-static int __init
-maxcpus (char *str)
-{
- get_option(&str, &max_cpus);
- return 1;
-}
-
-__setup("maxcpus=", maxcpus);
-
-static int __init
-nointroute (char *str)
-{
- no_int_routing = 1;
- return 1;
-}
-
-__setup("nointroute", nointroute);
-
-/*
- * Yoink this CPU from the runnable list...
- */
-void
-halt_processor (void)
+static void
+stop_this_cpu (void)
{
+ /*
+ * Remove this CPU:
+ */
clear_bit(smp_processor_id(), &cpu_online_map);
max_xtp();
__cli();
- for (;;)
- ;
-}
-
-static inline int
-pointer_lock (void *lock, void *data, int retry)
-{
- volatile long *ptr = lock;
- again:
- if (cmpxchg_acq((void **) lock, 0, data) == 0)
- return 0;
-
- if (!retry)
- return -EBUSY;
-
- while (*ptr)
- ;
-
- goto again;
+ for (;;);
}
void
handle_IPI (int irq, void *dev_id, struct pt_regs *regs)
{
+ int this_cpu = smp_processor_id();
unsigned long *pending_ipis = &local_cpu_data->ipi_operation;
unsigned long ops;
@@ -184,19 +116,19 @@
case IPI_CALL_FUNC:
{
- struct smp_call_struct *data;
+ struct call_data_struct *data;
void (*func)(void *info);
void *info;
int wait;
/* release the 'pointer lock' */
- data = (struct smp_call_struct *) smp_call_function_data;
+ data = (struct call_data_struct *) call_data;
func = data->func;
info = data->info;
wait = data->wait;
mb();
- atomic_dec(&data->unstarted_count);
+ atomic_inc(&data->started);
/* At this point the structure may be gone unless wait is true. */
(*func)(info);
@@ -204,12 +136,12 @@
/* Notify the sending CPU that the task is done. */
mb();
if (wait)
- atomic_dec(&data->unfinished_count);
+ atomic_inc(&data->finished);
}
break;
case IPI_CPU_STOP:
- halt_processor();
+ stop_this_cpu();
break;
#ifndef CONFIG_ITANIUM_PTCG
@@ -224,13 +156,9 @@
/*
* Current CPU may be running with different RID so we need to
- * reload the RID of flushed address.
- * Current CPU may be running with different
- * RID so we need to reload the RID of flushed
- * address. Purging the translation also
- * needs ALAT invalidation; we do not need
- * "invala" here since it is done in
- * ia64_leave_kernel.
+ * reload the RID of flushed address. Purging the translation
+ * also needs ALAT invalidation; we do not need "invala" here
+ * since it is done in ia64_leave_kernel.
*/
ia64_srlz_d();
if (saved_rid != flush_rid) {
@@ -260,8 +188,7 @@
#endif /* !CONFIG_ITANIUM_PTCG */
default:
- printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n",
- smp_processor_id(), which);
+ printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which);
break;
} /* Switch */
} while (ops);
@@ -313,12 +240,6 @@
send_IPI_single(cpu, IPI_RESCHEDULE);
}
-void
-smp_send_stop (void)
-{
- send_IPI_allbutself(IPI_CPU_STOP);
-}
-
#ifndef CONFIG_ITANIUM_PTCG
void
@@ -328,7 +249,7 @@
}
void
-smp_resend_flush_tlb(void)
+smp_resend_flush_tlb (void)
{
/*
* Really need a null IPI but since this rarely should happen & since this code
@@ -337,13 +258,20 @@
send_IPI_allbutself(IPI_RESCHEDULE);
}
-#endif /* !CONFIG_ITANIUM_PTCG */
+#endif /* !CONFIG_ITANIUM_PTCG */
+
+void
+smp_flush_tlb_all (void)
+{
+ smp_call_function ((void (*)(void *))__flush_tlb_all,0,1,1);
+ __flush_tlb_all();
+}
/*
* Run a function on another CPU
* <func> The function to run. This must be fast and non-blocking.
* <info> An arbitrary pointer to pass to the function.
- * <retry> If true, keep retrying until ready.
+ * <nonatomic> Currently unused.
* <wait> If true, wait until function has completed on other CPUs.
* [RETURNS] 0 on success, else a negative status code.
*
@@ -352,11 +280,14 @@
*/
int
-smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int retry, int wait)
+smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int nonatomic,
+ int wait)
{
- struct smp_call_struct data;
- unsigned long timeout;
+ struct call_data_struct data;
int cpus = 1;
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ unsigned long timeout;
+#endif
if (cpuid == smp_processor_id()) {
printk(__FUNCTION__" trying to call self\n");
@@ -365,115 +296,103 @@
data.func = func;
data.info = info;
+ atomic_set(&data.started, 0);
data.wait = wait;
- atomic_set(&data.unstarted_count, cpus);
- atomic_set(&data.unfinished_count, cpus);
+ if (wait)
+ atomic_set(&data.finished, 0);
- if (pointer_lock(&smp_call_function_data, &data, retry))
- return -EBUSY;
+ spin_lock_bh(&call_lock);
+ call_data = &data;
-resend:
- /* Send a message to the other CPU and wait for it to respond */
- send_IPI_single(cpuid, IPI_CALL_FUNC);
+ resend:
+ send_IPI_single(cpuid, IPI_CALL_FUNC);
- /* Wait for response */
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ /* Wait for response */
timeout = jiffies + HZ;
- while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
+ while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
barrier();
- if (atomic_read(&data.unstarted_count) > 0) {
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ if (atomic_read(&data.started) != cpus)
goto resend;
#else
- smp_call_function_data = NULL;
- return -ETIMEDOUT;
+ /* Wait for response */
+ while (atomic_read(&data.started) != cpus)
+ barrier();
#endif
- }
if (wait)
- while (atomic_read(&data.unfinished_count) > 0)
+ while (atomic_read(&data.finished) != cpus)
barrier();
- /* unlock pointer */
- smp_call_function_data = NULL;
+ call_data = NULL;
+
+ spin_unlock_bh(&call_lock);
return 0;
}
/*
- * Run a function on all other CPUs.
+ * this function sends a 'generic call function' IPI to all other CPUs
+ * in the system.
+ */
+
+/*
+ * [SUMMARY] Run a function on all other CPUs.
* <func> The function to run. This must be fast and non-blocking.
* <info> An arbitrary pointer to pass to the function.
- * <retry> If true, keep retrying until ready.
- * <wait> If true, wait until function has completed on other CPUs.
+ * <nonatomic> currently unused.
+ * <wait> If true, wait (atomically) until function has completed on other CPUs.
* [RETURNS] 0 on success, else a negative status code.
*
- * Does not return until remote CPUs are nearly ready to execute <func>
- * or are or have executed.
+ * Does not return until remote CPUs are nearly ready to execute <func> or are or have
+ * executed.
+ *
+ * You must not call this function with disabled interrupts or from a hardware interrupt
+ * handler, you may call it from a bottom half handler.
*/
int
-smp_call_function (void (*func) (void *info), void *info, int retry, int wait)
+smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wait)
{
- struct smp_call_struct data;
+ struct call_data_struct data;
+ int cpus = smp_num_cpus-1;
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
unsigned long timeout;
- int cpus = smp_num_cpus - 1;
+#endif
- if (cpus == 0)
+ if (!cpus)
return 0;
data.func = func;
data.info = info;
+ atomic_set(&data.started, 0);
data.wait = wait;
- atomic_set(&data.unstarted_count, cpus);
- atomic_set(&data.unfinished_count, cpus);
+ if (wait)
+ atomic_set(&data.finished, 0);
- if (pointer_lock(&smp_call_function_data, &data, retry))
- return -EBUSY;
+ spin_lock_bh(&call_lock);
+ call_data = &data;
- /* Send a message to all other CPUs and wait for them to respond */
+ resend:
+ /* Send a message to all other CPUs and wait for them to respond */
send_IPI_allbutself(IPI_CALL_FUNC);
-retry:
- /* Wait for response */
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+ /* Wait for response */
timeout = jiffies + HZ;
- while ((atomic_read(&data.unstarted_count) > 0) && time_before(jiffies, timeout))
+ while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
barrier();
- if (atomic_read(&data.unstarted_count) > 0) {
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
- int i;
- for (i = 0; i < smp_num_cpus; i++) {
- if (i != smp_processor_id())
- platform_send_ipi(i, IA64_IPI_VECTOR, IA64_IPI_DM_INT, 0);
- }
- goto retry;
+ if (atomic_read(&data.started) != cpus)
+ goto resend;
#else
- smp_call_function_data = NULL;
- return -ETIMEDOUT;
+ /* Wait for response */
+ while (atomic_read(&data.started) != cpus)
+ barrier();
#endif
- }
+
if (wait)
- while (atomic_read(&data.unfinished_count) > 0)
+ while (atomic_read(&data.finished) != cpus)
barrier();
- /* unlock pointer */
- smp_call_function_data = NULL;
- return 0;
-}
-
-/*
- * Flush all other CPU's tlb and then mine. Do this with smp_call_function() as we
- * want to ensure all TLB's flushed before proceeding.
- */
-void
-smp_flush_tlb_all (void)
-{
- smp_call_function((void (*)(void *))__flush_tlb_all, NULL, 1, 1);
- __flush_tlb_all();
-}
+ call_data = NULL;
-/*
- * Ideally sets up per-cpu profiling hooks. Doesn't do much now...
- */
-static inline void __init
-smp_setup_percpu_timer(void)
-{
- local_cpu_data->prof_counter = 1;
- local_cpu_data->prof_multiplier = 1;
+ spin_unlock_bh(&call_lock);
+ return 0;
}
void
@@ -487,235 +406,18 @@
}
}
-
-/*
- * AP's start using C here.
- */
-void __init
-smp_callin (void)
-{
- extern void ia64_rid_init (void);
- extern void ia64_init_itm (void);
- extern void ia64_cpu_local_tick (void);
- extern void ia64_sync_itc (void *);
-#ifdef CONFIG_PERFMON
- extern void perfmon_init_percpu (void);
-#endif
- int cpu = smp_processor_id();
-
- if (test_and_set_bit(cpu, &cpu_online_map)) {
- printk("CPU#%d already initialized!\n", cpu);
- machine_halt();
- }
-
- efi_map_pal_code();
- cpu_init();
-
- ia64_sync_itc(0); /* synchronize ar.itc with master */
-
- smp_setup_percpu_timer();
-
- /* setup the CPU local timer tick */
- ia64_init_itm();
-
-#ifdef CONFIG_PERFMON
- perfmon_init_percpu();
-#endif
- local_irq_enable(); /* Interrupts have been off until now */
-
- calibrate_delay();
- local_cpu_data->loops_per_jiffy = loops_per_jiffy;
-
- /* allow the master to continue */
- set_bit(cpu, &cpu_callin_map);
-
- /* finally, wait for the BP to finish initialization: */
- while (!smp_commenced);
-
- cpu_idle(NULL);
-}
-
-/*
- * Create the idle task for a new AP. DO NOT use kernel_thread() because
- * that could end up calling schedule() in the ia64_leave_kernel exit
- * path in which case the new idle task could get scheduled before we
- * had a chance to remove it from the run-queue...
- */
-static int __init
-fork_by_hand (void)
-{
- /*
- * Don't care about the usp and regs settings since we'll never
- * reschedule the forked task.
- */
- return do_fork(CLONE_VM|CLONE_PID, 0, 0, 0);
-}
-
-/*
- * Bring one cpu online. Return 0 if this fails for any reason.
- */
-static int __init
-smp_boot_one_cpu (int cpu)
-{
- struct task_struct *idle;
- int cpu_phys_id = cpu_physical_id(cpu);
- long timeout;
-
- /*
- * Create an idle task for this CPU. Note that the address we
- * give to kernel_thread is irrelevant -- it's going to start
- * where OS_BOOT_RENDEVZ vector in SAL says to start. But
- * this gets all the other task-y sort of data structures set
- * up like we wish. We need to pull the just created idle task
- * off the run queue and stuff it into the init_tasks[] array.
- * Sheesh . . .
- */
- if (fork_by_hand() < 0)
- panic("failed fork for CPU 0x%x", cpu_phys_id);
- /*
- * We remove it from the pidhash and the runqueue
- * once we got the process:
- */
- idle = init_task.prev_task;
- if (!idle)
- panic("No idle process for CPU 0x%x", cpu_phys_id);
- init_tasks[cpu] = idle;
- del_from_runqueue(idle);
- unhash_process(idle);
-
- /* Schedule the first task manually. */
- idle->processor = cpu;
- idle->has_cpu = 1;
-
- /* Let _start know what logical CPU we're booting (offset into init_tasks[] */
- cpu_now_booting = cpu;
-
- /* Kick the AP in the butt */
- platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
-
- /* wait up to 10s for the AP to start */
- for (timeout = 0; timeout < 100000; timeout++) {
- if (test_bit(cpu, &cpu_callin_map))
- return 1;
- udelay(100);
- }
-
- printk(KERN_ERR "SMP: CPU 0x%x is stuck\n", cpu_phys_id);
- return 0;
-}
-
-
-
/*
- * Called by smp_init bring all the secondaries online and hold them.
+ * this function calls the 'stop' function on all other CPUs in the system.
*/
-void __init
-smp_boot_cpus (void)
-{
- int i, cpu_count = 1;
- unsigned long bogosum;
-
- /* on the BP, the kernel already called calibrate_delay_loop() in init/main.c */
- local_cpu_data->loops_per_jiffy = loops_per_jiffy;
-#if 0
- smp_tune_scheduling();
-#endif
- smp_setup_percpu_timer();
-
- if (test_and_set_bit(0, &cpu_online_map)) {
- printk("CPU#%d already initialized!\n", smp_processor_id());
- machine_halt();
- }
- init_idle();
-
- /* Nothing to do when told not to. */
- if (max_cpus == 0) {
- printk(KERN_INFO "SMP mode deactivated.\n");
- return;
- }
-
- if (max_cpus != -1)
- printk("Limiting CPUs to %d\n", max_cpus);
-
- if (smp_boot_data.cpu_count > 1) {
- printk(KERN_INFO "SMP: starting up secondaries.\n");
-
- for (i = 0; i < smp_boot_data.cpu_count; i++) {
- /* skip performance restricted and bootstrap cpu: */
- if (smp_boot_data.cpu_phys_id[i] == -1
- || smp_boot_data.cpu_phys_id[i] == hard_smp_processor_id())
- continue;
-
- cpu_physical_id(cpu_count) = smp_boot_data.cpu_phys_id[i];
- if (!smp_boot_one_cpu(cpu_count))
- continue; /* failed */
-
- cpu_count++; /* Count good CPUs only... */
- /*
- * Bail if we've started as many CPUS as we've been told to.
- */
- if (cpu_count == max_cpus)
- break;
- }
- }
-
- if (cpu_count == 1) {
- printk(KERN_ERR "SMP: Bootstrap processor only.\n");
- }
-
- bogosum = 0;
- for (i = 0; i < NR_CPUS; i++) {
- if (cpu_online_map & (1L << i))
- bogosum += cpu_data[i].loops_per_jiffy;
- }
-
- printk(KERN_INFO "SMP: Total of %d processors activated (%lu.%02lu BogoMIPS).\n",
- cpu_count, bogosum*HZ/500000, (bogosum*HZ/5000) % 100);
-
- smp_num_cpus = cpu_count;
-}
-
-/*
- * Called when the BP is just about to fire off init.
- */
-void __init
-smp_commence (void)
+void
+smp_send_stop (void)
{
- smp_commenced = 1;
+ send_IPI_allbutself(IPI_CPU_STOP);
+ smp_num_cpus = 1;
}
int __init
setup_profiling_timer (unsigned int multiplier)
{
return -EINVAL;
-}
-
-/*
- * Assume that CPU's have been discovered by some platform-dependant
- * interface. For SoftSDV/Lion, that would be ACPI.
- *
- * Setup of the IPI irq handler is done in irq.c:init_IRQ().
- *
- * This also registers the AP OS_MC_REDVEZ address with SAL.
- */
-void __init
-init_smp_config (void)
-{
- struct fptr {
- unsigned long fp;
- unsigned long gp;
- } *ap_startup;
- long sal_ret;
-
- /* Tell SAL where to drop the AP's. */
- ap_startup = (struct fptr *) start_ap;
- sal_ret = ia64_sal_set_vectors(SAL_VECTOR_OS_BOOT_RENDEZ, __pa(ap_startup->fp),
- __pa(ap_startup->gp), 0, 0, 0, 0);
- if (sal_ret < 0) {
- printk("SMP: Can't set SAL AP Boot Rendezvous: %s\n", ia64_sal_strerror(sal_ret));
- printk(" Forcing UP mode\n");
- max_cpus = 0;
- smp_num_cpus = 1;
- }
-
}
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c linux-2.4.5-lia/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/smpboot.c Wed May 30 22:44:32 2001
@@ -2,15 +2,56 @@
* SMP boot-related support
*
* Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 01/05/16 Rohit Seth <rohit.seth@intel.com> Moved SMP booting functions from smp.c to here.
+ * 01/04/27 David Mosberger <davidm@hpl.hp.com> Added ITC synching code.
*/
+
+#define __KERNEL_SYSCALLS__
+
+#include <linux/config.h>
+
+#include <linux/bootmem.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
#include <linux/spinlock.h>
+#include <asm/atomic.h>
+#include <asm/bitops.h>
#include <asm/cache.h>
+#include <asm/current.h>
#include <asm/delay.h>
+#include <asm/efi.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/machvec.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/sal.h>
#include <asm/system.h>
+#include <asm/unistd.h>
+
+#if SMP_DEBUG
+#define Dprintk(x...) printk(x)
+#else
+#define Dprintk(x...)
+#endif
+
+/*
+ * ITC synchronization related stuff:
+ */
#define MASTER 0
#define SLAVE (SMP_CACHE_BYTES/8)
@@ -20,7 +61,75 @@
static spinlock_t itc_sync_lock = SPIN_LOCK_UNLOCKED;
static volatile unsigned long go[SLAVE + 1];
-#define DEBUG_ITC_SYNC 1
+#define DEBUG_ITC_SYNC 0
+
+extern void __init calibrate_delay(void);
+extern void start_ap(void);
+
+int cpucount;
+
+/* Setup configured maximum number of CPUs to activate */
+static int max_cpus = -1;
+
+/* Total count of live CPUs */
+int smp_num_cpus = 1;
+
+/* Bitmask of currently online CPUs */
+volatile unsigned long cpu_online_map;
+
+/* which logical CPU number maps to which CPU (physical APIC ID) */
+volatile int ia64_cpu_to_sapicid[NR_CPUS];
+
+static volatile unsigned long cpu_callin_map;
+
+struct smp_boot_data smp_boot_data __initdata;
+
+/* Set when the idlers are all forked */
+volatile int smp_threads_ready;
+
+unsigned long ap_wakeup_vector = -1; /* External Int use to wakeup APs */
+
+char __initdata no_int_routing;
+
+unsigned char smp_int_redirect; /* are INT and IPI redirectable by the chipset? */
+
+/*
+ * Setup routine for controlling SMP activation
+ *
+ * Command-line option of "nosmp" or "maxcpus=0" will disable SMP
+ * activation entirely (the MPS table probe still happens, though).
+ *
+ * Command-line option of "maxcpus=<NUM>", where <NUM> is an integer
+ * greater than 0, limits the maximum number of CPUs activated in
+ * SMP mode to <NUM>.
+ */
+
+static int __init
+nosmp (char *str)
+{
+ max_cpus = 0;
+ return 1;
+}
+
+__setup("nosmp", nosmp);
+
+static int __init
+maxcpus (char *str)
+{
+ get_option(&str, &max_cpus);
+ return 1;
+}
+
+__setup("maxcpus=", maxcpus);
+
+static int __init
+nointroute (char *str)
+{
+ no_int_routing = 1;
+ return 1;
+}
+
+__setup("nointroute", nointroute);
void
sync_master (void *arg)
@@ -164,4 +273,292 @@
printk("CPU %d: synchronized ITC with CPU %u (last diff %ld cycles, maxerr %lu cycles)\n",
smp_processor_id(), master, delta, rt);
+}
+
+/*
+ * Ideally sets up per-cpu profiling hooks. Doesn't do much now...
+ */
+static inline void __init
+smp_setup_percpu_timer (void)
+{
+ local_cpu_data->prof_counter = 1;
+ local_cpu_data->prof_multiplier = 1;
+}
+
+/*
+ * Architecture specific routine called by the kernel just before init is
+ * fired off. This allows the BP to have everything in order [we hope].
+ * At the end of this all the APs will hit the system scheduling and off
+ * we go. Each AP will jump through the kernel
+ * init into idle(). At this point the scheduler will one day take over
+ * and give them jobs to do. smp_callin is a standard routine
+ * we use to track CPUs as they power up.
+ */
+
+static volatile atomic_t smp_commenced = ATOMIC_INIT(0);
+
+void __init
+smp_commence (void)
+{
+ /*
+ * Lets the callins below out of their loop.
+ */
+ Dprintk("Setting commenced=1, go go go\n");
+
+ wmb();
+ atomic_set(&smp_commenced,1);
+}
+
+
+void __init
+smp_callin (void)
+{
+ int cpuid, phys_id;
+ extern void ia64_init_itm(void);
+
+#ifdef CONFIG_PERFMON
+ extern void perfmon_init_percpu(void);
+#endif
+
+ cpuid = smp_processor_id();
+ phys_id = hard_smp_processor_id();
+
+ if (test_and_set_bit(cpuid, &cpu_online_map)) {
+ printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n",
+ phys_id, cpuid);
+ BUG();
+ }
+
+ smp_setup_percpu_timer();
+
+ /*
+ * Synchronize the ITC with the BP
+ */
+ Dprintk("Going to syncup ITC with BP.\n");
+
+ ia64_sync_itc(0);
+ /*
+ * Get our bogomips.
+ */
+ ia64_init_itm();
+#ifdef CONFIG_PERFMON
+ perfmon_init_percpu();
+#endif
+
+ local_irq_enable();
+ calibrate_delay();
+ local_cpu_data->loops_per_jiffy = loops_per_jiffy;
+ /*
+ * Allow the master to continue.
+ */
+ set_bit(cpuid, &cpu_callin_map);
+ Dprintk("Stack on CPU %d at about %p\n",cpuid, &cpuid);
+}
+
+
+/*
+ * Activate a secondary processor. head.S calls this.
+ */
+int __init
+start_secondary (void *unused)
+{
+ extern int cpu_idle (void);
+
+ efi_map_pal_code();
+ cpu_init();
+ smp_callin();
+ Dprintk("CPU %d is set to go. \n", smp_processor_id());
+ while (!atomic_read(&smp_commenced))
+ ;
+
+ Dprintk("CPU %d is starting idle. \n", smp_processor_id());
+ return cpu_idle();
+}
+
+static int __init
+fork_by_hand (void)
+{
+ /*
+ * don't care about the eip and regs settings since
+ * we'll never reschedule the forked task.
+ */
+ return do_fork(CLONE_VM|CLONE_PID, 0, 0, 0);
+}
+
+static void __init
+do_boot_cpu (int sapicid)
+{
+ struct task_struct *idle;
+ int timeout, cpu;
+
+ cpu = ++cpucount;
+ /*
+ * We can't use kernel_thread since we must avoid to
+ * reschedule the child.
+ */
+ if (fork_by_hand() < 0)
+ panic("failed fork for CPU %d", cpu);
+
+ /*
+ * We remove it from the pidhash and the runqueue
+ * once we got the process:
+ */
+ idle = init_task.prev_task;
+ if (!idle)
+ panic("No idle process for CPU %d", cpu);
+
+ idle->processor = cpu;
+ ia64_cpu_to_sapicid[cpu] = sapicid;
+ idle->has_cpu = 1; /* we schedule the first task manually */
+
+ del_from_runqueue(idle);
+ unhash_process(idle);
+ init_tasks[cpu] = idle;
+
+ Dprintk("Sending Wakeup Vector to AP 0x%x/0x%x.\n", cpu, sapicid);
+
+ platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
+
+ /*
+ * Wait 10s total for the AP to start
+ */
+ Dprintk("Waiting on callin_map ...");
+ for (timeout = 0; timeout < 100000; timeout++) {
+ Dprintk(".");
+ if (test_bit(cpu, &cpu_callin_map))
+ break; /* It has booted */
+ udelay(100);
+ }
+ Dprintk("\n");
+
+ if (test_bit(cpu, &cpu_callin_map)) {
+ /* number CPUs logically, starting from 1 (BSP is 0) */
+ printk("CPU%d: ", cpu);
+ /*print_cpu_info(&cpu_data[cpu]); */
+ printk("CPU has booted.\n");
+ } else {
+ printk(KERN_ERR "Processor 0x%x/0x%x is stuck.\n", cpu, sapicid);
+ ia64_cpu_to_sapicid[cpu] = -1;
+ cpucount--;
+ }
+}
+
+/*
+ * Cycle through the APs sending Wakeup IPIs to boot each.
+ */
+void __init
+smp_boot_cpus (void)
+{
+ int sapicid, cpu;
+ int boot_cpu_id = hard_smp_processor_id();
+
+ /*
+ * Initialize the logical to physical CPU number mapping
+ * and the per-CPU profiling counter/multiplier
+ */
+
+ for (cpu = 0; cpu < NR_CPUS; cpu++)
+ ia64_cpu_to_sapicid[cpu] = -1;
+ smp_setup_percpu_timer();
+
+ /*
+ * We have the boot CPU online for sure.
+ */
+ set_bit(0, &cpu_online_map);
+ set_bit(0, &cpu_callin_map);
+
+ printk("Loops_per_jiffy for BOOT CPU = 0x%x\n", loops_per_jiffy);
+
+ local_cpu_data->loops_per_jiffy = loops_per_jiffy;
+ ia64_cpu_to_sapicid[0] = boot_cpu_id;
+
+ printk("Boot processor id 0x%x/0x%x\n", 0, boot_cpu_id);
+
+ global_irq_holder = 0;
+ current->processor = 0;
+ init_idle();
+
+ /*
+ * If SMP should be disabled, then really disable it!
+ */
+ if ((!max_cpus) || (max_cpus < -1)) {
+ printk(KERN_INFO "SMP mode deactivated.\n");
+ cpu_online_map = 1;
+ smp_num_cpus = 1;
+ goto smp_done;
+ }
+ if (max_cpus != -1)
+ printk (KERN_INFO "Limiting CPUs to %d\n", max_cpus);
+
+ if (smp_boot_data.cpu_count > 1) {
+
+ printk(KERN_INFO "SMP: starting up secondaries.\n");
+
+ for (cpu = 0; cpu < smp_boot_data.cpu_count; cpu++) {
+ /*
+ * Don't even attempt to start the boot CPU!
+ */
+ sapicid = smp_boot_data.cpu_phys_id[cpu];
+ if ((sapicid == -1) || (sapicid == hard_smp_processor_id()))
+ continue;
+
+ if ((max_cpus > 0) && (max_cpus == cpucount+1))
+ break;
+
+ do_boot_cpu(sapicid);
+
+ /*
+ * Make sure we unmap all failed CPUs
+ */
+ if (ia64_cpu_to_sapicid[cpu] == -1)
+ printk("phys CPU#%d not responding - cannot use it.\n", cpu);
+ }
+
+ smp_num_cpus = cpucount + 1;
+
+ /*
+ * Allow the user to impress friends.
+ */
+
+ printk("Before bogomips.\n");
+ if (!cpucount) {
+ printk(KERN_ERR "Error: only one processor found.\n");
+ } else {
+ unsigned long bogosum = 0;
+ for (cpu = 0; cpu < NR_CPUS; cpu++)
+ if (cpu_online_map & (1<<cpu))
+ bogosum += cpu_data[cpu].loops_per_jiffy;
+
+ printk(KERN_INFO"Total of %d processors activated (%lu.%02lu BogoMIPS).\n",
+ cpucount + 1, bogosum/(500000/HZ), (bogosum/(5000/HZ))%100);
+ }
+ }
+ smp_done:
+ ;
+}
+
+/*
+ * Assume that CPU's have been discovered by some platform-dependant interface. For
+ * SoftSDV/Lion, that would be ACPI.
+ *
+ * Setup of the IPI irq handler is done in irq.c:init_IRQ_SMP().
+ */
+void __init
+init_smp_config(void)
+{
+ struct fptr {
+ unsigned long fp;
+ unsigned long gp;
+ } *ap_startup;
+ long sal_ret;
+
+ /* Tell SAL where to drop the AP's. */
+ ap_startup = (struct fptr *) start_ap;
+ sal_ret = ia64_sal_set_vectors(SAL_VECTOR_OS_BOOT_RENDEZ,
+ __pa(ap_startup->fp), __pa(ap_startup->gp), 0, 0, 0, 0);
+ if (sal_ret < 0) {
+ printk("SMP: Can't set SAL AP Boot Rendezvous: %s\n Forcing UP mode\n",
+ ia64_sal_strerror(sal_ret));
+ max_cpus = 0;
+ smp_num_cpus = 1;
+ }
}
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.5-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/traps.c Wed May 30 22:45:44 2001
@@ -320,7 +320,7 @@
}
siginfo.si_signo = SIGFPE;
siginfo.si_errno = 0;
- siginfo.si_code = 0;
+ siginfo.si_code = __SI_FAULT; /* default code */
siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
if (isr & 0x11) {
siginfo.si_code = FPE_FLTINV;
@@ -338,7 +338,7 @@
/* raise exception */
siginfo.si_signo = SIGFPE;
siginfo.si_errno = 0;
- siginfo.si_code = 0;
+ siginfo.si_code = __SI_FAULT; /* default code */
siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
if (isr & 0x880) {
siginfo.si_code = FPE_FLTOVF;
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c linux-2.4.5-lia/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/unaligned.c Wed May 30 22:46:22 2001
@@ -1299,7 +1299,12 @@
len = sprintf(buf, "%s(%d): unaligned access to 0x%016lx, "
"ip=0x%016lx\n\r", current->comm, current->pid,
ifa, regs->cr_iip + ipsr->ri);
- tty_write_message(current->tty, buf);
+ /*
+ * Don't call tty_write_message() if we're in the kernel; we might
+ * be holding locks...
+ */
+ if (user_mode(regs))
+ tty_write_message(current->tty, buf);
buf[len-1] = '\0'; /* drop '\r' */
printk(KERN_WARNING "%s", buf); /* watch for command names containing %s */
}
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.5-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/unwind.c Wed May 30 22:46:37 2001
@@ -1847,9 +1847,9 @@
static void
init_unwind_table (struct unw_table *table, const char *name, unsigned long segment_base,
- unsigned long gp, void *table_start, void *table_end)
+ unsigned long gp, const void *table_start, const void *table_end)
{
- struct unw_table_entry *start = table_start, *end = table_end;
+ const struct unw_table_entry *start = table_start, *end = table_end;
table->name = name;
table->segment_base = segment_base;
@@ -1862,9 +1862,9 @@
void *
unw_add_unwind_table (const char *name, unsigned long segment_base, unsigned long gp,
- void *table_start, void *table_end)
+ const void *table_start, const void *table_end)
{
- struct unw_table_entry *start = table_start, *end = table_end;
+ const struct unw_table_entry *start = table_start, *end = table_end;
struct unw_table *table;
unsigned long flags;
diff -urN linux-davidm/arch/ia64/lib/swiotlb.c linux-2.4.5-lia/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Sun Apr 29 15:49:26 2001
+++ linux-2.4.5-lia/arch/ia64/lib/swiotlb.c Wed May 30 22:47:32 2001
@@ -263,7 +263,7 @@
memset(ret, 0, size);
pci_addr = virt_to_phys(ret);
- if ((pci_addr & ~hwdev->dma_mask) != 0)
+ if (hwdev && (pci_addr & ~hwdev->dma_mask) != 0)
panic("swiotlb_alloc_consistent: allocated memory is out of range for PCI device");
*dma_handle = pci_addr;
return ret;
diff -urN linux-davidm/arch/ia64/mm/extable.c linux-2.4.5-lia/arch/ia64/mm/extable.c
--- linux-davidm/arch/ia64/mm/extable.c Sun Apr 29 15:49:26 2001
+++ linux-2.4.5-lia/arch/ia64/mm/extable.c Wed May 30 22:47:51 2001
@@ -6,8 +6,9 @@
*/
#include <linux/config.h>
-#include <linux/module.h>
+
#include <asm/uaccess.h>
+#include <asm/module.h>
extern const struct exception_table_entry __start___ex_table[];
extern const struct exception_table_entry __stop___ex_table[];
@@ -15,35 +16,25 @@
static inline const struct exception_table_entry *
search_one_table (const struct exception_table_entry *first,
const struct exception_table_entry *last,
- signed long value)
+ unsigned long ip, unsigned long gp)
{
- /* Abort early if the search value is out of range. */
- if (value != (signed int)value)
- return 0;
-
while (first <= last) {
const struct exception_table_entry *mid;
long diff;
- /*
- * We know that first and last are both kernel virtual
- * pointers (region 7) so first+last will cause an
- * overflow. We fix that by calling __va() on the
- * result, which will ensure that the top two bits get
- * set again.
- */
- mid = (void *) __va((((__u64) first + (__u64) last)/2/sizeof(*mid))*sizeof(*mid));
- diff = mid->addr - value;
+
+ mid = &first[(last - first)/2];
+ diff = (mid->addr + gp) - ip;
if (diff == 0)
return mid;
else if (diff < 0)
- first = mid+1;
+ first = mid + 1;
else
- last = mid-1;
+ last = mid - 1;
}
return 0;
}
-#ifndef CONFIG_MODULE
+#ifndef CONFIG_MODULES
register unsigned long main_gp __asm__("gp");
#endif
@@ -53,23 +44,25 @@
const struct exception_table_entry *entry;
struct exception_fixup fix = { 0 };
-#ifndef CONFIG_MODULE
+#ifndef CONFIG_MODULES
/* There is only the kernel to search. */
- entry = search_one_table(__start___ex_table, __stop___ex_table - 1, addr - main_gp);
+ entry = search_one_table(__start___ex_table, __stop___ex_table - 1, addr, main_gp);
if (entry)
fix.cont = entry->cont + main_gp;
return fix;
#else
- struct exception_table_entry *ret;
- /* The kernel is the last "module" -- no need to treat it special. */
+ struct archdata *archdata;
struct module *mp;
+ /* The kernel is the last "module" -- no need to treat it special. */
for (mp = module_list; mp ; mp = mp->next) {
if (!mp->ex_table_start)
continue;
- entry = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, addr - mp->gp);
+ archdata = (struct archdata *) mp->archdata_start;
+ entry = search_one_table(mp->ex_table_start, mp->ex_table_end - 1,
+ addr, (unsigned long) archdata->gp);
if (entry) {
- fix.cont = entry->cont + mp->gp;
+ fix.cont = entry->cont + (unsigned long) archdata->gp;
return fix;
}
}
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.5-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/arch/ia64/mm/init.c Wed May 30 22:52:30 2001
@@ -354,6 +354,7 @@
{
extern char __start_gate_section[];
long reserved_pages, codesize, datasize, initsize;
+ unsigned long num_pgt_pages;
#ifdef CONFIG_PCI
/*
@@ -386,6 +387,19 @@
(unsigned long) nr_free_pages() << (PAGE_SHIFT - 10),
max_mapnr << (PAGE_SHIFT - 10), codesize >> 10, reserved_pages << (PAGE_SHIFT - 10),
datasize >> 10, initsize >> 10);
+
+ /*
+ * Allow for enough (cached) page table pages so that we can map the entire memory
+ * at least once. Each task also needs a couple of page table pages, so add in a
+ * fudge factor for that (don't use "threads-max" here; that would be wrong!).
+ * Don't allow the cache to be more than 10% of total memory, though.
+ */
+# define NUM_TASKS 500 /* typical number of tasks */
+ num_pgt_pages = nr_free_pages() / PTRS_PER_PGD + NUM_TASKS;
+ if (num_pgt_pages > nr_free_pages() / 10)
+ num_pgt_pages = nr_free_pages() / 10;
+ if (num_pgt_pages > pgt_cache_water[1])
+ pgt_cache_water[1] = num_pgt_pages;
/* install the gate page in the global page table: */
put_gate_page(virt_to_page(__start_gate_section), GATE_ADDR);
diff -urN linux-davidm/drivers/acpi/acpiconf.c linux-2.4.5-lia/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Wed May 30 23:17:48 2001
+++ linux-2.4.5-lia/drivers/acpi/acpiconf.c Wed May 30 22:52:55 2001
@@ -1,21 +1,24 @@
/*
* acpiconf.c - ACPI based kernel configuration
*
- * Copyright (C) 2000 Intel Corp.
- * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ * Copyright (C) 2000-2001 Intel Corp.
+ * Copyright (C) 2000-2001 J.I. Lee <Jung-Ik.Lee@intel.com>
*
- * revision history:
+ * Revision History:
* 9/15/2000 J.I.
* Major revision: for new ACPI initialization requirements
- * for ACPI CA label 915
* 11/15/2000 J.I.
- * Major revision: ACPI 2.0 tables support with ACPI CA label 1115
+ * Major revision: ACPI 2.0 tables support
+ * 04/23/2001 J.I.
+ * Rewrote functions to support multiple _PRTs of child P2Ps
+ * under root pci bus
*/
#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
+#include <linux/pci.h>
#include <asm/system.h>
#include <asm/iosapic.h>
#include <asm/efi.h>
@@ -114,7 +117,7 @@
status = acpi_cf_get_prt (&prts);
if (ACPI_FAILURE (status)) {
- printk("Acpi Cfg: get prt fail\n");
+ printk("Acpi cfg: get prt fail\n");
return status;
}
@@ -124,59 +127,76 @@
acpi_cf_print_pci_vectors (*vectors, *num_pci_vectors);
}
#endif
- printk("Acpi Cfg: get PRT %s\n", (ACPI_SUCCESS(status))?"pass":"fail");
+ printk("Acpi cfg: get PCI interrupt vectors %s\n",
+ (ACPI_SUCCESS(status))?"pass":"fail");
return status;
}
static PCI_ROUTING_TABLE *pci_routing_tables[PCI_MAX_BUS] __initdata = {NULL};
-static struct pci_vector_struct *vectors_to_free __initdata = NULL;
+
+
+typedef struct _acpi_rpb {
+ NATIVE_UINT rpb_busnum;
+ NATIVE_UINT lastbusnum;
+ ACPI_HANDLE rpb_handle;
+} acpi_rpb_t;
static ACPI_STATUS __init
-acpi_cf_get_prt (
- void **prts
+acpi_cf_evaluate_method (
+ ACPI_HANDLE handle,
+ UINT8 *method_name,
+ NATIVE_UINT *nuint
)
{
- ACPI_STATUS status;
+ UINT32 tnuint = 0;
+ ACPI_STATUS status;
- status = acpi_walk_namespace ( ACPI_TYPE_DEVICE,
- ACPI_ROOT_OBJECT,
- ACPI_UINT32_MAX,
- acpi_cf_get_prt_callback,
- NULL,
- NULL );
+ ACPI_BUFFER ret_buf;
+ ACPI_OBJECT *ext_obj;
+ UINT8 buf[PATHNAME_MAX];
+
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) buf;
+ status = acpi_evaluate_object(handle, method_name, NULL, &ret_buf);
if (ACPI_FAILURE(status)) {
- printk("Acpi cfg:walk namespace error=0x%x\n", status);
- }
+ if (status == AE_NOT_FOUND) {
+ printk("Acpi cfg: no %s found\n", method_name);
+ } else {
+ printk("Acpi cfg: %s fail=0x%x\n", method_name, status);
+ }
+ } else {
+ ext_obj = (ACPI_OBJECT *) ret_buf.pointer;
- *prts = (void *)pci_routing_tables;
+ switch (ext_obj->type) {
+ case ACPI_TYPE_INTEGER:
+ tnuint = (NATIVE_UINT) ext_obj->integer.value;
+ break;
+ default:
+ printk("Acpi cfg: %s obj type incorrect\n", method_name);
+ status = AE_TYPE;
+ break;
+ }
+ }
- return status;
+ *nuint = tnuint;
+ return (status);
}
static ACPI_STATUS __init
-acpi_cf_get_prt_callback (
+acpi_cf_evaluate_PRT (
ACPI_HANDLE handle,
- UINT32 Level,
- void *context,
- void **retval
+ PCI_ROUTING_TABLE **prt
)
{
ACPI_BUFFER acpi_buffer;
- PCI_ROUTING_TABLE *prt;
- NATIVE_UINT busnum = 0;
- static NATIVE_UINT next_busnum = 0;
ACPI_STATUS status;
- ACPI_BUFFER ret_buf;
- ACPI_OBJECT *ext_obj;
- UINT8 buf[PATHNAME_MAX];
-
-
acpi_buffer.length = 0;
acpi_buffer.pointer = NULL;
@@ -184,105 +204,277 @@
switch (status) {
case AE_BUFFER_OVERFLOW:
- dprintk(("Acpi Cfg: _PRT found. Need %d bytes\n", acpi_buffer.length));
+ dprintk(("Acpi cfg: _PRT found. need %d bytes\n",
+ acpi_buffer.length));
break; /* found */
- case AE_NOT_FOUND:
- return AE_OK; /* let acpi_walk_namespace continue. */
default:
- printk("Acpi cfg:get irq routing table fail=0x%x\n", status);
- return AE_ERROR;
+ printk("Acpi cfg: _PRT fail=0x%x\n", status);
+ case AE_NOT_FOUND:
+ return status;
}
- prt = (PCI_ROUTING_TABLE *) acpi_os_callocate (acpi_buffer.length);
- if (prt == NULL) {
- printk("Acpi cfg:callocate %d bytes Fail\n", acpi_buffer.length);
- return AE_ERROR;
+ *prt = (PCI_ROUTING_TABLE *) acpi_os_callocate (acpi_buffer.length);
+ if (!*prt) {
+ printk("Acpi cfg: callocate %d bytes for _PRT fail\n",
+ acpi_buffer.length);
+ return AE_NO_MEMORY;
}
- acpi_buffer.pointer = (void *) prt;
+ acpi_buffer.pointer = (void *) *prt;
status = acpi_get_irq_routing_table (handle, &acpi_buffer);
if (ACPI_FAILURE(status)) {
- printk("Acpi cfg:get irq routing table Fail=0x%x\n", status);
+ printk("Acpi cfg: _PRT fail=0x%x.\n", status);
acpi_os_free(prt);
- return AE_OK;
}
+ return status;
+}
+
+static ACPI_STATUS __init
+acpi_cf_get_root_pci_callback (
+ ACPI_HANDLE handle,
+ UINT32 Level,
+ void *context,
+ void **retval
+ )
+{
+ NATIVE_UINT busnum = 0;
+ ACPI_STATUS status;
+ acpi_rpb_t rpb;
+ PCI_ROUTING_TABLE *prt;
+
+ ACPI_BUFFER ret_buf;
+ UINT8 path_name[PATHNAME_MAX];
+
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
ret_buf.length = PATHNAME_MAX;
- ret_buf.pointer = (void *) buf;
+ ret_buf.pointer = (void *) path_name;
status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
- if (ACPI_SUCCESS(status)) {
- printk("Acpi cfg:path=[%s]\n", (char *)ret_buf.pointer);
- }
+#else
+ memset(path_name, 0, sizeof (path_name));
#endif
+ /*
+ * get bus number of this pci root bridge
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__BBN, &busnum);
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s evaluate _BBN fail=0x%x\n",
+ path_name, status);
+ return (status);
+ }
+ printk("Acpi cfg:%s ROOT PCI bus %ld\n", path_name, busnum);
+
+ /*
+ * evaluate root pci bridge's _CRS for Bus number range for child P2P
+ * (bus min/max/len) - not yet.
+ */
+
+ /*
+ * get immediate _PRT of this root pci bridge if any
+ */
+ status = acpi_cf_evaluate_PRT (handle, &prt);
+ switch(status) {
+ case AE_NOT_FOUND:
+ break;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _PRT fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ dprintk(("Acpi cfg:%s bus %ld got _PRT\n", path_name, busnum));
+ acpi_cf_add_to_pci_routing_tables (busnum, prt);
+ break;
+ }
+
+
+ /*
+ * walk down this root pci bridge to get _PRTs if any
+ */
+ rpb.rpb_busnum = rpb.lastbusnum = busnum;
+ rpb.rpb_handle = handle;
+ status = acpi_walk_namespace ( ACPI_TYPE_DEVICE,
+ handle,
+ ACPI_UINT32_MAX,
+ acpi_cf_get_prt_callback,
+ &rpb,
+ NULL );
+ if (ACPI_FAILURE(status))
+ printk("Acpi cfg:%s walk namespace for _PRT error=0x%x\n",
+ path_name, status);
+
+ return (status);
+}
+
+
+/*
+ * handle _PRTs of immediate P2Ps of root pci.
+ */
+static ACPI_STATUS __init
+acpi_cf_associate_prt_to_bus (
+ ACPI_HANDLE handle,
+ acpi_rpb_t *rpb,
+ NATIVE_UINT *retbusnum,
+ NATIVE_UINT depth
+ )
+{
+ ACPI_STATUS status;
+ UINT32 segbus;
+ NATIVE_UINT devfn;
+ UINT8 bn;
+
+ ACPI_BUFFER ret_buf;
+ UINT8 path_name[PATHNAME_MAX];
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
ret_buf.length = PATHNAME_MAX;
- ret_buf.pointer = (void *) buf;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
- status = acpi_evaluate_object(handle, METHOD_NAME__BBN, NULL, &ret_buf);
+ /*
+ * get devfn from _ADR
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__ADR, &devfn);
if (ACPI_FAILURE(status)) {
- if (status == AE_NOT_FOUND) {
- printk("Acpi cfg:_BBN not found for _PRT %ld: set busnum to %ld\n", next_busnum, next_busnum);
- } else {
- printk("Acpi cfg:_BBN fail=0x%x for _PRT %ld: set busnum to %ld\n", status, next_busnum, next_busnum);
- }
- if (next_busnum)
- printk("Acpi cfg: Warning: Invalid or unimplemented _BBNs for PRTs may cause incorrect PCI vector configuration. Check AML\n");
- busnum = next_busnum++;
- } else {
- ext_obj = (ACPI_OBJECT *) ret_buf.pointer;
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s _ADR fail=0x%x. Set busnum to %ld\n",
+ path_name, status, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s _ADR =0x%x\n", path_name, (UINT32)devfn));
- switch (ext_obj->type) {
- case ACPI_TYPE_INTEGER:
- busnum = (NATIVE_UINT) ext_obj->integer.value;
- next_busnum = busnum + 1;
- dprintk(("Acpi cfg:_BBN busnum is %ld\n ", busnum));
- break;
- default:
- printk("Acpi cfg:_BBN object type incorrect: set busnum to %ld\n ", next_busnum);
- printk("Acpi cfg: Warning: Invalid _BBN for PRT may cause incorrect PCI vector configuration. Check AML\n");
- busnum = next_busnum++;
- break;
- }
- /*
- * Placeholder for PCI SEG when PAL supports it,
- * status = acpi_evaluate_object(handle, "_SEG", NULL, &ret_buf);
- */
+ /*
+ * access pci config space for bus number
+ * segbus = from rpb, devfn = from _ADR
+ */
+ segbus = (UINT32)(rpb->rpb_busnum & 0xFFFFFFFF);
+
+ status = acpi_os_read_pci_cfg_byte(segbus,
+ (UINT32)devfn, PCI_PRIMARY_BUS, &bn);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+ path_name, status, segbus, (UINT32)devfn,
+ PCI_PRIMARY_BUS);
+ printk("Acpi cfg:%s Set busnum to %ld\n",
+ path_name, *retbusnum);
+ return AE_OK;
}
+ dprintk(("Acpi cfg:%s pribus %d\n", path_name, bn));
- ret_buf.length = PATHNAME_MAX;
- ret_buf.pointer = (void *) buf;
+ status = acpi_os_read_pci_cfg_byte(segbus,
+ (UINT32)devfn, PCI_SECONDARY_BUS, &bn);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+ path_name, status, segbus, (UINT32)devfn,
+ PCI_SECONDARY_BUS);
+ printk("Acpi cfg:%s Set busnum to %ld\n",
+ path_name, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s busnum %d\n", path_name, bn));
+
+ *retbusnum = (NATIVE_UINT)bn;
+ return AE_OK;
+}
+
+
+static ACPI_STATUS __init
+acpi_cf_get_prt (
+ void **prts
+ )
+{
+ ACPI_STATUS status;
+
+ status = acpi_get_devices ( PCI_ROOT_HID_STRING,
+ acpi_cf_get_root_pci_callback,
+ NULL,
+ NULL );
- status = acpi_evaluate_object(handle, METHOD_NAME__STA, NULL, &ret_buf);
if (ACPI_FAILURE(status)) {
- if (status == AE_NOT_FOUND)
- dprintk(("Acpi cfg:no _STA: pci bus %ld exist\n", busnum));
- else
- printk("Acpi cfg:_STA fail=0x%x: pci bus %ld exist. Check AML\n", status, busnum);
- } else {
- ext_obj = (ACPI_OBJECT *) ret_buf.pointer;
+ printk("Acpi cfg:get_device PCI ROOT HID error=0x%x\n", status);
+ }
- switch (ext_obj->type) {
- case ACPI_TYPE_INTEGER:
- if((NATIVE_UINT) ext_obj->integer.value & ACPI_STA_DEVICE_PRESENT) {
- dprintk(("Acpi cfg:_STA: pci bus %ld exist\n", busnum));
- } else {
- printk("Acpi cfg:_STA: pci bus %ld not exist. Discarding the _PRT\n", busnum);
- next_busnum--;
- acpi_os_free(prt);
- return AE_OK;
- }
- break;
- default:
- printk("Acpi cfg:_STA object type incorrect: pci bus %ld exist. Check AML\n", busnum);
- break;
+ *prts = (void *)pci_routing_tables;
+
+ return status;
+}
+
+static ACPI_STATUS __init
+acpi_cf_get_prt_callback (
+ ACPI_HANDLE handle,
+ UINT32 Level,
+ void *context,
+ void **retval
+ )
+{
+ PCI_ROUTING_TABLE *prt;
+ NATIVE_UINT busnum = 0;
+ NATIVE_UINT temp = 0x0F;
+ ACPI_STATUS status;
+
+ ACPI_BUFFER ret_buf;
+ UINT8 path_name[PATHNAME_MAX];
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ status = acpi_cf_evaluate_PRT (handle, &prt);
+ switch(status) {
+ case AE_NOT_FOUND:
+ return AE_OK;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _PRT fail=0x%x\n",
+ path_name, status);
+ return status;
}
}
+ /*
+ * evaluate _STA in case this device does not exist
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__STA, &temp);
+ switch(status) {
+ case AE_NOT_FOUND:
+ break;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _STA fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ if (!(temp & ACPI_STA_DEVICE_PRESENT)) {
+ dprintk(("Acpi cfg:%s not exist. _PRT discarded\n",
+ path_name));
+ acpi_os_free(prt);
+ return AE_OK;
+ }
+ break;
+ }
+ /*
+ * associate a bus number to this _PRT since
+ * this _PRT is not on root pci bridge
+ */
+ acpi_cf_associate_prt_to_bus(handle, context, &busnum, 0);
+
+ printk("Acpi cfg:%s busnum %ld got _PRT\n", path_name, busnum);
acpi_cf_add_to_pci_routing_tables (busnum, prt);
return AE_OK;
@@ -304,7 +496,6 @@
if (pci_routing_tables[busnum]) {
printk("Acpi cfg:duplicate PRT for pci bus %ld. overriding...\n", busnum);
acpi_os_free(pci_routing_tables[busnum]);
- /* override... */
}
pci_routing_tables[busnum] = prt;
@@ -333,15 +524,14 @@
if (prt) {
for ( ; prt->length > 0; nvec++) {
prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
- //prt = ROUND_PTR_UP_TO_4(prt, PCI_ROUTING_TABLE);
}
}
}
*num_pci_vectors = nvec;
*vectors = acpi_os_callocate (sizeof(struct pci_vector_struct) * nvec);
- if (*vectors == NULL) {
- printk("Acpi cfg:callocate error\n");
+ if (!*vectors) {
+ printk("Acpi cfg: callocate for pci_vector error\n");
return AE_NO_MEMORY;
}
@@ -358,13 +548,10 @@
pvec->irq = (UINT8)prt->source_index;
prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
- //prt = ROUND_PTR_UP_TO_4(prt, PCI_ROUTING_TABLE);
}
acpi_os_free((void *)prtf);
}
}
-
- vectors_to_free = *vectors;
return AE_OK;
}
diff -urN linux-davidm/include/asm-ia64/bitops.h linux-2.4.5-lia/include/asm-ia64/bitops.h
--- linux-davidm/include/asm-ia64/bitops.h Sun Apr 29 15:50:41 2001
+++ linux-2.4.5-lia/include/asm-ia64/bitops.h Wed May 30 23:29:37 2001
@@ -2,8 +2,8 @@
#define _ASM_IA64_BITOPS_H
/*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
* 02/04/00 D. Mosberger Require 64-bit alignment for bitops, per suggestion from davem
*/
@@ -11,11 +11,10 @@
#include <asm/system.h>
/*
- * These operations need to be atomic. The address must be (at least)
- * 32-bit aligned. Note that there are driver (e.g., eepro100) which
- * use these operations to operate on hw-defined data-structures, so
- * we can't easily change these operations to force a bigger
- * alignment.
+ * These operations need to be atomic. The address must be (at least) "long" aligned.
+ * Note that there are drivers (e.g., eepro100) which use these operations to operate on
+ * hw-defined data-structures, so we can't easily change these operations to force a
+ * bigger alignment.
*
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*/
@@ -57,6 +56,18 @@
} while (cmpxchg_acq(m, old, new) != old);
}
+/*
+ * WARNING: non atomic version.
+ */
+static __inline__ void
+__change_bit (int nr, void *addr)
+{
+ volatile __u32 *m = (__u32 *) addr + (nr >> 5);
+ __u32 bit = (1 << (nr & 31));
+
+ *m = *m ^ bit;
+}
+
static __inline__ void
change_bit (int nr, volatile void *addr)
{
@@ -107,6 +118,20 @@
return (old & ~mask) != 0;
}
+/*
+ * WARNING: non atomic version.
+ */
+static __inline__ int
+__test_and_change_bit (int nr, void *addr)
+{
+ __u32 old, bit = (1 << (nr & 31));
+ __u32 *m = (__u32 *) addr + (nr >> 5);
+
+ old = *m;
+ *m = old ^ bit;
+ return (old & bit) != 0;
+}
+
static __inline__ int
test_and_change_bit (int nr, volatile void *addr)
{
@@ -131,8 +156,9 @@
}
/*
- * ffz = Find First Zero in word. Undefined if no zero exists,
- * so code should check against ~0UL first..
+ * ffz = "find first zero". Returns the bit number (0..63) of the first (least
+ * significant) bit that is zero in X. Undefined if no zero exists, so code should check
+ * against ~0UL first...
*/
static inline unsigned long
ffz (unsigned long x)
@@ -160,9 +186,10 @@
}
/*
- * ffs: find first bit set. This is defined the same way as
- * the libc and compiler builtin ffs routines, therefore
- * differs in spirit from the above ffz (man ffs).
+ * ffs: find first bit set. This is defined the same way as the libc and compiler builtin
+ * ffs routines, therefore differs in spirit from the above ffz (man ffs): it operates on
+ * "int" values only and the result value is the bit number + 1. ffs(0) is defined to
+ * return zero.
*/
#define ffs(x) __builtin_ffs(x)
diff -urN linux-davidm/include/asm-ia64/mca_asm.h linux-2.4.5-lia/include/asm-ia64/mca_asm.h
--- linux-davidm/include/asm-ia64/mca_asm.h Sun Apr 29 15:50:41 2001
+++ linux-2.4.5-lia/include/asm-ia64/mca_asm.h Wed May 30 22:56:29 2001
@@ -1,5 +1,5 @@
/*
- * File: mca_asm.h
+ * File: mca_asm.h
*
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) Vijay Chander (vijay@engr.sgi.com)
@@ -16,23 +16,23 @@
#define PSR_RT 27
#define PSR_IT 36
#define PSR_BN 44
-
+
/*
* This macro converts a instruction virtual address to a physical address
* Right now for simulation purposes the virtual addresses are
* direct mapped to physical addresses.
- * 1. Lop off bits 61 thru 63 in the virtual address
+ * 1. Lop off bits 61 thru 63 in the virtual address
*/
#define INST_VA_TO_PA(addr) \
- dep addr = 0, addr, 61, 3;
+ dep addr = 0, addr, 61, 3;
/*
* This macro converts a data virtual address to a physical address
* Right now for simulation purposes the virtual addresses are
* direct mapped to physical addresses.
- * 1. Lop off bits 61 thru 63 in the virtual address
+ * 1. Lop off bits 61 thru 63 in the virtual address
*/
#define DATA_VA_TO_PA(addr) \
- dep addr = 0, addr, 61, 3;
+ dep addr = 0, addr, 61, 3;
/*
* This macro converts a data physical address to a virtual address
* Right now for simulation purposes the virtual addresses are
@@ -40,7 +40,7 @@
* 1. Put 0x7 in bits 61 thru 63.
*/
#define DATA_PA_TO_VA(addr,temp) \
- mov temp = 0x7 ; \
+ mov temp = 0x7 ;; \
dep addr = temp, addr, 61, 3;
/*
@@ -48,11 +48,11 @@
* and starts execution in physical mode with all the address
* translations turned off.
* 1. Save the current psr
- * 2. Make sure that all the upper 32 bits are off
+ * 2. Make sure that all the upper 32 bits are off
*
* 3. Clear the interrupt enable and interrupt state collection bits
* in the psr before updating the ipsr and iip.
- *
+ *
* 4. Turn off the instruction, data and rse translation bits of the psr
* and store the new value into ipsr
* Also make sure that the interrupts are disabled.
@@ -71,7 +71,7 @@
mov old_psr = psr; \
;; \
dep old_psr = 0, old_psr, 32, 32; \
- \
+ \
mov ar.rsc = 0 ; \
;; \
mov temp2 = ar.bspstore; \
@@ -86,7 +86,7 @@
mov temp1 = psr; \
mov temp2 = psr; \
;; \
- \
+ \
dep temp2 = 0, temp2, PSR_IC, 2; \
;; \
mov psr.l = temp2; \
@@ -94,11 +94,11 @@
srlz.d; \
dep temp1 = 0, temp1, 32, 32; \
;; \
- dep temp1 = 0, temp1, PSR_IT, 1; \
+ dep temp1 = 0, temp1, PSR_IT, 1; \
;; \
- dep temp1 = 0, temp1, PSR_DT, 1; \
+ dep temp1 = 0, temp1, PSR_DT, 1; \
;; \
- dep temp1 = 0, temp1, PSR_RT, 1; \
+ dep temp1 = 0, temp1, PSR_RT, 1; \
;; \
dep temp1 = 0, temp1, PSR_I, 1; \
;; \
@@ -125,72 +125,73 @@
* This macro jumps to the instruction at the given virtual address
* and starts execution in virtual mode with all the address
* translations turned on.
- * 1. Get the old saved psr
- *
- * 2. Clear the interrupt enable and interrupt state collection bits
+ * 1. Get the old saved psr
+ *
+ * 2. Clear the interrupt enable and interrupt state collection bits
* in the current psr.
- *
+ *
* 3. Set the instruction translation bit back in the old psr
* Note we have to do this since we are right now saving only the
* lower 32-bits of old psr.(Also the old psr has the data and
* rse translation bits on)
- *
+ *
* 4. Set ipsr to this old_psr with "it" bit set and "bn" = 1.
*
- * 5. Set iip to the virtual address of the next instruction bundle.
+ * 5. Set iip to the virtual address of the next instruction bundle.
*
* 6. Do an rfi to move ipsr to psr and iip to ip.
*/
-#define VIRTUAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \
- mov temp2 = psr; \
- ;; \
- dep temp2 = 0, temp2, PSR_IC, 2; \
- ;; \
- mov psr.l = temp2; \
- mov ar.rsc = 0; \
- ;; \
- srlz.d; \
- mov temp2 = ar.bspstore; \
- ;; \
- DATA_PA_TO_VA(temp2,temp1); \
- ;; \
- mov temp1 = ar.rnat; \
- ;; \
- mov ar.bspstore = temp2; \
- ;; \
- mov ar.rnat = temp1; \
- ;; \
- mov temp1 = old_psr; \
- ;; \
- mov temp2 = 1 ; \
- dep temp1 = temp2, temp1, PSR_I, 1; \
- ;; \
- dep temp1 = temp2, temp1, PSR_IC, 1; \
- ;; \
- dep temp1 = temp2, temp1, PSR_IT, 1; \
- ;; \
- dep temp1 = temp2, temp1, PSR_DT, 1; \
- ;; \
- dep temp1 = temp2, temp1, PSR_RT, 1; \
- ;; \
- dep temp1 = temp2, temp1, PSR_BN, 1; \
- ;; \
- \
- mov cr.ipsr = temp1; \
- movl temp2 = start_addr; \
- ;; \
- mov cr.iip = temp2; \
- DATA_PA_TO_VA(sp, temp1); \
- DATA_PA_TO_VA(gp, temp2); \
- ;; \
- nop 1; \
- nop 2; \
- nop 1; \
- rfi; \
+#define VIRTUAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \
+ mov temp2 = psr; \
+ ;; \
+ dep temp2 = 0, temp2, PSR_IC, 2; \
+ ;; \
+ mov psr.l = temp2; \
+ mov ar.rsc = 0; \
+ ;; \
+ srlz.d; \
+ mov temp2 = ar.bspstore; \
+ ;; \
+ DATA_PA_TO_VA(temp2,temp1); \
+ ;; \
+ mov temp1 = ar.rnat; \
+ ;; \
+ mov ar.bspstore = temp2; \
+ ;; \
+ mov ar.rnat = temp1; \
+ ;; \
+ mov temp1 = old_psr; \
+ ;; \
+ mov temp2 = 1 \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_I, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_IC, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_IT, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_DT, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_RT, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_BN, 1; \
+ ;; \
+ \
+ mov cr.ipsr = temp1; \
+ movl temp2 = start_addr; \
+ ;; \
+ mov cr.iip = temp2; \
+ DATA_PA_TO_VA(sp, temp1); \
+ DATA_PA_TO_VA(gp, temp2); \
+ ;; \
+ nop 1; \
+ nop 2; \
+ nop 1; \
+ rfi; \
;;
-/*
+/*
* The following offsets capture the order in which the
* RSE related registers from the old context are
* saved onto the new stack frame.
@@ -198,15 +199,15 @@
* +-----------------------+
* |NDIRTY [BSP - BSPSTORE]|
* +-----------------------+
- * | RNAT |
+ * | RNAT |
* +-----------------------+
- * | BSPSTORE |
+ * | BSPSTORE |
* +-----------------------+
- * | IFS |
+ * | IFS |
* +-----------------------+
- * | PFS |
+ * | PFS |
* +-----------------------+
- * | RSC |
+ * | RSC |
* +-----------------------+ <-------- Bottom of new stack frame
*/
#define rse_rsc_offset 0
@@ -229,23 +230,23 @@
* 8. Read and save the new BSP to calculate the #dirty registers
* NOTE: Look at pages 11-10, 11-11 in PRM Vol 2
*/
-#define rse_switch_context(temp,p_stackframe,p_bspstore) \
- ;; \
- mov temp=ar.rsc;; \
- st8 [p_stackframe]=temp,8;; \
- mov temp=ar.pfs;; \
- st8 [p_stackframe]=temp,8; \
- cover ;; \
- mov temp=cr.ifs;; \
- st8 [p_stackframe]=temp,8;; \
- mov temp=ar.bspstore;; \
- st8 [p_stackframe]=temp,8;; \
- mov temp=ar.rnat;; \
- st8 [p_stackframe]=temp,8; \
- mov ar.bspstore=p_bspstore;; \
- mov temp=ar.bsp;; \
- sub temp=temp,p_bspstore;; \
- st8 [p_stackframe]=temp,8
+#define rse_switch_context(temp,p_stackframe,p_bspstore) \
+ ;; \
+ mov temp=ar.rsc;; \
+ st8 [p_stackframe]=temp,8;; \
+ mov temp=ar.pfs;; \
+ st8 [p_stackframe]=temp,8; \
+ cover ;; \
+ mov temp=cr.ifs;; \
+ st8 [p_stackframe]=temp,8;; \
+ mov temp=ar.bspstore;; \
+ st8 [p_stackframe]=temp,8;; \
+ mov temp=ar.rnat;; \
+ st8 [p_stackframe]=temp,8; \
+ mov ar.bspstore=p_bspstore;; \
+ mov temp=ar.bsp;; \
+ sub temp=temp,p_bspstore;; \
+ st8 [p_stackframe]=temp,8
/*
* rse_return_context
@@ -253,7 +254,7 @@
* 2. Store the number of dirty registers RSC.loadrs field
* 3. Issue a loadrs to insure that any registers from the interrupted
* context which were saved on the new stack frame have been loaded
- * back into the stacked registers
+ * back into the stacked registers
* 4. Restore BSPSTORE
* 5. Restore RNAT
* 6. Restore PFS
@@ -261,44 +262,44 @@
* 8. Restore RSC
* 9. Issue an RFI
*/
-#define rse_return_context(psr_mask_reg,temp,p_stackframe) \
- ;; \
- alloc temp=ar.pfs,0,0,0,0; \
- add p_stackframe=rse_ndirty_offset,p_stackframe;; \
- ld8 temp=[p_stackframe];; \
- shl temp=temp,16;; \
- mov ar.rsc=temp;; \
- loadrs;; \
- add p_stackframe=-rse_ndirty_offset+rse_bspstore_offset,p_stackframe;;\
- ld8 temp=[p_stackframe];; \
- mov ar.bspstore=temp;; \
- add p_stackframe=-rse_bspstore_offset+rse_rnat_offset,p_stackframe;;\
- ld8 temp=[p_stackframe];; \
- mov ar.rnat=temp;; \
- add p_stackframe=-rse_rnat_offset+rse_pfs_offset,p_stackframe;; \
- ld8 temp=[p_stackframe];; \
- mov ar.pfs=temp; \
- add p_stackframe=-rse_pfs_offset+rse_ifs_offset,p_stackframe;; \
- ld8 temp=[p_stackframe];; \
- mov cr.ifs=temp; \
- add p_stackframe=-rse_ifs_offset+rse_rsc_offset,p_stackframe;; \
- ld8 temp=[p_stackframe];; \
- mov ar.rsc=temp ; \
- add p_stackframe=-rse_rsc_offset,p_stackframe; \
- mov temp=cr.ipsr;; \
- st8 [p_stackframe]=temp,8; \
- mov temp=cr.iip;; \
- st8 [p_stackframe]=temp,-8; \
- mov temp=psr;; \
- or temp=temp,psr_mask_reg;; \
- mov cr.ipsr=temp;; \
- mov temp=ip;; \
- add temp=0x30,temp;; \
- mov cr.iip=temp;; \
- rfi;; \
- ld8 temp=[p_stackframe],8;; \
- mov cr.ipsr=temp;; \
- ld8 temp=[p_stackframe];; \
- mov cr.iip=temp
+#define rse_return_context(psr_mask_reg,temp,p_stackframe) \
+ ;; \
+ alloc temp=ar.pfs,0,0,0,0; \
+ add p_stackframe=rse_ndirty_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ shl temp=temp,16;; \
+ mov ar.rsc=temp;; \
+ loadrs;; \
+ add p_stackframe=-rse_ndirty_offset+rse_bspstore_offset,p_stackframe;;\
+ ld8 temp=[p_stackframe];; \
+ mov ar.bspstore=temp;; \
+ add p_stackframe=-rse_bspstore_offset+rse_rnat_offset,p_stackframe;;\
+ ld8 temp=[p_stackframe];; \
+ mov ar.rnat=temp;; \
+ add p_stackframe=-rse_rnat_offset+rse_pfs_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ mov ar.pfs=temp; \
+ add p_stackframe=-rse_pfs_offset+rse_ifs_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ mov cr.ifs=temp; \
+ add p_stackframe=-rse_ifs_offset+rse_rsc_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ mov ar.rsc=temp ; \
+ add p_stackframe=-rse_rsc_offset,p_stackframe; \
+ mov temp=cr.ipsr;; \
+ st8 [p_stackframe]=temp,8; \
+ mov temp=cr.iip;; \
+ st8 [p_stackframe]=temp,-8; \
+ mov temp=psr;; \
+ or temp=temp,psr_mask_reg;; \
+ mov cr.ipsr=temp;; \
+ mov temp=ip;; \
+ add temp=0x30,temp;; \
+ mov cr.iip=temp;; \
+ rfi;; \
+ ld8 temp=[p_stackframe],8;; \
+ mov cr.ipsr=temp;; \
+ ld8 temp=[p_stackframe];; \
+ mov cr.iip=temp
#endif /* _ASM_IA64_MCA_ASM_H */
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.5-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Wed May 30 23:17:53 2001
+++ linux-2.4.5-lia/include/asm-ia64/processor.h Wed May 30 23:29:37 2001
@@ -314,20 +314,10 @@
struct thread_struct {
__u64 ksp; /* kernel stack pointer */
unsigned long flags; /* various flags */
- struct ia64_fpreg fph[96]; /* saved/loaded on demand */
- __u64 dbr[IA64_NUM_DBG_REGS];
- __u64 ibr[IA64_NUM_DBG_REGS];
-#ifdef CONFIG_PERFMON
- __u64 pmc[IA64_NUM_PMC_REGS];
- __u64 pmd[IA64_NUM_PMD_REGS];
- unsigned long pfm_pend_notify; /* non-zero if we need to notify and block */
- void *pfm_context; /* pointer to detailed PMU context */
-# define INIT_THREAD_PM {0, }, {0, }, 0, 0,
-#else
-# define INIT_THREAD_PM
-#endif
__u64 map_base; /* base address for get_unmapped_area() */
__u64 task_size; /* limit for task size */
+ struct siginfo *siginfo; /* current siginfo struct for ptrace() */
+
#ifdef CONFIG_IA32_SUPPORT
__u64 eflag; /* IA32 EFLAGS reg */
__u64 fsr; /* IA32 floating pt status reg */
@@ -345,7 +335,18 @@
#else
# define INIT_THREAD_IA32
#endif /* CONFIG_IA32_SUPPORT */
- struct siginfo *siginfo; /* current siginfo struct for ptrace() */
+#ifdef CONFIG_PERFMON
+ __u64 pmc[IA64_NUM_PMC_REGS];
+ __u64 pmd[IA64_NUM_PMD_REGS];
+ unsigned long pfm_pend_notify; /* non-zero if we need to notify and block */
+ void *pfm_context; /* pointer to detailed PMU context */
+# define INIT_THREAD_PM {0, }, {0, }, 0, 0,
+#else
+# define INIT_THREAD_PM
+#endif
+ __u64 dbr[IA64_NUM_DBG_REGS];
+ __u64 ibr[IA64_NUM_DBG_REGS];
+ struct ia64_fpreg fph[96]; /* saved/loaded on demand */
};
#define INIT_MMAP { \
@@ -356,14 +357,14 @@
#define INIT_THREAD { \
0, /* ksp */ \
0, /* flags */ \
- {{{{0}}}, }, /* fph */ \
- {0, }, /* dbr */ \
- {0, }, /* ibr */ \
- INIT_THREAD_PM \
DEFAULT_MAP_BASE, /* map_base */ \
DEFAULT_TASK_SIZE, /* task_size */ \
+ 0, /* siginfo */ \
INIT_THREAD_IA32 \
- 0 /* siginfo */ \
+ INIT_THREAD_PM \
+ {0, }, /* dbr */ \
+ {0, }, /* ibr */ \
+ {{{{0}}}, } /* fph */ \
}
#define start_thread(regs,new_ip,new_sp) do { \
diff -urN linux-davidm/include/asm-ia64/smp.h linux-2.4.5-lia/include/asm-ia64/smp.h
--- linux-davidm/include/asm-ia64/smp.h Sun Apr 29 15:50:42 2001
+++ linux-2.4.5-lia/include/asm-ia64/smp.h Wed May 30 23:29:37 2001
@@ -1,7 +1,7 @@
/*
* SMP Support
*
- * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 2001 Hewlett-Packard Co
* Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
@@ -35,14 +35,13 @@
extern char no_int_routing __initdata;
-extern unsigned long cpu_present_map;
extern unsigned long cpu_online_map;
extern unsigned long ipi_base_addr;
-extern int __cpu_physical_id[NR_CPUS];
extern unsigned char smp_int_redirect;
extern int smp_num_cpus;
-#define cpu_physical_id(i) __cpu_physical_id[i]
+extern volatile int ia64_cpu_to_sapicid[];
+#define cpu_physical_id(i) ia64_cpu_to_sapicid[i]
#define cpu_number_map(i) (i)
#define cpu_logical_map(i) (i)
@@ -70,7 +69,7 @@
* max_xtp : never deliver interrupts to this CPU.
*/
-static inline void
+static inline void
min_xtp (void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
@@ -85,13 +84,13 @@
}
static inline void
-max_xtp (void)
+max_xtp (void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x0f, ipi_base_addr | XTP_OFFSET); /* Set XTP to max */
}
-static inline unsigned int
+static inline unsigned int
hard_smp_processor_id (void)
{
union {
diff -urN linux-davidm/include/asm-ia64/unaligned.h linux-2.4.5-lia/include/asm-ia64/unaligned.h
--- linux-davidm/include/asm-ia64/unaligned.h Mon Oct 9 17:55:00 2000
+++ linux-2.4.5-lia/include/asm-ia64/unaligned.h Wed May 30 22:58:19 2001
@@ -1,6 +1,8 @@
#ifndef _ASM_IA64_UNALIGNED_H
#define _ASM_IA64_UNALIGNED_H
+#include <linux/types.h>
+
/*
* The main single-value unaligned transfer routines. Derived from
* the Linux/Alpha version.
diff -urN linux-davidm/include/asm-ia64/unwind.h linux-2.4.5-lia/include/asm-ia64/unwind.h
--- linux-davidm/include/asm-ia64/unwind.h Wed May 30 23:17:53 2001
+++ linux-2.4.5-lia/include/asm-ia64/unwind.h Wed May 30 22:58:29 2001
@@ -97,7 +97,7 @@
extern void unw_create_gate_table (void);
extern void *unw_add_unwind_table (const char *name, unsigned long segment_base, unsigned long gp,
- void *table_start, void *table_end);
+ const void *table_start, const void *table_end);
extern void unw_remove_unwind_table (void *handle);
diff -urN linux-davidm/kernel/module.c linux-2.4.5-lia/kernel/module.c
--- linux-davidm/kernel/module.c Wed May 30 09:58:18 2001
+++ linux-2.4.5-lia/kernel/module.c Wed May 30 23:02:44 2001
@@ -246,8 +246,23 @@
{
kernel_module.nsyms = __stop___ksymtab - __start___ksymtab;
+ /*
+ * XXX fix me: this should be cleaned up. Perhaps calling
+ * module_arch_init(&kernel_module) here would be cleanest, but this would require
+ * updating all platform specific header files.
+ */
#ifdef __alpha__
__asm__("stq $29,%0" : "=m"(kernel_module.gp));
+#endif
+#ifdef __ia64__
+ {
+ static struct archdata archdata;
+ register char *kernel_gp asm ("gp");
+
+ archdata.gp = kernel_gp;
+ kernel_module.archdata_start = (const char *) &archdata;
+ kernel_module.archdata_end = (const char *) (&archdata + 1);
+ }
#endif
}
^ permalink raw reply	[flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.5)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (49 preceding siblings ...)
2001-05-31 7:37 ` [Linux-ia64] kernel update (relative to 2.4.5) David Mosberger
@ 2001-06-27 7:09 ` David Mosberger
2001-06-27 17:24 ` Richard Hirst
` (164 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-06-27 7:09 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.5-ia64-010626.diff*.
There are no major new features in this patch, mostly bug fixes,
clean-ups for gcc3.0, etc. However, I did start to tune things a bit
more for Itanium and those changes result in dramatically better
fork+exit performance as well as large improvements in various
bandwidth-intensive tests (such as pipe bandwidth or TCP bandwidth).
The tuning didn't require any significant code changes (thanks largely
to the software-pipelining support in ia64) but still, I'd recommend
testing things heavily before including the new kernel in a
distribution (I should say that I have run with these mods for weeks
now without any problems).
More detailed change log:
- included Martin's /dev/guid support (the implementation is
likely to continue to evolve, but I think the feature is
important enough to warrant inclusion in the normal
sources)
- NEC contributed an I/O SAPIC implementation of
set_affinity() which makes it possible to direct a particular
irq to a particular cpu via /proc/irq/. Note: I/O SAPIC can direct
to only one CPU at a time. If you attempt to target
multiple CPUs, only the CPU with the smallest id will
receive the interrupt. Needless to say: it's easy to shoot
yourself in the foot by playing with the irq redirection,
but it can be useful, of course.
- fixed ia32 version of lseek() (needed to get
profile-feedback to work in the Intel compilers)
- use a separate irq vector for smp rescheduling instead of
going through the generic (and slow) IPI path
- fix the mmap() problem reported by the Debian project
- bring FPSWA support in sync with documentation
- tune clear_page(), clear_user(), and copy_user() also for
uncached case (important, e.g., for smp fork/exit)
- include the inode initialization bug fix forwarded by Keith
As usual the patch below is fyi only. Oh, I should say that gcc3.0
compiles this kernel just fine. My earlier suspicion turned out to be
wrong. The problem I was seeing wasn't related to gcc3.0 after all.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.5-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Tue Jun 26 22:55:31 2001
+++ linux-2.4.5-lia/Documentation/Configure.help Tue Jun 26 22:21:03 2001
@@ -12036,6 +12036,12 @@
were partitioned using EFI GPT. Presently only useful on the
IA-64 platform.
+/dev/guid support (EXPERIMENTAL)
+CONFIG_DEVFS_GUID
+ Say Y here if you would like to access disks and partitions by
+ their Globally Unique Identifiers (GUIDs) which will appear as
+ symbolic links in /dev/guid.
+
Ultrix partition support
CONFIG_ULTRIX_PARTITION
Say Y here if you would like to be able to read the hard disk
diff -urN linux-davidm/MAINTAINERS linux-2.4.5-lia/MAINTAINERS
--- linux-davidm/MAINTAINERS Wed May 30 09:57:00 2001
+++ linux-2.4.5-lia/MAINTAINERS Tue Jun 26 22:21:18 2001
@@ -602,6 +602,13 @@
W: http://www.kernelconcepts.de/
S: Maintained
+IA64 (Itanium) PLATFORM
+P: David Mosberger-Tang
+M: davidm@hpl.hp.com
+L: linux-ia64@linuxia64.org
+W: http://www.linuxia64.org/
+S: Maintained
+
IBM MCA SCSI SUBSYSTEM DRIVER
P: Michael Lang
M: langa2@kph.uni-mainz.de
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.5-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/ia32/ia32_entry.S Tue Jun 26 22:21:54 2001
@@ -140,7 +140,7 @@
data8 sys_lchown
data8 sys32_ni_syscall /* old break syscall holder */
data8 sys32_ni_syscall
- data8 sys_lseek
+ data8 sys32_lseek
data8 sys_getpid /* 20 */
data8 sys_mount
data8 sys_oldumount
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.5-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/ia32/sys_ia32.c Tue Jun 26 22:22:37 2001
@@ -2784,6 +2784,15 @@
return ret;
}
+int
+sys32_lseek (unsigned int fd, int offset, unsigned int whence)
+{
+ extern off_t sys_lseek (unsigned int fd, off_t offset, unsigned int origin);
+
+ /* Sign-extension of "offset" is important here... */
+ return sys_lseek(fd, offset, whence);
+}
+
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
/* In order to reduce some races, while at the same time doing additional
diff -urN linux-davidm/arch/ia64/kernel/Makefile linux-2.4.5-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/Makefile Tue Jun 26 22:23:14 2001
@@ -13,7 +13,7 @@
export-objs := ia64_ksyms.o
-obj-y := acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_ia64.o irq_sapic.o ivt.o \
+obj-y := acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_ia64.o irq_lsapic.o ivt.o \
machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
diff -urN linux-davidm/arch/ia64/kernel/fw-emu.c linux-2.4.5-lia/arch/ia64/kernel/fw-emu.c
--- linux-davidm/arch/ia64/kernel/fw-emu.c Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/fw-emu.c Tue Jun 26 22:23:26 2001
@@ -121,68 +121,63 @@
*/
extern void pal_emulator_static (void);
-asm ("
- .proc pal_emulator_static
-pal_emulator_static:
- mov r8=-1
-
- mov r9=256
- ;;
- cmp.gtu p6,p7=r9,r28 /* r28 <= 255? */
-(p6) br.cond.sptk.few static
- ;;
- mov r9=512
- ;;
- cmp.gtu p6,p7=r9,r28
-(p6) br.cond.sptk.few stacked
- ;;
-static: cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */
-(p7) br.cond.sptk.few 1f
- ;;
- mov r8=0 /* status = 0 */
- movl r9=0x100000000 /* tc.base */
- movl r10=0x0000000200000003 /* count[0], count[1] */
- movl r11=0x1000000000002000 /* stride[0], stride[1] */
- br.cond.sptk.few rp
-
-1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */
-(p7) br.cond.sptk.few 1f
- mov r8=0 /* status = 0 */
- movl r9 =0x100000064 /* proc_ratio (1/100) */
- movl r10=0x100000100 /* bus_ratio<<32 (1/256) */
- movl r11=0x100000064 /* itc_ratio<<32 (1/100) */
- ;;
-1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */
-(p7) br.cond.sptk.few 1f
- mov r8=0 /* status = 0 */
- mov r9=96 /* num phys stacked */
- mov r10=0 /* hints */
- mov r11=0
- br.cond.sptk.few rp
-
-1: cmp.eq p6,p7=1,r28 /* PAL_CACHE_FLUSH */
-(p7) br.cond.sptk.few 1f
- mov r9=ar.lc
- movl r8=524288 /* flush 512k cache lines (16MB) */
- ;;
- mov ar.lc=r8
- movl r8=0xe000000000000000
- ;;
-.loop: fc r8
- add r8=32,r8
- br.cloop.sptk.few .loop
- sync.i
- ;;
- srlz.i
- ;;
- mov ar.lc=r9
- mov r8=r0
-1: br.cond.sptk.few rp
-
-stacked:
- br.ret.sptk.few rp
-
- .endp pal_emulator_static\n");
+asm (
+" .proc pal_emulator_static\n"
+"pal_emulator_static:\n"
+" mov r8=-1\n"
+" mov r9=256\n"
+" ;;\n"
+" cmp.gtu p6,p7=r9,r28 /* r28 <= 255? */\n"
+"(p6) br.cond.sptk.few static\n"
+" ;;\n"
+" mov r9=512\n"
+" ;;\n"
+" cmp.gtu p6,p7=r9,r28\n"
+"(p6) br.cond.sptk.few stacked\n"
+" ;;\n"
+"static: cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" ;;\n"
+" mov r8=0 /* status = 0 */\n"
+" movl r9=0x100000000 /* tc.base */\n"
+" movl r10=0x0000000200000003 /* count[0], count[1] */\n"
+" movl r11=0x1000000000002000 /* stride[0], stride[1] */\n"
+" br.cond.sptk.few rp\n"
+"1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" mov r8=0 /* status = 0 */\n"
+" movl r9 =0x100000064 /* proc_ratio (1/100) */\n"
+" movl r10=0x100000100 /* bus_ratio<<32 (1/256) */\n"
+" movl r11=0x100000064 /* itc_ratio<<32 (1/100) */\n"
+" ;;\n"
+"1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" mov r8=0 /* status = 0 */\n"
+" mov r9=96 /* num phys stacked */\n"
+" mov r10=0 /* hints */\n"
+" mov r11=0\n"
+" br.cond.sptk.few rp\n"
+"1: cmp.eq p6,p7=1,r28 /* PAL_CACHE_FLUSH */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" mov r9=ar.lc\n"
+" movl r8=524288 /* flush 512k cache lines (16MB) */\n"
+" ;;\n"
+" mov ar.lc=r8\n"
+" movl r8=0xe000000000000000\n"
+" ;;\n"
+".loop: fc r8\n"
+" add r8=32,r8\n"
+" br.cloop.sptk.few .loop\n"
+" sync.i\n"
+" ;;\n"
+" srlz.i\n"
+" ;;\n"
+" mov ar.lc=r9\n"
+" mov r8=r0\n"
+"1: br.cond.sptk.few rp\n"
+"stacked:\n"
+" br.ret.sptk.few rp\n"
+" .endp pal_emulator_static\n");
/* Macro to emulate SAL call using legacy IN and OUT calls to CF8, CFC etc.. */
diff -urN linux-davidm/arch/ia64/kernel/gate.S linux-2.4.5-lia/arch/ia64/kernel/gate.S
--- linux-davidm/arch/ia64/kernel/gate.S Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/gate.S Tue Jun 26 22:23:38 2001
@@ -15,8 +15,6 @@
.section .text.gate,"ax"
- .align PAGE_SIZE
-
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
# define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET)
# define ARG2_OFF (16 + IA64_SIGFRAME_ARG2_OFFSET)
diff -urN linux-davidm/arch/ia64/kernel/iosapic.c linux-2.4.5-lia/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/iosapic.c Tue Jun 26 22:23:50 2001
@@ -20,7 +20,7 @@
* Here is what the interrupt logic between a PCI device and the CPU looks like:
*
* (1) A PCI device raises one of the four interrupt pins (INTA, INTB, INTC, INTD). The
- * device is uniquely identified by its bus-, device-, and slot-number (the function
+ * device is uniquely identified by its bus- and slot-number (the function
* number does not matter here because all functions share the same interrupt
* lines).
*
@@ -205,7 +205,43 @@
static void
iosapic_set_affinity (unsigned int irq, unsigned long mask)
{
- printk("iosapic_set_affinity: not implemented yet\n");
+ unsigned long flags;
+ u32 high32, low32;
+ int dest, pin;
+ char *addr;
+
+ mask &= (1UL << smp_num_cpus) - 1;
+
+ if (!mask || irq >= IA64_NUM_VECTORS)
+ return;
+
+ dest = ffz(~mask);
+
+ pin = iosapic_irq[irq].pin;
+ addr = iosapic_irq[irq].addr;
+
+ if (pin < 0)
+ return; /* not an IOSAPIC interrupt */
+
+ /* dest contains both id and eid */
+ high32 = dest << IOSAPIC_DEST_SHIFT;
+
+ spin_lock_irqsave(&iosapic_lock, flags);
+ {
+ /* get current delivery mode by reading the low32 */
+ writel(IOSAPIC_RTE_LOW(pin), addr + IOSAPIC_REG_SELECT);
+ low32 = readl(addr + IOSAPIC_WINDOW);
+
+ /* change delivery mode to fixed */
+ low32 &= ~(7 << IOSAPIC_DELIVERY_SHIFT);
+ low32 |= (IOSAPIC_FIXED << IOSAPIC_DELIVERY_SHIFT);
+
+ writel(IOSAPIC_RTE_HIGH(pin), addr + IOSAPIC_REG_SELECT);
+ writel(high32, addr + IOSAPIC_WINDOW);
+ writel(IOSAPIC_RTE_LOW(pin), addr + IOSAPIC_REG_SELECT);
+ writel(low32, addr + IOSAPIC_WINDOW);
+ }
+ spin_unlock_irqrestore(&iosapic_lock, flags);
}
/*
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.4.5-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Sun Apr 29 15:49:25 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/irq_ia64.c Tue Jun 26 22:24:20 2001
@@ -72,6 +72,11 @@
ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
{
unsigned long saved_tpr;
+#ifdef CONFIG_SMP
+# define IS_RESCHEDULE(vec) (vec == IA64_IPI_RESCHEDULE)
+#else
+# define IS_RESCHEDULE(vec) (0)
+#endif
#if IRQ_DEBUG
{
@@ -110,24 +115,25 @@
*/
saved_tpr = ia64_get_tpr();
ia64_srlz_d();
- do {
- ia64_set_tpr(vector);
- ia64_srlz_d();
-
- do_IRQ(local_vector_to_irq(vector), regs);
-
- /*
- * Disable interrupts and send EOI:
- */
- local_irq_disable();
- ia64_set_tpr(saved_tpr);
+ while (vector != IA64_SPURIOUS_INT_VECTOR) {
+ if (!IS_RESCHEDULE(vector)) {
+ ia64_set_tpr(vector);
+ ia64_srlz_d();
+
+ do_IRQ(local_vector_to_irq(vector), regs);
+
+ /*
+ * Disable interrupts and send EOI:
+ */
+ local_irq_disable();
+ ia64_set_tpr(saved_tpr);
+ }
ia64_eoi();
vector = ia64_get_ivr();
- } while (vector != IA64_SPURIOUS_INT_VECTOR);
+ }
}
#ifdef CONFIG_SMP
-
extern void handle_IPI (int irq, void *dev_id, struct pt_regs *regs);
static struct irqaction ipi_irqaction = {
@@ -147,7 +153,7 @@
if (irq_to_vector(irq) == vec) {
desc = irq_desc(irq);
desc->status |= IRQ_PER_CPU;
- desc->handler = &irq_type_ia64_sapic;
+ desc->handler = &irq_type_ia64_lsapic;
if (action)
setup_irq(irq, action);
}
diff -urN linux-davidm/arch/ia64/kernel/irq_lsapic.c linux-2.4.5-lia/arch/ia64/kernel/irq_lsapic.c
--- linux-davidm/arch/ia64/kernel/irq_lsapic.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.5-lia/arch/ia64/kernel/irq_lsapic.c Tue Jun 26 22:24:36 2001
@@ -0,0 +1,38 @@
+/*
+ * LSAPIC Interrupt Controller
+ *
+ * This takes care of interrupts that are generated by the CPU's
+ * internal Streamlined Advanced Programmable Interrupt Controller
+ * (LSAPIC), such as the ITC and IPI interrupts.
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/sched.h>
+#include <linux/irq.h>
+
+static unsigned int
+lsapic_noop_startup (unsigned int irq)
+{
+ return 0;
+}
+
+static void
+lsapic_noop (unsigned int irq)
+{
+ /* nuthing to do... */
+}
+
+struct hw_interrupt_type irq_type_ia64_lsapic = {
+ typename: "LSAPIC",
+ startup: lsapic_noop_startup,
+ shutdown: lsapic_noop,
+ enable: lsapic_noop,
+ disable: lsapic_noop,
+ ack: lsapic_noop,
+ end: lsapic_noop,
+ set_affinity: (void (*)(unsigned int, unsigned long)) lsapic_noop
+};
diff -urN linux-davidm/arch/ia64/kernel/irq_sapic.c linux-2.4.5-lia/arch/ia64/kernel/irq_sapic.c
--- linux-davidm/arch/ia64/kernel/irq_sapic.c Fri Apr 21 15:21:24 2000
+++ linux-2.4.5-lia/arch/ia64/kernel/irq_sapic.c Wed Dec 31 16:00:00 1969
@@ -1,38 +0,0 @@
-/*
- * SAPIC Interrupt Controller
- *
- * This takes care of interrupts that are generated by the CPU's
- * internal Streamlined Advanced Programmable Interrupt Controller
- * (SAPIC), such as the ITC and IPI interrupts.
- *
- * Copyright (C) 1999 VA Linux Systems
- * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
- */
-
-#include <linux/sched.h>
-#include <linux/irq.h>
-
-static unsigned int
-sapic_noop_startup (unsigned int irq)
-{
- return 0;
-}
-
-static void
-sapic_noop (unsigned int irq)
-{
- /* nuthing to do... */
-}
-
-struct hw_interrupt_type irq_type_ia64_sapic = {
- typename: "SAPIC",
- startup: sapic_noop_startup,
- shutdown: sapic_noop,
- enable: sapic_noop,
- disable: sapic_noop,
- ack: sapic_noop,
- end: sapic_noop,
- set_affinity: (void (*)(unsigned int, unsigned long)) sapic_noop
-};
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.5-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Tue Jun 26 22:55:32 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/ivt.S Tue Jun 26 22:25:00 2001
@@ -9,16 +9,14 @@
* 00/12/20 David Mosberger-Tang <davidm@hpl.hp.com> DTLB/ITLB handler now uses virtual PT.
*/
/*
- * This file defines the interrupt vector table used by the CPU.
+ * This file defines the interruption vector table used by the CPU.
* It does not include one entry per possible cause of interruption.
*
- * External interrupts only use 1 entry. All others are internal interrupts
- *
* The first 20 entries of the table contain 64 bundles each while the
* remaining 48 entries contain only 16 bundles each.
*
* The 64 bundles are used to allow inlining the whole handler for critical
- * interrupts like TLB misses.
+ * interruptions like TLB misses.
*
* For each entry, the comment is as follows:
*
@@ -27,7 +25,7 @@
* entry number ---------/ / / /
* size of the entry -------------/ / /
* vector name -------------------------------------/ /
- * related interrupts (what is the real interrupt?) ----------/
+ * interruptions triggering this vector ----------------------/
*
* The table is 32KB in size and must be aligned on 32KB boundary.
* (The CPU ignores the 15 lower bits of the address)
@@ -363,7 +361,7 @@
;;
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
;;
(p15) ssm psr.i // restore psr.i
movl r14=ia64_leave_kernel
@@ -650,7 +648,7 @@
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
;;
(p15) ssm psr.i // restore psr.i
@@ -724,11 +722,14 @@
tnat.nz p15,p0=in7
(p11) mov in3=-1
+ tnat.nz p8,p0=r15 // demining r15 is not a must, but it is safer
+
(p12) mov in4=-1
(p13) mov in5=-1
;;
(p14) mov in6=-1
(p15) mov in7=-1
+(p8) mov r15=-1
br.ret.sptk.many rp
END(demine_args)
@@ -790,7 +791,7 @@
SAVE_MIN_WITH_COVER
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
;;
(p15) ssm psr.i // restore psr.i
adds r3=8,r2 // set up second base pointer for SAVE_REST
@@ -839,7 +840,7 @@
mov r14=cr.isr
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
;;
(p15) ssm psr.i
adds r3=8,r2 // Base pointer for SAVE_REST
@@ -927,7 +928,7 @@
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
;;
(p15) ssm psr.i // restore psr.i
movl r15=ia64_leave_kernel
@@ -960,7 +961,7 @@
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
;;
(p15) ssm psr.i // restore psr.i
adds r3=8,r2 // set up second base pointer
@@ -1002,7 +1003,7 @@
;;
ssm psr.ic | PSR_DEFAULT_BITS
;;
- srlz.i // guarantee that interrupt collection is enabled
+ srlz.i // guarantee that interruption collection is on
;;
(p15) ssm psr.i // restore psr.i
adds r3=8,r2 // set up second base pointer for SAVE_REST
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.5-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/setup.c Tue Jun 26 22:25:11 2001
@@ -57,12 +57,6 @@
unsigned long ia64_cycles_per_usec;
struct ia64_boot_param *ia64_boot_param;
struct screen_info screen_info;
-/* This tells _start which CPU is booting. */
-int cpu_now_booting;
-
-#ifdef CONFIG_SMP
-volatile unsigned long cpu_online_map;
-#endif
unsigned long ia64_iobase; /* virtual address for I/O accesses */
@@ -514,9 +508,7 @@
#ifdef CONFIG_IA32_SUPPORT
/* initialize global ia32 state - CR0 and CR4 */
- __asm__("mov ar.cflg = %0"
- : /* no outputs */
- : "r" (((ulong) IA32_CR4 << 32) | IA32_CR0));
+ asm volatile ("mov ar.cflg = %0" :: "r" (((ulong) IA32_CR4 << 32) | IA32_CR0));
#endif
/* disable all local interrupt sources: */
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.5-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/smp.c Tue Jun 26 22:26:02 2001
@@ -68,11 +68,10 @@
static volatile struct call_data_struct *call_data;
-#define IPI_RESCHEDULE 0
-#define IPI_CALL_FUNC 1
-#define IPI_CPU_STOP 2
+#define IPI_CALL_FUNC 0
+#define IPI_CPU_STOP 1
#ifndef CONFIG_ITANIUM_PTCG
-# define IPI_FLUSH_TLB 3
+# define IPI_FLUSH_TLB 2
#endif /*!CONFIG_ITANIUM_PTCG */
static void
@@ -107,13 +106,6 @@
ops &= ~(1 << which);
switch (which) {
- case IPI_RESCHEDULE:
- /*
- * Reschedule callback. Everything to be done is done by the
- * interrupt return path.
- */
- break;
-
case IPI_CALL_FUNC:
{
struct call_data_struct *data;
@@ -200,10 +192,6 @@
static inline void
send_IPI_single (int dest_cpu, int op)
{
-
- if (dest_cpu == -1)
- return;
-
set_bit(op, &cpu_data[dest_cpu].ipi_operation);
platform_send_ipi(dest_cpu, IA64_IPI_VECTOR, IA64_IPI_DM_INT, 0);
}
@@ -237,7 +225,7 @@
void
smp_send_reschedule (int cpu)
{
- send_IPI_single(cpu, IPI_RESCHEDULE);
+ platform_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0);
}
#ifndef CONFIG_ITANIUM_PTCG
@@ -255,7 +243,9 @@
* Really need a null IPI but since this rarely should happen & since this code
* will go away, lets not add one.
*/
- send_IPI_allbutself(IPI_RESCHEDULE);
+ for (i = 0; i < smp_num_cpus; ++i) {
+ if (i != smp_processor_id())
+ smp_send_reschedule(i);
}
#endif /* !CONFIG_ITANIUM_PTCG */
@@ -285,7 +275,8 @@
{
struct call_data_struct data;
int cpus = 1;
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
unsigned long timeout;
#endif
@@ -307,7 +298,8 @@
resend:
send_IPI_single(cpuid, IPI_CALL_FUNC);
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
/* Wait for response */
timeout = jiffies + HZ;
while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
@@ -352,7 +344,8 @@
{
struct call_data_struct data;
int cpus = smp_num_cpus-1;
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
unsigned long timeout;
#endif
@@ -373,7 +366,8 @@
/* Send a message to all other CPUs and wait for them to respond */
send_IPI_allbutself(IPI_CALL_FUNC);
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC))
+#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
/* Wait for response */
timeout = jiffies + HZ;
while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c linux-2.4.5-lia/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/smpboot.c Tue Jun 26 22:26:26 2001
@@ -467,7 +467,7 @@
set_bit(0, &cpu_online_map);
set_bit(0, &cpu_callin_map);
- printk("Loops_per_jiffy for BOOT CPU = 0x%x\n", loops_per_jiffy);
+ printk("Loops_per_jiffy for BOOT CPU = 0x%lx\n", loops_per_jiffy);
local_cpu_data->loops_per_jiffy = loops_per_jiffy;
ia64_cpu_to_sapicid[0] = boot_cpu_id;
@@ -508,8 +508,8 @@
do_boot_cpu(sapicid);
/*
- * Make sure we unmap all failed CPUs
- */
+ * Make sure we unmap all failed CPUs
+ */
if (ia64_cpu_to_sapicid[cpu] == -1)
printk("phys CPU#%d not responding - cannot use it.\n", cpu);
}
@@ -517,8 +517,8 @@
smp_num_cpus = cpucount + 1;
/*
- * Allow the user to impress friends.
- */
+ * Allow the user to impress friends.
+ */
printk("Before bogomips.\n");
if (!cpucount) {
@@ -534,6 +534,7 @@
}
}
smp_done:
+ ;
}
/*
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c linux-2.4.5-lia/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/sys_ia64.c Tue Jun 26 22:26:57 2001
@@ -178,11 +178,22 @@
unsigned long roff;
struct file *file = 0;
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ return -EBADF;
+
+ if (!file->f_op || !file->f_op->mmap)
+ return -ENODEV;
+ }
+
/*
- * A zero mmap always succeeds in Linux, independent of
- * whether or not the remaining arguments are valid.
+ * A zero mmap always succeeds in Linux, independent of whether or not the
+ * remaining arguments are valid.
*/
- if (PAGE_ALIGN(len) == 0)
+ len = PAGE_ALIGN(len);
+ if (len == 0)
return addr;
/* don't permit mappings into unmapped space or the virtual page table of a region: */
@@ -193,13 +204,6 @@
/* don't permit mappings that would cross a region boundary: */
if (rgn_index(addr) != rgn_index(addr + len))
return -EINVAL;
-
- flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
- if (!(flags & MAP_ANONYMOUS)) {
- file = fget(fd);
- if (!file)
- return -EBADF;
- }
down_write(&current->mm->mmap_sem);
addr = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.5-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/time.c Tue Jun 26 22:27:11 2001
@@ -292,6 +292,6 @@
time_init (void)
{
register_percpu_irq(IA64_TIMER_VECTOR, &timer_irqaction);
- efi_gettimeofday(&xtime);
+ efi_gettimeofday((struct timeval *) &xtime);
ia64_init_itm();
}
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.5-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/traps.c Tue Jun 26 22:28:08 2001
@@ -215,11 +215,9 @@
fp_emulate (int fp_fault, void *bundle, long *ipsr, long *fpsr, long *isr, long *pr, long *ifs,
struct pt_regs *regs)
{
+ struct ia64_fpreg f6_11[6];
fp_state_t fp_state;
fpswa_ret_t ret;
-#ifdef FPSWA_BUG
- struct ia64_fpreg f6_15[10];
-#endif
if (!fpswa_interface)
return -1;
@@ -231,23 +229,12 @@
* kernel, so set those bits in the mask and set the low volatile
* pointer to point to these registers.
*/
-#ifndef FPSWA_BUG
- fp_state.bitmask_low64 = 0x3c0; /* bit 6..9 */
- fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;
-#else
- fp_state.bitmask_low64 = 0xffc0; /* bit6..bit15 */
- f6_15[0] = regs->f6;
- f6_15[1] = regs->f7;
- f6_15[2] = regs->f8;
- f6_15[3] = regs->f9;
- __asm__ ("stf.spill %0=f10%P0" : "=m"(f6_15[4]));
- __asm__ ("stf.spill %0=f11%P0" : "=m"(f6_15[5]));
- __asm__ ("stf.spill %0=f12%P0" : "=m"(f6_15[6]));
- __asm__ ("stf.spill %0=f13%P0" : "=m"(f6_15[7]));
- __asm__ ("stf.spill %0=f14%P0" : "=m"(f6_15[8]));
- __asm__ ("stf.spill %0=f15%P0" : "=m"(f6_15[9]));
- fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) f6_15;
-#endif
+ fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */
+ f6_11[0] = regs->f6; f6_11[1] = regs->f7;
+ f6_11[2] = regs->f8; f6_11[3] = regs->f9;
+ __asm__ ("stf.spill %0=f10%P0" : "=m"(f6_11[4]));
+ __asm__ ("stf.spill %0=f11%P0" : "=m"(f6_11[5]));
+ fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) f6_11;
/*
* unsigned long (*EFI_FPSWA) (
* unsigned long trap_type,
@@ -263,18 +250,10 @@
(unsigned long *) ipsr, (unsigned long *) fpsr,
(unsigned long *) isr, (unsigned long *) pr,
(unsigned long *) ifs, &fp_state);
-#ifdef FPSWA_BUG
- __asm__ ("ldf.fill f10=%0%P0" :: "m"(f6_15[4]));
- __asm__ ("ldf.fill f11=%0%P0" :: "m"(f6_15[5]));
- __asm__ ("ldf.fill f12=%0%P0" :: "m"(f6_15[6]));
- __asm__ ("ldf.fill f13=%0%P0" :: "m"(f6_15[7]));
- __asm__ ("ldf.fill f14=%0%P0" :: "m"(f6_15[8]));
- __asm__ ("ldf.fill f15=%0%P0" :: "m"(f6_15[9]));
- regs->f6 = f6_15[0];
- regs->f7 = f6_15[1];
- regs->f8 = f6_15[2];
- regs->f9 = f6_15[3];
-#endif
+ regs->f6 = f6_11[0]; regs->f7 = f6_11[1];
+ regs->f8 = f6_11[2]; regs->f9 = f6_11[3];
+ __asm__ ("ldf.fill f10=%0%P0" :: "m"(f6_11[4]));
+ __asm__ ("ldf.fill f11=%0%P0" :: "m"(f6_11[5]));
return ret.status;
}
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.5-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Tue Jun 26 22:55:33 2001
+++ linux-2.4.5-lia/arch/ia64/kernel/unwind.c Tue Jun 26 22:28:21 2001
@@ -1361,10 +1361,10 @@
}
}
-static inline struct unw_table_entry *
+static inline const struct unw_table_entry *
lookup (struct unw_table *table, unsigned long rel_ip)
{
- struct unw_table_entry *e = 0;
+ const struct unw_table_entry *e = 0;
unsigned long lo, hi, mid;
/* do a binary search for right entry: */
@@ -1389,7 +1389,7 @@
build_script (struct unw_frame_info *info)
{
struct unw_reg_state *rs, *next;
- struct unw_table_entry *e = 0;
+ const struct unw_table_entry *e = 0;
struct unw_script *script = 0;
unsigned long ip = info->ip;
struct unw_state_record sr;
@@ -1951,7 +1951,7 @@
{
extern char __start_gate_section[], __stop_gate_section[];
unsigned long *lp, start, end, segbase = unw.kernel_table.segment_base;
- struct unw_table_entry *entry, *first;
+ const struct unw_table_entry *entry, *first;
size_t info_size, size;
char *info;
diff -urN linux-davidm/arch/ia64/kernel/unwind_i.h linux-2.4.5-lia/arch/ia64/kernel/unwind_i.h
--- linux-davidm/arch/ia64/kernel/unwind_i.h Mon Oct 9 17:54:56 2000
+++ linux-2.4.5-lia/arch/ia64/kernel/unwind_i.h Tue Jun 26 22:28:30 2001
@@ -58,7 +58,7 @@
unsigned long segment_base; /* base for offsets in the unwind table entries */
unsigned long start;
unsigned long end;
- struct unw_table_entry *array;
+ const struct unw_table_entry *array;
unsigned long length;
};
diff -urN linux-davidm/arch/ia64/lib/clear_page.S linux-2.4.5-lia/arch/ia64/lib/clear_page.S
--- linux-davidm/arch/ia64/lib/clear_page.S Sun Apr 29 15:49:26 2001
+++ linux-2.4.5-lia/arch/ia64/lib/clear_page.S Tue Jun 26 22:30:32 2001
@@ -1,8 +1,6 @@
/*
*
- * Optimized version of the standard clearpage() function
- *
- * Based on comments from ddd. Try not to overflow the write buffer.
+ * Optimized function to clear a page of memory.
*
* Inputs:
* in0: address of page
@@ -13,27 +11,41 @@
* Copyright (C) 1999-2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 1/06/01 davidm Tuned for Itanium.
*/
#include <asm/asmmacro.h>
#include <asm/page.h>
+#define saved_lc r2
+#define dst0 in0
+#define dst1 r8
+#define dst2 r9
+#define dst3 r10
+#define dst_fetch r11
+
GLOBAL_ENTRY(clear_page)
.prologue
- alloc r11=ar.pfs,1,0,0,0
- .save ar.lc, r16
- mov r16=ar.lc // slow
-
- .body
-
- mov r17=PAGE_SIZE/32-1 // -1 = repeat/until
+ .regstk 1,0,0,0
+ mov r16 = PAGE_SIZE/64-1 // -1 = repeat/until
;;
- adds r18=16,in0
- mov ar.lc=r17
+ .save ar.lc, saved_lc
+ mov saved_lc = ar.lc
+ .body
+ mov ar.lc = r16
+ adds dst1 = 16, dst0
+ adds dst2 = 32, dst0
+ adds dst3 = 48, dst0
+ adds dst_fetch = 512, dst0
;;
-1: stf.spill.nta [in0]=f0,32
- stf.spill.nta [r18]=f0,32
+1: stf.spill.nta [dst0] = f0, 64
+ stf.spill.nta [dst1] = f0, 64
+ stf.spill.nta [dst2] = f0, 64
+ stf.spill.nta [dst3] = f0, 64
+
+ lfetch [dst_fetch], 64
br.cloop.dptk.few 1b
;;
- mov ar.lc=r16 // restore lc
+ mov ar.lc = r2 // restore lc
br.ret.sptk.few rp
END(clear_page)
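For readers less fluent in ia64 assembly: the rewritten clear_page stores 64 bytes per iteration through four destination pointers at offsets 0/16/32/48 and issues an lfetch 512 bytes ahead. Below is a minimal C sketch of the same store pattern (names are invented for illustration; the real loop spills f0 with 16-byte stf.spill stores and prefetches, which plain C cannot express):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 16384  /* a common ia64 page size; an assumption */

/* C model of the rewritten clear_page loop: four destination pointers
 * start at offsets 0/16/32/48 and each advances by 64 bytes, so one
 * iteration clears a full 64-byte line. */
static void clear_page_sketch(uint64_t *page)
{
    uint64_t *dst0 = page;
    uint64_t *dst1 = page + 2;   /* +16 bytes */
    uint64_t *dst2 = page + 4;   /* +32 bytes */
    uint64_t *dst3 = page + 6;   /* +48 bytes */
    long i;

    for (i = 0; i < SKETCH_PAGE_SIZE / 64; i++) {
        /* each pair of stores stands in for one 16-byte stf.spill of f0 */
        dst0[0] = 0; dst0[1] = 0;
        dst1[0] = 0; dst1[1] = 0;
        dst2[0] = 0; dst2[1] = 0;
        dst3[0] = 0; dst3[1] = 0;
        dst0 += 8; dst1 += 8; dst2 += 8; dst3 += 8;   /* advance 64 bytes */
    }
}
```

The four-pointer interleave mirrors how the asm keeps several stores in flight per cycle; in C it is purely illustrative.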
diff -urN linux-davidm/arch/ia64/lib/clear_user.S linux-2.4.5-lia/arch/ia64/lib/clear_user.S
--- linux-davidm/arch/ia64/lib/clear_user.S Sun Apr 29 15:49:26 2001
+++ linux-2.4.5-lia/arch/ia64/lib/clear_user.S Tue Jun 26 22:30:43 2001
@@ -69,7 +69,7 @@
(p6) br.cond.dptk.few long_do_clear
;; // WAR on ar.lc
//
- // worst case 16 cyles, avg 8 cycles
+ // worst case 16 iterations, avg 8 iterations
//
// We could have played with the predicates to use the extra
// M slot for 2 stores/iteration but the cost the initialization
diff -urN linux-davidm/arch/ia64/lib/copy_page.S linux-2.4.5-lia/arch/ia64/lib/copy_page.S
--- linux-davidm/arch/ia64/lib/copy_page.S Sun Apr 29 15:49:26 2001
+++ linux-2.4.5-lia/arch/ia64/lib/copy_page.S Tue Jun 26 22:30:55 2001
@@ -2,8 +2,6 @@
*
* Optimized version of the standard copy_page() function
*
- * Based on comments from ddd. Try not to overflow write buffer.
- *
* Inputs:
* in0: address of target page
* in1: address of source page
@@ -12,11 +10,14 @@
*
* Copyright (C) 1999, 2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2001 David Mosberger <davidm@hpl.hp.com>
+ *
+ * 4/06/01 davidm Tuned to make it perform well both for cached and uncached copies.
*/
#include <asm/asmmacro.h>
#include <asm/page.h>
-#define PIPE_DEPTH 6
+#define PIPE_DEPTH 3
#define EPI p[PIPE_DEPTH-1]
#define lcount r16
@@ -27,62 +28,67 @@
#define src2 r21
#define tgt1 r22
#define tgt2 r23
+#define srcf r24
+#define tgtf r25
+
+#define Nrot ((8*PIPE_DEPTH+7)&~7)
GLOBAL_ENTRY(copy_page)
.prologue
.save ar.pfs, saved_pfs
- alloc saved_pfs=ar.pfs,3,((2*PIPE_DEPTH+7)&~7),0,((2*PIPE_DEPTH+7)&~7)
+ alloc saved_pfs=ar.pfs,3,Nrot-3,0,Nrot
- .rotr t1[PIPE_DEPTH], t2[PIPE_DEPTH]
+ .rotr t1[PIPE_DEPTH], t2[PIPE_DEPTH], t3[PIPE_DEPTH], t4[PIPE_DEPTH], \
+ t5[PIPE_DEPTH], t6[PIPE_DEPTH], t7[PIPE_DEPTH], t8[PIPE_DEPTH]
.rotp p[PIPE_DEPTH]
.save ar.lc, saved_lc
- mov saved_lc=ar.lc // save ar.lc ahead of time
+ mov saved_lc=ar.lc
+ mov ar.ec=PIPE_DEPTH
+
+ mov lcount=PAGE_SIZE/64-1
.save pr, saved_pr
- mov saved_pr=pr // rotating predicates are preserved
- // resgisters we must save.
+ mov saved_pr=pr
+ mov pr.rot=1<<16
+
.body
- mov src1=in1 // initialize 1st stream source
- adds src2=8,in1 // initialize 2nd stream source
- mov lcount=PAGE_SIZE/16-1 // as many 16bytes as there are on a page
- // -1 is because br.ctop is repeat/until
-
- adds tgt2=8,in0 // initialize 2nd stream target
- mov tgt1=in0 // initialize 1st stream target
- ;;
- mov pr.rot=1<<16 // pr16=1 & pr[17-63]=0 , 63 not modified
-
- mov ar.lc=lcount // set loop counter
- mov ar.ec=PIPE_DEPTH // ar.ec must match pipeline depth
- ;;
-
- // We need to preload the n-1 stages of the pipeline (n=depth).
- // We do this during the "prolog" of the loop: we execute
- // n-1 times the "load" bundle. Then both loads & stores are
- // enabled until we reach the end of the last word of the page
- // on the load side. Then, we enter the epilog (controlled by ec)
- // where we just do the stores and no loads n times : drain the pipe
- // (we exit the loop when ec=1).
- //
- // The initialization of the prolog is done via the predicate registers:
- // the choice of EPI DEPENDS on the depth of the pipeline (n).
- // When lc > 0 pr63=1 and it is fed back into pr16 and pr16-pr62
- // are then shifted right at every iteration,
- // Thus by initializing pr16=1 and the rest to 0 before the loop
- // we get EPI=1 after n iterations.
- //
-1: // engage loop now, let the magic happen...
-(p16) ld8 t1[0]=[src1],16 // new data on top of pipeline in 1st stream
-(p16) ld8 t2[0]=[src2],16 // new data on top of pipeline in 2nd stream
- nop.i 0x0
-(EPI) st8 [tgt1]=t1[PIPE_DEPTH-1],16 // store top of 1st pipeline
-(EPI) st8 [tgt2]=t2[PIPE_DEPTH-1],16 // store top of 2nd pipeline
- br.ctop.dptk.few 1b // once lc=0, ec-- & p16=0
- // stores but no loads anymore
+ mov src1=in1
+ adds src2=8,in1
+ ;;
+ adds tgt2=8,in0
+ add srcf=512,in1
+ mov ar.lc=lcount
+ mov tgt1=in0
+ add tgtf=512,in0
+ ;;
+1:
+(p[0]) ld8 t1[0]=[src1],16
+(EPI) st8 [tgt1]=t1[PIPE_DEPTH-1],16
+(p[0]) ld8 t2[0]=[src2],16
+(EPI) st8 [tgt2]=t2[PIPE_DEPTH-1],16
+ ;;
+(p[0]) ld8 t3[0]=[src1],16
+(EPI) st8 [tgt1]=t3[PIPE_DEPTH-1],16
+(p[0]) ld8 t4[0]=[src2],16
+(EPI) st8 [tgt2]=t4[PIPE_DEPTH-1],16
+ ;;
+(p[0]) ld8 t5[0]=[src1],16
+(EPI) st8 [tgt1]=t5[PIPE_DEPTH-1],16
+(p[0]) ld8 t6[0]=[src2],16
+(EPI) st8 [tgt2]=t6[PIPE_DEPTH-1],16
+ ;;
+(p[0]) ld8 t7[0]=[src1],16
+(EPI) st8 [tgt1]=t7[PIPE_DEPTH-1],16
+(p[0]) ld8 t8[0]=[src2],16
+(EPI) st8 [tgt2]=t8[PIPE_DEPTH-1],16
+
+ lfetch [srcf], 64
+ lfetch [tgtf], 64
+ br.ctop.sptk.few 1b
;;
mov pr=saved_pr,0xffffffffffff0000 // restore predicates
- mov ar.pfs=saved_pfs // restore ar.ec
- mov ar.lc=saved_lc // restore saved lc
- br.ret.sptk.few rp // bye...
+ mov ar.pfs=saved_pfs
+ mov ar.lc=saved_lc
+ br.ret.sptk.few rp
END(copy_page)
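The new copy_page is a modulo-scheduled loop: the rotating registers t1..t8 let loads run PIPE_DEPTH-1 iterations ahead of the stores that drain them, with br.ctop's prologue/epilogue predicates staging the pipeline. A scalar C model of that idea (helper name invented; the hardware rotates registers instead of indexing a buffer):

```c
#include <assert.h>
#include <string.h>

#define SKETCH_DEPTH 3   /* matches the reduced PIPE_DEPTH above */

/* Scalar model of a software-pipelined copy: loads fill a small
 * circular buffer and stores drain it SKETCH_DEPTH-1 iterations
 * later, so in steady state each iteration does one load and one
 * store, like the predicated (p[0])/(EPI) pairs in the asm. */
static void pipelined_copy_sketch(long *dst, const long *src, long n)
{
    long buf[SKETCH_DEPTH];
    long i;

    for (i = 0; i < n + SKETCH_DEPTH - 1; i++) {
        if (i >= SKETCH_DEPTH - 1)            /* epilogue: drain stores */
            dst[i - (SKETCH_DEPTH - 1)] =
                buf[(i - (SKETCH_DEPTH - 1)) % SKETCH_DEPTH];
        if (i < n)                            /* prologue: fill loads */
            buf[i % SKETCH_DEPTH] = src[i];
    }
}
```

The first SKETCH_DEPTH-1 iterations only load (prologue) and the last SKETCH_DEPTH-1 only store (epilogue), which is exactly what the pr.rot/ar.ec initialization arranges in hardware.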
diff -urN linux-davidm/arch/ia64/lib/copy_user.S linux-2.4.5-lia/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Sun Apr 29 15:49:26 2001
+++ linux-2.4.5-lia/arch/ia64/lib/copy_user.S Tue Jun 26 22:31:21 2001
@@ -35,7 +35,7 @@
// Tuneable parameters
//
#define COPY_BREAK 16 // we do byte copy below (must be >=16)
-#define PIPE_DEPTH 4 // pipe depth
+#define PIPE_DEPTH 21 // pipe depth
#define EPI p[PIPE_DEPTH-1] // PASTE(p,16+PIPE_DEPTH-1)
diff -urN linux-davidm/arch/ia64/vmlinux.lds.S linux-2.4.5-lia/arch/ia64/vmlinux.lds.S
--- linux-davidm/arch/ia64/vmlinux.lds.S Tue Jun 26 22:55:34 2001
+++ linux-2.4.5-lia/arch/ia64/vmlinux.lds.S Tue Jun 26 22:32:14 2001
@@ -25,7 +25,7 @@
.text : AT(ADDR(.text) - PAGE_OFFSET)
{
*(.text.ivt)
- /* these are not really text pages, but the zero page needs to be in a fixed location: */
+ /* these are not really text pages, but they need to be page aligned: */
*(__special_page_section)
__start_gate_section = .;
*(.text.gate)
diff -urN linux-davidm/fs/block_dev.c linux-2.4.5-lia/fs/block_dev.c
--- linux-davidm/fs/block_dev.c Wed May 30 09:58:06 2001
+++ linux-2.4.5-lia/fs/block_dev.c Tue Jun 26 22:32:56 2001
@@ -595,14 +595,15 @@
int ioctl_by_bdev(struct block_device *bdev, unsigned cmd, unsigned long arg)
{
- kdev_t rdev = to_kdev_t(bdev->bd_dev);
struct inode inode_fake;
int res;
mm_segment_t old_fs = get_fs();
if (!bdev->bd_op->ioctl)
return -EINVAL;
- inode_fake.i_rdev=rdev;
+ memset(&inode_fake, 0, sizeof(inode_fake));
+ inode_fake.i_rdev = to_kdev_t(bdev->bd_dev);
+ inode_fake.i_bdev = bdev;
init_waitqueue_head(&inode_fake.i_wait);
set_fs(KERNEL_DS);
res = bdev->bd_op->ioctl(&inode_fake, NULL, cmd, arg);
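The block_dev.c hunk matters because inode_fake is an automatic variable: any field the driver's ioctl handler reads but the caller never assigns (i_bdev was the newly needed one) contains stack garbage. A hedged C sketch of the idea behind the fix (struct and helper are invented stand-ins, not the kernel types):

```c
#include <assert.h>
#include <string.h>

/* Invented stand-in for the kernel's struct inode; only the point of
 * the fix matters here: a stack "fake" inode must be zeroed before
 * use, so fields nobody assigns are a known NULL/0 rather than junk. */
struct inode_sketch {
    int   i_rdev;
    void *i_bdev;
    long  i_size;     /* a field the caller never sets explicitly */
};

static struct inode_sketch make_fake_inode(int rdev, void *bdev)
{
    struct inode_sketch inode_fake;

    memset(&inode_fake, 0, sizeof(inode_fake));  /* the fix: known zeroes */
    inode_fake.i_rdev = rdev;
    inode_fake.i_bdev = bdev;
    return inode_fake;
}
```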
diff -urN linux-davidm/fs/devfs/base.c linux-2.4.5-lia/fs/devfs/base.c
--- linux-davidm/fs/devfs/base.c Wed May 30 09:58:06 2001
+++ linux-2.4.5-lia/fs/devfs/base.c Tue Jun 26 22:33:16 2001
@@ -1903,6 +1903,27 @@
return master->slave;
} /* End Function devfs_get_unregister_slave */
+#ifdef CONFIG_DEVFS_GUID
+/**
+ * devfs_unregister_slave - remove the slave that is unregistered when @master is unregistered.
+ * Destroys the connection established by devfs_auto_unregister.
+ *
+ * @master: The master devfs entry.
+ */
+
+void devfs_unregister_slave (devfs_handle_t master)
+{
+ devfs_handle_t slave;
+
+ if (master == NULL) return;
+
+ slave = master->slave;
+ if (slave) {
+ master->slave = NULL;
+ unregister (slave);
+ };
+}
+#endif /* CONFIG_DEVFS_GUID */
/**
* devfs_get_name - Get the name for a device entry in its parent directory.
@@ -2103,6 +2124,9 @@
EXPORT_SYMBOL(devfs_get_next_sibling);
EXPORT_SYMBOL(devfs_auto_unregister);
EXPORT_SYMBOL(devfs_get_unregister_slave);
+#ifdef CONFIG_DEVFS_GUID
+EXPORT_SYMBOL(devfs_unregister_slave);
+#endif
EXPORT_SYMBOL(devfs_register_chrdev);
EXPORT_SYMBOL(devfs_register_blkdev);
EXPORT_SYMBOL(devfs_unregister_chrdev);
diff -urN linux-davidm/fs/partitions/Config.in linux-2.4.5-lia/fs/partitions/Config.in
--- linux-davidm/fs/partitions/Config.in Tue Jun 26 22:55:38 2001
+++ linux-2.4.5-lia/fs/partitions/Config.in Tue Jun 26 22:33:28 2001
@@ -25,6 +25,7 @@
bool ' Solaris (x86) partition table support' CONFIG_SOLARIS_X86_PARTITION
bool ' Unixware slices support' CONFIG_UNIXWARE_DISKLABEL
bool ' EFI GUID Partition support' CONFIG_EFI_PARTITION
+ dep_bool ' /dev/guid support (EXPERIMENTAL)' CONFIG_DEVFS_GUID $CONFIG_DEVFS_FS $CONFIG_EFI_PARTITION
fi
bool ' SGI partition support' CONFIG_SGI_PARTITION
bool ' Ultrix partition table support' CONFIG_ULTRIX_PARTITION
diff -urN linux-davidm/fs/partitions/check.c linux-2.4.5-lia/fs/partitions/check.c
--- linux-davidm/fs/partitions/check.c Tue Jun 26 22:55:38 2001
+++ linux-2.4.5-lia/fs/partitions/check.c Tue Jun 26 22:33:40 2001
@@ -79,6 +79,20 @@
NULL
};
+#ifdef CONFIG_DEVFS_GUID
+static devfs_handle_t guid_top_handle;
+
+#define GUID_UNPARSED_LEN 36
+static void
+uuid_unparse_1(efi_guid_t *guid, char *out)
+{
+ sprintf(out, "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
+ guid->data1, guid->data2, guid->data3,
+ guid->data4[0], guid->data4[1], guid->data4[2], guid->data4[3],
+ guid->data4[4], guid->data4[5], guid->data4[6], guid->data4[7]);
+}
+#endif
+
/*
* disk_name() is used by partition check code and the md driver.
* It formats the devicename of the indicated disk into
@@ -314,6 +328,101 @@
devfs_register_partitions (hd, i, hd->sizes ? 0 : 1);
}
+#ifdef CONFIG_DEVFS_GUID
+/*
+ devfs_register_guid: create a /dev/guid entry for a disk or partition
+ if it has a GUID.
+
+ The /dev/guid entry will be a symlink to the "real" devfs device.
+ It is marked as "slave" of the real device,
+ to be automatically unregistered by devfs if that device is unregistered.
+
+ If the partition already had a /dev/guid entry, delete (unregister) it.
+ (If the disk was repartitioned, it's likely the old GUID entry will be wrong).
+
+ dev, minor: Device for which an entry is to be created.
+
+ Prerequisites: dev->part[minor].guid must be either NULL or point
+ to a valid kmalloc'ed GUID.
+*/
+
+static void devfs_register_guid (struct gendisk *dev, int minor)
+{
+ efi_guid_t *guid = dev->part[minor].guid;
+ devfs_handle_t guid_handle, slave,
+ real_master = dev->part[minor].de;
+ devfs_handle_t master = real_master;
+ char guid_link[GUID_UNPARSED_LEN + 1];
+ char dirname[128];
+ int pos, st;
+
+ if (!guid_top_handle)
+ guid_top_handle = devfs_mk_dir (NULL, "guid", NULL);
+
+ if (!guid || !master) return;
+
+ do {
+ slave = devfs_get_unregister_slave (master);
+ if (slave) {
+ if (slave == master || slave == real_master) {
+ printk (KERN_WARNING
+ "devfs_register_guid: infinite slave loop!\n");
+ return;
+ } else if (devfs_get_parent (slave) == guid_top_handle) {
+ printk (KERN_INFO
+ "devfs_register_guid: unregistering %s\n",
+ devfs_get_name (slave, NULL));
+ devfs_unregister_slave (master);
+ slave = NULL;
+ } else
+ master = slave;
+ };
+ } while (slave);
+
+ uuid_unparse_1 (guid, guid_link);
+ pos = devfs_generate_path (real_master, dirname + 3,
+ sizeof (dirname) - 3);
+ if (pos < 0) {
+ printk (KERN_WARNING
+ "devfs_register_guid: error generating path: %d\n",
+ pos);
+ return;
+ };
+
+ strncpy (dirname + pos, "../", 3);
+
+ st = devfs_mk_symlink (guid_top_handle, guid_link,
+ DEVFS_FL_DEFAULT,
+ dirname + pos, &guid_handle, NULL);
+
+ if (st < 0) {
+ printk ("Error %d creating symlink\n", st);
+ } else {
+ devfs_auto_unregister (master, guid_handle);
+ };
+};
+
+/*
+ free_disk_guids: kfree all guid data structures alloced for
+ the disk device specified by (dev, minor) and all its partitions.
+
+ This function does not remove symlinks in /dev/guid.
+*/
+static void free_disk_guids (struct gendisk *dev, int minor)
+{
+ int i;
+ efi_guid_t *guid;
+
+ for (i = 0; i < dev->max_p; i++) {
+ guid = dev->part[minor + i].guid;
+ if (!guid) continue;
+ kfree (guid);
+ dev->part[minor + i].guid = NULL;
+ };
+}
+
+#endif /* CONFIG_DEVFS_GUID */
+
#ifdef CONFIG_DEVFS_FS
static void devfs_register_partition (struct gendisk *dev, int minor, int part)
{
@@ -322,7 +431,11 @@
unsigned int devfs_flags = DEVFS_FL_DEFAULT;
char devname[16];
- if (dev->part[minor + part].de) return;
+ /* Even if the devfs handle is still up-to-date,
+ the GUID entry probably isn't */
+ if (dev->part[minor + part].de)
+ goto do_guid;
+
dir = devfs_get_parent (dev->part[minor].de);
if (!dir) return;
if ( dev->flags && (dev->flags[devnum] & GENHD_FL_REMOVABLE) )
@@ -333,6 +446,11 @@
dev->major, minor + part,
S_IFBLK | S_IRUSR | S_IWUSR,
dev->fops, NULL);
+ do_guid:
+#ifdef CONFIG_DEVFS_GUID
+ devfs_register_guid (dev, minor + part);
+#endif
+ return;
}
static void devfs_register_disc (struct gendisk *dev, int minor)
@@ -345,7 +463,9 @@
static unsigned int disc_counter;
static devfs_handle_t devfs_handle;
- if (dev->part[minor].de) return;
+ if (dev->part[minor].de)
+ goto do_guid;
+
if ( dev->flags && (dev->flags[devnum] & GENHD_FL_REMOVABLE) )
devfs_flags |= DEVFS_FL_REMOVABLE;
if (dev->de_arr) {
@@ -372,6 +492,12 @@
devfs_auto_unregister (dev->part[minor].de, slave);
if (!dev->de_arr)
devfs_auto_unregister (slave, dir);
+
+ do_guid:
+#ifdef CONFIG_DEVFS_GUID
+ devfs_register_guid (dev, minor);
+#endif
+ return;
}
#endif /* CONFIG_DEVFS_FS */
@@ -393,6 +519,7 @@
if (unregister) {
devfs_unregister (dev->part[minor].de);
dev->part[minor].de = NULL;
+ free_disk_guids (dev, minor);
}
#endif /* CONFIG_DEVFS_FS */
}
@@ -410,8 +537,21 @@
void register_disk(struct gendisk *gdev, kdev_t dev, unsigned minors,
struct block_device_operations *ops, long size)
{
+ int i;
+
if (!gdev)
return;
+
+#ifdef CONFIG_DEVFS_GUID
+ /* Initialize all guid fields to NULL (=^ not kmalloc'ed).
+ It is assumed that drivers call register_disk after
+ allocating the gen_hd structure, and call grok_partitions
+ directly for a revalidate event, as those drives I've inspected
+ (among which hd and sd) do. */
+ for (i = 0; i < gdev->max_p; i++)
+ gdev->part[MINOR(dev) + i].guid = NULL;
+#endif
+
grok_partitions(gdev, MINOR(dev)>>gdev->minor_shift, minors, size);
}
@@ -429,6 +569,13 @@
if (!size || minors == 1)
return;
+#ifdef CONFIG_DEVFS_GUID
+ /* In case this is a revalidation, free GUID memory.
+ On the first call for this device,
+ register_disk has set all entries to NULL,
+ and nothing will happen. */
+ free_disk_guids (dev, first_minor);
+#endif
blk_size[dev->major] = NULL;
check_partition(dev, MKDEV(dev->major, first_minor), 1 + first_minor);
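uuid_unparse_1 in the check.c hunk above prints an EFI GUID in its mixed-endian text form: three native-endian integer fields, then eight raw bytes. A self-contained C illustration (struct name invented; the field layout and format string follow the sprintf call in the patch):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Invented struct mirroring the layout implied by uuid_unparse_1:
 * three native-endian integer fields followed by eight raw bytes. */
struct efi_guid_sketch {
    uint32_t data1;
    uint16_t data2, data3;
    uint8_t  data4[8];
};

/* Same format string as the patch; output is 36 characters plus NUL,
 * hence the GUID_UNPARSED_LEN of 36 used for the symlink name. */
static void guid_unparse_sketch(const struct efi_guid_sketch *g, char out[37])
{
    sprintf(out, "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
            g->data1, g->data2, g->data3,
            g->data4[0], g->data4[1], g->data4[2], g->data4[3],
            g->data4[4], g->data4[5], g->data4[6], g->data4[7]);
}
```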
diff -urN linux-davidm/fs/partitions/efi.c linux-2.4.5-lia/fs/partitions/efi.c
--- linux-davidm/fs/partitions/efi.c Tue Jun 26 22:55:38 2001
+++ linux-2.4.5-lia/fs/partitions/efi.c Tue Jun 26 22:33:56 2001
@@ -61,7 +61,7 @@
#include "check.h"
#include "efi.h"
-#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+#if CONFIG_BLK_DEV_MD
extern void md_autodetect_dev(kdev_t dev);
#endif
@@ -479,7 +479,30 @@
return 0;
}
+#ifdef CONFIG_DEVFS_GUID
+/* set_partition_guid */
+/* Fill in the GUID field of the partition.
+ It is set to NULL by register_disk before. */
+static void set_partition_guid (struct gendisk *hd,
+ const int minor,
+ const efi_guid_t *guid)
+{
+ efi_guid_t *part_guid = hd->part[minor].guid;
+
+ if (!guid || !hd) return;
+
+ part_guid = kmalloc (sizeof (efi_guid_t), GFP_KERNEL);
+ if (part_guid) {
+ memcpy (part_guid, guid, sizeof (efi_guid_t));
+ } else {
+ printk (KERN_WARNING
+ "add_gpt_partitions: cannot allocate GUID memory!\n");
+ };
+
+ hd->part[minor].guid = part_guid;
+}
+#endif /* CONFIG_DEVFS_GUID */
/*
* Create devices for each entry in the GUID Partition Table Entries.
@@ -502,7 +525,7 @@
u32 i, nummade=0;
efi_guid_t unusedGuid = UNUSED_ENTRY_GUID;
-#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+#if CONFIG_BLK_DEV_MD
efi_guid_t raidGuid = PARTITION_LINUX_RAID_GUID;
#endif
@@ -516,6 +539,11 @@
}
debug_printk(efi_printk_level "GUID Partition Table is valid! Yea!\n");
+
+#ifdef CONFIG_DEVFS_GUID
+ set_partition_guid (hd, nextminor - 1, &(gpt->DiskGUID));
+#endif
+
for (i = 0; i < gpt->NumberOfPartitionEntries &&
nummade < (hd->max_p - 1); i++) {
if (!efi_guidcmp(unusedGuid, ptes[i].PartitionTypeGuid))
@@ -524,8 +552,12 @@
add_gd_partition(hd, nextminor, ptes[i].StartingLBA,
(ptes[i].EndingLBA-ptes[i].StartingLBA + 1));
+#ifdef CONFIG_DEVFS_GUID
+ set_partition_guid (hd, nextminor, &(ptes[i].UniquePartitionGuid));
+#endif
+
/* If this is a RAID volume, tell md */
-#if CONFIG_BLK_DEV_MD && CONFIG_AUTODETECT_RAID
+#if CONFIG_BLK_DEV_MD
if (!efi_guidcmp(raidGuid, ptes[i].PartitionTypeGuid)) {
md_autodetect_dev(MKDEV(MAJOR(dev),nextminor));
}
diff -urN linux-davidm/include/asm-ia64/fpswa.h linux-2.4.5-lia/include/asm-ia64/fpswa.h
--- linux-davidm/include/asm-ia64/fpswa.h Sun Feb 13 10:30:50 2000
+++ linux-2.4.5-lia/include/asm-ia64/fpswa.h Tue Jun 26 22:34:10 2001
@@ -9,10 +9,6 @@
* Copyright (C) 1999 Goutham Rao <goutham.rao@intel.com>
*/
-#if 1
-#define FPSWA_BUG
-#endif
-
typedef struct {
/* 4 * 128 bits */
unsigned long fp_lp[4*2];
diff -urN linux-davidm/include/asm-ia64/hw_irq.h linux-2.4.5-lia/include/asm-ia64/hw_irq.h
--- linux-davidm/include/asm-ia64/hw_irq.h Sun Apr 29 15:50:41 2001
+++ linux-2.4.5-lia/include/asm-ia64/hw_irq.h Tue Jun 26 22:34:23 2001
@@ -49,6 +49,7 @@
#define IA64_PERFMON_VECTOR 0xee /* performance monitor interrupt vector */
#define IA64_TIMER_VECTOR 0xef /* use highest-prio group 15 interrupt for timer */
#define IA64_MCA_WAKEUP_VECTOR 0xf0 /* MCA wakeup (must be >MCA_RENDEZ_VECTOR) */
+#define IA64_IPI_RESCHEDULE 0xfd /* SMP reschedule */
#define IA64_IPI_VECTOR 0xfe /* inter-processor interrupt vector */
/* IA64 inter-cpu interrupt related definitions */
@@ -69,7 +70,7 @@
extern unsigned long ipi_base_addr;
-extern struct hw_interrupt_type irq_type_ia64_sapic; /* CPU-internal interrupt controller */
+extern struct hw_interrupt_type irq_type_ia64_lsapic; /* CPU-internal interrupt controller */
extern int ia64_alloc_irq (void); /* allocate a free irq */
extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.5-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Tue Jun 26 22:55:38 2001
+++ linux-2.4.5-lia/include/asm-ia64/offsets.h Tue Jun 26 22:34:41 2001
@@ -25,8 +25,8 @@
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
#define IA64_TASK_THREAD_OFFSET 1456 /* 0x5b0 */
#define IA64_TASK_THREAD_KSP_OFFSET 1456 /* 0x5b0 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 3752 /* 0xea8 */
-#define IA64_TASK_PFM_NOTIFY_OFFSET 3648 /* 0xe40 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 1568 /* 0x620 */
+#define IA64_TASK_PFM_NOTIFY_OFFSET 2088 /* 0x828 */
#define IA64_TASK_PID_OFFSET 196 /* 0xc4 */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/signal.h linux-2.4.5-lia/include/asm-ia64/signal.h
--- linux-davidm/include/asm-ia64/signal.h Wed Feb 9 19:45:43 2000
+++ linux-2.4.5-lia/include/asm-ia64/signal.h Tue Jun 26 22:34:54 2001
@@ -56,7 +56,7 @@
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
+ * SA_INTERRUPT is a no-op, but left due to historical reasons.
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
@@ -105,7 +105,6 @@
#define SA_PROBE SA_ONESHOT
#define SA_SAMPLE_RANDOM SA_RESTART
#define SA_SHIRQ 0x04000000
-#define SA_LEGACY 0x02000000 /* installed via a legacy irq? */
#endif /* __KERNEL__ */
diff -urN linux-davidm/include/asm-ia64/smp.h linux-2.4.5-lia/include/asm-ia64/smp.h
--- linux-davidm/include/asm-ia64/smp.h Tue Jun 26 22:55:38 2001
+++ linux-2.4.5-lia/include/asm-ia64/smp.h Tue Jun 26 22:35:08 2001
@@ -35,7 +35,7 @@
extern char no_int_routing __initdata;
-extern unsigned long cpu_online_map;
+extern volatile unsigned long cpu_online_map;
extern unsigned long ipi_base_addr;
extern unsigned char smp_int_redirect;
extern int smp_num_cpus;
diff -urN linux-davidm/include/linux/devfs_fs_kernel.h linux-2.4.5-lia/include/linux/devfs_fs_kernel.h
--- linux-davidm/include/linux/devfs_fs_kernel.h Sun Apr 29 15:50:52 2001
+++ linux-2.4.5-lia/include/linux/devfs_fs_kernel.h Tue Jun 26 22:35:22 2001
@@ -81,6 +81,9 @@
extern devfs_handle_t devfs_get_next_sibling (devfs_handle_t de);
extern void devfs_auto_unregister (devfs_handle_t master,devfs_handle_t slave);
extern devfs_handle_t devfs_get_unregister_slave (devfs_handle_t master);
+#ifdef CONFIG_DEVFS_GUID
+extern void devfs_unregister_slave (devfs_handle_t master);
+#endif
extern const char *devfs_get_name (devfs_handle_t de, unsigned int *namelen);
extern int devfs_register_chrdev (unsigned int major, const char *name,
struct file_operations *fops);
diff -urN linux-davidm/include/linux/genhd.h linux-2.4.5-lia/include/linux/genhd.h
--- linux-davidm/include/linux/genhd.h Mon Apr 2 19:01:59 2001
+++ linux-2.4.5-lia/include/linux/genhd.h Tue Jun 26 22:35:35 2001
@@ -13,6 +13,10 @@
#include <linux/types.h>
#include <linux/major.h>
+#ifdef CONFIG_DEVFS_GUID
+#include <asm/efi.h>
+#endif
+
/* These three have identical behaviour; use the second one if DOS fdisk gets
confused about extended/logical partitions starting past cylinder 1023. */
#define DOS_EXTENDED_PARTITION 5
@@ -51,6 +55,9 @@
long start_sect;
long nr_sects;
devfs_handle_t de; /* primary (master) devfs entry */
+#ifdef CONFIG_DEVFS_GUID
+ efi_guid_t *guid;
+#endif
};
#define GENHD_FL_REMOVABLE 1
diff -urN linux-davidm/include/linux/irq.h linux-2.4.5-lia/include/linux/irq.h
--- linux-davidm/include/linux/irq.h Tue Jun 26 22:55:38 2001
+++ linux-2.4.5-lia/include/linux/irq.h Tue Jun 26 22:35:47 2001
@@ -54,7 +54,6 @@
#include <asm/hw_irq.h> /* the arch dependent stuff */
-extern void do_IRQ_per_cpu (unsigned long irq, struct pt_regs *regs);
extern int handle_IRQ_event(unsigned int, struct pt_regs *, struct irqaction *);
extern int setup_irq(unsigned int , struct irqaction * );
diff -urN linux-davidm/include/linux/sched.h linux-2.4.5-lia/include/linux/sched.h
--- linux-davidm/include/linux/sched.h Sun Apr 29 15:50:54 2001
+++ linux-2.4.5-lia/include/linux/sched.h Tue Jun 26 22:40:35 2001
@@ -537,7 +537,7 @@
extern unsigned long volatile jiffies;
extern unsigned long itimer_ticks;
extern unsigned long itimer_next;
-extern struct timeval xtime;
+extern volatile struct timeval xtime;
extern void do_timer(struct pt_regs *);
extern unsigned int * prof_buffer;
diff -urN linux-davidm/include/linux/smp.h linux-2.4.5-lia/include/linux/smp.h
--- linux-davidm/include/linux/smp.h Sun Dec 31 11:10:17 2000
+++ linux-2.4.5-lia/include/linux/smp.h Tue Jun 26 22:36:19 2001
@@ -53,7 +53,7 @@
/*
* True once the per process idle is forked
*/
-extern int smp_threads_ready;
+extern volatile int smp_threads_ready;
extern int smp_num_cpus;
diff -urN linux-davidm/mm/memory.c linux-2.4.5-lia/mm/memory.c
--- linux-davidm/mm/memory.c Tue Jun 26 22:55:39 2001
+++ linux-2.4.5-lia/mm/memory.c Tue Jun 26 22:37:29 2001
@@ -103,8 +103,10 @@
}
pmd = pmd_offset(dir, 0);
pgd_clear(dir);
- for (j = 0; j < PTRS_PER_PMD ; j++)
+ for (j = 0; j < PTRS_PER_PMD ; j++) {
+ asm volatile ("lfetch.excl [%0]" :: "r"(pmd + j + 16));
free_one_pmd(pmd+j);
+ }
pmd_free(pmd);
}
* Re: [Linux-ia64] kernel update (relative to 2.4.5)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (50 preceding siblings ...)
2001-06-27 7:09 ` David Mosberger
@ 2001-06-27 17:24 ` Richard Hirst
2001-06-27 18:10 ` Martin Wilck
` (163 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Richard Hirst @ 2001-06-27 17:24 UTC (permalink / raw)
To: linux-ia64
On Wed, Jun 27, 2001 at 12:09:08AM -0700, David Mosberger wrote:
> The latest IA-64 patch is available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>
> in file linux-2.4.5-ia64-010626.diff*.
I applied that to a Linus 2.4.5 tree, but it doesn't compile with
devfs enabled but dev/guid disabled. fs/partitions/check.c:522
has an unprotected call to free_disk_guids().
Richard
* Re: [Linux-ia64] kernel update (relative to 2.4.5)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (51 preceding siblings ...)
2001-06-27 17:24 ` Richard Hirst
@ 2001-06-27 18:10 ` Martin Wilck
2001-07-23 23:49 ` [Linux-ia64] kernel update (relative to 2.4.7) David Mosberger
` (162 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Martin Wilck @ 2001-06-27 18:10 UTC (permalink / raw)
To: linux-ia64
> I applied that to a Linus 2.4.5 tree, but it doesn't compile with
> devfs enabled but dev/guid disabled. fs/partitions/check.c:522
> has an unprotected call to free_disk_guids().
I am sorry. The patch is almost too trivial to post here, but it may
still save people time:
--- 2.4.5mw/fs/partitions/check.c~ Wed Jun 27 12:36:18 2001
+++ 2.4.5mw/fs/partitions/check.c Wed Jun 27 21:53:10 2001
@@ -519,7 +519,9 @@
if (unregister) {
devfs_unregister (dev->part[minor].de);
dev->part[minor].de = NULL;
+#ifdef CONFIG_DEVFS_GUID
free_disk_guids (dev, minor);
+#endif
}
#endif /* CONFIG_DEVFS_FS */
}
Martin
--
Martin Wilck <Martin.Wilck@fujitsu-siemens.com>
FSC EP PS DS1, Paderborn Tel. +49 5251 8 15113
* [Linux-ia64] kernel update (relative to 2.4.7)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (52 preceding siblings ...)
2001-06-27 18:10 ` Martin Wilck
@ 2001-07-23 23:49 ` David Mosberger
2001-07-24 1:50 ` Keith Owens
` (161 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-07-23 23:49 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.7-ia64-010723.diff*.
There are no major new features in this patch but mostly bug fixes and
syncing up with 2.4.7. Note that 2.4.7 introduces a somewhat
non-trivial change in how softirqs are handled: they are now checked
only as a result of a hard irq and not every time the kernel returns
to user mode. While I'm not particularly happy to see such changes
this late in the 2.4.x series, I do believe it is safe and I have been
running a fairly busy machine since Saturday without any problems. As
usual your mileage may vary, so test well before you ship with this
kernel.
One big caveat: the ACPI power management does NOT work at the moment.
Do not turn on CONFIG_ACPI_BUSMGR etc. or you may end up with a
non-booting system. Intel is working on fixing this, so stay tuned.
More detailed change log:
- more McKinley related updates from Alex
- get legacy I/O base address from EFI memory map (Alex)
- mark MMX and FXSR extensions as available in IA-32 mode (Asit)
- add support for placing per-CPU data in NUMA-node local memory (Jack)
- perfmon fix from Stephane
- more error checking in IA-32 subsystem (based on patch from Arnaldo)
- drop support for A-step CPUs
- make iosapic_set_affinity() a no-op on UP machines
- update /proc/cpuinfo to output "CPU family" info in a manner consistent
with Intel's interpretation of this field; specifically, the architecture
(IA-64) is now listed with tag "arch" and the "family" tag now contains
"Itanium", "McKinley", or whatever...
- fix copy_user() to work properly with PIPE_DEPTH!=4 (this patch was already
sent to the list earlier on)
- fix time conversion in "joydev" driver
- merge in QLA 2100 driver from Qlogic
As usual the patch below is fyi only.
Note: I'd like to remove support for B0-B2 step CPUs asap. If you
really need support for such old CPUs, make your case now... ;-)
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.7-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/config.in Mon Jul 23 14:00:15 2001
@@ -25,9 +25,12 @@
define_bool CONFIG_SBUS n
define_bool CONFIG_RWSEM_GENERIC_SPINLOCK y
define_bool CONFIG_RWSEM_XCHGADD_ALGORITHM n
-define_bool CONFIG_ACPI y
-define_bool CONFIG_ACPI_INTERPRETER y
-define_bool CONFIG_ACPI_KERNEL_CONFIG y
+
+if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
+ define_bool CONFIG_ACPI y
+ define_bool CONFIG_ACPI_INTERPRETER y
+ define_bool CONFIG_ACPI_KERNEL_CONFIG y
+fi
choice 'IA-64 processor type' \
"Itanium CONFIG_ITANIUM \
@@ -47,7 +50,6 @@
if [ "$CONFIG_ITANIUM" = "y" ]; then
define_bool CONFIG_IA64_BRL_EMU y
- bool ' Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
@@ -62,7 +64,7 @@
if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
fi
- if [ "$CONFIG_ITANIUM_ASTEP_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B0_SPECIFIC" = "y" \
+ if [ "$CONFIG_ITANIUM_B0_SPECIFIC" = "y" \
-o "$CONFIG_ITANIUM_B1_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B2_SPECIFIC" = "y" ]; then
define_bool CONFIG_ITANIUM_PTCG n
else
@@ -87,7 +89,6 @@
if [ "$CONFIG_IA64_DIG" = "y" ]; then
bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
- bool ' Enable ACPI 2.0 with errata 1.3' CONFIG_ACPI20
define_bool CONFIG_PM y
fi
@@ -121,6 +122,8 @@
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
+source drivers/acpi/Config.in
+
bool 'PCI support' CONFIG_PCI
source drivers/pci/Config.in
@@ -244,6 +247,10 @@
endmenu
source drivers/usb/Config.in
+
+if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ source net/bluetooth/Config.in
+fi
fi # !HP_SIM
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.7-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/ia32/sys_ia32.c Mon Jul 23 14:00:32 2001
@@ -232,10 +232,17 @@
back = NULL;
if ((baddr = (addr & PAGE_MASK)) != addr && get_user(c, (char *)baddr) == 0) {
front = kmalloc(addr - baddr, GFP_KERNEL);
+ if (!front)
+ return -ENOMEM;
__copy_user(front, (void *)baddr, addr - baddr);
}
if (addr && ((addr + len) & ~PAGE_MASK) && get_user(c, (char *)(addr + len)) == 0) {
back = kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL);
+ if (!back) {
+ if (front)
+ kfree(front);
+ return -ENOMEM;
+ }
__copy_user(back, (char *)addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
}
down_write(&current->mm->mmap_sem);
@@ -660,9 +667,11 @@
long ret;
if (times32) {
- get_user(tv[0].tv_sec, &times32->atime);
+ if (get_user(tv[0].tv_sec, &times32->atime))
+ return -EFAULT;
tv[0].tv_usec = 0;
- get_user(tv[1].tv_sec, &times32->mtime);
+ if (get_user(tv[1].tv_sec, &times32->mtime))
+ return -EFAULT;
tv[1].tv_usec = 0;
set_fs (KERNEL_DS);
tvp = tv;
@@ -747,15 +756,18 @@
buf->error = -EINVAL; /* only used if we fail.. */
if (reclen > buf->count)
return -EINVAL;
+ buf->error = -EFAULT; /* only used if we fail.. */
dirent = buf->previous;
if (dirent)
- put_user(offset, &dirent->d_off);
+ if (put_user(offset, &dirent->d_off))
+ return -EFAULT;
dirent = buf->current_dir;
buf->previous = dirent;
- put_user(ino, &dirent->d_ino);
- put_user(reclen, &dirent->d_reclen);
- copy_to_user(dirent->d_name, name, namlen);
- put_user(0, dirent->d_name + namlen);
+ if (put_user(ino, &dirent->d_ino)
+ || put_user(reclen, &dirent->d_reclen)
+ || copy_to_user(dirent->d_name, name, namlen)
+ || put_user(0, dirent->d_name + namlen))
+ return -EFAULT;
((char *) dirent) += reclen;
buf->current_dir = dirent;
buf->count -= reclen;
@@ -786,7 +798,9 @@
error = buf.error;
lastdirent = buf.previous;
if (lastdirent) {
- put_user(file->f_pos, &lastdirent->d_off);
+ error = -EINVAL;
+ if (put_user(file->f_pos, &lastdirent->d_off))
+ goto out_putf;
error = count - buf.count;
}
@@ -807,11 +821,12 @@
return -EINVAL;
buf->count++;
dirent = buf->dirent;
- put_user(ino, &dirent->d_ino);
- put_user(offset, &dirent->d_offset);
- put_user(namlen, &dirent->d_namlen);
- copy_to_user(dirent->d_name, name, namlen);
- put_user(0, dirent->d_name + namlen);
+ if (put_user(ino, &dirent->d_ino)
+ || put_user(offset, &dirent->d_offset)
+ || put_user(namlen, &dirent->d_namlen)
+ || copy_to_user(dirent->d_name, name, namlen)
+ || put_user(0, dirent->d_name + namlen))
+ return -EFAULT;
return 0;
}
@@ -862,8 +877,10 @@
if (tvp32) {
time_t sec, usec;
- get_user(sec, &tvp32->tv_sec);
- get_user(usec, &tvp32->tv_usec);
+ ret = -EFAULT;
+ if (get_user(sec, &tvp32->tv_sec)
+ || get_user(usec, &tvp32->tv_usec))
+ goto out_nofds;
ret = -EINVAL;
if (sec < 0 || usec < 0)
@@ -916,8 +933,12 @@
usec = timeout % HZ;
usec *= (1000000/HZ);
}
- put_user(sec, (int *)&tvp32->tv_sec);
- put_user(usec, (int *)&tvp32->tv_usec);
+ if (put_user(sec, (int *)&tvp32->tv_sec)
+ || put_user(usec, (int *)&tvp32->tv_usec))
+ {
+ ret = -EFAULT;
+ goto out;
+ }
}
if (ret < 0)
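The hunk above writes the remaining timeout back to the 32-bit timeval by converting jiffies into a (sec, usec) pair. A minimal userspace sketch of that conversion (the names `FAKE_HZ` and `jiffies_to_timeval` are illustrative stand-ins, and the HZ value is an assumption for the example):

```c
#include <assert.h>

/* Illustrative HZ; the real value is configuration-dependent. */
#define FAKE_HZ 1024

/* Same arithmetic as the select write-back above: whole seconds, then
 * the leftover jiffies scaled to microseconds. */
static void jiffies_to_timeval(unsigned long timeout, long *sec, long *usec)
{
	*sec = timeout / FAKE_HZ;
	*usec = (timeout % FAKE_HZ) * (1000000 / FAKE_HZ);
}
```

Note that `1000000 / FAKE_HZ` is integer division, so the conversion loses a little precision when HZ does not divide one million evenly, exactly as in the kernel code.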
@@ -1558,16 +1579,15 @@
{
union semun fourth;
u32 pad;
- int err, err2;
+ int err = 0, err2;
struct semid64_ds s;
struct semid_ds32 *usp;
mm_segment_t old_fs;
if (!uptr)
return -EINVAL;
- err = -EFAULT;
- if (get_user (pad, (u32 *)uptr))
- return err;
+ if (get_user(pad, (u32 *)uptr))
+ return -EFAULT;
if (third == SETVAL)
fourth.val = (int)pad;
else
@@ -1749,15 +1769,14 @@
{
unsigned long raddr;
u32 *uaddr = (u32 *)A((u32)third);
- int err = -EINVAL;
+ int err;
if (version == 1)
- return err;
+ return -EINVAL;
err = sys_shmat (first, uptr, second, &raddr);
if (err)
return err;
- err = put_user (raddr, uaddr);
- return err;
+ return put_user(raddr, uaddr);
}
static int
@@ -2124,7 +2143,7 @@
case PT_CS:
return((unsigned int)__USER_CS);
default:
- printk("getregs:unknown register %d\n", regno);
+ printk(KERN_ERR "getregs:unknown register %d\n", regno);
break;
}
@@ -2176,14 +2195,16 @@
case PT_GS:
case PT_SS:
if (value != __USER_DS)
- printk("setregs:try to set invalid segment register %d = %x\n", regno, value);
+ printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
+ regno, value);
break;
case PT_CS:
if (value != __USER_CS)
- printk("setregs:try to set invalid segment register %d = %x\n", regno, value);
+ printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
+ regno, value);
break;
default:
- printk("getregs:unknown register %d\n", regno);
+ printk(KERN_ERR "getregs:unknown register %d\n", regno);
break;
}
@@ -2239,7 +2260,6 @@
}
__copy_to_user(reg, f, sizeof(*reg));
- return;
}
void
@@ -2546,8 +2566,8 @@
{
struct pt_regs *regs = (struct pt_regs *)&stack;
- printk("IA32 syscall #%d issued, maybe we should implement it\n",
- (int)regs->r1);
+ printk(KERN_WARNING "IA32 syscall #%d issued, maybe we should implement it\n",
+ (int)regs->r1);
return(sys_ni_syscall());
}
diff -urN linux-davidm/arch/ia64/kernel/acpi.c linux-2.4.7-lia/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/acpi.c Mon Jul 23 14:00:58 2001
@@ -205,11 +205,16 @@
case ACPI20_ENTRY_IO_SAPIC:
iosapic = (acpi_entry_iosapic_t *) p;
if (iosapic_init)
- iosapic_init(iosapic->address, iosapic->irq_base);
+ /*
+ * The PCAT_COMPAT flag indicates that the system has a
+ * dual-8259 compatible setup.
+ */
+ iosapic_init(iosapic->address, iosapic->irq_base,
+ (madt->flags & MADT_PCAT_COMPAT));
break;
case ACPI20_ENTRY_PLATFORM_INT_SOURCE:
- printk("ACPI 2.0 MADT: PLATFORM INT SOUCE\n");
+ printk("ACPI 2.0 MADT: PLATFORM INT SOURCE\n");
acpi20_platform(p);
break;
@@ -256,6 +261,7 @@
int __init
acpi20_parse (acpi20_rsdp_t *rsdp20)
{
+# ifdef CONFIG_ACPI
acpi_xsdt_t *xsdt;
acpi_desc_table_hdr_t *hdrp;
int tables, i;
@@ -304,13 +310,14 @@
acpi_cf_terminate();
-#ifdef CONFIG_SMP
+# ifdef CONFIG_SMP
if (available_cpus == 0) {
printk("ACPI: Found 0 CPUS; assuming 1\n");
available_cpus = 1; /* We've got at least one of these, no? */
}
smp_boot_data.cpu_count = total_cpus;
-#endif
+# endif
+# endif /* CONFIG_ACPI */
return 1;
}
/*
@@ -390,7 +397,12 @@
case ACPI_ENTRY_IO_SAPIC:
iosapic = (acpi_entry_iosapic_t *) p;
if (iosapic_init)
- iosapic_init(iosapic->address, iosapic->irq_base);
+ /*
+ * The ACPI I/O SAPIC table doesn't have a PCAT_COMPAT
+ * flag like the MADT table, but we can safely assume that
+ * ACPI 1.0b systems have a dual-8259 setup.
+ */
+ iosapic_init(iosapic->address, iosapic->irq_base, 1);
break;
case ACPI_ENTRY_INT_SRC_OVERRIDE:
@@ -416,6 +428,7 @@
int __init
acpi_parse (acpi_rsdp_t *rsdp)
{
+# ifdef CONFIG_ACPI
acpi_rsdt_t *rsdt;
acpi_desc_table_hdr_t *hdrp;
long tables, i;
@@ -449,12 +462,13 @@
acpi_cf_terminate();
-#ifdef CONFIG_SMP
+# ifdef CONFIG_SMP
if (available_cpus == 0) {
printk("ACPI: Found 0 CPUS; assuming 1\n");
available_cpus = 1; /* We've got at least one of these, no? */
}
smp_boot_data.cpu_count = total_cpus;
-#endif
+# endif
+# endif /* CONFIG_ACPI */
return 1;
}
diff -urN linux-davidm/arch/ia64/kernel/efi.c linux-2.4.7-lia/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/efi.c Mon Jul 23 14:01:11 2001
@@ -453,6 +453,32 @@
efi.reset_system = __va(runtime->reset_system);
}
+/*
+ * Walk the EFI memory map looking for the I/O port range. There can only be one entry of
+ * this type, other I/O port ranges should be described via ACPI.
+ */
+u64
+efi_get_iobase (void)
+{
+ void *efi_map_start, *efi_map_end, *p;
+ efi_memory_desc_t *md;
+ u64 efi_desc_size;
+
+ efi_map_start = __va(ia64_boot_param->efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
+ efi_desc_size = ia64_boot_param->efi_memdesc_size;
+
+ for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
+ md = p;
+ if (md->type == EFI_MEMORY_MAPPED_IO_PORT_SPACE) {
+ /* paranoia attribute checking */
+ if (md->attribute == (EFI_MEMORY_UC | EFI_MEMORY_RUNTIME))
+ return md->phys_addr;
+ }
+ }
+ return 0;
+}
+
static void __exit
efivars_exit(void)
{
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.7-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/entry.S Mon Jul 23 14:01:24 2001
@@ -212,23 +212,20 @@
.save @priunat,r17
mov r17=ar.unat // preserve caller's
.body
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r3=80,sp
;;
lfetch.fault.excl.nt1 [r3],128
#endif
mov ar.rsc=0 // put RSE in mode: enforced lazy, little endian, pl 0
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r2=16+128,sp
;;
lfetch.fault.excl.nt1 [r2],128
lfetch.fault.excl.nt1 [r3],128
#endif
adds r14=SW(R4)+16,sp
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
;;
lfetch.fault.excl [r2]
lfetch.fault.excl [r3]
@@ -325,8 +322,7 @@
.prologue
.altrp b7
.body
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
lfetch.fault.nt1 [sp]
#endif
@@ -530,14 +526,9 @@
// fall through
GLOBAL_ENTRY(ia64_leave_kernel)
PT_REGS_UNWIND_INFO(0)
- cmp.eq p16,p0=r0,r0 // set the "first_time" flag
- movl r15=PERCPU_ADDR+IA64_CPU_SOFTIRQ_ACTIVE_OFFSET // r15 = &cpu_data.softirq.active
- ;;
- ld8 r2=[r15]
+ lfetch.fault [sp]
movl r14=.restart
;;
- lfetch.fault [sp]
- shr.u r3=r2,32 // r3 = cpu_data.softirq.mask
MOVBR(.ret.sptk,rp,r14,.restart)
.restart:
adds r17=IA64_TASK_NEED_RESCHED_OFFSET,r13
@@ -546,27 +537,20 @@
adds r19=IA64_TASK_PFM_NOTIFY_OFFSET,r13
#endif
;;
- ld8 r17=[r17] // load current->need_resched
- ld4 r18=[r18] // load current->sigpending
-(p16) and r2=r2,r3 // r2 <- (softirq.active & softirq.mask)
- ;;
#ifdef CONFIG_PERFMON
ld8 r19=[r19] // load current->task.pfm_notify
#endif
-(p16) cmp4.ne.unc p6,p0=r2,r0 // p6 <- (softirq.active & softirq.mask) != 0
-(pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0?
+ ld8 r17=[r17] // load current->need_resched
+ ld4 r18=[r18] // load current->sigpending
;;
-(pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0?
#ifdef CONFIG_PERFMON
cmp.ne p9,p0=r19,r0 // current->task.pfm_notify != 0?
#endif
- cmp.ne p16,p0=r0,r0 // clear the "first_time" flag
+(pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0?
+(pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0?
;;
-# if __GNUC__ < 3
-(p6) br.call.spnt.many b7=invoke_do_softirq
-# else
-(p6) br.call.spnt.many b7=do_softirq
-# endif
+ adds r2=PT(R8)+16,r12
+ adds r3=PT(R9)+16,r12
#ifdef CONFIG_PERFMON
(p9) br.call.spnt.many b7=pfm_overflow_notify
#endif
@@ -575,8 +559,6 @@
#else
(p7) br.call.spnt.many b7=schedule
#endif
- adds r2=PT(R8)+16,r12
- adds r3=PT(R9)+16,r12
(p8) br.call.spnt.many b7=handle_signal_delivery // check & deliver pending signals
;;
// start restoring the state saved on the kernel stack (struct pt_regs):
@@ -634,14 +616,6 @@
;;
bsw.0 // switch back to bank 0
;;
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
- nop.i 0x0
- ;;
- nop.i 0x0
- ;;
- nop.i 0x0
- ;;
-#endif
adds r16=16,r12
adds r17=24,r12
;;
@@ -811,28 +785,6 @@
# endif /* CONFIG_SMP */
-#if __GNUC__ < 3
- /*
- * Invoke do_softirq() while preserving in0-in7, which may be needed
- * in case a system call gets restarted. Note that declaring do_softirq()
- * with asmlinkage() is NOT enough because that will only preserve as many
- * registers as there are formal arguments.
- *
- * XXX fix me: with gcc 3.0, we won't need this anymore because syscall_linkage
- * renders all eight input registers (in0-in7) as "untouchable".
- */
-ENTRY(invoke_do_softirq)
- .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
- alloc loc1=ar.pfs,8,2,0,0
- mov loc0=rp
- ;;
- .body
- br.call.sptk.few rp=do_softirq
-.ret13: mov ar.pfs=loc1
- mov rp=loc0
- br.ret.sptk.many rp
-END(invoke_do_softirq)
-
/*
* Invoke schedule() while preserving in0-in7, which may be needed
* in case a system call gets restarted. Note that declaring schedule()
@@ -853,8 +805,6 @@
mov rp=loc0
br.ret.sptk.many rp
END(invoke_schedule)
-
-#endif /* __GNUC__ < 3 */
/*
* Setup stack and call ia64_do_signal. Note that pSys and pNonSys need to
diff -urN linux-davidm/arch/ia64/kernel/entry.h linux-2.4.7-lia/arch/ia64/kernel/entry.h
--- linux-davidm/arch/ia64/kernel/entry.h Sun Apr 29 15:49:25 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/entry.h Mon Jul 23 14:38:11 2001
@@ -1,7 +1,7 @@
#include <linux/config.h>
/* XXX fixme */
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
+#if defined(CONFIG_ITANIUM_B1_SPECIFIC)
# define MOVBR(type,br,gr,lbl) mov br=gr
#else
# define MOVBR(type,br,gr,lbl) mov##type br=gr,lbl
diff -urN linux-davidm/arch/ia64/kernel/fw-emu.c linux-2.4.7-lia/arch/ia64/kernel/fw-emu.c
--- linux-davidm/arch/ia64/kernel/fw-emu.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/fw-emu.c Mon Jul 23 14:25:16 2001
@@ -123,8 +123,8 @@
asm (
" .proc pal_emulator_static\n"
-"pal_emulator_static:
- mov r8=-1\n"
+"pal_emulator_static:"
+" mov r8=-1\n"
" mov r9%6\n"
" ;;\n"
" cmp.gtu p6,p7=r9,r28 /* r28 <= 255? */\n"
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.4.7-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/head.S Mon Jul 23 14:01:58 2001
@@ -218,8 +218,7 @@
add r19=IA64_NUM_DBG_REGS*8,in0
;;
1: mov r16=dbr[r18]
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_C0_SPECIFIC)
+#if defined(CONFIG_ITANIUM_C0_SPECIFIC)
;;
srlz.d
#endif
@@ -236,8 +235,7 @@
GLOBAL_ENTRY(ia64_load_debug_regs)
alloc r16=ar.pfs,1,0,0,0
-#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
lfetch.nta [in0]
#endif
mov r20=ar.lc // preserve ar.lc
@@ -250,8 +248,7 @@
add r18=1,r18
;;
mov dbr[r18]=r16
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_C0_SPECIFIC)
+#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) || defined(CONFIG_ITANIUM_C0_SPECIFIC)
;;
srlz.d
#endif
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.7-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/ia64_ksyms.c Mon Jul 23 14:02:07 2001
@@ -31,6 +31,9 @@
EXPORT_SYMBOL(disable_irq);
EXPORT_SYMBOL(disable_irq_nosync);
+#include <linux/interrupt.h>
+EXPORT_SYMBOL(probe_irq_mask);
+
#include <linux/in6.h>
#include <asm/checksum.h>
/* not coded yet?? EXPORT_SYMBOL(csum_ipv6_magic); */
@@ -54,7 +57,9 @@
EXPORT_SYMBOL(clear_page);
#include <asm/processor.h>
-EXPORT_SYMBOL(cpu_data);
+# ifndef CONFIG_NUMA
+EXPORT_SYMBOL(_cpu_data);
+# endif
EXPORT_SYMBOL(kernel_thread);
#include <asm/system.h>
diff -urN linux-davidm/arch/ia64/kernel/iosapic.c linux-2.4.7-lia/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/iosapic.c Mon Jul 23 15:49:13 2001
@@ -205,6 +205,7 @@
static void
iosapic_set_affinity (unsigned int irq, unsigned long mask)
{
+#ifdef CONFIG_SMP
unsigned long flags;
u32 high32, low32;
int dest, pin;
@@ -215,7 +216,7 @@
if (!mask || irq >= IA64_NUM_VECTORS)
return;
- dest = ffz(~mask);
+ dest = cpu_physical_id(ffz(~mask));
pin = iosapic_irq[irq].pin;
addr = iosapic_irq[irq].addr;
@@ -242,6 +243,7 @@
writel(low32, addr + IOSAPIC_WINDOW);
}
spin_unlock_irqrestore(&iosapic_lock, flags);
+#endif
}
/*
@@ -364,7 +366,7 @@
}
void __init
-iosapic_init (unsigned long phys_addr, unsigned int base_irq)
+iosapic_init (unsigned long phys_addr, unsigned int base_irq, int pcat_compat)
{
struct hw_interrupt_type *irq_type;
int i, irq, max_pin, vector;
@@ -393,7 +395,7 @@
printk("IOSAPIC: version %x.%x, address 0x%lx, IRQs 0x%02x-0x%02x\n",
(ver & 0xf0) >> 4, (ver & 0x0f), phys_addr, base_irq, base_irq + max_pin);
- if (base_irq == 0)
+ if ((base_irq == 0) && pcat_compat)
/*
* Map the legacy ISA devices into the IOSAPIC data. Some of these may
* get reprogrammed later on with data from the ACPI Interrupt Source
diff -urN linux-davidm/arch/ia64/kernel/irq.c linux-2.4.7-lia/arch/ia64/kernel/irq.c
--- linux-davidm/arch/ia64/kernel/irq.c Sun Apr 29 15:49:25 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/irq.c Mon Jul 23 14:02:40 2001
@@ -626,6 +626,8 @@
desc->handler->end(irq);
spin_unlock(&desc->lock);
}
+ if (local_softirq_pending())
+ do_softirq();
return 1;
}
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.7-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/ivt.S Mon Jul 23 14:02:50 2001
@@ -534,8 +534,7 @@
;;
1: ld8 r18=[r17]
;;
-# if defined(CONFIG_IA32_SUPPORT) && \
- (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC))
+# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC)
/*
* Erratum 85 (Access bit fault could be reported before page not present fault)
* If the PTE is indicates the page is not present, then just turn this into a
@@ -565,8 +564,7 @@
;;
1: ld8 r18=[r17]
;;
-# if defined(CONFIG_IA32_SUPPORT) && \
- (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC))
+# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC)
/*
* Erratum 85 (Access bit fault could be reported before page not present fault)
* If the PTE is indicates the page is not present, then just turn this into a
diff -urN linux-davidm/arch/ia64/kernel/minstate.h linux-2.4.7-lia/arch/ia64/kernel/minstate.h
--- linux-davidm/arch/ia64/kernel/minstate.h Sun Apr 29 15:49:25 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/minstate.h Mon Jul 23 14:38:11 2001
@@ -235,12 +235,6 @@
stf.spill [r2]=f8,32; \
stf.spill [r3]=f9,32
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
-# define STOPS nop.i 0x0;; nop.i 0x0;; nop.i 0x0;;
-#else
-# define STOPS
-#endif
-
-#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover, mov rCRIFS=cr.ifs,) STOPS
-#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover, mov rCRIFS=cr.ifs, mov r15=r19) STOPS
-#define SAVE_MIN DO_SAVE_MIN( , mov rCRIFS=r0, ) STOPS
+#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover, mov rCRIFS=cr.ifs,)
+#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover, mov rCRIFS=cr.ifs, mov r15=r19)
+#define SAVE_MIN DO_SAVE_MIN( , mov rCRIFS=r0, )
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.7-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/perfmon.c Mon Jul 23 14:14:59 2001
@@ -574,11 +574,7 @@
/* cannot send to process 1, 0 means do not notify */
if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return 0;
- /* asked for sampling, but nothing to record ! */
- if (pfx->smpl_entries > 0 && pfm_smpl_entry_size(&pfx->smpl_regs, 1) == 0) return 0;
-
/* probably more to add here */
-
return 1;
}
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.7-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/setup.c Mon Jul 23 14:03:13 2001
@@ -51,8 +51,11 @@
extern char _end;
-/* cpu_data[0] is data for the bootstrap processor: */
-struct cpuinfo_ia64 cpu_data[NR_CPUS] __attribute__ ((section ("__special_page_section")));
+#ifdef CONFIG_NUMA
+ struct cpuinfo_ia64 *boot_cpu_data;
+#else
+ struct cpuinfo_ia64 _cpu_data[NR_CPUS] __attribute__ ((section ("__special_page_section")));
+#endif
unsigned long ia64_cycles_per_usec;
struct ia64_boot_param *ia64_boot_param;
@@ -304,27 +307,37 @@
/*
* Set `iobase' to the appropriate address in region 6
* (uncached access range)
+ *
+ * The EFI memory map is the "prefered" location to get the I/O port
+ * space base, rather the relying on AR.KR0. This should become more
+ * clear in future SAL specs. We'll fall back to getting it out of
+ * AR.KR0 if no appropriate entry is found in the memory map.
*/
- ia64_iobase = ia64_get_kr(IA64_KR_IO_BASE);
+ ia64_iobase = efi_get_iobase();
+ if (ia64_iobase)
+ /* set AR.KR0 since this is all we use it for anyway */
+ ia64_set_kr(IA64_KR_IO_BASE, ia64_iobase);
+ else {
+ ia64_iobase = ia64_get_kr(IA64_KR_IO_BASE);
+ printk("No I/O port range found in EFI memory map, falling back to AR.KR0\n");
+ printk("I/O port base = 0x%lx\n", ia64_iobase);
+ }
ia64_iobase = __IA64_UNCACHED_OFFSET | (ia64_iobase & ~PAGE_OFFSET);
- cpu_init(); /* initialize the bootstrap CPU */
-
#ifdef CONFIG_SMP
cpu_physical_id(0) = hard_smp_processor_id();
#endif
+ cpu_init(); /* initialize the bootstrap CPU */
+
#ifdef CONFIG_IA64_GENERIC
machvec_init(acpi_get_sysname());
#endif
-#ifdef CONFIG_ACPI20
if (efi.acpi20) {
/* Parse the ACPI 2.0 tables */
acpi20_parse(efi.acpi20);
- } else
-#endif
- if (efi.acpi) {
+ } else if (efi.acpi) {
/* Parse the ACPI tables */
acpi_parse(efi.acpi);
}
@@ -359,26 +372,18 @@
#else
# define lpj loops_per_jiffy
#endif
- char family[32], model[32], features[128], *cp, *p = buffer;
+ char family[32], features[128], *cp, *p = buffer;
struct cpuinfo_ia64 *c;
- unsigned long mask;
-
- for (c = cpu_data; c < cpu_data + NR_CPUS; ++c) {
-#ifdef CONFIG_SMP
- if (!(cpu_online_map & (1UL << (c - cpu_data))))
- continue;
-#endif
+ unsigned long mask, cpu;
+ for (cpu = 0; cpu < smp_num_cpus; ++cpu) {
+ c = cpu_data(cpu);
mask = c->features;
- if (c->family == 7)
- memcpy(family, "IA-64", 6);
- else
- sprintf(family, "%u", c->family);
-
- switch (c->model) {
- case 0: strcpy(model, "Itanium"); break;
- default: sprintf(model, "%u", c->model); break;
+ switch (c->family) {
+ case 0x07: memcpy(family, "Itanium", 8); break;
+ case 0x1f: memcpy(family, "McKinley", 9); break;
+ default: sprintf(family, "%u", c->family); break;
}
/* build the feature string: */
@@ -395,8 +400,9 @@
p += sprintf(p,
"processor : %lu\n"
"vendor : %s\n"
+ "arch : IA-64\n"
"family : %s\n"
- "model : %s\n"
+ "model : %u\n"
"revision : %u\n"
"archrev : %u\n"
"features :%s\n" /* don't change this---it _is_ right! */
@@ -405,8 +411,7 @@
"cpu MHz : %lu.%06lu\n"
"itc MHz : %lu.%06lu\n"
"BogoMIPS : %lu.%02lu\n\n",
- c - cpu_data, c->vendor, family, model, c->revision, c->archrev,
- features,
+ cpu, c->vendor, family, c->model, c->revision, c->archrev, features,
c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000,
c->itc_freq / 1000000, c->itc_freq % 1000000,
lpj*HZ/500000, (lpj*HZ/5000) % 100);
@@ -474,18 +479,54 @@
void
cpu_init (void)
{
- extern void __init ia64_mmu_init (void);
+ extern void __init ia64_mmu_init (void *);
unsigned long num_phys_stacked;
pal_vm_info_2_u_t vmi;
unsigned int max_ctx;
+ struct cpuinfo_ia64 *my_cpu_data;
+#ifdef CONFIG_NUMA
+ int cpu, order;
+
+ /*
+ * If NUMA is configured, the cpu_data array is not preallocated. The boot cpu
+ * allocates entries for every possible cpu. As the remaining cpus come online,
+ * they reallocate a new cpu_data structure on their local node. This extra work
+ * is required because some boot code references all cpu_data structures
+ * before the cpus are actually started.
+ */
+ if (!boot_cpu_data) {
+ my_cpu_data = alloc_bootmem_pages_node(NODE_DATA(numa_node_id()),
+ sizeof(struct cpuinfo_ia64));
+ boot_cpu_data = my_cpu_data;
+ my_cpu_data->cpu_data[0] = my_cpu_data;
+ for (cpu = 1; cpu < NR_CPUS; ++cpu)
+ my_cpu_data->cpu_data[cpu]
+ = alloc_bootmem_pages_node(NODE_DATA(numa_node_id()),
+ sizeof(struct cpuinfo_ia64));
+ for (cpu = 1; cpu < NR_CPUS; ++cpu)
+ memcpy(my_cpu_data->cpu_data[cpu]->cpu_data_ptrs,
+ my_cpu_data->cpu_data, sizeof(my_cpu_data->cpu_data));
+ } else {
+ order = get_order(sizeof(struct cpuinfo_ia64));
+ my_cpu_data = page_address(alloc_pages_node(numa_node_id(), GFP_KERNEL, order));
+ memcpy(my_cpu_data, boot_cpu_data->cpu_data[smp_processor_id()],
+ sizeof(struct cpuinfo_ia64));
+ __free_pages(virt_to_page(boot_cpu_data->cpu_data[smp_processor_id()]),
+ order);
+ for (cpu = 0; cpu < NR_CPUS; ++cpu)
+ boot_cpu_data->cpu_data[cpu]->cpu_data[smp_processor_id()] = my_cpu_data;
+ }
+#else
+ my_cpu_data = cpu_data(smp_processor_id());
+#endif
/*
- * We can't pass "local_cpu_data" do identify_cpu() because we haven't called
+ * We can't pass "local_cpu_data" to identify_cpu() because we haven't called
* ia64_mmu_init() yet. And we can't call ia64_mmu_init() first because it
* depends on the data returned by identify_cpu(). We break the dependency by
- * accessing cpu_data[] the old way, through identity mapped space.
+ * accessing cpu_data() the old way, through identity mapped space.
*/
- identify_cpu(&cpu_data[smp_processor_id()]);
+ identify_cpu(my_cpu_data);
/* Clear the stack memory reserved for pt_regs: */
memset(ia64_task_regs(current), 0, sizeof(struct pt_regs));
@@ -504,7 +545,7 @@
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
- ia64_mmu_init();
+ ia64_mmu_init(my_cpu_data);
#ifdef CONFIG_IA32_SUPPORT
/* initialize global ia32 state - CR0 and CR4 */
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.7-lia/arch/ia64/kernel/smp.c
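The setup.c changes above replace the flat `cpu_data[NR_CPUS]` array with a `cpu_data(n)` accessor: under NUMA each CPU gets its own structure (allocated on its local node), and every structure carries a table of pointers to all the others so cross-CPU lookups stay one indirection away. A compact userspace sketch of that layout (`fake_cpuinfo`, `fake_cpu_data`, `init_cpu_data` are illustrative names; real node-local allocation is replaced by `calloc`):

```c
#include <assert.h>
#include <stdlib.h>

#define FAKE_NR_CPUS 4

struct fake_cpuinfo {
	unsigned long loops_per_jiffy;
	/* pointer table replicated into every per-cpu structure */
	struct fake_cpuinfo *cpu_data[FAKE_NR_CPUS];
};

static struct fake_cpuinfo *boot;

/* Accessor in the spirit of the patch's cpu_data(cpu). */
#define fake_cpu_data(n) (boot->cpu_data[(n)])

static void init_cpu_data(void)
{
	int i, j;

	/* The boot CPU allocates entries for every possible CPU; in the
	 * real code these would come from each CPU's local node. */
	boot = calloc(1, sizeof(*boot));
	boot->cpu_data[0] = boot;
	for (i = 1; i < FAKE_NR_CPUS; i++)
		boot->cpu_data[i] = calloc(1, sizeof(*boot));

	/* Replicate the pointer table so any CPU can reach any other's
	 * data through its own local structure. */
	for (i = 0; i < FAKE_NR_CPUS; i++)
		for (j = 0; j < FAKE_NR_CPUS; j++)
			boot->cpu_data[i]->cpu_data[j] = boot->cpu_data[j];
}
```

Replicating the pointer table trades a little memory for avoiding a shared global array, which is what lets the patch keep `cpu_data(n)` cheap while the structures themselves live on different nodes.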
--- linux-davidm/arch/ia64/kernel/smp.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/smp.c Mon Jul 23 14:03:54 2001
@@ -192,7 +192,7 @@
static inline void
send_IPI_single (int dest_cpu, int op)
{
- set_bit(op, &cpu_data[dest_cpu].ipi_operation);
+ set_bit(op, &cpu_data(dest_cpu)->ipi_operation);
platform_send_ipi(dest_cpu, IA64_IPI_VECTOR, IA64_IPI_DM_INT, 0);
}
@@ -239,11 +239,13 @@
void
smp_resend_flush_tlb (void)
{
+ int i;
+
/*
* Really need a null IPI but since this rarely should happen & since this code
* will go away, lets not add one.
*/
- for (i = 0; i < smp_num_cpus; ++i) {
+ for (i = 0; i < smp_num_cpus; ++i)
if (i != smp_processor_id())
smp_send_reschedule(i);
}
@@ -275,7 +277,7 @@
{
struct call_data_struct data;
int cpus = 1;
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
|| defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
unsigned long timeout;
#endif
@@ -295,11 +297,11 @@
spin_lock_bh(&call_lock);
call_data = &data;
+#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
resend:
send_IPI_single(cpuid, IPI_CALL_FUNC);
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
/* Wait for response */
timeout = jiffies + HZ;
while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
@@ -307,6 +309,8 @@
if (atomic_read(&data.started) != cpus)
goto resend;
#else
+ send_IPI_single(cpuid, IPI_CALL_FUNC);
+
/* Wait for response */
while (atomic_read(&data.started) != cpus)
barrier();
@@ -344,7 +348,7 @@
{
struct call_data_struct data;
int cpus = smp_num_cpus-1;
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
|| defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
unsigned long timeout;
#endif
@@ -362,12 +366,12 @@
spin_lock_bh(&call_lock);
call_data = &data;
+#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
+ || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
resend:
/* Send a message to all other CPUs and wait for them to respond */
send_IPI_allbutself(IPI_CALL_FUNC);
-#if (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
/* Wait for response */
timeout = jiffies + HZ;
while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
@@ -375,6 +379,8 @@
if (atomic_read(&data.started) != cpus)
goto resend;
#else
+ send_IPI_allbutself(IPI_CALL_FUNC);
+
/* Wait for response */
while (atomic_read(&data.started) != cpus)
barrier();
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c linux-2.4.7-lia/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/smpboot.c Mon Jul 23 14:04:10 2001
@@ -467,8 +467,6 @@
set_bit(0, &cpu_online_map);
set_bit(0, &cpu_callin_map);
- printk("Loops_per_jiffy for BOOT CPU = 0x%lx\n", loops_per_jiffy);
-
local_cpu_data->loops_per_jiffy = loops_per_jiffy;
ia64_cpu_to_sapicid[0] = boot_cpu_id;
@@ -481,7 +479,7 @@
/*
* If SMP should be disabled, then really disable it!
*/
- if ((!max_cpus) || (max_cpus < -1)) {
+ if (!max_cpus || (max_cpus < -1)) {
printk(KERN_INFO "SMP mode deactivated.\n");
cpu_online_map = 1;
smp_num_cpus = 1;
@@ -502,7 +500,7 @@
if ((sapicid == -1) || (sapicid == hard_smp_processor_id()))
continue;
- if ((max_cpus > 0) && (max_cpus == cpucount+1))
+ if ((max_cpus > 0) && (cpucount + 1 >= max_cpus))
break;
do_boot_cpu(sapicid);
@@ -527,7 +525,7 @@
unsigned long bogosum = 0;
for (cpu = 0; cpu < NR_CPUS; cpu++)
if (cpu_online_map & (1<<cpu))
- bogosum += cpu_data[cpu].loops_per_jiffy;
+ bogosum += cpu_data(cpu)->loops_per_jiffy;
printk(KERN_INFO"Total of %d processors activated (%lu.%02lu BogoMIPS).\n",
cpucount + 1, bogosum/(500000/HZ), (bogosum/(5000/HZ))%100);
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.4.7-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/time.c Mon Jul 23 14:05:17 2001
@@ -67,8 +67,8 @@
unsigned long now, last_tick;
# define time_keeper_id 0 /* smp_processor_id() of time-keeper */
- last_tick = (cpu_data[time_keeper_id].itm_next
- - (lost + 1)*cpu_data[time_keeper_id].itm_delta);
+ last_tick = (cpu_data(time_keeper_id)->itm_next
+ - (lost + 1)*cpu_data(time_keeper_id)->itm_delta);
now = ia64_get_itc();
if ((long) (now - last_tick) < 0) {
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.7-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/kernel/traps.c Mon Jul 23 14:05:03 2001
@@ -421,14 +421,12 @@
sprintf(buf, "General Exception: %s%s", reason[code],
(code == 3) ? ((isr & (1UL << 37))
? " (RSE access)" : " (data access)") : "");
-#ifndef CONFIG_ITANIUM_ASTEP_SPECIFIC
if (code == 8) {
# ifdef CONFIG_IA64_PRINT_HAZARDS
printk("%016lx:possible hazard, pr = %016lx\n", regs->cr_iip, regs->pr);
# endif
return;
}
-#endif
break;
case 25: /* Disabled FP-Register */
diff -urN linux-davidm/arch/ia64/lib/copy_user.S linux-2.4.7-lia/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/lib/copy_user.S Mon Jul 23 14:05:31 2001
@@ -37,7 +37,7 @@
#define COPY_BREAK 16 // we do byte copy below (must be >=16)
#define PIPE_DEPTH 21 // pipe depth
-#define EPI p[PIPE_DEPTH-1] // PASTE(p,16+PIPE_DEPTH-1)
+#define EPI p[PIPE_DEPTH-1]
//
// arguments
@@ -148,8 +148,8 @@
//
//
- // Optimization. If dst1 is 8-byte aligned (not rarely), we don't need
- // to copy the head to dst1, to start 8-byte copy software pipleline.
+ // Optimization. If dst1 is 8-byte aligned (quite common), we don't need
+ // to copy the head to dst1, to start 8-byte copy software pipeline.
// We know src1 is not 8-byte aligned in this case.
//
cmp.eq p14,p15=r0,dst2
@@ -233,15 +233,23 @@
#define SWITCH(pred, shift) cmp.eq pred,p0=shift,rshift
#define CASE(pred, shift) \
(pred) br.cond.spnt.few copy_user_bit##shift
-#define BODY(rshift) \
-copy_user_bit##rshift: \
-1: \
- EX(failure_out,(EPI) st8 [dst1]=tmp,8); \
-(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
- EX(failure_in2,(p16) ld8 val1[0]=[src1],8); \
- br.ctop.dptk.few 1b; \
- ;; \
- br.cond.spnt.few .diff_align_do_tail
+#define BODY(rshift) \
+copy_user_bit##rshift: \
+1: \
+ EX(failure_out,(EPI) st8 [dst1]=tmp,8); \
+(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
+ EX(3f,(p16) ld8 val1[0]=[src1],8); \
+ br.ctop.dptk.few 1b; \
+ ;; \
+ br.cond.sptk.few .diff_align_do_tail; \
+2: \
+(EPI) st8 [dst1]=tmp,8; \
+(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
+3: \
+(p16) mov val1[0]=r0; \
+ br.ctop.dptk.few 2b; \
+ ;; \
+ br.cond.sptk.few failure_in2
//
// Since the instruction 'shrp' requires a fixed 128-bit value
@@ -581,13 +589,7 @@
br.ret.dptk.few rp
failure_in2:
- sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
- ;;
-3:
-(p16) mov val1[0]=r0
-(EPI) st8 [dst1]=val1[PIPE_DEPTH-1],8
- br.ctop.dptk.few 3b
- ;;
+ sub ret0=endsrc,src1
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
(p6) br.cond.dptk.few failure_in1bis
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.4.7-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/mm/init.c Mon Jul 23 14:07:33 2001
@@ -140,6 +140,8 @@
printk ("Freeing initrd memory: %ldkB freed\n", (end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
+ if (!VALID_PAGE(virt_to_page(start)))
+ continue;
clear_bit(PG_reserved, &virt_to_page(start)->flags);
set_page_count(virt_to_page(start), 1);
free_page(start);
@@ -225,7 +227,7 @@
}
void __init
-ia64_mmu_init (void)
+ia64_mmu_init (void *my_cpu_data)
{
unsigned long flags, rid, pta, impl_va_bits;
extern void __init tlb_init (void);
@@ -251,8 +253,7 @@
ia64_srlz_d();
ia64_itr(0x2, IA64_TR_PERCPU_DATA, PERCPU_ADDR,
- pte_val(mk_pte_phys(__pa(&cpu_data[smp_processor_id()]), PAGE_KERNEL)),
- PAGE_SHIFT);
+ pte_val(mk_pte_phys(__pa(my_cpu_data), PAGE_KERNEL)), PAGE_SHIFT);
__restore_flags(flags);
ia64_srlz_i();
diff -urN linux-davidm/arch/ia64/mm/tlb.c linux-2.4.7-lia/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Sun Apr 29 15:49:26 2001
+++ linux-2.4.7-lia/arch/ia64/mm/tlb.c Mon Jul 23 14:07:44 2001
@@ -97,7 +97,7 @@
/*
* Wait for other CPUs to finish purging entries.
*/
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
+#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
{
extern void smp_resend_flush_tlb (void);
unsigned long start = ia64_get_itc();
diff -urN linux-davidm/arch/ia64/tools/print_offsets.c linux-2.4.7-lia/arch/ia64/tools/print_offsets.c
--- linux-davidm/arch/ia64/tools/print_offsets.c Mon Jul 23 16:14:58 2001
+++ linux-2.4.7-lia/arch/ia64/tools/print_offsets.c Mon Jul 23 14:07:54 2001
@@ -175,8 +175,6 @@
{ "IA64_CLONE_VM", CLONE_VM },
{ "IA64_CPU_IRQ_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.irq_count) },
{ "IA64_CPU_BH_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.bh_count) },
- { "IA64_CPU_SOFTIRQ_ACTIVE_OFFSET", offsetof (struct cpuinfo_ia64, softirq.active) },
- { "IA64_CPU_SOFTIRQ_MASK_OFFSET", offsetof (struct cpuinfo_ia64, softirq.mask) },
{ "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET", offsetof (struct cpuinfo_ia64, phys_stacked_size_p8) },
};
diff -urN linux-davidm/drivers/acpi/Makefile linux-2.4.7-lia/drivers/acpi/Makefile
--- linux-davidm/drivers/acpi/Makefile Fri Jul 20 22:52:44 2001
+++ linux-2.4.7-lia/drivers/acpi/Makefile Mon Jul 23 14:08:02 2001
@@ -37,7 +37,7 @@
obj-$(CONFIG_ACPI) += os.o acpi_ksyms.o
obj-$(CONFIG_ACPI) += $(foreach dir,$(acpi-subdirs),$(dir)/$(dir).o)
ifdef CONFIG_ACPI_KERNEL_CONFIG
- obj-$(CONFIG_ACPI) += acpiconf.o osconf.o
+ obj-$(CONFIG_ACPI) += acpiconf.o osconf.o driver.o
else
obj-$(CONFIG_ACPI) += driver.o
endif
diff -urN linux-davidm/drivers/acpi/driver.c linux-2.4.7-lia/drivers/acpi/driver.c
--- linux-davidm/drivers/acpi/driver.c Fri Jul 20 22:52:45 2001
+++ linux-2.4.7-lia/drivers/acpi/driver.c Mon Jul 23 14:08:21 2001
@@ -128,7 +128,9 @@
printk(KERN_INFO "ACPI: Subsystem enabled\n");
+#ifdef CONFIG_PM
pm_active = 1;
+#endif
return 0;
}
@@ -141,7 +143,9 @@
{
acpi_terminate();
+#ifdef CONFIG_PM
pm_active = 0;
+#endif
printk(KERN_ERR "ACPI: Subsystem disabled\n");
}
diff -urN linux-davidm/drivers/acpi/os.c linux-2.4.7-lia/drivers/acpi/os.c
--- linux-davidm/drivers/acpi/os.c Mon Jul 23 16:15:03 2001
+++ linux-2.4.7-lia/drivers/acpi/os.c Mon Jul 23 14:08:30 2001
@@ -1,15 +1,9 @@
-/******************************************************************************
- *
- * Module Name: os.c - Linux OSL functions
- * $Revision: 28 $
- *
- *****************************************************************************/
-
/*
* os.c - OS-dependent functions
*
* Copyright (C) 2000 Andrew Henroid
- * Copyright (C) 2001 Andrew Grover
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -31,16 +25,21 @@
* - Fixed improper kernel_thread parameters
*/
+#include <linux/config.h>
+
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/kmod.h>
+#include <linux/irq.h>
#include <linux/delay.h>
#include <asm/io.h>
#include <acpi.h>
+#ifndef CONFIG_ACPI_KERNEL_CONFIG_ONLY
#include "driver.h"
+#endif
#define _COMPONENT ACPI_OS_SERVICES
MODULE_NAME ("os")
@@ -56,6 +55,35 @@
* Debugger Stuff
*****************************************************************************/
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+#include "osconf.h"
+
+struct acpi_osd acpi_osd_rt = {
+ /* these are runtime osd entries that differ from boottime entries */
+ acpi_os_allocate_rt,
+ acpi_os_callocate_rt,
+ acpi_os_free_rt,
+ acpi_os_queue_for_execution_rt,
+ acpi_os_read_pci_cfg_byte_rt,
+ acpi_os_read_pci_cfg_word_rt,
+ acpi_os_read_pci_cfg_dword_rt,
+ acpi_os_write_pci_cfg_byte_rt,
+ acpi_os_write_pci_cfg_word_rt,
+ acpi_os_write_pci_cfg_dword_rt
+};
+#else
+#define acpi_os_allocate_rt acpi_os_allocate
+#define acpi_os_callocate_rt acpi_os_callocate
+#define acpi_os_free_rt acpi_os_free
+#define acpi_os_queue_for_execution_rt acpi_os_queue_for_execution
+#define acpi_os_read_pci_cfg_byte_rt acpi_os_read_pci_cfg_byte
+#define acpi_os_read_pci_cfg_word_rt acpi_os_read_pci_cfg_word
+#define acpi_os_read_pci_cfg_dword_rt acpi_os_read_pci_cfg_dword
+#define acpi_os_write_pci_cfg_byte_rt acpi_os_write_pci_cfg_byte
+#define acpi_os_write_pci_cfg_word_rt acpi_os_write_pci_cfg_word
+#define acpi_os_write_pci_cfg_dword_rt acpi_os_write_pci_cfg_dword
+#endif
+
#ifdef ENABLE_DEBUGGER
#include <linux/kdb.h>
@@ -371,19 +399,19 @@
UINT8
acpi_os_mem_in8 (ACPI_PHYSICAL_ADDRESS phys_addr)
{
- return (*(u8*) (u32) phys_addr);
+ return (*(u8*) phys_addr);
}
UINT16
acpi_os_mem_in16 (ACPI_PHYSICAL_ADDRESS phys_addr)
{
- return (*(u16*) (u32) phys_addr);
+ return (*(u16*) phys_addr);
}
UINT32
acpi_os_mem_in32 (ACPI_PHYSICAL_ADDRESS phys_addr)
{
- return (*(u32*) (u32) phys_addr);
+ return (*(u32*) phys_addr);
}
void
@@ -405,7 +433,7 @@
}
ACPI_STATUS
-acpi_os_read_pci_cfg_byte(
+acpi_os_read_pci_cfg_byte_rt(
u32 bus,
u32 func,
u32 addr,
@@ -419,7 +447,7 @@
}
ACPI_STATUS
-acpi_os_read_pci_cfg_word(
+acpi_os_read_pci_cfg_word_rt(
u32 bus,
u32 func,
u32 addr,
@@ -433,7 +461,7 @@
}
ACPI_STATUS
-acpi_os_read_pci_cfg_dword(
+acpi_os_read_pci_cfg_dword_rt(
u32 bus,
u32 func,
u32 addr,
@@ -447,7 +475,7 @@
}
ACPI_STATUS
-acpi_os_write_pci_cfg_byte(
+acpi_os_write_pci_cfg_byte_rt(
u32 bus,
u32 func,
u32 addr,
@@ -461,7 +489,7 @@
}
ACPI_STATUS
-acpi_os_write_pci_cfg_word(
+acpi_os_write_pci_cfg_word_rt(
u32 bus,
u32 func,
u32 addr,
@@ -475,7 +503,7 @@
}
ACPI_STATUS
-acpi_os_write_pci_cfg_dword(
+acpi_os_write_pci_cfg_dword_rt(
u32 bus,
u32 func,
u32 addr,
@@ -487,6 +515,27 @@
return AE_ERROR;
return AE_OK;
}
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+/*
+ * Queue for interpreter thread
+ */
+
+ACPI_STATUS
+acpi_os_queue_for_execution_rt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context)
+{
+# ifndef CONFIG_ACPI_KERNEL_CONFIG_ONLY
+ if (acpi_run(callback, context))
+ return AE_ERROR;
+# else
+ (*callback)(context);
+# endif
+ return AE_OK;
+}
+#endif
ACPI_STATUS
acpi_os_load_module (
diff -urN linux-davidm/drivers/acpi/osconf.c linux-2.4.7-lia/drivers/acpi/osconf.c
--- linux-davidm/drivers/acpi/osconf.c Mon Jul 23 16:15:03 2001
+++ linux-2.4.7-lia/drivers/acpi/osconf.c Mon Jul 23 14:10:17 2001
@@ -112,15 +112,6 @@
ACPI_STATUS
-acpi_os_queue_for_execution(
- u32 priority,
- OSD_EXECUTION_CALLBACK callback,
- void *context)
-{
- return acpi_osd->queue_for_exec(priority, callback, context);
-}
-
-ACPI_STATUS
acpi_os_read_pci_cfg_byte( u32 segbus, u32 func, u32 addr, u8 * val)
{
return acpi_osd->read_pci_cfg_byte(segbus, func, addr, val);
@@ -251,6 +242,7 @@
(*callback)(context);
return AE_OK;
}
+
static ACPI_STATUS __init
acpi_os_read_pci_cfg_byte_bt( u32 segbus, u32 func, u32 addr, u8 * val)
diff -urN linux-davidm/drivers/acpi/osconf.h linux-2.4.7-lia/drivers/acpi/osconf.h
--- linux-davidm/drivers/acpi/osconf.h Mon Jul 23 16:15:03 2001
+++ linux-2.4.7-lia/drivers/acpi/osconf.h Mon Jul 23 14:10:27 2001
@@ -34,13 +34,6 @@
ACPI_STATUS
-acpi_os_queue_for_execution(
- u32 priority,
- OSD_EXECUTION_CALLBACK callback,
- void *context
- );
-
-ACPI_STATUS
acpi_os_read_pci_cfg_byte( u32 segbus, u32 func, u32 addr, u8 * val);
ACPI_STATUS
diff -urN linux-davidm/drivers/char/Config.in linux-2.4.7-lia/drivers/char/Config.in
--- linux-davidm/drivers/char/Config.in Mon Jul 23 16:15:03 2001
+++ linux-2.4.7-lia/drivers/char/Config.in Mon Jul 23 14:10:47 2001
@@ -188,12 +188,17 @@
dep_tristate '/dev/agpgart (AGP Support)' CONFIG_AGP $CONFIG_DRM_AGP
if [ "$CONFIG_AGP" != "n" ]; then
bool ' Intel 440LX/BX/GX and I815/I840/I850 support' CONFIG_AGP_INTEL
+ dep_bool ' Intel 460GX support (EXPERIMENTAL)' CONFIG_AGP_I460 $CONFIG_AGP_PTE_FIXUPS
+ if [ "$CONFIG_AGP_I460" != "n" ]; then
+ bool ' Enable Full AGP RQ (Requires BigSur BIOS 99 or Newer)' CONFIG_AGP_I460_FULLRQ
+ fi
bool ' Intel I810/I815 (on-board) support' CONFIG_AGP_I810
bool ' VIA chipset support' CONFIG_AGP_VIA
bool ' AMD Irongate support' CONFIG_AGP_AMD
bool ' Generic SiS support' CONFIG_AGP_SIS
bool ' ALI chipset support' CONFIG_AGP_ALI
bool ' Serverworks LE/HE support' CONFIG_AGP_SWORKS
+ bool 'AGPGART PTE Fixups (Required by 460GX)' CONFIG_AGP_PTE_FIXUPS
fi
source drivers/char/drm/Config.in
diff -urN linux-davidm/drivers/input/joydev.c linux-2.4.7-lia/drivers/input/joydev.c
--- linux-davidm/drivers/input/joydev.c Sun Apr 29 15:49:45 2001
+++ linux-2.4.7-lia/drivers/input/joydev.c Mon Jul 23 14:11:15 2001
@@ -86,6 +86,12 @@
MODULE_DESCRIPTION("Joystick device driver");
MODULE_SUPPORTED_DEVICE("input/js");
+static inline unsigned long
+jiffies_to_msec (unsigned long t)
+{
+ return 1000*(t / HZ) + 1000*(t % HZ)/HZ;
+}
+
static int joydev_correct(int value, struct js_corr *corr)
{
switch (corr->type) {
@@ -133,7 +139,7 @@
return;
}
- event.time = jiffies * (1000 / HZ);
+ event.time = jiffies_to_msec(jiffies);
while (list) {
@@ -278,7 +284,7 @@
struct js_event event;
- event.time = jiffies * (1000/HZ);
+ event.time = jiffies_to_msec(jiffies);
if (list->startup < joydev->nkey) {
event.type = JS_EVENT_BUTTON | JS_EVENT_INIT;
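[Editorial note on the hunk above: the old `jiffies * (1000/HZ)` expression silently evaluates `1000/HZ` as integer 0 whenever HZ exceeds 1000, which is exactly what the new helper avoids. A standalone sketch, with renamed functions and a hypothetical HZ of 1024 — not the kernel's own definitions:]

```c
/* Hypothetical HZ value; any HZ above 1000 triggers the truncation. */
#define HZ 1024

/* Renamed copy of the patch's jiffies_to_msec() helper: the division is
 * split so that 1000/HZ is never evaluated as a bare integer constant
 * (which would be 0 for HZ > 1000). */
static unsigned long ticks_to_msec(unsigned long t)
{
	return 1000*(t / HZ) + 1000*(t % HZ)/HZ;
}

/* The expression the patch removes: with HZ > 1000, 1000/HZ truncates
 * to 0 and every timestamp collapses to 0. */
static unsigned long ticks_to_msec_old(unsigned long t)
{
	return t * (1000 / HZ);
}
```

[For example, 2048 ticks at 1024 Hz is 2 seconds: the helper returns 2000, while the old expression returns 0.]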
diff -urN linux-davidm/drivers/net/eepro100.c linux-2.4.7-lia/drivers/net/eepro100.c
--- linux-davidm/drivers/net/eepro100.c Mon Jul 23 16:15:07 2001
+++ linux-2.4.7-lia/drivers/net/eepro100.c Mon Jul 23 14:11:25 2001
@@ -43,13 +43,17 @@
static int txdmacount = 128;
static int rxdmacount /* = 0 */;
+#if defined(__ia64__) || defined(__alpha__) || defined(__sparc__) || defined(__arm__)
+ /* align rx buffers to 2 bytes so that IP header is aligned */
+# define RX_ALIGN
+# define RxFD_ALIGNMENT __attribute__ ((aligned (2), packed))
+#else
+# define RxFD_ALIGNMENT
+#endif
+
/* Set the copy breakpoint for the copy-only-tiny-buffer Rx method.
Lower values use more memory, but are faster. */
-#if defined(__alpha__) || defined(__sparc__) || defined(__arm__)
-static int rx_copybreak = 1518;
-#else
static int rx_copybreak = 200;
-#endif
/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 20;
diff -urN linux-davidm/drivers/scsi/Config.in linux-2.4.7-lia/drivers/scsi/Config.in
--- linux-davidm/drivers/scsi/Config.in Sat Jul 21 20:18:38 2001
+++ linux-2.4.7-lia/drivers/scsi/Config.in Mon Jul 23 14:11:36 2001
@@ -152,6 +152,7 @@
dep_tristate 'Qlogic ISP SCSI support' CONFIG_SCSI_QLOGIC_ISP $CONFIG_SCSI
dep_tristate 'Qlogic ISP FC SCSI support' CONFIG_SCSI_QLOGIC_FC $CONFIG_SCSI
dep_tristate 'Qlogic QLA 1280 SCSI support' CONFIG_SCSI_QLOGIC_1280 $CONFIG_SCSI
+ dep_tristate 'Qlogic QLA 2100 driver support' CONFIG_SCSI_QLOGIC_QLA2100 $CONFIG_SCSI
fi
if [ "$CONFIG_X86" = "y" ]; then
dep_tristate 'Seagate ST-02 and Future Domain TMC-8xx SCSI support' CONFIG_SCSI_SEAGATE $CONFIG_SCSI
diff -urN linux-davidm/drivers/scsi/Makefile linux-2.4.7-lia/drivers/scsi/Makefile
--- linux-davidm/drivers/scsi/Makefile Mon Jul 23 16:15:07 2001
+++ linux-2.4.7-lia/drivers/scsi/Makefile Mon Jul 23 14:11:45 2001
@@ -86,6 +86,7 @@
obj-$(CONFIG_SCSI_QLOGIC_ISP) += qlogicisp.o
obj-$(CONFIG_SCSI_QLOGIC_FC) += qlogicfc.o
obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
+obj-$(CONFIG_SCSI_QLOGIC_QLA2100) += qla2x00.o
obj-$(CONFIG_SCSI_PAS16) += pas16.o
obj-$(CONFIG_SCSI_SEAGATE) += seagate.o
obj-$(CONFIG_SCSI_FD_8xx) += seagate.o
diff -urN linux-davidm/drivers/scsi/qla1280.c linux-2.4.7-lia/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Mon Jul 23 16:15:07 2001
+++ linux-2.4.7-lia/drivers/scsi/qla1280.c Sun Apr 29 15:56:25 2001
@@ -1624,7 +1624,6 @@
{
printk(KERN_INFO "scsi(): Interrupt with NULL host ptr\n");
COMTRACE('X')
- spin_unlock_irqrestore(&io_request_lock, cpu_flags);
return;
}
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,1,95)
diff -urN linux-davidm/fs/devfs/base.c linux-2.4.7-lia/fs/devfs/base.c
--- linux-davidm/fs/devfs/base.c Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/fs/devfs/base.c Mon Jul 23 14:15:21 2001
@@ -2143,6 +2143,9 @@
EXPORT_SYMBOL(devfs_register_blkdev);
EXPORT_SYMBOL(devfs_unregister_chrdev);
EXPORT_SYMBOL(devfs_unregister_blkdev);
+#ifdef CONFIG_DEVFS_GUID
+EXPORT_SYMBOL(devfs_unregister_slave);
+#endif
/**
diff -urN linux-davidm/fs/partitions/check.c linux-2.4.7-lia/fs/partitions/check.c
--- linux-davidm/fs/partitions/check.c Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/fs/partitions/check.c Mon Jul 23 14:16:33 2001
@@ -46,6 +46,9 @@
#ifdef CONFIG_ACORN_PARTITION
acorn_partition,
#endif
+#ifdef CONFIG_EFI_PARTITION
+ efi_partition,
+#endif
#ifdef CONFIG_MSDOS_PARTITION
msdos_partition,
#endif
@@ -73,9 +76,6 @@
#ifdef CONFIG_IBM_PARTITION
ibm_partition,
#endif
-#ifdef CONFIG_EFI_PARTITION
- efi_partition,
-#endif
NULL
};
@@ -523,6 +523,9 @@
dev->part[minor].de = NULL;
devfs_dealloc_unique_number (&disc_numspace,
dev->part[minor].number);
+# ifdef CONFIG_DEVFS_GUID
+ free_disk_guids (dev, minor);
+# endif
}
#endif /* CONFIG_DEVFS_FS */
}
@@ -572,6 +575,13 @@
if (!size || minors == 1)
return;
+#ifdef CONFIG_DEVFS_GUID
+ /* In case this is a revalidation, free GUID memory.
+ On the first call for this device,
+ register_disk has set all entries to NULL,
+ and nothing will happen. */
+ free_disk_guids (dev, first_minor);
+#endif
check_partition(dev, MKDEV(dev->major, first_minor), 1 + first_minor);
/*
diff -urN linux-davidm/include/asm-ia64/acpi-ext.h linux-2.4.7-lia/include/asm-ia64/acpi-ext.h
--- linux-davidm/include/asm-ia64/acpi-ext.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.7-lia/include/asm-ia64/acpi-ext.h Mon Jul 23 14:16:42 2001
@@ -5,12 +5,12 @@
* Advanced Configuration and Power Infterface
* Based on 'ACPI Specification 1.0b' Febryary 2, 1999
* and 'IA-64 Extensions to the ACPI Specification' Rev 0.6
- *
+ *
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 2000 Intel Corp.
* Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
- * ACPI 2.0 specification
+ * ACPI 2.0 specification
*/
#include <linux/types.h>
@@ -146,6 +146,9 @@
u32 lapic_address;
u32 flags;
} acpi_madt_t;
+
+/* acpi 2.0 MADT flags */
+#define MADT_PCAT_COMPAT (1<<0)
/* acpi 2.0 MADT structure types */
#define ACPI20_ENTRY_LOCAL_APIC 0
diff -urN linux-davidm/include/asm-ia64/efi.h linux-2.4.7-lia/include/asm-ia64/efi.h
--- linux-davidm/include/asm-ia64/efi.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/asm-ia64/efi.h Mon Jul 23 14:38:28 2001
@@ -238,7 +238,7 @@
extern void efi_memmap_walk (efi_freemem_callback_t callback, void *arg);
extern void efi_gettimeofday (struct timeval *tv);
extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if possible */
-
+extern u64 efi_get_iobase (void);
/*
* Variable Attributes
diff -urN linux-davidm/include/asm-ia64/hardirq.h linux-2.4.7-lia/include/asm-ia64/hardirq.h
--- linux-davidm/include/asm-ia64/hardirq.h Sun Apr 29 15:50:41 2001
+++ linux-2.4.7-lia/include/asm-ia64/hardirq.h Mon Jul 23 14:38:18 2001
@@ -16,15 +16,15 @@
/*
* No irq_cpustat_t for IA-64. The data is held in the per-CPU data structure.
*/
-#define softirq_active(cpu) (cpu_data[cpu].softirq.active)
-#define softirq_mask(cpu) (cpu_data[cpu].softirq.mask)
-#define irq_count(cpu) (cpu_data[cpu].irq_stat.f.irq_count)
-#define bh_count(cpu) (cpu_data[cpu].irq_stat.f.bh_count)
+#define softirq_pending(cpu) (cpu_data(cpu)->softirq_pending)
+#define ksoftirqd_task(cpu) (cpu_data(cpu)->ksoftirqd)
+#define irq_count(cpu) (cpu_data(cpu)->irq_stat.f.irq_count)
+#define bh_count(cpu) (cpu_data(cpu)->irq_stat.f.bh_count)
#define syscall_count(cpu) /* unused on IA-64 */
#define nmi_count(cpu) 0
-#define local_softirq_active() (local_cpu_data->softirq.active)
-#define local_softirq_mask() (local_cpu_data->softirq.mask)
+#define local_softirq_pending() (local_cpu_data->softirq_pending)
+#define local_ksoftirqd() (local_cpu_data->ksoftirqd);
#define local_irq_count() (local_cpu_data->irq_stat.f.irq_count)
#define local_bh_count() (local_cpu_data->irq_stat.f.bh_count)
#define local_syscall_count() /* unused on IA-64 */
diff -urN linux-davidm/include/asm-ia64/ia32.h linux-2.4.7-lia/include/asm-ia64/ia32.h
--- linux-davidm/include/asm-ia64/ia32.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/asm-ia64/ia32.h Mon Jul 23 14:38:14 2001
@@ -367,10 +367,10 @@
| ((((sd) >> IA32_SEG_DB) & 0x1) << SEG_DB) \
| ((((sd) >> IA32_SEG_G) & 0x1) << SEG_G))
-#define IA32_IOBASE 0x2000000000000000 /* Virtual address for I/O space */
+#define IA32_IOBASE 0x2000000000000000 /* Virtual address for I/O space */
-#define IA32_CR0 0x80000001 /* Enable PG and PE bits */
-#define IA32_CR4 0 /* No architectural extensions */
+#define IA32_CR0 0x80000001 /* Enable PG and PE bits */
+#define IA32_CR4 0x600 /* MMXEX and FXSR on */
/*
* IA32 floating point control registers starting values
diff -urN linux-davidm/include/asm-ia64/io.h linux-2.4.7-lia/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Sun Apr 29 15:50:41 2001
+++ linux-2.4.7-lia/include/asm-ia64/io.h Mon Jul 23 14:38:11 2001
@@ -333,7 +333,7 @@
#define readb(a) __readb((void *)(a))
#define readw(a) __readw((void *)(a))
#define readl(a) __readl((void *)(a))
-#define readq(a) __readqq((void *)(a))
+#define readq(a) __readq((void *)(a))
#define __raw_readb readb
#define __raw_readw readw
#define __raw_readl readl
diff -urN linux-davidm/include/asm-ia64/iosapic.h linux-2.4.7-lia/include/asm-ia64/iosapic.h
--- linux-davidm/include/asm-ia64/iosapic.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.7-lia/include/asm-ia64/iosapic.h Mon Jul 23 14:18:00 2001
@@ -51,7 +51,8 @@
#ifndef __ASSEMBLY__
-extern void __init iosapic_init (unsigned long address, unsigned int base_irq);
+extern void __init iosapic_init (unsigned long address, unsigned int base_irq,
+ int pcat_compat);
extern void iosapic_register_legacy_irq (unsigned long irq, unsigned long pin,
unsigned long polarity, unsigned long trigger);
extern void iosapic_pci_fixup (int);
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.7-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/asm-ia64/offsets.h Mon Jul 23 14:18:19 2001
@@ -1,16 +1,13 @@
#ifndef _ASM_IA64_OFFSETS_H
#define _ASM_IA64_OFFSETS_H
-
/*
* DO NOT MODIFY
*
- * This file was generated by arch/ia64/tools/print_offsets.
+ * This file was generated by arch/ia64/tools/print_offsets.awk.
*
*/
-
-#define PT_PTRACED_BIT 0
-#define PT_TRACESYS_BIT 1
-
+#define PT_PTRACED_BIT 0
+#define PT_TRACESYS_BIT 1
#define IA64_TASK_SIZE 3904 /* 0xf40 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
@@ -76,7 +73,7 @@
#define IA64_PT_REGS_F8_OFFSET 368 /* 0x170 */
#define IA64_PT_REGS_F9_OFFSET 384 /* 0x180 */
#define IA64_SWITCH_STACK_CALLER_UNAT_OFFSET 0 /* 0x0 */
-#define IA64_SWITCH_STACK_AR_FPSR_OFFSET 8 /* 0x8 */
+#define IA64_SWITCH_STACK_AR_FPSR_OFFSET 8 /* 0x8 */
#define IA64_SWITCH_STACK_F2_OFFSET 16 /* 0x10 */
#define IA64_SWITCH_STACK_F3_OFFSET 32 /* 0x20 */
#define IA64_SWITCH_STACK_F4_OFFSET 48 /* 0x30 */
@@ -115,8 +112,8 @@
#define IA64_SWITCH_STACK_B5_OFFSET 504 /* 0x1f8 */
#define IA64_SWITCH_STACK_AR_PFS_OFFSET 512 /* 0x200 */
#define IA64_SWITCH_STACK_AR_LC_OFFSET 520 /* 0x208 */
-#define IA64_SWITCH_STACK_AR_UNAT_OFFSET 528 /* 0x210 */
-#define IA64_SWITCH_STACK_AR_RNAT_OFFSET 536 /* 0x218 */
+#define IA64_SWITCH_STACK_AR_UNAT_OFFSET 528 /* 0x210 */
+#define IA64_SWITCH_STACK_AR_RNAT_OFFSET 536 /* 0x218 */
#define IA64_SWITCH_STACK_AR_BSPSTORE_OFFSET 544 /* 0x220 */
#define IA64_SWITCH_STACK_PR_OFFSET 552 /* 0x228 */
#define IA64_SIGCONTEXT_AR_BSP_OFFSET 72 /* 0x48 */
@@ -135,12 +132,10 @@
#define IA64_SIGFRAME_RBS_BASE_OFFSET 24 /* 0x18 */
#define IA64_SIGFRAME_HANDLER_OFFSET 32 /* 0x20 */
#define IA64_SIGFRAME_SIGCONTEXT_OFFSET 176 /* 0xb0 */
-#define IA64_CLONE_VFORK 16384 /* 0x4000 */
+#define IA64_CLONE_VFORK 16384 /* 0x4000 */
#define IA64_CLONE_VM 256 /* 0x100 */
-#define IA64_CPU_IRQ_COUNT_OFFSET 8 /* 0x8 */
-#define IA64_CPU_BH_COUNT_OFFSET 12 /* 0xc */
-#define IA64_CPU_SOFTIRQ_ACTIVE_OFFSET 0 /* 0x0 */
-#define IA64_CPU_SOFTIRQ_MASK_OFFSET 4 /* 0x4 */
-#define IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET 16 /* 0x10 */
+#define IA64_CPU_IRQ_COUNT_OFFSET 0 /* 0x0 */
+#define IA64_CPU_BH_COUNT_OFFSET 4 /* 0x4 */
+#define IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET 12 /* 0xc */
#endif /* _ASM_IA64_OFFSETS_H */
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.7-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/asm-ia64/processor.h Mon Jul 23 14:38:11 2001
@@ -235,11 +235,7 @@
* state comes earlier:
*/
struct cpuinfo_ia64 {
- /* irq_stat and softirq should be 64-bit aligned */
- struct {
- __u32 active;
- __u32 mask;
- } softirq;
+ /* irq_stat must be 64-bit aligned */
union {
struct {
__u32 irq_count;
@@ -247,8 +243,8 @@
} f;
__u64 irq_and_bh_counts;
} irq_stat;
+ __u32 softirq_pending;
__u32 phys_stacked_size_p8; /* size of physical stacked registers + 8 */
- __u32 pad0;
__u64 itm_delta; /* # of clock cycles between clock ticks */
__u64 itm_next; /* interval timer mask value to use for next clock tick */
__u64 *pgd_quick;
@@ -273,6 +269,7 @@
__u64 ptce_base;
__u32 ptce_count[2];
__u32 ptce_stride[2];
+ struct task_struct *ksoftirqd; /* kernel softirq daemon for this CPU */
#ifdef CONFIG_SMP
__u64 loops_per_jiffy;
__u64 ipi_count;
@@ -280,6 +277,9 @@
__u64 prof_multiplier;
__u64 ipi_operation;
#endif
+#ifdef CONFIG_NUMA
+ struct cpuinfo_ia64 *cpu_data[NR_CPUS];
+#endif
} __attribute__ ((aligned (PAGE_SIZE))) ;
/*
@@ -288,7 +288,22 @@
*/
#define local_cpu_data ((struct cpuinfo_ia64 *) PERCPU_ADDR)
-extern struct cpuinfo_ia64 cpu_data[NR_CPUS];
+/*
+ * On NUMA systems, cpu_data for each cpu is allocated during cpu_init() & is allocated on
+ * the node that contains the cpu. This minimizes off-node memory references. cpu_data
+ * for each cpu contains an array of pointers to the cpu_data structures of each of the
+ * other cpus.
+ *
+ * On non-NUMA systems, cpu_data is a static array allocated at compile time. References
+ * to the cpu_data of another cpu is done by direct references to the appropriate entry of
+ * the array.
+ */
+#ifdef CONFIG_NUMA
+# define cpu_data(cpu) local_cpu_data->cpu_data_ptrs[cpu]
+#else
+ extern struct cpuinfo_ia64 _cpu_data[NR_CPUS];
+# define cpu_data(cpu) (&_cpu_data[cpu])
+#endif
extern void identify_cpu (struct cpuinfo_ia64 *);
extern void print_cpu_info (struct cpuinfo_ia64 *);
diff -urN linux-davidm/include/asm-ia64/smp.h linux-2.4.7-lia/include/asm-ia64/smp.h
--- linux-davidm/include/asm-ia64/smp.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/asm-ia64/smp.h Mon Jul 23 14:38:11 2001
@@ -40,7 +40,7 @@
extern unsigned char smp_int_redirect;
extern int smp_num_cpus;
-extern volatile int ia64_cpu_to_sapicid[];
+extern volatile int ia64_cpu_to_sapicid[];
#define cpu_physical_id(i) ia64_cpu_to_sapicid[i]
#define cpu_number_map(i) (i)
#define cpu_logical_map(i) (i)
diff -urN linux-davidm/include/asm-ia64/softirq.h linux-2.4.7-lia/include/asm-ia64/softirq.h
--- linux-davidm/include/asm-ia64/softirq.h Sun Apr 29 15:50:45 2001
+++ linux-2.4.7-lia/include/asm-ia64/softirq.h Mon Jul 23 14:38:20 2001
@@ -7,8 +7,18 @@
*/
#include <asm/hardirq.h>
+#define __local_bh_enable() do { barrier(); local_bh_count()--; } while (0)
+
#define local_bh_disable() do { local_bh_count()++; barrier(); } while (0)
-#define local_bh_enable() do { barrier(); local_bh_count()--; } while (0)
+#define local_bh_enable() \
+do { \
+ __local_bh_enable(); \
+ if (__builtin_expect(local_softirq_pending(), 0) && local_bh_count() == 0) \
+ do_softirq(); \
+} while (0)
+
+
+#define __cpu_raise_softirq(cpu,nr) set_bit((nr), &softirq_pending(cpu))
#define in_softirq() (local_bh_count() != 0)
diff -urN linux-davidm/include/asm-ia64/string.h linux-2.4.7-lia/include/asm-ia64/string.h
--- linux-davidm/include/asm-ia64/string.h Mon Oct 9 17:55:00 2000
+++ linux-2.4.7-lia/include/asm-ia64/string.h Mon Jul 23 14:19:48 2001
@@ -10,7 +10,6 @@
*/
#include <linux/config.h> /* remove this once we remove the A-step workaround... */
-#ifndef CONFIG_ITANIUM_ASTEP_SPECIFIC
#define __HAVE_ARCH_STRLEN 1 /* see arch/ia64/lib/strlen.S */
#define __HAVE_ARCH_MEMSET 1 /* see arch/ia64/lib/memset.S */
@@ -20,7 +19,5 @@
extern __kernel_size_t strlen (const char *);
extern void *memset (void *, int, __kernel_size_t);
extern void *memcpy (void *, const void *, __kernel_size_t);
-
-#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
#endif /* _ASM_IA64_STRING_H */
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.4.7-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Sun Apr 29 15:50:45 2001
+++ linux-2.4.7-lia/include/asm-ia64/system.h Mon Jul 23 14:38:11 2001
@@ -29,8 +29,7 @@
#define GATE_ADDR (0xa000000000000000 + PAGE_SIZE)
#define PERCPU_ADDR (0xa000000000000000 + 2*PAGE_SIZE)
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
+#if defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
/* Workaround for Errata 97. */
# define IA64_SEMFIX_INSN mf;
# define IA64_SEMFIX "mf;"
diff -urN linux-davidm/include/linux/agp_backend.h linux-2.4.7-lia/include/linux/agp_backend.h
--- linux-davidm/include/linux/agp_backend.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/linux/agp_backend.h Mon Jul 23 14:39:37 2001
@@ -50,6 +50,7 @@
INTEL_I815,
INTEL_I840,
INTEL_I850,
+ INTEL_460GX,
VIA_GENERIC,
VIA_VP3,
VIA_MVP3,
diff -urN linux-davidm/include/linux/genhd.h linux-2.4.7-lia/include/linux/genhd.h
--- linux-davidm/include/linux/genhd.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/linux/genhd.h Mon Jul 23 14:39:21 2001
@@ -56,6 +56,9 @@
long nr_sects;
devfs_handle_t de; /* primary (master) devfs entry */
int number; /* stupid old code wastes space */
+#ifdef CONFIG_DEVFS_GUID
+ efi_guid_t *guid;
+#endif
};
#define GENHD_FL_REMOVABLE 1
diff -urN linux-davidm/include/linux/irq_cpustat.h linux-2.4.7-lia/include/linux/irq_cpustat.h
--- linux-davidm/include/linux/irq_cpustat.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/linux/irq_cpustat.h Mon Jul 23 14:21:12 2001
@@ -23,12 +23,12 @@
#define __IRQ_STAT(cpu, member) (irq_stat[cpu].member)
#else
#define __IRQ_STAT(cpu, member) ((void)(cpu), irq_stat[0].member)
-#endif
+#endif
/* arch independent irq_stat fields */
#define softirq_pending(cpu) __IRQ_STAT((cpu), __softirq_pending)
-#define local_irq_count(cpu) __IRQ_STAT((cpu), __local_irq_count)
-#define local_bh_count(cpu) __IRQ_STAT((cpu), __local_bh_count)
+#define irq_count(cpu) __IRQ_STAT((cpu), __irq_count)
+#define bh_count(cpu) __IRQ_STAT((cpu), __bh_count)
#define syscall_count(cpu) __IRQ_STAT((cpu), __syscall_count)
#define ksoftirqd_task(cpu) __IRQ_STAT((cpu), __ksoftirqd_task)
/* arch dependent irq_stat fields */
diff -urN linux-davidm/include/linux/sched.h linux-2.4.7-lia/include/linux/sched.h
--- linux-davidm/include/linux/sched.h Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/include/linux/sched.h Mon Jul 23 14:38:12 2001
@@ -539,7 +539,7 @@
extern unsigned long volatile jiffies;
extern unsigned long itimer_ticks;
extern unsigned long itimer_next;
-extern volatile struct timeval xtime;
+extern struct timeval xtime;
extern void do_timer(struct pt_regs *);
extern unsigned int * prof_buffer;
diff -urN linux-davidm/include/linux/string.h linux-2.4.7-lia/include/linux/string.h
--- linux-davidm/include/linux/string.h Sun Apr 29 15:50:56 2001
+++ linux-2.4.7-lia/include/linux/string.h Mon Jul 23 14:21:49 2001
@@ -79,6 +79,7 @@
#ifndef __HAVE_ARCH_MEMCHR
extern void * memchr(const void *,int,__kernel_size_t);
#endif
+extern char * kstrdup(const char *,int);
#ifdef __cplusplus
}
diff -urN linux-davidm/kernel/pm.c linux-2.4.7-lia/kernel/pm.c
--- linux-davidm/kernel/pm.c Sun Apr 29 15:50:57 2001
+++ linux-2.4.7-lia/kernel/pm.c Mon Jul 23 14:22:24 2001
@@ -162,7 +162,7 @@
case PM_SUSPEND:
case PM_RESUME:
prev_state = dev->state;
- next_state = (int) data;
+ next_state = (long) data;
if (prev_state != next_state) {
if (dev->callback)
status = (*dev->callback)(dev, rqst, data);
@@ -197,7 +197,7 @@
*/
pm_request_t undo = (dev->prev_state
? PM_SUSPEND:PM_RESUME);
- pm_send(dev, undo, (void*) dev->prev_state);
+ pm_send(dev, undo, (void*) (long) dev->prev_state);
}
entry = entry->prev;
}
diff -urN linux-davidm/kernel/softirq.c linux-2.4.7-lia/kernel/softirq.c
--- linux-davidm/kernel/softirq.c Mon Jul 23 16:15:10 2001
+++ linux-2.4.7-lia/kernel/softirq.c Mon Jul 23 14:22:33 2001
@@ -40,10 +40,10 @@
- Bottom halves: globally serialized, grr...
*/
-/* No separate irq_stat for s390, it is part of PSA */
+/* No separate irq_stat for s390 and ia64, it is part of PSA */
#if !defined(CONFIG_ARCH_S390) && !defined(CONFIG_IA64)
irq_cpustat_t irq_stat[NR_CPUS];
-#endif
+#endif /* CONFIG_ARCH_S390 || CONFIG_IA64 */
static struct softirq_action softirq_vec[32] __cacheline_aligned;
@@ -124,7 +124,7 @@
* Otherwise we wake up ksoftirqd to make sure we
* schedule the softirq soon.
*/
- if (!(local_irq_count(cpu) | local_bh_count(cpu)))
+ if (!(irq_count(cpu) | bh_count(cpu)))
wakeup_softirqd(cpu);
}
^ permalink raw reply	[flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.7)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (53 preceding siblings ...)
2001-07-23 23:49 ` [Linux-ia64] kernel update (relative to 2.4.7) David Mosberger
@ 2001-07-24 1:50 ` Keith Owens
2001-07-24 3:02 ` Keith Owens
` (160 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-07-24 1:50 UTC (permalink / raw)
To: linux-ia64
On Mon, 23 Jul 2001 16:49:32 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>The latest IA-64 patch is available at:
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>in file linux-2.4.7-ia64-010723.diff*.
The new __get_user code breaks older versions of gcc for IA64.
gcc version 2.96-ia64-000717 snap 001117
scsi_ioctl.c: In function `scsi_ioctl_send_command':
scsi_ioctl.c:366: Internal compiler error in `rws_access_regno', at config/ia64/ia64.c:3671
ia64-unknown-linux-gcc: Internal compiler error: program cpp got fatal signal 13
Time to upgrade gcc. A workaround for old versions follows.
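[For context, the relationship between the two accessors can be sketched in plain user-space C. All model_* names here are hypothetical stand-ins, not the kernel's actual (arch-specific) implementations:]

```c
#include <stddef.h>

#define MODEL_EFAULT 14

/* Stand-in for access_ok()/verify_area(): the real check validates a
 * user-space address range; a NULL check suffices for this sketch. */
static int model_access_ok(const int *uptr)
{
	return uptr != NULL;
}

/* __get_user: fetch with no address check -- only safe after a prior
 * verify_area()/access_ok() on the same pointer. */
static int model_uncheckedget(int *val, const int *uptr)
{
	*val = *uptr;
	return 0;
}

/* get_user: the checking variant. Swapping it in for __get_user, as in
 * the workaround below, merely repeats an address check that
 * scsi_ioctl_send_command() already did via verify_area(), so it is
 * semantically harmless. */
static int model_get_user(int *val, const int *uptr)
{
	if (!model_access_ok(uptr))
		return -MODEL_EFAULT;
	return model_uncheckedget(val, uptr);
}

/* Convenience wrapper for testing: returns the fetched value, or -1 on
 * a failed check. */
static int fetch_checked(const int *uptr)
{
	int v = -1;
	return model_get_user(&v, uptr) ? -1 : v;
}
```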
Index: 7.11/drivers/scsi/scsi_ioctl.c
--- 7.11/drivers/scsi/scsi_ioctl.c Fri, 06 Jul 2001 09:49:24 +1000 kaos (linux-2.4/U/b/19_scsi_ioctl 1.2.2.3 644)
+++ 7.11(w)/drivers/scsi/scsi_ioctl.c Tue, 24 Jul 2001 11:44:19 +1000 kaos (linux-2.4/U/b/19_scsi_ioctl 1.2.2.3 644)
@@ -209,10 +209,10 @@ int scsi_ioctl_send_command(Scsi_Device
if (verify_area(VERIFY_READ, sic, sizeof(Scsi_Ioctl_Command)))
return -EFAULT;
- if(__get_user(inlen, &sic->inlen))
+ if(get_user(inlen, &sic->inlen))
return -EFAULT;
- if(__get_user(outlen, &sic->outlen))
+ if(get_user(outlen, &sic->outlen))
return -EFAULT;
/*
^ permalink raw reply	[flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.7)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (54 preceding siblings ...)
2001-07-24 1:50 ` Keith Owens
@ 2001-07-24 3:02 ` Keith Owens
2001-07-24 16:37 ` Andreas Schwab
` (159 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-07-24 3:02 UTC (permalink / raw)
To: linux-ia64
On Mon, 23 Jul 2001 16:49:32 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>The latest IA-64 patch is available at:
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>in file linux-2.4.7-ia64-010723.diff*.
2.4.7 has __set_bit and similar functions, which are used by devfs, but
there is no IA64 implementation. Most of the patch is adding DocBook headers.
Index: 7.11/include/asm-ia64/bitops.h
--- 7.11/include/asm-ia64/bitops.h Tue, 24 Jul 2001 11:08:09 +1000 kaos (linux-2.4/t/47_bitops.h 1.1.3.1.1.1 644)
+++ 7.11(w)/include/asm-ia64/bitops.h Tue, 24 Jul 2001 12:47:45 +1000 kaos (linux-2.4/t/47_bitops.h 1.1.3.1.1.1 644)
@@ -10,15 +10,23 @@
#include <asm/system.h>
-/*
- * These operations need to be atomic. The address must be (at least) "long" aligned.
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered. See __set_bit()
+ * if you do not require the atomic guarantees.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ *
+ * The address must be (at least) "long" aligned.
* Note that there are drivers (e.g., eepro100) which use these operations to operate on
* hw-defined data-structures, so we can't easily change these operations to force a
* bigger alignment.
*
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*/
-
static __inline__ void
set_bit (int nr, volatile void *addr)
{
@@ -35,11 +43,36 @@ set_bit (int nr, volatile void *addr)
} while (cmpxchg_acq(m, old, new) != old);
}
+/**
+ * __set_bit - Set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Unlike set_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
+ */
+static __inline__ void __set_bit(int nr, volatile void * addr)
+{
+ *((__u32 *) addr + (nr >> 5)) |= (1 << (nr & 31));
+}
+
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered. However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * in order to ensure changes are visible on other processors.
+ */
static __inline__ void
clear_bit (int nr, volatile void *addr)
{
@@ -56,18 +89,29 @@ clear_bit (int nr, volatile void *addr)
} while (cmpxchg_acq(m, old, new) != old);
}
-/*
- * WARNING: non atomic version.
+/**
+ * __change_bit - Toggle a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Unlike change_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
*/
-static __inline__ void
-__change_bit (int nr, void *addr)
+static __inline__ void __change_bit(int nr, volatile void * addr)
{
- volatile __u32 *m = (__u32 *) addr + (nr >> 5);
- __u32 bit = (1 << (nr & 31));
-
- *m = *m ^ bit;
+ *((__u32 *) addr + (nr >> 5)) ^= (1 << (nr & 31));
}
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to toggle
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
static __inline__ void
change_bit (int nr, volatile void *addr)
{
@@ -84,6 +128,14 @@ change_bit (int nr, volatile void *addr)
} while (cmpxchg_acq(m, old, new) != old);
}
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
static __inline__ int
test_and_set_bit (int nr, volatile void *addr)
{
@@ -101,6 +153,30 @@ test_and_set_bit (int nr, volatile void
return (old & bit) != 0;
}
+/**
+ * __test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ */
+static __inline__ int __test_and_set_bit(int nr, volatile void * addr)
+{
+ int oldbit = *((__u32 *) addr + (nr >> 5)) & (1 << (nr & 31));
+ *((__u32 *) addr + (nr >> 5)) |= (1 << (nr & 31));
+ return(oldbit != 0);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
static __inline__ int
test_and_clear_bit (int nr, volatile void *addr)
{
@@ -118,6 +194,22 @@ test_and_clear_bit (int nr, volatile voi
return (old & ~mask) != 0;
}
+/**
+ * __test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ */
+static __inline__ int __test_and_clear_bit(int nr, volatile void * addr)
+{
+ int oldbit = *((__u32 *) addr + (nr >> 5)) & (1 << (nr & 31));
+ *((__u32 *) addr + (nr >> 5)) &= ~(1 << (nr & 31));
+ return(oldbit != 0);
+}
+
/*
* WARNING: non atomic version.
*/
@@ -132,6 +224,14 @@ __test_and_change_bit (int nr, void *add
return (old & bit) != 0;
}
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
static __inline__ int
test_and_change_bit (int nr, volatile void *addr)
{
@@ -155,10 +255,13 @@ test_bit (int nr, volatile void *addr)
return 1 & (((const volatile __u32 *) addr)[nr >> 5] >> (nr & 31));
}
-/*
- * ffz = "find first zero". Returns the bit number (0..63) of the first (least
- * significant) bit that is zero in X. Undefined if no zero exists, so code should check
- * against ~0UL first...
+/**
+ * ffz - find the first zero bit in a 64 bit word
+ * @x: The value to search
+ *
+ * Returns the bit-number (0..63) of the first (least significant) zero bit, not
+ * the number of the byte containing a bit. Undefined if no zero exists, so
+ * code should check against ~0UL first...
*/
static inline unsigned long
ffz (unsigned long x)
@@ -172,8 +275,8 @@ ffz (unsigned long x)
#ifdef __KERNEL__
/*
- * Find the most significant bit that is set (undefined if no bit is
- * set).
+ * ia64_fls - find the last set bit in a 64 bit quantity
+ * @x: The value to search
*/
static inline unsigned long
ia64_fls (unsigned long x)
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.7)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (55 preceding siblings ...)
2001-07-24 3:02 ` Keith Owens
@ 2001-07-24 16:37 ` Andreas Schwab
2001-07-24 18:42 ` David Mosberger
` (158 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-07-24 16:37 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> The latest IA-64 patch is available at:
|>
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
|>
|> in file linux-2.4.7-ia64-010723.diff*.
|>
|> There are no major new features in this patch but mostly bug fixes and
|> syncing up with 2.4.7. Note that 2.4.7 introduces a somewhat
|> non-trivial change in how softirqs are handled: they are now checked
|> only as a result of a hard irq and not every time the kernel returns
|> to user mode. While I'm not particularly happy to see such changes
|> this late in the 2.4.x series, I do believe it is safe and I have been
|> running a fairly busy machine since Saturday without any problems. As
|> usual your mileage may vary, so test well before you ship with this
|> kernel.
Did you try out the UP kernel? It hangs in ksoftirqd.
Andreas.
--
Andreas Schwab "And now for something
SuSE Labs completely different."
Andreas.Schwab@suse.de
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.7)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (56 preceding siblings ...)
2001-07-24 16:37 ` Andreas Schwab
@ 2001-07-24 18:42 ` David Mosberger
2001-08-14 8:15 ` [Linux-ia64] kernel update (relative to 2.4.8) Chris Ahna
` (157 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-07-24 18:42 UTC (permalink / raw)
To: linux-ia64
>>>>> On 24 Jul 2001 18:37:19 +0200, Andreas Schwab <schwab@suse.de> said:
Andreas> Did you try out the UP kernel? It hangs in ksoftirqd.
I thought I did. Got as far as building and installing the UP kernel
but I seem to have mistyped the kernel name when booting the system.
You are right, the kernel hangs early in the boot.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (57 preceding siblings ...)
2001-07-24 18:42 ` David Mosberger
@ 2001-08-14 8:15 ` Chris Ahna
2001-08-14 8:19 ` David Mosberger
` (156 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Chris Ahna @ 2001-08-14 8:15 UTC (permalink / raw)
To: linux-ia64
On Tue, Aug 14, 2001 at 01:19:43AM -0700, David Mosberger wrote:
> Note that Linus switched to the DRM code needed for XFree86-4.1. But since
> we don't have 460GX AGP support for this version yet and since most
> folks are presumably still running 4.0.x, I kept the old DRM code
> instead. I assume this will stay so until someone (Chris?) has had a
> chance to update the 460gx code to 4.1.
I have the updated DRM code for the 460gx working on my machines and should
have a patch out sometime tomorrow.
Chris
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (58 preceding siblings ...)
2001-08-14 8:15 ` [Linux-ia64] kernel update (relative to 2.4.8) Chris Ahna
@ 2001-08-14 8:19 ` David Mosberger
2001-08-14 8:51 ` Keith Owens
` (155 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-08-14 8:19 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.8-ia64-010814.diff*.
One major improvement in this patch is that the machine check support
(MCA code) should work again, and better than before. Thanks to Asit & crew. Note
that Linus switched to the DRM code needed for XFree86-4.1. But since
we don't have 460GX AGP support for this version yet and since most
folks are presumably still running 4.0.x, I kept the old DRM code
instead. I assume this will stay so until someone (Chris?) has had a
chance to update the 460gx code to 4.1.
Oh, I dropped support for B0-B2 CPUs. So now, only B3 or newer CPUs
are supported.
Enjoy,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (59 preceding siblings ...)
2001-08-14 8:19 ` David Mosberger
@ 2001-08-14 8:51 ` Keith Owens
2001-08-14 15:48 ` David Mosberger
` (154 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-08-14 8:51 UTC (permalink / raw)
To: linux-ia64
On Tue, 14 Aug 2001 01:19:43 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>that Linus switched to the DRM code needed for XFree86-4.1. But since
>we don't have 460GX AGP support for this version yet and since most
>folks are presumably still running 4.0.x, I kept the old DRM code
>instead. I assume this will stay so until someone (Chris?) has had a
>chance to update the 460gx code to 4.1.
If you are going to keep both DRM 4.0 and 4.1, please do it the same
way that Alan Cox did, to reduce the number of solutions to this
"problem". The -ac kernels have, in drivers/char/Config.in
bool 'Direct Rendering Manager (XFree86 DRI support)' CONFIG_DRM
if [ "$CONFIG_DRM" = "y" ]; then
bool ' Build drivers for new (XFree 4.1) DRM' CONFIG_DRM_NEW
if [ "$CONFIG_DRM_NEW" = "y" ]; then
source drivers/char/drm/Config.in
else
define_bool CONFIG_DRM_OLD y
source drivers/char/drm-4.0/Config.in
fi
fi
drivers/char/drm-4.0/ is a copy of the previous drivers/char/drm with
CONFIG_DRM_foo replaced by CONFIG_DRM40_foo.
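The nesting above guarantees that exactly one DRM tree is configured. A plain-sh sketch of the same selection logic (illustrative only, not the real kconfig machinery):

```shell
#!/bin/sh
# Mirror of the Config.in nesting: DRM_OLD is defined only when DRM is
# enabled and DRM_NEW is not, so exactly one DRM tree gets built.
CONFIG_DRM=y
CONFIG_DRM_NEW=n
CONFIG_DRM_OLD=n

if [ "$CONFIG_DRM" = "y" ]; then
    if [ "$CONFIG_DRM_NEW" != "y" ]; then
        CONFIG_DRM_OLD=y        # define_bool CONFIG_DRM_OLD y
    fi
fi

echo "DRM_NEW=$CONFIG_DRM_NEW DRM_OLD=$CONFIG_DRM_OLD"
```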
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (60 preceding siblings ...)
2001-08-14 8:51 ` Keith Owens
@ 2001-08-14 15:48 ` David Mosberger
2001-08-14 16:23 ` Don Dugger
` (153 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-08-14 15:48 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 14 Aug 2001 18:51:32 +1000, Keith Owens <kaos@ocs.com.au> said:
Keith> If you are going to keep both DRM 4.0 and 4.1, please do it
Keith> the same way that Alan Cox did, to reduce the number of
Keith> solutions to this "problem". The -ac kernels have, in
Keith> drivers/char/Config.in
At the moment, only 4.0 is there. Perhaps I should include both once
4.1 works for IA64.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (61 preceding siblings ...)
2001-08-14 15:48 ` David Mosberger
@ 2001-08-14 16:23 ` Don Dugger
2001-08-14 17:06 ` David Mosberger
` (152 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Don Dugger @ 2001-08-14 16:23 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 1123 bytes --]
David-
I discovered a minor typo; you must be using GCC 3.0.
On Tue, Aug 14, 2001 at 01:19:43AM -0700, David Mosberger wrote:
> The latest IA-64 patch is available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
>
> in file linux-2.4.8-ia64-010814.diff*.
>
> One major improvement in this patch is that the machine check support
> (MCA code) should work again, and better than before. Thanks to Asit & crew. Note
> that Linus switched to the DRM code needed for XFree86-4.1. But since
> we don't have 460GX AGP support for this version yet and since most
> folks are presumably still running 4.0.x, I kept the old DRM code
> instead. I assume this will stay so until someone (Chris?) has had a
> chance to update the 460gx code to 4.1.
>
> Oh, I dropped support for B0-B2 CPUs. So now, only B3 or newer CPUs
> are supported.
>
> Enjoy,
>
> --david
>
> _______________________________________________
> Linux-IA64 mailing list
> Linux-IA64@linuxia64.org
> http://lists.linuxia64.org/lists/listinfo/linux-ia64
--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
n0ano@valinux.com
Ph: 303/938-9838
[-- Attachment #2: scsi.patch --]
[-- Type: text/plain, Size: 333 bytes --]
--- kernel-bigsur-2.4.8-ref/drivers/scsi/scsi_ioctl.c Tue Aug 14 10:19:50 2001
+++ kernel-bigsur/drivers/scsi/scsi_ioctl.c Tue Aug 14 10:18:19 2001
@@ -213,7 +213,7 @@
return -EFAULT;
#if __GNUC__ < 3
- foo = __get_user(inlen, &sic->inlen)
+ foo = __get_user(inlen, &sic->inlen);
if(foo)
return -EFAULT;
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (62 preceding siblings ...)
2001-08-14 16:23 ` Don Dugger
@ 2001-08-14 17:06 ` David Mosberger
2001-08-15 0:22 ` Keith Owens
` (151 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-08-14 17:06 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 14 Aug 2001 10:23:47 -0600, Don Dugger <n0ano@valinux.com> said:
Don> David- I discovered a minor typo, you must be using GCC 3.0.
Yes, I am. Sorry about that.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.8)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (63 preceding siblings ...)
2001-08-14 17:06 ` David Mosberger
@ 2001-08-15 0:22 ` Keith Owens
2001-08-21 3:55 ` [Linux-ia64] kernel update (relative to 2.4.9) David Mosberger
` (150 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-08-15 0:22 UTC (permalink / raw)
To: linux-ia64
On Tue, 14 Aug 2001 08:48:11 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>>>>>> On Tue, 14 Aug 2001 18:51:32 +1000, Keith Owens <kaos@ocs.com.au> said:
>
> Keith> If you are going to keep both DRM 4.0 and 4.1, please do it
> Keith> the same way that Alan Cox did, to reduce the number of
> Keith> solutions to this "problem". The -ac kernels have, in
> Keith> drivers/char/Config.in
>
>At the moment, only 4.0 is there. Perhaps I should include both once
>4.1 works for IA64.
I recommend doing it now. The ia64 patch is much larger than it needs to
be because you delete the 4.1 code as well as adding back the 4.0 code.
If you just add 4.0 as drm-old (the same way as AC) then you get a
smaller patch footprint. Not to mention making it easier to track
Linus's and AC's kernels.
The only downside is that ia64 users have to select drm-old instead of
drm, but that is already true for a lot of non-ia64 users.
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (64 preceding siblings ...)
2001-08-15 0:22 ` Keith Owens
@ 2001-08-21 3:55 ` David Mosberger
2001-08-22 10:00 ` Andreas Schwab
` (149 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-08-21 3:55 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.9-ia64-010820.diff*.
The big news in this patch is a major cleanup of the DRM source
tree(s) by Chris Ahna. Much of the ia64 specific code has now been
restructured so it fits much better into the existing framework. This
should make it a lot easier in getting the changes accepted by the DRM
maintainers. Other changes:
o Small bug fix to the perfmon support (Stephane)
o Various updates for 2.4.9.
o Minor fixes to get rid of warnings (code in ACPI and the cs4281 and
qla1280 drivers is affected by this)
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help lia64/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Mon Aug 20 18:58:50 2001
+++ lia64/Documentation/Configure.help Mon Aug 20 17:31:11 2001
@@ -2537,7 +2537,7 @@
CONFIG_AGP_I460
This option gives you AGP support for the Intel 460GX chipset. This
chipset, the first to support Intel Itanium processors, is new and
- this option is correspondingly experimental.
+ this option is correspondingly a little experimental.
If you don't have a 460GX based machine (such as BigSur) with an AGP
slot then this option isn't going to do you much good. If you're
diff -urN linux-davidm/Makefile lia64/Makefile
--- linux-davidm/Makefile Mon Aug 20 18:58:50 2001
+++ lia64/Makefile Mon Aug 20 17:31:11 2001
@@ -136,9 +136,6 @@
drivers/net/net.o \
drivers/media/media.o
DRIVERS-$(CONFIG_AGP) += drivers/char/agp/agp.o
-ifeq ($(CONFIG_AGP)$(CONFIG_AGP_PTE_FIXUPS), my)
-DRIVERS-y += drivers/char/agp/agp.o
-endif
DRIVERS-$(CONFIG_DRM_NEW) += drivers/char/drm/drm.o
DRIVERS-$(CONFIG_DRM_OLD) += drivers/char/drm-4.0/drm.o
DRIVERS-$(CONFIG_NUBUS) += drivers/nubus/nubus.a
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c lia64/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Mon Aug 20 17:57:14 2001
+++ lia64/arch/ia64/kernel/perfmon.c Mon Aug 20 13:09:57 2001
@@ -657,6 +657,16 @@
/* upper part is ignored on rval */
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
+
+ /*
+ * we must reset the BTB index (clearing pmd16.full) to make
+ * sure we do not report the same branches twice.
+ * The non-blocking case is handled in update_counters().
+ */
+ if (cnum == ctx->ctx_btb_counter) {
+ DBprintk(("resetting PMD16\n"));
+ ia64_set_pmd(16, 0);
+ }
}
}
}
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c lia64/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Mon Aug 20 17:57:14 2001
+++ lia64/arch/ia64/kernel/smpboot.c Mon Aug 20 18:43:28 2001
@@ -324,8 +324,7 @@
phys_id = hard_smp_processor_id();
if (test_and_set_bit(cpuid, &cpu_online_map)) {
- printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n",
- phys_id, cpuid);
+ printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n", phys_id, cpuid);
BUG();
}
diff -urN linux-davidm/drivers/acpi/acpiconf.c lia64/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/acpi/acpiconf.c Mon Aug 20 18:46:14 2001
@@ -243,10 +243,11 @@
acpi_rpb_t rpb;
PCI_ROUTING_TABLE *prt;
- ACPI_BUFFER ret_buf;
UINT8 path_name[PATHNAME_MAX];
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ ACPI_BUFFER ret_buf;
+
ret_buf.length = PATHNAME_MAX;
ret_buf.pointer = (void *) path_name;
@@ -325,10 +326,11 @@
NATIVE_UINT devfn;
UINT8 bn;
- ACPI_BUFFER ret_buf;
UINT8 path_name[PATHNAME_MAX];
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ ACPI_BUFFER ret_buf;
+
ret_buf.length = PATHNAME_MAX;
ret_buf.pointer = (void *) path_name;
@@ -422,10 +424,11 @@
NATIVE_UINT temp = 0x0F;
ACPI_STATUS status;
- ACPI_BUFFER ret_buf;
UINT8 path_name[PATHNAME_MAX];
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ ACPI_BUFFER ret_buf;
+
ret_buf.length = PATHNAME_MAX;
ret_buf.pointer = (void *) path_name;
diff -urN linux-davidm/drivers/acpi/driver.c lia64/drivers/acpi/driver.c
--- linux-davidm/drivers/acpi/driver.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/acpi/driver.c Mon Aug 20 18:46:23 2001
@@ -87,7 +87,7 @@
return -ENODEV;
}
#else
- rsdp_phys = efi.acpi;
+ rsdp_phys = (ACPI_PHYSICAL_ADDRESS) efi.acpi;
#endif
/* from this point on, on error we must call acpi_terminate() */
diff -urN linux-davidm/drivers/char/Config.in lia64/drivers/char/Config.in
--- linux-davidm/drivers/char/Config.in Mon Aug 20 18:58:53 2001
+++ lia64/drivers/char/Config.in Mon Aug 20 17:31:11 2001
@@ -189,10 +189,7 @@
if [ "$CONFIG_AGP" != "n" ]; then
bool ' Intel 440LX/BX/GX and I815/I840/I850 support' CONFIG_AGP_INTEL
if [ "$CONFIG_IA64" != "n" ]; then
- bool ' Intel 460GX support (EXPERIMENTAL)' CONFIG_AGP_I460
- if [ "$CONFIG_AGP_I460" != "n" ]; then
- define_bool CONFIG_AGP_PTE_FIXUPS y
- fi
+ bool ' Intel 460GX support' CONFIG_AGP_I460
fi
bool ' Intel I810/I815 (on-board) support' CONFIG_AGP_I810
bool ' VIA chipset support' CONFIG_AGP_VIA
diff -urN linux-davidm/drivers/char/Makefile lia64/drivers/char/Makefile
--- linux-davidm/drivers/char/Makefile Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/Makefile Mon Aug 20 17:31:11 2001
@@ -201,9 +201,6 @@
subdir-$(CONFIG_DRM_OLD) += drm-4.0
subdir-$(CONFIG_PCMCIA) += pcmcia
subdir-$(CONFIG_AGP) += agp
-ifeq ($(CONFIG_AGP)$(CONFIG_AGP_PTE_FIXUPS), my)
- subdir-y += agp
-endif
ifeq ($(CONFIG_FTAPE),y)
obj-y += ftape/ftape.o
diff -urN linux-davidm/drivers/char/agp/Makefile lia64/drivers/char/agp/Makefile
--- linux-davidm/drivers/char/agp/Makefile Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/agp/Makefile Mon Aug 20 18:09:28 2001
@@ -5,13 +5,12 @@
O_TARGET := agp.o
-export-objs := agpgart_be.o vmmap.o
+export-objs := agpgart_be.o
list-multi := agpgart.o
agpgart-objs := agpgart_fe.o agpgart_be.o
obj-$(CONFIG_AGP) += agpgart.o
-obj-$(CONFIG_AGP_PTE_FIXUPS) += vmmap.o
include $(TOPDIR)/Rules.make
diff -urN linux-davidm/drivers/char/agp/vmmap.c lia64/drivers/char/agp/vmmap.c
--- linux-davidm/drivers/char/agp/vmmap.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/agp/vmmap.c Wed Dec 31 16:00:00 1969
@@ -1,234 +0,0 @@
-/*
- * vmmap.c
- *
- * Hack to allow virtual addressing fixups in the kernel's
- * vmalloc area. This is needed so that the page tables in the
- * kernel's vmalloc area can be used to emulate GART translation
- * This file is currently good for 460GX only.
- *
- * This file is basically a copy of mm/vmalloc.c modified to not
- * allocate and free pages.
- *
- * Chris Ahna <christopher.j.ahna@intel.com>
- */
-
-#include <linux/config.h>
-#include <linux/slab.h>
-#include <linux/vmalloc.h>
-#include <linux/spinlock.h>
-#include <linux/module.h>
-#include <linux/agp_backend.h>
-#include <linux/highmem.h>
-
-#include <asm/uaccess.h>
-#include <asm/pgalloc.h>
-
-#include "agp.h"
-
-#ifdef CONFIG_AGP_PTE_FIXUPS
-EXPORT_SYMBOL(agp_vmmap);
-EXPORT_SYMBOL(agp_vmunmap);
-EXPORT_SYMBOL(agp_flush_tlb_all);
-
-static inline void agp_free_area_pte(pmd_t * pmd, unsigned long address, unsigned long size)
-{
- pte_t * pte;
- unsigned long end;
-
- if (pmd_none(*pmd))
- return;
- if (pmd_bad(*pmd)) {
- pmd_ERROR(*pmd);
- pmd_clear(pmd);
- return;
- }
- pte = pte_offset(pmd, address);
- address &= ~PMD_MASK;
- end = address + size;
- if (end > PMD_SIZE)
- end = PMD_SIZE;
- do {
- pte_t page;
- page = ptep_get_and_clear(pte);
- address += PAGE_SIZE;
- pte++;
- if (pte_none(page) || pte_present(page)) {
- continue;
- }
- printk(KERN_CRIT "Whee.. Swapped out page in kernel page table\n");
- } while (address < end);
-}
-
-static inline void agp_free_area_pmd(pgd_t * dir, unsigned long address, unsigned long size)
-{
- pmd_t * pmd;
- unsigned long end;
-
- if (pgd_none(*dir))
- return;
- if (pgd_bad(*dir)) {
- pgd_ERROR(*dir);
- pgd_clear(dir);
- return;
- }
- pmd = pmd_offset(dir, address);
- address &= ~PGDIR_MASK;
- end = address + size;
- if (end > PGDIR_SIZE)
- end = PGDIR_SIZE;
- do {
- agp_free_area_pte(pmd, address, end - address);
- address = (address + PMD_SIZE) & PMD_MASK;
- pmd++;
- } while (address < end);
-}
-
-void agp_vmfree_area_pages(unsigned long address, unsigned long size)
-{
- pgd_t * dir;
- unsigned long end = address + size;
-
- dir = pgd_offset_k(address);
- flush_cache_all();
- do {
- agp_free_area_pmd(dir, address, end - address);
- address = (address + PGDIR_SIZE) & PGDIR_MASK;
- dir++;
- } while (address && (address < end));
- flush_tlb_all();
-}
-
-static inline int agp_alloc_area_pte (pte_t * pte, unsigned long address,
- unsigned long size, unsigned long target, pgprot_t prot)
-{
- unsigned long end;
-
- address &= ~PMD_MASK;
- end = address + size;
- if (end > PMD_SIZE)
- end = PMD_SIZE;
- target = (unsigned long) __va(target);
- do {
- struct page * page;
- page = virt_to_page(target);
- if (!pte_none(*pte))
- printk(KERN_ERR "alloc_area_pte: page already exists\n");
- if (!page)
- return -ENOMEM;
- set_pte(pte, mk_pte(page, prot));
- address += PAGE_SIZE;
- target += PAGE_SIZE;
- pte++;
- } while (address < end);
- return 0;
-}
-
-static inline int agp_alloc_area_pmd(pmd_t * pmd, unsigned long address, unsigned long size, unsigned long target, pgprot_t prot)
-{
- unsigned long end;
-
- address &= ~PGDIR_MASK;
- end = address + size;
- if (end > PGDIR_SIZE)
- end = PGDIR_SIZE;
- do {
- pte_t * pte = pte_alloc(&init_mm, pmd, address);
- if (!pte)
- return -ENOMEM;
- if (agp_alloc_area_pte(pte, address, end - address, target, prot))
- return -ENOMEM;
- target += ((address + PMD_SIZE) & PMD_MASK) - address;
- address = (address + PMD_SIZE) & PMD_MASK;
- pmd++;
- } while (address < end);
- return 0;
-}
-
-inline int agp_vmalloc_area_pages (unsigned long address, unsigned long size,
- unsigned long target, pgprot_t prot)
-{
- pgd_t * dir;
- unsigned long end = address + size;
- int ret;
-
- dir = pgd_offset_k(address);
- flush_cache_all();
- spin_lock(&init_mm.page_table_lock);
- do {
- pmd_t *pmd;
-
- pmd = pmd_alloc(&init_mm, dir, address);
- ret = -ENOMEM;
- if (!pmd)
- break;
-
- ret = -ENOMEM;
- if (agp_alloc_area_pmd(pmd, address, end - address, target, prot))
- break;
-
- target += ((address + PGDIR_SIZE) & PGDIR_MASK) - address;
- address = (address + PGDIR_SIZE) & PGDIR_MASK;
- dir++;
-
- ret = 0;
- } while (address && (address < end));
- spin_unlock(&init_mm.page_table_lock);
- flush_tlb_all();
- return ret;
-}
-
-void agp_vmunmap(void *addr)
-{
- struct vm_struct **p, *tmp;
-
- if (!addr)
- return;
- if ((PAGE_SIZE-1) & (unsigned long) addr) {
- printk(KERN_ERR "Trying to vfree() bad address (%p)\n", addr);
- return;
- }
- write_lock(&vmlist_lock);
- for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
- if (tmp->addr == addr) {
- *p = tmp->next;
- agp_vmfree_area_pages(VMALLOC_VMADDR(tmp->addr), tmp->size);
- write_unlock(&vmlist_lock);
- kfree(tmp);
- return;
- }
- }
- write_unlock(&vmlist_lock);
- printk(KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n", addr);
-}
-
-void *__agp_vmmap (unsigned long size, unsigned long target, pgprot_t prot)
-{
- void * addr;
- struct vm_struct *area;
-
- size = PAGE_ALIGN(size);
- if (!size) {
- BUG();
- return NULL;
- }
- area = get_vm_area(size, VM_ALLOC);
- if (!area)
- return NULL;
- addr = area->addr;
- if (agp_vmalloc_area_pages(VMALLOC_VMADDR(addr), size, target, prot)) {
- vfree(addr);
- return NULL;
- }
- return addr;
-}
-
-void *agp_vmmap(unsigned long offset, unsigned long size) {
-
- return __agp_vmmap(size, offset, pgprot_writecombine(PAGE_KERNEL));
-}
-
-void agp_flush_tlb_all(void) {
-
- flush_tlb_all();
-}
-#endif /* CONFIG_AGP_PTE_FIXUPS */
diff -urN linux-davidm/drivers/char/drm/drmP.h lia64/drivers/char/drm/drmP.h
--- linux-davidm/drivers/char/drm/drmP.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm/drmP.h Mon Aug 20 18:09:30 2001
@@ -76,6 +76,17 @@
#include <asm/pgalloc.h>
#include "drm.h"
+/* page_to_bus for earlier kernels, not optimal in all cases */
+#ifndef page_to_bus
+#define page_to_bus(page) ((unsigned int)(virt_to_bus(page_address(page))))
+#endif
+
+/* We just use virt_to_bus for pci_map_single on older kernels */
+#if LINUX_VERSION_CODE < 0x020400
+#define pci_map_single(hwdev, ptr, size, direction) virt_to_bus(ptr)
+#define pci_unmap_single(hwdev, dma_addr, size, direction)
+#endif
+
/* DRM template customization defaults
*/
#ifndef __HAVE_AGP
@@ -355,13 +366,14 @@
if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
/* Mapping helper macros */
-#define DRM_IOREMAP(map) \
- (map)->handle = DRM(ioremap)( (map)->offset, (map)->size )
+#define DRM_IOREMAP(map, dev) \
+ (map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
-#define DRM_IOREMAPFREE(map) \
+#define DRM_IOREMAPFREE(map, dev) \
do { \
if ( (map)->handle && (map)->size ) \
- DRM(ioremapfree)( (map)->handle, (map)->size ); \
+ DRM(ioremapfree)( (map)->handle, \
+ (map)->size, (dev) ); \
} while (0)
#define DRM_FIND_MAP(_map, _o) \
@@ -386,14 +398,6 @@
#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
-#if __REALLY_HAVE_AGP && defined(CONFIG_AGP_PTE_FIXUPS)
-# define IOREMAP_SAFE(_x, _y) DRM(agp_ioremap)(_x, _y)
-# define IOUNMAP_SAFE(_x) DRM(agp_iounmap)(_x)
-#else
-# define IOREMAP_SAFE(_x, _y) ioremap(_x, _y)
-# define IOUNMAP_SAFE(_x) iounmap(_x)
-#endif
-
#define DRM_GET_PRIV_SAREA(_dev, _ctx, _map) do { \
(_map) = (_dev)->context_sareas[_ctx]; \
} while(0)
@@ -624,6 +628,8 @@
int acquired;
unsigned long base;
int agp_mtrr;
+ int cant_use_aperture;
+ unsigned long page_mask;
} drm_agp_head_t;
#endif
@@ -632,9 +638,7 @@
void *virtual;
int pages;
struct page **pagelist;
-#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
dma_addr_t *busaddr;
-#endif
} drm_sg_mem_t;
typedef struct drm_sigdata {
@@ -725,8 +729,8 @@
#if __REALLY_HAVE_AGP
drm_agp_head_t *agp;
#endif
-#ifdef __alpha__
struct pci_dev *pdev;
+#ifdef __alpha__
#if LINUX_VERSION_CODE < 0x020403
struct pci_controler *hose;
#else
@@ -823,8 +827,10 @@
extern unsigned long DRM(alloc_pages)(int order, int area);
extern void DRM(free_pages)(unsigned long address, int order,
int area);
-extern void *DRM(ioremap)(unsigned long offset, unsigned long size);
-extern void DRM(ioremapfree)(void *pt, unsigned long size);
+extern void *DRM(ioremap)(unsigned long offset,
+ unsigned long size, drm_device_t *dev);
+extern void DRM(ioremapfree)(void *pt,
+ unsigned long size, drm_device_t *dev);
#if __REALLY_HAVE_AGP
extern agp_memory *DRM(alloc_agp)(int pages, u32 type);
@@ -1003,11 +1009,6 @@
extern int DRM(agp_free_memory)(agp_memory *handle);
extern int DRM(agp_bind_memory)(agp_memory *handle, off_t start);
extern int DRM(agp_unbind_memory)(agp_memory *handle);
-#ifdef CONFIG_AGP_PTE_FIXUPS
-extern void *DRM(agp_ioremap)(unsigned long offset,
- unsigned long size);
-extern void DRM(agp_iounmap)(void *handle);
-#endif
#endif
/* Stub support (drm_stub.h) */
diff -urN linux-davidm/drivers/char/drm/drm_agpsupport.h lia64/drivers/char/drm/drm_agpsupport.h
--- linux-davidm/drivers/char/drm/drm_agpsupport.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm/drm_agpsupport.h Mon Aug 20 18:09:30 2001
@@ -249,22 +249,6 @@
return 0;
}
-#ifdef CONFIG_AGP_PTE_FIXUPS
-void *DRM(agp_ioremap)(unsigned long offset, unsigned long size)
-{
- if(drm_agp->ioremap)
- return drm_agp->ioremap(offset, size);
-
- return NULL;
-}
-
-void DRM(agp_iounmap)(void *handle)
-{
- if(drm_agp->iounmap)
- drm_agp->iounmap(handle);
-}
-#endif /* CONFIG_AGP_PTE_FIXUPS */
-
drm_agp_head_t *DRM(agp_init)(void)
{
drm_agp_head_t *head = NULL;
@@ -333,6 +317,14 @@
default: head->chipset = "Unknown"; break;
}
+#if LINUX_VERSION_CODE <= 0x020408
+ head->cant_use_aperture = 0;
+ head->page_mask = ~(0xfff);
+#else
+ head->cant_use_aperture = head->agp_info.cant_use_aperture;
+ head->page_mask = head->agp_info.page_mask;
+#endif
+
DRM_INFO("AGP %d.%d on %s @ 0x%08lx %ZuMB\n",
head->agp_info.version.major,
head->agp_info.version.minor,
diff -urN linux-davidm/drivers/char/drm/drm_bufs.h lia64/drivers/char/drm/drm_bufs.h
--- linux-davidm/drivers/char/drm/drm_bufs.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm/drm_bufs.h Mon Aug 20 18:09:30 2001
@@ -124,7 +124,7 @@
MTRR_TYPE_WRCOMB, 1 );
}
#endif
- map->handle = DRM(ioremap)( map->offset, map->size );
+ map->handle = DRM(ioremap)( map->offset, map->size, dev );
break;
case _DRM_SHM:
@@ -249,7 +249,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
diff -urN linux-davidm/drivers/char/drm/drm_drv.h lia64/drivers/char/drm/drm_drv.h
--- linux-davidm/drivers/char/drm/drm_drv.h Wed Aug 8 09:42:14 2001
+++ lia64/drivers/char/drm/drm_drv.h Mon Aug 20 18:09:31 2001
@@ -438,7 +438,7 @@
DRM_DEBUG( "mtrr_del=%d\n", retcode );
}
#endif
- DRM(ioremapfree)( map->handle, map->size );
+ DRM(ioremapfree)( map->handle, map->size, dev );
break;
case _DRM_SHM:
vfree(map->handle);
diff -urN linux-davidm/drivers/char/drm/drm_ioctl.h lia64/drivers/char/drm/drm_ioctl.h
--- linux-davidm/drivers/char/drm/drm_ioctl.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm/drm_ioctl.h Mon Aug 20 18:09:31 2001
@@ -98,7 +98,6 @@
}
sprintf(dev->devname, "%s@%s", dev->name, dev->unique);
-#ifdef __alpha__
do {
struct pci_dev *pci_dev;
int b, d, f;
@@ -116,10 +115,11 @@
pci_dev = pci_find_slot(b, PCI_DEVFN(d,f));
if (pci_dev) {
dev->pdev = pci_dev;
- dev->hose = pci_dev->sysdata;
+#ifdef __alpha__
+ dev->hose = pci_dev->sysdata;
+#endif
}
} while(0);
-#endif
return 0;
}
diff -urN linux-davidm/drivers/char/drm/drm_memory.h lia64/drivers/char/drm/drm_memory.h
--- linux-davidm/drivers/char/drm/drm_memory.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm/drm_memory.h Mon Aug 20 18:09:31 2001
@@ -306,7 +306,7 @@
}
}
-void *DRM(ioremap)(unsigned long offset, unsigned long size)
+void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
{
void *pt;
@@ -316,12 +316,52 @@
return NULL;
}
- if (!(pt = IOREMAP_SAFE(offset, size))) {
+ if(dev->agp->cant_use_aperture == 0) {
+ goto standard_ioremap;
+ } else {
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list, *head;
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *)list;
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
+ break;
+ }
+
+ if(agpmem == NULL)
+ goto standard_ioremap;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
+ } else {
+ goto standard_ioremap;
+ }
+ }
+
+standard_ioremap:
+ if (!(pt = ioremap(offset, size))) {
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
return NULL;
}
+
+ioremap_success:
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count;
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -329,7 +369,7 @@
return pt;
}
-void DRM(ioremapfree)(void *pt, unsigned long size)
+void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev)
{
int alloc_count;
int free_count;
@@ -337,8 +377,8 @@
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
- else
- IOUNMAP_SAFE(pt);
+ else if(dev->agp->cant_use_aperture == 0)
+ iounmap(pt);
spin_lock(&DRM(mem_lock));
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_freed += size;
diff -urN linux-davidm/drivers/char/drm/drm_vm.h lia64/drivers/char/drm/drm_vm.h
--- linux-davidm/drivers/char/drm/drm_vm.h Mon Aug 20 18:58:54 2001
+++ lia64/drivers/char/drm/drm_vm.h Mon Aug 20 18:09:33 2001
@@ -68,55 +68,53 @@
#endif
{
#if (defined(__alpha__) || defined(__ia64__)) && __REALLY_HAVE_AGP
- drm_file_t *priv = vma->vm_file->private_data;
- drm_device_t *dev = priv->dev;
- drm_map_t *map = NULL;
- drm_map_list_t *r_list;
- struct list_head *list;
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list;
- /*
+ /*
* Find the right map
*/
- list_for_each(list, &dev->maplist->head) {
- r_list = (drm_map_list_t *)list;
- map = r_list->map;
- if (!map) continue;
- if (map->offset == VM_OFFSET(vma)) break;
- }
- if (map && map->type == _DRM_AGP) {
- unsigned long offset = address - vma->vm_start;
- unsigned long baddr = VM_OFFSET(vma) + offset, paddr;
- struct drm_agp_mem *agpmem;
- struct page *page;
+ if(!dev->agp->cant_use_aperture) goto vm_nopage_error;
-#if defined(__alpha__)
- /*
- * Make it a bus-relative address
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *)list;
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset == VM_OFFSET(vma)) break;
+ }
+
+ if (map && map->type == _DRM_AGP) {
+ unsigned long offset = address - vma->vm_start;
+ unsigned long baddr = VM_OFFSET(vma) + offset, paddr;
+ struct drm_agp_mem *agpmem;
+ struct page *page;
+
+#if __alpha__
+ /*
+ * Adjust to a bus-relative address
*/
- baddr -= dev->hose->mem_space->start;
+ baddr -= dev->hose->mem_space->start;
#endif
- /*
+ /*
* It's AGP memory - find the real physical page to map
*/
- for(agpmem = dev->agp->memory; agpmem; agpmem = agpmem->next) {
- if (agpmem->bound <= baddr &&
- agpmem->bound + agpmem->pages * PAGE_SIZE > baddr)
- break;
- }
-
- if (!agpmem) {
- /*
- * Oops - no memory found
- */
- return NOPAGE_SIGBUS; /* couldn't find it */
- }
+ for(agpmem = dev->agp->memory; agpmem; agpmem = agpmem->next) {
+ if (agpmem->bound <= baddr &&
+ agpmem->bound + agpmem->pages * PAGE_SIZE > baddr)
+ break;
+ }
- /*
+ if (!agpmem) goto vm_nopage_error;
+
+ /*
* Get the page, inc the use count, and return it
*/
- offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
+ offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
/*
* This is bad. What we really want to do here is unmask
@@ -130,12 +128,21 @@
paddr = (agpmem->memory->memory[offset] & 0xffffff) << 12;
#endif
- page = virt_to_page(__va(paddr));
+ page = virt_to_page(__va(paddr));
+ get_page(page);
- get_page(page);
- return page;
- }
+ DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx\n",
+ baddr, __va(agpmem->memory->memory[offset]), offset);
+
+#if LINUX_VERSION_CODE < 0x020317
+ return page_address(page);
+#else
+ return page;
#endif
+ }
+vm_nopage_error:
+#endif /* __REALLY_HAVE_AGP */
+
return NOPAGE_SIGBUS; /* Disallow mremap */
}
@@ -260,7 +267,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
@@ -289,24 +296,27 @@
drm_file_t *priv = vma->vm_file->private_data;
drm_device_t *dev = priv->dev;
drm_device_dma_t *dma = dev->dma;
- unsigned long physical;
unsigned long offset;
- unsigned long page;
+ unsigned long page_nr;
+ struct page *page;
if (!dma) return NOPAGE_SIGBUS; /* Error */
if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */
if (!dma->pagelist) return NOPAGE_OOM ; /* Nothing allocated */
offset = address - vma->vm_start; /* vm_[pg]off[set] should be 0 */
- page = offset >> PAGE_SHIFT;
- physical = dma->pagelist[page] + (offset & (~PAGE_MASK));
- atomic_inc(&virt_to_page(physical)->count); /* Dec. by kernel */
+ page_nr = offset >> PAGE_SHIFT;
+ page = virt_to_page((dma->pagelist[page_nr] +
+ (offset & (~PAGE_MASK))));
+
+ get_page(page);
- DRM_DEBUG("0x%08lx (page %lu) => 0x%08lx\n", address, page, physical);
+ DRM_DEBUG("0x%08lx (page %lu) => 0x%08x\n", address, page_nr,
+ page_to_bus(page));
#if LINUX_VERSION_CODE < 0x020317
- return physical;
+ return page_address(page);
#else
- return virt_to_page(physical);
+ return page;
#endif
}
@@ -343,10 +353,10 @@
map_offset = map->offset - dev->sg->handle;
page_offset = (offset >> PAGE_SHIFT) + (map_offset >> PAGE_SHIFT);
page = entry->pagelist[page_offset];
- atomic_inc(&page->count); /* Dec. by kernel */
+ get_page(page);
#if LINUX_VERSION_CODE < 0x020317
- return (unsigned long)virt_to_phys(page->virtual);
+ return page_address(page);
#else
return page;
#endif
@@ -509,7 +519,7 @@
/*
* On Alpha and ia64 we can't talk to bus dma address from
* the CPU, so for memory of type DRM_AGP, we'll deal with
- * sorting out the real physical pages and mappings in
+ * sorting out the real physical pages and mappings
* in nopage()
*/
vma->vm_ops = &DRM(vm_ops);
@@ -520,6 +530,7 @@
break;
#endif
+ /* fall through to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
case _DRM_REGISTERS:
if (VM_OFFSET(vma) >= __pa(high_memory)) {
diff -urN linux-davidm/drivers/char/drm/i810_dma.c lia64/drivers/char/drm/i810_dma.c
--- linux-davidm/drivers/char/drm/i810_dma.c Wed Aug 8 09:42:15 2001
+++ lia64/drivers/char/drm/i810_dma.c Mon Aug 20 18:09:34 2001
@@ -315,7 +315,7 @@
if(dev_priv->ring.virtual_start) {
DRM(ioremapfree)((void *) dev_priv->ring.virtual_start,
- dev_priv->ring.Size);
+ dev_priv->ring.Size, dev);
}
if(dev_priv->hw_status_page != 0UL) {
i810_free_page(dev, dev_priv->hw_status_page);
@@ -329,7 +329,8 @@
for (i = 0; i < dma->buf_count; i++) {
drm_buf_t *buf = dma->buflist[ i ];
drm_i810_buf_priv_t *buf_priv = buf->dev_private;
- DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total);
+ DRM(ioremapfree)(buf_priv->kernel_virtual,
+ buf->total, dev);
}
}
return 0;
@@ -402,7 +403,7 @@
*buf_priv->in_use = I810_BUF_FREE;
buf_priv->kernel_virtual = DRM(ioremap)(buf->bus_address,
- buf->total);
+ buf->total, dev);
}
return 0;
}
@@ -458,7 +459,7 @@
dev_priv->ring.virtual_start = DRM(ioremap)(dev->agp->base +
init->ring_start,
- init->ring_size);
+ init->ring_size, dev);
if (dev_priv->ring.virtual_start == NULL) {
dev->dev_private = (void *) dev_priv;
diff -urN linux-davidm/drivers/char/drm/mga_dma.c lia64/drivers/char/drm/mga_dma.c
--- linux-davidm/drivers/char/drm/mga_dma.c Wed Aug 8 09:42:15 2001
+++ lia64/drivers/char/drm/mga_dma.c Mon Aug 20 18:09:34 2001
@@ -557,9 +557,9 @@
(drm_mga_sarea_t *)((u8 *)dev_priv->sarea->handle +
init->sarea_priv_offset);
- DRM_IOREMAP( dev_priv->warp );
- DRM_IOREMAP( dev_priv->primary );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->warp, dev );
+ DRM_IOREMAP( dev_priv->primary, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->warp->handle ||
!dev_priv->primary->handle ||
@@ -647,9 +647,9 @@
if ( dev->dev_private ) {
drm_mga_private_t *dev_priv = dev->dev_private;
- DRM_IOREMAPFREE( dev_priv->warp );
- DRM_IOREMAPFREE( dev_priv->primary );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->warp, dev );
+ DRM_IOREMAPFREE( dev_priv->primary, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
if ( dev_priv->head != NULL ) {
mga_freelist_cleanup( dev );
diff -urN linux-davidm/drivers/char/drm/r128_cce.c lia64/drivers/char/drm/r128_cce.c
--- linux-davidm/drivers/char/drm/r128_cce.c Mon Aug 20 18:58:54 2001
+++ lia64/drivers/char/drm/r128_cce.c Mon Aug 20 18:09:34 2001
@@ -593,9 +593,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cce_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cce_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cce_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -667,9 +667,9 @@
drm_r128_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cce_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cce_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
diff -urN linux-davidm/drivers/char/drm/radeon_cp.c lia64/drivers/char/drm/radeon_cp.c
--- linux-davidm/drivers/char/drm/radeon_cp.c Mon Aug 20 18:58:54 2001
+++ lia64/drivers/char/drm/radeon_cp.c Mon Aug 20 18:09:34 2001
@@ -641,6 +641,7 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+
#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
entry->busaddr[page_ofs]);
@@ -865,9 +866,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cp_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cp_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cp_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -1012,9 +1013,9 @@
drm_radeon_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cp_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cp_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
diff -urN linux-davidm/drivers/char/drm/tdfx_drv.c lia64/drivers/char/drm/tdfx_drv.c
--- linux-davidm/drivers/char/drm/tdfx_drv.c Wed Aug 8 09:42:15 2001
+++ lia64/drivers/char/drm/tdfx_drv.c Mon Aug 20 18:09:34 2001
@@ -44,13 +44,30 @@
#define DRIVER_MINOR 0
#define DRIVER_PATCHLEVEL 0
+#ifndef PCI_VENDOR_ID_3DFX
+#define PCI_VENDOR_ID_3DFX 0x121A
+#endif
#ifndef PCI_DEVICE_ID_3DFX_VOODOO5
#define PCI_DEVICE_ID_3DFX_VOODOO5 0x0009
#endif
+#ifndef PCI_DEVICE_ID_3DFX_VOODOO4
+#define PCI_DEVICE_ID_3DFX_VOODOO4 0x0007
+#endif
+#ifndef PCI_DEVICE_ID_3DFX_VOODOO3_3000 /* Voodoo3 3000 */
+#define PCI_DEVICE_ID_3DFX_VOODOO3_3000 0x0005
+#endif
+#ifndef PCI_DEVICE_ID_3DFX_VOODOO3_2000 /* Voodoo3 2000 */
+#define PCI_DEVICE_ID_3DFX_VOODOO3_2000 0x0004
+#endif
+#ifndef PCI_DEVICE_ID_3DFX_BANSHEE
+#define PCI_DEVICE_ID_3DFX_BANSHEE 0x0003
+#endif
static drm_pci_list_t DRM(idlist)[] = {
{ PCI_VENDOR_ID_3DFX, PCI_DEVICE_ID_3DFX_BANSHEE },
- { PCI_VENDOR_ID_3DFX, PCI_DEVICE_ID_3DFX_VOODOO3 },
+ { PCI_VENDOR_ID_3DFX, PCI_DEVICE_ID_3DFX_VOODOO3_2000 },
+ { PCI_VENDOR_ID_3DFX, PCI_DEVICE_ID_3DFX_VOODOO3_3000 },
+ { PCI_VENDOR_ID_3DFX, PCI_DEVICE_ID_3DFX_VOODOO4 },
{ PCI_VENDOR_ID_3DFX, PCI_DEVICE_ID_3DFX_VOODOO5 },
{ 0, 0 }
};
diff -urN linux-davidm/drivers/char/drm-4.0/agpsupport.c lia64/drivers/char/drm-4.0/agpsupport.c
--- linux-davidm/drivers/char/drm-4.0/agpsupport.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/agpsupport.c Mon Aug 20 18:09:34 2001
@@ -193,22 +193,6 @@
return drm_unbind_agp(entry->memory);
}
-#ifdef CONFIG_AGP_PTE_FIXUPS
-void *drm_agp_ioremap(unsigned long offset, unsigned long size)
-{
- if(drm_agp->ioremap)
- return drm_agp->ioremap(offset, size);
-
- return NULL;
-}
-
-void drm_agp_iounmap(void *handle)
-{
- if(drm_agp->iounmap)
- drm_agp->iounmap(handle);
-}
-#endif /* CONFIG_AGP_PTE_FIXUPS */
-
int drm_agp_bind(struct inode *inode, struct file *filp, unsigned int cmd,
unsigned long arg)
{
@@ -310,11 +294,19 @@
case ALI_M1651: head->chipset = "ALi M1651"; break;
case SVWRKS_GENERIC: head->chipset = "Serverworks Generic";
break;
- case SVWRKS_HE: head->chipset = "Serverworks HE";
- case SVWRKS_LE: head->chipset = "Serverworks LE";
+ case SVWRKS_HE: head->chipset = "Serverworks HE"; break;
+ case SVWRKS_LE: head->chipset = "Serverworks LE"; break;
default: head->chipset = "Unknown"; break;
}
+#if LINUX_VERSION_CODE <= 0x020408
+ head->cant_use_aperture = 0;
+ head->page_mask = ~(0xfff);
+#else
+ head->cant_use_aperture = head->agp_info.cant_use_aperture;
+ head->page_mask = head->agp_info.page_mask;
+#endif
+
DRM_INFO("AGP %d.%d on %s @ 0x%08lx %ZuMB\n",
head->agp_info.version.major,
head->agp_info.version.minor,
diff -urN linux-davidm/drivers/char/drm-4.0/bufs.c lia64/drivers/char/drm-4.0/bufs.c
--- linux-davidm/drivers/char/drm-4.0/bufs.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/bufs.c Mon Aug 20 18:09:34 2001
@@ -87,7 +87,7 @@
MTRR_TYPE_WRCOMB, 1);
}
#endif
- map->handle = drm_ioremap(map->offset, map->size);
+ map->handle = drm_ioremap(map->offset, map->size, dev);
break;
diff -urN linux-davidm/drivers/char/drm-4.0/drm.h lia64/drivers/char/drm-4.0/drm.h
--- linux-davidm/drivers/char/drm-4.0/drm.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/drm.h Mon Aug 20 18:09:34 2001
@@ -84,7 +84,7 @@
#include "i810_drm.h"
#include "r128_drm.h"
#include "radeon_drm.h"
-#ifdef CONFIG_DRM_SIS
+#ifdef CONFIG_DRM40_SIS
#include "sis_drm.h"
#endif
@@ -399,7 +399,7 @@
#define DRM_IOCTL_RADEON_STIPPLE DRM_IOW( 0x4c, drm_radeon_stipple_t)
#define DRM_IOCTL_RADEON_INDIRECT DRM_IOWR(0x4d, drm_radeon_indirect_t)
-#ifdef CONFIG_DRM_SIS
+#ifdef CONFIG_DRM40_SIS
/* SiS specific ioctls */
#define SIS_IOCTL_FB_ALLOC DRM_IOWR(0x44, drm_sis_mem_t)
#define SIS_IOCTL_FB_FREE DRM_IOW( 0x45, drm_sis_mem_t)
diff -urN linux-davidm/drivers/char/drm-4.0/drmP.h lia64/drivers/char/drm-4.0/drmP.h
--- linux-davidm/drivers/char/drm-4.0/drmP.h Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/drmP.h Mon Aug 20 18:20:22 2001
@@ -294,14 +294,6 @@
#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
-#ifdef CONFIG_AGP_PTE_FIXUPS
-# define IOREMAP_SAFE(_x, _y) drm_agp_ioremap(_x, _y)
-# define IOUNMAP_SAFE(_x) drm_agp_iounmap(_x)
-#else
-# define IOREMAP_SAFE(_x, _y) ioremap(_x, _y)
-# define IOUNMAP_SAFE(_x) iounmap(_x)
-#endif
-
typedef int drm_ioctl_t(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
@@ -518,6 +510,8 @@
int acquired;
unsigned long base;
int agp_mtrr;
+ int cant_use_aperture;
+ unsigned long page_mask;
} drm_agp_head_t;
#endif
@@ -686,8 +680,10 @@
extern unsigned long drm_alloc_pages(int order, int area);
extern void drm_free_pages(unsigned long address, int order,
int area);
-extern void *drm_ioremap(unsigned long offset, unsigned long size);
-extern void drm_ioremapfree(void *pt, unsigned long size);
+extern void *drm_ioremap(unsigned long offset, unsigned long size,
+ drm_device_t *dev);
+extern void drm_ioremapfree(void *pt, unsigned long size,
+ drm_device_t *dev);
#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
extern agp_memory *drm_alloc_agp(int pages, u32 type);
@@ -838,11 +834,6 @@
extern int drm_agp_free_memory(agp_memory *handle);
extern int drm_agp_bind_memory(agp_memory *handle, off_t start);
extern int drm_agp_unbind_memory(agp_memory *handle);
-#ifdef CONFIG_AGP_PTE_FIXUPS
-extern void *drm_agp_ioremap(unsigned long offset,
- unsigned long size);
-extern void drm_agp_iounmap(void *handle);
-#endif
#endif
#endif
#endif
diff -urN linux-davidm/drivers/char/drm-4.0/ffb_drv.c lia64/drivers/char/drm-4.0/ffb_drv.c
--- linux-davidm/drivers/char/drm-4.0/ffb_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/ffb_drv.c Mon Aug 20 18:09:34 2001
@@ -157,7 +157,7 @@
switch (map->type) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
diff -urN linux-davidm/drivers/char/drm-4.0/gamma_drv.c lia64/drivers/char/drm-4.0/gamma_drv.c
--- linux-davidm/drivers/char/drm-4.0/gamma_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/gamma_drv.c Mon Aug 20 18:09:34 2001
@@ -257,7 +257,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
drm_free_pages((unsigned long)map->handle,
diff -urN linux-davidm/drivers/char/drm-4.0/i810_dma.c lia64/drivers/char/drm-4.0/i810_dma.c
--- linux-davidm/drivers/char/drm-4.0/i810_dma.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/i810_dma.c Mon Aug 20 18:09:34 2001
@@ -309,7 +309,7 @@
if(dev_priv->ring.virtual_start) {
drm_ioremapfree((void *) dev_priv->ring.virtual_start,
- dev_priv->ring.Size);
+ dev_priv->ring.Size, dev);
}
if(dev_priv->hw_status_page != 0UL) {
i810_free_page(dev, dev_priv->hw_status_page);
@@ -323,7 +323,8 @@
for (i = 0; i < dma->buf_count; i++) {
drm_buf_t *buf = dma->buflist[ i ];
drm_i810_buf_priv_t *buf_priv = buf->dev_private;
- drm_ioremapfree(buf_priv->kernel_virtual, buf->total);
+ drm_ioremapfree(buf_priv->kernel_virtual,
+ buf->total, dev);
}
}
return 0;
@@ -397,7 +398,7 @@
*buf_priv->in_use = I810_BUF_FREE;
buf_priv->kernel_virtual = drm_ioremap(buf->bus_address,
- buf->total);
+ buf->total, dev);
}
return 0;
}
@@ -434,7 +435,7 @@
dev_priv->ring.virtual_start = drm_ioremap(dev->agp->base +
init->ring_start,
- init->ring_size);
+ init->ring_size, dev);
dev_priv->ring.tail_mask = dev_priv->ring.Size - 1;
diff -urN linux-davidm/drivers/char/drm-4.0/i810_drv.c lia64/drivers/char/drm-4.0/i810_drv.c
--- linux-davidm/drivers/char/drm-4.0/i810_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/i810_drv.c Mon Aug 20 18:09:34 2001
@@ -285,7 +285,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
drm_free_pages((unsigned long)map->handle,
diff -urN linux-davidm/drivers/char/drm-4.0/memory.c lia64/drivers/char/drm-4.0/memory.c
--- linux-davidm/drivers/char/drm-4.0/memory.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/memory.c Mon Aug 20 18:09:34 2001
@@ -296,7 +296,7 @@
}
}
-void *drm_ioremap(unsigned long offset, unsigned long size)
+void *drm_ioremap(unsigned long offset, unsigned long size, drm_device_t *dev)
{
void *pt;
@@ -306,12 +306,50 @@
return NULL;
}
- if (!(pt = IOREMAP_SAFE(offset, size))) {
+ if(dev->agp->cant_use_aperture == 0) {
+ goto standard_ioremap;
+ } else {
+ drm_map_t *map = NULL;
+ int i;
+
+ for(i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
+ break;
+ }
+
+ if(agpmem == NULL)
+ goto standard_ioremap;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
+ } else {
+ goto standard_ioremap;
+ }
+ }
+
+standard_ioremap:
+ if (!(pt = ioremap(offset, size))) {
spin_lock(&drm_mem_lock);
++drm_mem_stats[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&drm_mem_lock);
return NULL;
}
+
+ioremap_success:
spin_lock(&drm_mem_lock);
++drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count;
drm_mem_stats[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -319,7 +357,7 @@
return pt;
}
-void drm_ioremapfree(void *pt, unsigned long size)
+void drm_ioremapfree(void *pt, unsigned long size, drm_device_t *dev)
{
int alloc_count;
int free_count;
@@ -327,8 +365,8 @@
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
- else
- IOUNMAP_SAFE(pt);
+ else if(dev->agp->cant_use_aperture == 0)
+ iounmap(pt);
spin_lock(&drm_mem_lock);
drm_mem_stats[DRM_MEM_MAPPINGS].bytes_freed += size;
diff -urN linux-davidm/drivers/char/drm-4.0/mga_dma.c lia64/drivers/char/drm-4.0/mga_dma.c
--- linux-davidm/drivers/char/drm-4.0/mga_dma.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/mga_dma.c Mon Aug 20 18:09:34 2001
@@ -308,7 +308,7 @@
temp = ((temp + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
dev_priv->ioremap = drm_ioremap(dev->agp->base + offset,
- temp);
+ temp, dev);
if(dev_priv->ioremap == NULL) {
DRM_ERROR("Ioremap failed\n");
return -ENOMEM;
@@ -635,7 +635,7 @@
dev_priv->primary_size +
PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE;
- drm_ioremapfree((void *) dev_priv->ioremap, temp);
+ drm_ioremapfree((void *) dev_priv->ioremap, temp, dev);
}
if(dev_priv->status_page != NULL) {
iounmap(dev_priv->status_page);
@@ -741,9 +741,18 @@
return -ENOMEM;
}
- /* Write status page when secend or softrap occurs */
+ /* Write status page when secend or softrap occurs
+ *
+ * Disable this on ia64 on the off chance that real status page will be
+ * above 4GB.
+ */
+#if defined(__ia64__)
MGA_WRITE(MGAREG_PRIMPTR,
- virt_to_bus((void *)dev_priv->real_status_page) | 0);
+ virt_to_bus((void *)dev_priv->real_status_page));
+#else
+ MGA_WRITE(MGAREG_PRIMPTR,
+ virt_to_bus((void *)dev_priv->real_status_page) | 0x00000003);
+#endif
/* Private is now filled in, initialize the hardware */
{
diff -urN linux-davidm/drivers/char/drm-4.0/mga_drv.c lia64/drivers/char/drm-4.0/mga_drv.c
--- linux-davidm/drivers/char/drm-4.0/mga_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/mga_drv.c Mon Aug 20 18:09:34 2001
@@ -285,7 +285,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
drm_free_pages((unsigned long)map->handle,
diff -urN linux-davidm/drivers/char/drm-4.0/r128_cce.c lia64/drivers/char/drm-4.0/r128_cce.c
--- linux-davidm/drivers/char/drm-4.0/r128_cce.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/r128_cce.c Mon Aug 20 18:09:34 2001
@@ -86,12 +86,13 @@
};
-#define DO_REMAP(_m) (_m)->handle = drm_ioremap((_m)->offset, (_m)->size)
+#define DO_REMAP(_m, _d) (_m)->handle = drm_ioremap((_m)->offset, \
+ (_m)->size, (_d))
-#define DO_REMAPFREE(_m) \
+#define DO_REMAPFREE(_m, _d) \
do { \
if ((_m)->handle && (_m)->size) \
- drm_ioremapfree((_m)->handle, (_m)->size); \
+ drm_ioremapfree((_m)->handle, (_m)->size, (_d)); \
} while (0)
#define DO_FIND_MAP(_m, _o) \
@@ -519,12 +520,12 @@
(drm_r128_sarea_t *)((u8 *)dev_priv->sarea->handle +
init->sarea_priv_offset);
- DO_REMAP( dev_priv->cce_ring );
- DO_REMAP( dev_priv->ring_rptr );
- DO_REMAP( dev_priv->buffers );
+ DO_REMAP( dev_priv->cce_ring, dev );
+ DO_REMAP( dev_priv->ring_rptr, dev );
+ DO_REMAP( dev_priv->buffers, dev );
#if 0
if ( !dev_priv->is_pci ) {
- DO_REMAP( dev_priv->agp_textures );
+ DO_REMAP( dev_priv->agp_textures, dev );
}
#endif
@@ -559,12 +560,12 @@
if ( dev->dev_private ) {
drm_r128_private_t *dev_priv = dev->dev_private;
- DO_REMAPFREE( dev_priv->cce_ring );
- DO_REMAPFREE( dev_priv->ring_rptr );
- DO_REMAPFREE( dev_priv->buffers );
+ DO_REMAPFREE( dev_priv->cce_ring, dev );
+ DO_REMAPFREE( dev_priv->ring_rptr, dev );
+ DO_REMAPFREE( dev_priv->buffers, dev );
#if 0
if ( !dev_priv->is_pci ) {
- DO_REMAPFREE( dev_priv->agp_textures );
+ DO_REMAPFREE( dev_priv->agp_textures, dev );
}
#endif
diff -urN linux-davidm/drivers/char/drm-4.0/r128_drv.c lia64/drivers/char/drm-4.0/r128_drv.c
--- linux-davidm/drivers/char/drm-4.0/r128_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/r128_drv.c Mon Aug 20 18:09:34 2001
@@ -295,7 +295,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
drm_free_pages((unsigned long)map->handle,
diff -urN linux-davidm/drivers/char/drm-4.0/radeon_cp.c lia64/drivers/char/drm-4.0/radeon_cp.c
--- linux-davidm/drivers/char/drm-4.0/radeon_cp.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/radeon_cp.c Mon Aug 20 18:09:34 2001
@@ -300,12 +300,13 @@
};
-#define DO_IOREMAP(_m) (_m)->handle = drm_ioremap((_m)->offset, (_m)->size)
+#define DO_IOREMAP(_m, _d) (_m)->handle = drm_ioremap((_m)->offset, \
+ (_m)->size, (_d))
-#define DO_IOREMAPFREE(_m) \
+#define DO_IOREMAPFREE(_m, _d) \
do { \
if ((_m)->handle && (_m)->size) \
- drm_ioremapfree((_m)->handle, (_m)->size); \
+ drm_ioremapfree((_m)->handle, (_m)->size, (_d));\
} while (0)
#define DO_FIND_MAP(_m, _o) \
@@ -782,12 +783,12 @@
(drm_radeon_sarea_t *)((u8 *)dev_priv->sarea->handle +
init->sarea_priv_offset);
- DO_IOREMAP( dev_priv->cp_ring );
- DO_IOREMAP( dev_priv->ring_rptr );
- DO_IOREMAP( dev_priv->buffers );
+ DO_IOREMAP( dev_priv->cp_ring, dev );
+ DO_IOREMAP( dev_priv->ring_rptr, dev );
+ DO_IOREMAP( dev_priv->buffers, dev );
#if 0
if ( !dev_priv->is_pci ) {
- DO_IOREMAP( dev_priv->agp_textures );
+ DO_IOREMAP( dev_priv->agp_textures, dev );
}
#endif
@@ -853,12 +854,12 @@
if ( dev->dev_private ) {
drm_radeon_private_t *dev_priv = dev->dev_private;
- DO_IOREMAPFREE( dev_priv->cp_ring );
- DO_IOREMAPFREE( dev_priv->ring_rptr );
- DO_IOREMAPFREE( dev_priv->buffers );
+ DO_IOREMAPFREE( dev_priv->cp_ring, dev );
+ DO_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DO_IOREMAPFREE( dev_priv->buffers, dev );
#if 0
if ( !dev_priv->is_pci ) {
- DO_IOREMAPFREE( dev_priv->agp_textures );
+ DO_IOREMAPFREE( dev_priv->agp_textures, dev );
}
#endif
diff -urN linux-davidm/drivers/char/drm-4.0/radeon_drv.c lia64/drivers/char/drm-4.0/radeon_drv.c
--- linux-davidm/drivers/char/drm-4.0/radeon_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/radeon_drv.c Mon Aug 20 18:09:34 2001
@@ -293,7 +293,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
drm_free_pages((unsigned long)map->handle,
diff -urN linux-davidm/drivers/char/drm-4.0/tdfx_drv.c lia64/drivers/char/drm-4.0/tdfx_drv.c
--- linux-davidm/drivers/char/drm-4.0/tdfx_drv.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/tdfx_drv.c Mon Aug 20 18:09:34 2001
@@ -263,7 +263,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- drm_ioremapfree(map->handle, map->size);
+ drm_ioremapfree(map->handle, map->size, dev);
break;
case _DRM_SHM:
drm_free_pages((unsigned long)map->handle,
diff -urN linux-davidm/drivers/char/drm-4.0/vm.c lia64/drivers/char/drm-4.0/vm.c
--- linux-davidm/drivers/char/drm-4.0/vm.c Mon Aug 20 17:57:14 2001
+++ lia64/drivers/char/drm-4.0/vm.c Mon Aug 20 18:09:34 2001
@@ -77,6 +77,9 @@
/*
* Find the right map
*/
+
+ if(!dev->agp->cant_use_aperture) goto vm_nopage_error;
+
for (i = 0; i < dev->map_count; i++) {
map = dev->maplist[i];
if (!map) continue;
@@ -98,12 +101,7 @@
break;
}
- if (!agpmem) {
- /*
- * Oops - no memory found
- */
- return NOPAGE_SIGBUS; /* couldn't find it */
- }
+ if (!agpmem) goto vm_nopage_error;
/*
* Get the page, inc the use count, and return it
@@ -120,8 +118,14 @@
page = virt_to_page(__va(paddr));
get_page(page);
- return page;
+
+#if LINUX_VERSION_CODE < 0x020317
+ return page_address(page);
+#else
+ return page;
+#endif
}
+vm_nopage_error:
#endif
return NOPAGE_SIGBUS; /* Disallow mremap */
}
diff -urN linux-davidm/drivers/pci/pci.c lia64/drivers/pci/pci.c
--- linux-davidm/drivers/pci/pci.c Mon Aug 20 18:59:04 2001
+++ lia64/drivers/pci/pci.c Mon Aug 20 18:48:29 2001
@@ -1540,10 +1540,10 @@
switch (rqst) {
case PM_SAVE_STATE:
- error = pci_pm_save_state((u32)data);
+ error = pci_pm_save_state((unsigned long)data);
break;
case PM_SUSPEND:
- error = pci_pm_suspend((u32)data);
+ error = pci_pm_suspend((unsigned long)data);
break;
case PM_RESUME:
error = pci_pm_resume();
@@ -1860,9 +1860,9 @@
int map, block;
if ((page = pool_find_page (pool, dma)) == 0) {
- printk (KERN_ERR "pci_pool_free %s/%s, %p/%x (bad dma)\n",
+ printk (KERN_ERR "pci_pool_free %s/%s, %p/%lx (bad dma)\n",
pool->dev ? pool->dev->slot_name : NULL,
- pool->name, vaddr, dma);
+ pool->name, vaddr, (unsigned long) dma);
return;
}
#ifdef CONFIG_PCIPOOL_DEBUG
diff -urN linux-davidm/drivers/scsi/qla1280.c lia64/drivers/scsi/qla1280.c
--- linux-davidm/drivers/scsi/qla1280.c Mon Aug 20 17:57:15 2001
+++ lia64/drivers/scsi/qla1280.c Mon Aug 20 18:49:25 2001
@@ -598,9 +598,9 @@
size = sprintf(PROC_BUF, "SCSI Host Adapter Information: %s\n", bdp->bdName);
len += size;
- size = sprintf(PROC_BUF, "Request Queue = 0x%x, Response Queue = 0x%x\n",
- ha->request_dma,
- ha->response_dma);
+ size = sprintf(PROC_BUF, "Request Queue = 0x%lx, Response Queue = 0x%lx\n",
+ (unsigned long) ha->request_dma,
+ (unsigned long) ha->response_dma);
len += size;
size = sprintf(PROC_BUF, "Request Queue count= 0x%x, Response Queue count= 0x%x\n",
REQUEST_ENTRY_CNT,
diff -urN linux-davidm/drivers/sound/cs4281/cs4281pm-24.c lia64/drivers/sound/cs4281/cs4281pm-24.c
--- linux-davidm/drivers/sound/cs4281/cs4281pm-24.c Mon Aug 20 17:57:15 2001
+++ lia64/drivers/sound/cs4281/cs4281pm-24.c Mon Aug 20 18:49:47 2001
@@ -38,8 +38,8 @@
#define CS4281_SUSPEND_TBL cs4281_suspend_tbl
#define CS4281_RESUME_TBL cs4281_resume_tbl
*/
-#define CS4281_SUSPEND_TBL cs4281_null
-#define CS4281_RESUME_TBL cs4281_null
+#define CS4281_SUSPEND_TBL (int (*) (struct pci_dev *, u32)) cs4281_null
+#define CS4281_RESUME_TBL (int (*) (struct pci_dev *)) cs4281_null
int cs4281_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
{
@@ -78,7 +78,7 @@
}
#else /* CS4281_PM */
-#define CS4281_SUSPEND_TBL cs4281_null
-#define CS4281_RESUME_TBL cs4281_null
+#define CS4281_SUSPEND_TBL (int (*) (struct pci_dev *, u32)) cs4281_null
+#define CS4281_RESUME_TBL (int (*) (struct pci_dev *)) cs4281_null
#endif /* CS4281_PM */
diff -urN linux-davidm/drivers/usb/usb-ohci.c lia64/drivers/usb/usb-ohci.c
--- linux-davidm/drivers/usb/usb-ohci.c Mon Aug 20 18:59:08 2001
+++ lia64/drivers/usb/usb-ohci.c Mon Aug 20 18:50:00 2001
@@ -2326,21 +2326,22 @@
{
ohci_t * ohci;
struct usb_bus * bus;
+ dma_addr_t bus_addr;
ohci = (ohci_t *) kmalloc (sizeof *ohci, GFP_KERNEL);
if (!ohci)
return NULL;
-
+
memset (ohci, 0, sizeof (ohci_t));
- ohci->hcca = pci_alloc_consistent (dev, sizeof *ohci->hcca,
- &ohci->hcca_dma);
+ ohci->hcca = pci_alloc_consistent (dev, sizeof *ohci->hcca, &bus_addr);
if (!ohci->hcca) {
kfree (ohci);
return NULL;
}
memset (ohci->hcca, 0, sizeof (struct ohci_hcca));
+ ohci->hcca_dma = bus_addr;
ohci->disabled = 1;
ohci->sleeping = 0;
ohci->irq = -1;
diff -urN linux-davidm/include/asm-ia64/io.h lia64/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Tue Jul 31 10:30:09 2001
+++ lia64/include/asm-ia64/io.h Mon Aug 20 18:50:11 2001
@@ -43,7 +43,7 @@
}
static inline void*
-phys_to_virt(unsigned long address)
+phys_to_virt (unsigned long address)
{
return (void *) (address + PAGE_OFFSET);
}
@@ -54,6 +54,7 @@
*/
#define bus_to_virt phys_to_virt
#define virt_to_bus virt_to_phys
+#define page_to_bus page_to_phys
# endif /* KERNEL */
diff -urN linux-davidm/include/asm-ia64/page.h lia64/include/asm-ia64/page.h
--- linux-davidm/include/asm-ia64/page.h Thu Apr 5 12:51:47 2001
+++ lia64/include/asm-ia64/page.h Mon Aug 20 18:50:53 2001
@@ -55,12 +55,15 @@
#ifdef CONFIG_IA64_GENERIC
# include <asm/machvec.h>
# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
+# define page_to_phys(page) XXX fix me
#elif defined (CONFIG_IA64_SGI_SN1)
# ifndef CONFIG_DISCONTIGMEM
# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
+# define page_to_phys(page) XXX fix me
# endif
#else
# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
+# define page_to_phys(page) ((page - mem_map) << PAGE_SHIFT)
#endif
#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
diff -urN linux-davidm/include/asm-ia64/sigcontext.h lia64/include/asm-ia64/sigcontext.h
--- linux-davidm/include/asm-ia64/sigcontext.h Tue Jul 31 10:30:09 2001
+++ lia64/include/asm-ia64/sigcontext.h Mon Aug 20 18:51:01 2001
@@ -2,13 +2,13 @@
#define _ASM_IA64_SIGCONTEXT_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/fpu.h>
-#define IA64_SC_FLAG_ONSTACK_BIT 1 /* is handler running on signal stack? */
+#define IA64_SC_FLAG_ONSTACK_BIT 0 /* is handler running on signal stack? */
#define IA64_SC_FLAG_IN_SYSCALL_BIT 1 /* did signal interrupt a syscall? */
#define IA64_SC_FLAG_FPH_VALID_BIT 2 /* is state in f[32]-f[127] valid? */
diff -urN linux-davidm/include/asm-ia64/user.h lia64/include/asm-ia64/user.h
--- linux-davidm/include/asm-ia64/user.h Sun Feb 6 18:42:40 2000
+++ lia64/include/asm-ia64/user.h Mon Aug 20 18:51:10 2001
@@ -24,11 +24,12 @@
* current->start_stack, so we round each of these in order to be able
* to write an integer number of pages.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/ptrace.h>
+#include <linux/types.h>
#include <asm/page.h>
diff -urN linux-davidm/include/linux/agp_backend.h lia64/include/linux/agp_backend.h
--- linux-davidm/include/linux/agp_backend.h Mon Aug 20 17:57:15 2001
+++ lia64/include/linux/agp_backend.h Mon Aug 20 17:31:11 2001
@@ -88,6 +88,8 @@
size_t aper_size;
int max_memory; /* In pages */
int current_memory;
+ int cant_use_aperture;
+ unsigned long page_mask;
} agp_kern_info;
/*
@@ -110,6 +112,7 @@
size_t page_count;
int num_scratch_pages;
unsigned long *memory;
+ void *vmptr;
off_t pg_start;
u32 type;
u32 physical;
@@ -240,33 +243,6 @@
*
*/
-#ifdef CONFIG_AGP_PTE_FIXUPS
-extern void *agp_ioremap(unsigned long offset, unsigned long size);
-
-/*
- * agp_ioremap:
- *
- * This function performs a standard ioremap unless offset is
- * in the AGP aperture. In this case, a pointer to a portion of
- * the vmalloc area is returned. This remapped chunk of the
- * vmalloc area is added to the fixup list and consequently kept
- * in agreement with GART state. Make this call to ensure that
- * ioremaps in the AGP aperture function correctly on chipsets
- * needing PTE fixups.
- */
-
-extern void agp_iounmap(void *handle);
-
-/*
- * agp_iounmap:
- *
- * Perform a standard iounmap unless handle is in the vmalloc area.
- * In this case, free this portion of vmalloc and remove this map
- * from the fixup list.
- */
-
-#endif
-
typedef struct {
void (*free_memory)(agp_memory *);
agp_memory *(*allocate_memory)(size_t, u32);
@@ -276,10 +252,6 @@
int (*acquire)(void);
void (*release)(void);
void (*copy_info)(agp_kern_info *);
-#ifdef CONFIG_AGP_PTE_FIXUPS
- void *(*ioremap)(unsigned long, unsigned long);
- void (*iounmap)(void *);
-#endif
} drm_agp_t;
extern const drm_agp_t *drm_agp_p;
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (65 preceding siblings ...)
2001-08-21 3:55 ` [Linux-ia64] kernel update (relative to 2.4.9) David Mosberger
@ 2001-08-22 10:00 ` Andreas Schwab
2001-08-22 17:42 ` Chris Ahna
` (148 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-08-22 10:00 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> The latest IA-64 patch is available at:
|>
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
|>
|> in file linux-2.4.9-ia64-010820.diff*.
|>
|> The big news in this patch is a major cleanup of the DRM source
|> tree(s) by Chris Ahna. Much of the ia64 specific code has now been
|> restructured so it fits much better into the existing framework. This
|> should make it a lot easier to get the changes accepted by the DRM
|> maintainers. Other changes:
The new DRM code (CONFIG_DRM_NEW) does not compile:
In file included from gamma_drv.c:92:
drm_memory.h: In function `gamma_ioremap':
drm_memory.h:319: structure has no member named `agp'
drm_memory.h:338: structure has no member named `agp'
Andreas.
--
Andreas Schwab "And now for something
SuSE Labs completely different."
Andreas.Schwab@suse.de
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.9)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (66 preceding siblings ...)
2001-08-22 10:00 ` Andreas Schwab
@ 2001-08-22 17:42 ` Chris Ahna
2001-09-25 7:13 ` [Linux-ia64] kernel update (relative to 2.4.10) David Mosberger
` (147 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Chris Ahna @ 2001-08-22 17:42 UTC (permalink / raw)
To: linux-ia64
On Wed, Aug 22, 2001 at 12:00:44PM +0200, Andreas Schwab wrote:
> The new DRM code (CONFIG_DRM_NEW) does not compile:
>
> In file included from gamma_drv.c:92:
> drm_memory.h: In function `gamma_ioremap':
> drm_memory.h:319: structure has no member named `agp'
> drm_memory.h:338: structure has no member named `agp'
Ouch, sorry about that. Patch is appended. David, please apply.
Chris
diff -X /cvs/excludes -urN linux-2.4.9-ia64-010820-pristine/drivers/char/drm/drm_memory.h linux-2.4.9-ia64-010820-dev/drivers/char/drm/drm_memory.h
--- linux-2.4.9-ia64-010820-pristine/drivers/char/drm/drm_memory.h Tue Aug 21 11:22:58 2001
+++ linux-2.4.9-ia64-010820-dev/drivers/char/drm/drm_memory.h Wed Aug 22 10:34:37 2001
@@ -309,6 +309,11 @@
void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
{
void *pt;
+#if __REALLY_HAVE_AGP
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+#endif
if (!size) {
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
@@ -316,52 +321,51 @@
return NULL;
}
- if(dev->agp->cant_use_aperture == 0) {
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 0)
goto standard_ioremap;
- } else {
- drm_map_t *map = NULL;
- drm_map_list_t *r_list;
- struct list_head *list, *head;
-
- list_for_each(list, &dev->maplist->head) {
- r_list = (drm_map_list_t *)list;
- map = r_list->map;
- if (!map) continue;
- if (map->offset <= offset &&
- (map->offset + map->size) >= (offset + size))
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *)list;
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
break;
}
-
- if(map && map->type == _DRM_AGP) {
- struct drm_agp_mem *agpmem;
-
- for(agpmem = dev->agp->memory; agpmem;
- agpmem = agpmem->next) {
- if(agpmem->bound <= offset &&
- (agpmem->bound + (agpmem->pages
- << PAGE_SHIFT)) >= (offset + size))
- break;
- }
-
- if(agpmem == NULL)
- goto standard_ioremap;
-
- pt = agpmem->memory->vmptr + (offset - agpmem->bound);
- goto ioremap_success;
- } else {
- goto standard_ioremap;
- }
+
+ if(agpmem == NULL)
+ goto ioremap_failure;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
}
standard_ioremap:
+#endif
if (!(pt = ioremap(offset, size))) {
+#if __REALLY_HAVE_AGP
+ioremap_failure:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
return NULL;
}
-
+#if __REALLY_HAVE_AGP
ioremap_success:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count;
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -377,7 +381,11 @@
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
+#if __REALLY_HAVE_AGP
else if(dev->agp->cant_use_aperture == 0)
+#else
+ else
+#endif
iounmap(pt);
spin_lock(&DRM(mem_lock));
diff -X /cvs/excludes -urN linux-2.4.9-ia64-010820-pristine/drivers/char/drm/drm_vm.h linux-2.4.9-ia64-010820-dev/drivers/char/drm/drm_vm.h
--- linux-2.4.9-ia64-010820-pristine/drivers/char/drm/drm_vm.h Tue Aug 21 11:22:58 2001
+++ linux-2.4.9-ia64-010820-dev/drivers/char/drm/drm_vm.h Wed Aug 22 10:34:35 2001
@@ -67,11 +67,11 @@
int write_access)
#endif
{
-#if (defined(__alpha__) || defined(__ia64__)) && __REALLY_HAVE_AGP
+#if __REALLY_HAVE_AGP
drm_file_t *priv = vma->vm_file->private_data;
drm_device_t *dev = priv->dev;
drm_map_t *map = NULL;
- drm_map_list_t *r_list;
+ drm_map_list_t *r_list;
struct list_head *list;
/*
@@ -122,12 +122,11 @@
* There isn't a convenient way to call agp_bridge.unmask_
* memory from here, so hard code it for now.
*/
-#if defined(__alpha__)
- paddr = agpmem->memory->memory[offset] & ~1UL;
-#elif defined(__ia64__)
+#if defined(__ia64__)
paddr = (agpmem->memory->memory[offset] & 0xffffff) << 12;
+#else
+ paddr = agpmem->memory->memory[offset] & dev->agp->page_mask;
#endif
-
page = virt_to_page(__va(paddr));
get_page(page);
@@ -515,20 +514,21 @@
switch (map->type) {
case _DRM_AGP:
-#if defined(__alpha__) || defined(__ia64__)
- /*
- * On Alpha and ia64 we can't talk to bus dma address from
- * the CPU, so for memory of type DRM_AGP, we'll deal with
- * sorting out the real physical pages and mappings
- * in nopage()
- */
- vma->vm_ops = &DRM(vm_ops);
-
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 1) {
+ /*
+ * On some systems we can't talk to bus dma address from
+ * the CPU, so for memory of type DRM_AGP, we'll deal
+ * with sorting out the real physical pages and mappings
+ * in nopage()
+ */
+ vma->vm_ops = &DRM(vm_ops);
#if defined(__ia64__)
- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
#endif
-
- break;
+ goto mapswitch_out;
+ }
#endif
/* fall through to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
@@ -582,6 +582,9 @@
default:
return -EINVAL; /* This should never happen. */
}
+#if __REALLY_HAVE_AGP
+mapswitch_out:
+#endif
vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
#if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (67 preceding siblings ...)
2001-08-22 17:42 ` Chris Ahna
@ 2001-09-25 7:13 ` David Mosberger
2001-09-25 7:17 ` David Mosberger
` (146 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-09-25 7:13 UTC (permalink / raw)
To: linux-ia64
Here is a long-awaited kernel update which brings us in sync with
2.4.10. As usual, it's available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.10-ia64-010924.diff*
The major changes this time include:
- Major ia32 subsystem update by yours truly:
- enable use of emulator prefix: for a non-directory file with
path PATH, the ia32 subsystem now uses /emul/ia32-linux/PATH
if the file exists
- fix a nasty bug which caused ia32 binaries to subtly corrupt
page table page (this was the reason "realplay" often crashed
on start up, but the bug could affect any process, not just
ia32 processes)
- make the address space layout of ia32 tasks match that of
a real linux/x86 task; in particular, the stack now starts
at 0xc0000000 (and grows towards lower addresses)
- fix emulated mmap() and mprotect() to work when page size is 4K
and work better when the page size is larger
- fix emulated ptrace() so x86 strace() works
- fill in missing system calls (pread, pwrite, sendfile, SuS-compliant
getrlimit, mmap2, {,f}truncate64, {,l,f}stat64, 32-bit uid/gid
versions of lchown, getuid etc., pivot_root, mincore, madvise,
getdents64, fcntl64) and drop some unsupported ones
(module related syscalls, vm86, bdflush)
- support signal delivery with SA_RESTORER sigreturn
- don't align shared mmap()s to 1MB---this filled up a 32-bit address
space too quickly
with these fixes in place, I'm able to successfully run the ia32
versions of realplay, OpenOffice, mozilla, netscape, acrobat, strace,
Intel compilers, etc. But as usual, your mileage may vary.
- update perfmon to v0.3 (Stephane Eranian)
- SN updates from SGI (just a starter, more to follow...)
- be more consistent about using i-cache prefetches (use .few for branches
within a routine, .many otherwise)
- fix alternate stack signal delivery (used to break if the old stack
didn't have enough memory to hold the dirty stacked registers); ironically,
this fix probably also speeds up this case (it's no longer necessary
to do a flushrs...)
- McKinley related I/O SAPIC updates from Alex Williamson
- fix irq bug which caused console keyboard to report timeout when
pressing the caps lock key (Richard Hirst)
- fix ptrace race condition (thanks to Shoichi Sakon for providing
a test case that reproduced this problem)
- turn on dcr.lc by default
- many sync-ups for 2.4.10 (thanks to KOCHI Takayoshi for sending a timely
ACPI update)
I should warn that this kernel has received only moderate testing so
far. The remote access server I'm normally using to access a test
ia64 machine crashed just shortly before making the patch, so I wasn't
able to do my usual round of testing. Having said that, the kernel
seems to work fine on a dual Itanium Big Sur, so there is hope. As
you know, Linus and co decided to basically rewrite the VM system. As
far as I can tell, this hasn't had any negative impact on ia64, but
I'd recommend doing a lot of testing before declaring this kernel as
ready for prime time.
Oh, and as usual, the patch below is FYI only. See the full patch
for definitive answers.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help linux-2.4.10-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/Documentation/Configure.help Mon Sep 24 21:34:01 2001
@@ -18686,10 +18686,21 @@
so the "DIG-compliant" option is usually the right choice.
HP-simulator For the HP simulator (http://software.hp.com/ia64linux/).
- SN1-simulator For the SGI SN1 simulator.
+ SGI-SN1 For SGI SN1 Platforms.
+ SGI-SN2 For SGI SN2 Platforms.
DIG-compliant For DIG ("Developer's Interface Guide") compliant system.
If you don't know what to do, choose "generic".
+
+CONFIG_IA64_SGI_SN_SIM
+ Build a kernel that runs on both the SGI simulator AND on hardware.
+ There is a very slight performance penalty on hardware for including this
+ option.
+
+CONFIG_IA64_SGI_SN_DEBUG
+ This enables additional debug code that helps isolate
+ platform/kernel bugs. There is a small but measurable performance
+ degradation when this option is enabled.
Kernel page size
CONFIG_IA64_PAGE_SIZE_4KB
diff -urN linux-davidm/arch/ia64/Makefile linux-2.4.10-lia/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/Makefile Mon Sep 24 22:20:41 2001
@@ -17,7 +17,9 @@
AFLAGS_KERNEL := -mconstant-gp
EXTRA =
-CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32
+CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
+ -falign-functions=32 --param max-inline-insns=400
+# -ffunction-sections
CFLAGS_KERNEL := -mconstant-gp
GCC_VERSION=$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
@@ -32,7 +34,7 @@
ifdef CONFIG_IA64_GENERIC
CORE_FILES := arch/$(ARCH)/hp/hp.a \
- arch/$(ARCH)/sn/sn.a \
+ arch/$(ARCH)/sn/sn.o \
arch/$(ARCH)/dig/dig.a \
arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
@@ -52,15 +54,14 @@
$(CORE_FILES)
endif
-ifdef CONFIG_IA64_SGI_SN1
+ifdef CONFIG_IA64_SGI_SN
CFLAGS += -DBRINGUP
- SUBDIRS := arch/$(ARCH)/sn/sn1 \
- arch/$(ARCH)/sn \
+ SUBDIRS := arch/$(ARCH)/sn/kernel \
arch/$(ARCH)/sn/io \
arch/$(ARCH)/sn/fprom \
$(SUBDIRS)
- CORE_FILES := arch/$(ARCH)/sn/sn.a \
- arch/$(ARCH)/sn/io/sgiio.o\
+ CORE_FILES := arch/$(ARCH)/sn/kernel/sn.o \
+ arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
endif
diff -urN linux-davidm/arch/ia64/config.in linux-2.4.10-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/config.in Mon Sep 24 21:43:20 2001
@@ -28,6 +28,7 @@
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
define_bool CONFIG_ACPI y
+ define_bool CONFIG_ACPI_EFI y
define_bool CONFIG_ACPI_INTERPRETER y
define_bool CONFIG_ACPI_KERNEL_CONFIG y
fi
@@ -40,7 +41,8 @@
"generic CONFIG_IA64_GENERIC \
DIG-compliant CONFIG_IA64_DIG \
HP-simulator CONFIG_IA64_HP_SIM \
- SGI-SN1 CONFIG_IA64_SGI_SN1" generic
+ SGI-SN1 CONFIG_IA64_SGI_SN1 \
+ SGI-SN2 CONFIG_IA64_SGI_SN2" generic
choice 'Kernel page size' \
"4KB CONFIG_IA64_PAGE_SIZE_4KB \
@@ -71,18 +73,22 @@
define_bool CONFIG_PM y
fi
-if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
- bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM
- define_bool CONFIG_DEVFS_DEBUG y
+if [ "$CONFIG_IA64_SGI_SN1" = "y" ] || [ "$CONFIG_IA64_SGI_SN2" = "y" ]; then
+ define_bool CONFIG_IA64_SGI_SN y
+ bool ' Enable extra debugging code' CONFIG_IA64_SGI_SN_DEBUG n
+ bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN_SIM
+ bool ' Enable autotest (llsc). Option to run cache test instead of booting' \
+ CONFIG_IA64_SGI_AUTOTEST n
define_bool CONFIG_DEVFS_FS y
- define_bool CONFIG_IA64_BRL_EMU y
+ if [ "$CONFIG_DEVFS_FS" = "y" ]; then
+ bool ' Enable DEVFS Debug Code' CONFIG_DEVFS_DEBUG n
+ fi
+ bool ' Enable protocol mode for the L1 console' CONFIG_SERIAL_SGI_L1_PROTOCOL y
+ define_bool CONFIG_DISCONTIGMEM y
define_bool CONFIG_IA64_MCA y
- define_bool CONFIG_ITANIUM y
- define_bool CONFIG_SGI_IOC3_ETH y
+ define_bool CONFIG_NUMA y
define_bool CONFIG_PERCPU_IRQ y
- define_int CONFIG_CACHE_LINE_SHIFT 7
- bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM
- bool ' Enable NUMA support' CONFIG_NUMA
+ tristate ' PCIBA support' CONFIG_PCIBA
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
@@ -250,13 +256,19 @@
mainmenu_option next_comment
comment 'Kernel hacking'
-#bool 'Debug kmalloc/kfree' CONFIG_DEBUG_MALLOC
-
-bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
-bool 'Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK
-bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
-bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
-bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
-bool 'Disable VHPT' CONFIG_DISABLE_VHPT
+bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
+if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
+ bool ' Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
+ bool ' Disable VHPT' CONFIG_DISABLE_VHPT
+ bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
+
+# early printk is currently broken for SMP: the secondary processors get stuck...
+# bool ' Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK
+
+ bool ' Debug memory allocations' CONFIG_DEBUG_SLAB
+ bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK
+ bool ' Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
+ bool ' Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
+fi
endmenu
diff -urN linux-davidm/arch/ia64/defconfig linux-2.4.10-lia/arch/ia64/defconfig
--- linux-davidm/arch/ia64/defconfig Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/defconfig Mon Sep 24 22:23:50 2001
@@ -25,6 +25,7 @@
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
CONFIG_ACPI=y
+CONFIG_ACPI_EFI=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_ACPI_KERNEL_CONFIG=y
CONFIG_ITANIUM=y
@@ -33,6 +34,7 @@
CONFIG_IA64_DIG=y
# CONFIG_IA64_HP_SIM is not set
# CONFIG_IA64_SGI_SN1 is not set
+# CONFIG_IA64_SGI_SN2 is not set
# CONFIG_IA64_PAGE_SIZE_4KB is not set
# CONFIG_IA64_PAGE_SIZE_8KB is not set
CONFIG_IA64_PAGE_SIZE_16KB=y
@@ -44,6 +46,7 @@
CONFIG_PM=y
CONFIG_KCORE_ELF=y
CONFIG_SMP=y
+CONFIG_IA32_SUPPORT=y
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
CONFIG_EFI_VARS=y
@@ -159,6 +162,7 @@
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID5 is not set
+# CONFIG_MD_MULTIPATH is not set
# CONFIG_BLK_DEV_LVM is not set
#
@@ -203,6 +207,7 @@
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_IDEPCI_SHARE_IRQ=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
+CONFIG_BLK_DEV_ADMA=y
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_IDEDMA_PCI_AUTO is not set
CONFIG_BLK_DEV_IDEDMA=y
@@ -212,8 +217,8 @@
# CONFIG_AEC62XX_TUNING is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_WDC_ALI15X3 is not set
-# CONFIG_BLK_DEV_AMD7409 is not set
-# CONFIG_AMD7409_OVERRIDE is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+# CONFIG_AMD74XX_OVERRIDE is not set
# CONFIG_BLK_DEV_CMD64X is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5530 is not set
@@ -226,7 +231,8 @@
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX is not set
# CONFIG_PDC202XX_BURST is not set
-# CONFIG_BLK_DEV_OSB4 is not set
+# CONFIG_PDC202XX_FORCE is not set
+# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIS5513 is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
@@ -236,6 +242,9 @@
# CONFIG_IDEDMA_IVB is not set
# CONFIG_DMA_NONPCI is not set
CONFIG_BLK_DEV_IDE_MODES=y
+# CONFIG_BLK_DEV_ATARAID is not set
+# CONFIG_BLK_DEV_ATARAID_PDC is not set
+# CONFIG_BLK_DEV_ATARAID_HPT is not set
#
# SCSI support
@@ -271,6 +280,7 @@
# CONFIG_SCSI_AHA1740 is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_IN2000 is not set
# CONFIG_SCSI_AM53C974 is not set
@@ -288,6 +298,7 @@
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_NCR53C406A is not set
+# CONFIG_SCSI_NCR_D700 is not set
# CONFIG_SCSI_NCR53C7xx is not set
# CONFIG_SCSI_NCR53C8XX is not set
# CONFIG_SCSI_SYM53C8XX is not set
@@ -325,7 +336,6 @@
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
-# CONFIG_ARM_AM79C961A is not set
# CONFIG_SUNLANCE is not set
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNBMAC is not set
@@ -344,8 +354,6 @@
# CONFIG_APRICOT is not set
# CONFIG_CS89x0 is not set
# CONFIG_TULIP is not set
-# CONFIG_TULIP_MWI is not set
-# CONFIG_TULIP_MMIO is not set
# CONFIG_DE4X5 is not set
# CONFIG_DGRS is not set
# CONFIG_DM9102 is not set
@@ -366,15 +374,16 @@
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
# CONFIG_WINBOND_840 is not set
+# CONFIG_LAN_SAA9730 is not set
# CONFIG_NET_POCKET is not set
#
# Ethernet (1000 Mbit)
#
# CONFIG_ACENIC is not set
-# CONFIG_ACENIC_OMIT_TIGON_I is not set
# CONFIG_DL2K is not set
# CONFIG_MYRI_SBUS is not set
+# CONFIG_NS83820 is not set
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_SK98LIN is not set
@@ -457,11 +466,37 @@
#
# Joysticks
#
-# CONFIG_JOYSTICK is not set
+# CONFIG_INPUT_GAMEPORT is not set
+# CONFIG_INPUT_NS558 is not set
+# CONFIG_INPUT_LIGHTNING is not set
+# CONFIG_INPUT_PCIGAME is not set
+# CONFIG_INPUT_CS461X is not set
+# CONFIG_INPUT_EMU10K1 is not set
+CONFIG_INPUT_SERIO=y
+CONFIG_INPUT_SERPORT=y
#
-# Input core support is needed for joysticks
+# Joysticks
#
+# CONFIG_INPUT_ANALOG is not set
+# CONFIG_INPUT_A3D is not set
+# CONFIG_INPUT_ADI is not set
+# CONFIG_INPUT_COBRA is not set
+# CONFIG_INPUT_GF2K is not set
+# CONFIG_INPUT_GRIP is not set
+# CONFIG_INPUT_INTERACT is not set
+# CONFIG_INPUT_TMDC is not set
+# CONFIG_INPUT_SIDEWINDER is not set
+# CONFIG_INPUT_IFORCE_USB is not set
+# CONFIG_INPUT_IFORCE_232 is not set
+# CONFIG_INPUT_WARRIOR is not set
+# CONFIG_INPUT_MAGELLAN is not set
+# CONFIG_INPUT_SPACEORB is not set
+# CONFIG_INPUT_SPACEBALL is not set
+# CONFIG_INPUT_STINGER is not set
+# CONFIG_INPUT_DB9 is not set
+# CONFIG_INPUT_GAMECON is not set
+# CONFIG_INPUT_TURBOGRAFX is not set
# CONFIG_QIC02_TAPE is not set
#
@@ -484,21 +519,21 @@
CONFIG_AGP=y
# CONFIG_AGP_INTEL is not set
CONFIG_AGP_I460=y
-# CONFIG_AGP_I460_FULLRQ is not set
# CONFIG_AGP_I810 is not set
# CONFIG_AGP_VIA is not set
# CONFIG_AGP_AMD is not set
# CONFIG_AGP_SIS is not set
# CONFIG_AGP_ALI is not set
# CONFIG_AGP_SWORKS is not set
-CONFIG_AGP_PTE_FIXUPS=y
CONFIG_DRM=y
-CONFIG_DRM_TDFX=y
-# CONFIG_DRM_GAMMA is not set
-# CONFIG_DRM_R128 is not set
-# CONFIG_DRM_RADEON is not set
-# CONFIG_DRM_I810 is not set
-# CONFIG_DRM_MGA is not set
+# CONFIG_DRM_NEW is not set
+CONFIG_DRM_OLD=y
+CONFIG_DRM40_TDFX=y
+# CONFIG_DRM40_GAMMA is not set
+# CONFIG_DRM40_R128 is not set
+# CONFIG_DRM40_RADEON is not set
+# CONFIG_DRM40_I810 is not set
+# CONFIG_DRM40_MGA is not set
#
# Multimedia devices
@@ -612,8 +647,23 @@
#
# Partition Types
#
-# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+CONFIG_EFI_PARTITION=y
+# CONFIG_DEVFS_GUID is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
# CONFIG_SMB_NLS is not set
CONFIG_NLS=y
@@ -672,8 +722,10 @@
# Sound
#
CONFIG_SOUND=y
+# CONFIG_SOUND_BT878 is not set
# CONFIG_SOUND_CMPCI is not set
# CONFIG_SOUND_EMU10K1 is not set
+# CONFIG_MIDI_EMU10K1 is not set
# CONFIG_SOUND_FUSION is not set
CONFIG_SOUND_CS4281=y
# CONFIG_SOUND_ES1370 is not set
@@ -682,6 +734,7 @@
# CONFIG_SOUND_MAESTRO is not set
# CONFIG_SOUND_MAESTRO3 is not set
# CONFIG_SOUND_ICH is not set
+# CONFIG_SOUND_RME96XX is not set
# CONFIG_SOUND_SONICVIBES is not set
# CONFIG_SOUND_TRIDENT is not set
# CONFIG_SOUND_MSNDCLAS is not set
@@ -747,10 +800,11 @@
#
# USB Network adaptors
#
-# CONFIG_USB_PLUSB is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_CATC is not set
-# CONFIG_USB_NET1080 is not set
+# CONFIG_USB_CDCETHER is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_USBNET is not set
#
# USB port drivers
@@ -775,11 +829,11 @@
#
# Kernel hacking
#
-CONFIG_IA32_SUPPORT=y
-CONFIG_MATHEMU=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_IA64_PRINT_HAZARDS=y
+# CONFIG_DISABLE_VHPT is not set
CONFIG_MAGIC_SYSRQ=y
-CONFIG_IA64_EARLY_PRINTK=y
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
-CONFIG_IA64_PRINT_HAZARDS=y
-# CONFIG_DISABLE_VHPT is not set
diff -urN linux-davidm/arch/ia64/ia32/binfmt_elf32.c linux-2.4.10-lia/arch/ia64/ia32/binfmt_elf32.c
--- linux-davidm/arch/ia64/ia32/binfmt_elf32.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/binfmt_elf32.c Mon Sep 24 21:43:45 2001
@@ -3,10 +3,11 @@
*
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 06/16/00 A. Mallick initialize csd/ssd/tssd/cflg for ia32_load_state
* 04/13/01 D. Mosberger dropped saving tssd in ar.k1---it's not needed
+ * 09/14/01 D. Mosberger fixed memory management for gdt/tss page
*/
#include <linux/config.h>
@@ -41,51 +42,32 @@
extern void ia64_elf32_init (struct pt_regs *regs);
extern void put_dirty_page (struct task_struct * tsk, struct page *page, unsigned long address);
+static void elf32_set_personality (void);
+
#define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
#define setup_arg_pages(bprm) ia32_setup_arg_pages(bprm)
-#define elf_map elf_map32
+#define elf_map elf32_map
+#define SET_PERSONALITY(ex, ibcs2) elf32_set_personality()
/* Ugly but avoids duplication */
#include "../../../fs/binfmt_elf.c"
-/* Global descriptor table */
-unsigned long *ia32_gdt_table, *ia32_tss;
+extern struct page *ia32_shared_page[];
+extern unsigned long *ia32_gdt;
struct page *
-put_shared_page (struct task_struct * tsk, struct page *page, unsigned long address)
+ia32_install_shared_page (struct vm_area_struct *vma, unsigned long address, int no_share)
{
- pgd_t * pgd;
- pmd_t * pmd;
- pte_t * pte;
-
- if (page_count(page) != 1)
- printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
+ struct page *pg = ia32_shared_page[(address - vma->vm_start)/PAGE_SIZE];
- pgd = pgd_offset(tsk->mm, address);
-
- spin_lock(&tsk->mm->page_table_lock);
- {
- pmd = pmd_alloc(tsk->mm, pgd, address);
- if (!pmd)
- goto out;
- pte = pte_alloc(tsk->mm, pmd, address);
- if (!pte)
- goto out;
- if (!pte_none(*pte))
- goto out;
- flush_page_to_ram(page);
- set_pte(pte, pte_mkwrite(mk_pte(page, PAGE_SHARED)));
- }
- spin_unlock(&tsk->mm->page_table_lock);
- /* no need for flush_tlb */
- return page;
-
- out:
- spin_unlock(&tsk->mm->page_table_lock);
- __free_page(page);
- return 0;
+ get_page(pg);
+ return pg;
}
+static struct vm_operations_struct ia32_shared_page_vm_ops = {
+ nopage: ia32_install_shared_page
+};
+
void
ia64_elf32_init (struct pt_regs *regs)
{
@@ -97,9 +79,23 @@
* it with privilege level 3 because the IVE uses non-privileged accesses to these
* tables. IA-32 segmentation is used to protect against IA-32 accesses to them.
*/
- put_shared_page(current, virt_to_page(ia32_gdt_table), IA32_GDT_OFFSET);
- if (PAGE_SHIFT <= IA32_PAGE_SHIFT)
- put_shared_page(current, virt_to_page(ia32_tss), IA32_TSS_OFFSET);
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (vma) {
+ vma->vm_mm = current->mm;
+ vma->vm_start = IA32_GDT_OFFSET;
+ vma->vm_end = vma->vm_start + max(PAGE_SIZE, 2*IA32_PAGE_SIZE);
+ vma->vm_page_prot = PAGE_SHARED;
+ vma->vm_flags = VM_READ|VM_MAYREAD;
+ vma->vm_ops = &ia32_shared_page_vm_ops;
+ vma->vm_pgoff = 0;
+ vma->vm_file = NULL;
+ vma->vm_private_data = NULL;
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
+ }
/*
* Install LDT as anonymous memory. This gives us all-zero segment descriptors
@@ -116,25 +112,25 @@
vma->vm_pgoff = 0;
vma->vm_file = NULL;
vma->vm_private_data = NULL;
- insert_vm_struct(current->mm, vma);
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
}
nr = smp_processor_id();
- current->thread.map_base = IA32_PAGE_OFFSET/3;
- current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
- set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
-
/* Setup the segment selectors */
regs->r16 = (__USER_DS << 16) | __USER_DS; /* ES = DS, GS, FS are zero */
regs->r17 = (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
/* Setup the segment descriptors */
- regs->r24 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* ESD */
- regs->r27 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* DSD */
+ regs->r24 = IA32_SEG_UNSCRAMBLE(ia32_gdt[__USER_DS >> 3]); /* ESD */
+ regs->r27 = IA32_SEG_UNSCRAMBLE(ia32_gdt[__USER_DS >> 3]); /* DSD */
regs->r28 = 0; /* FSD (null) */
regs->r29 = 0; /* GSD (null) */
- regs->r30 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_LDT(nr)]); /* LDTD */
+ regs->r30 = IA32_SEG_UNSCRAMBLE(ia32_gdt[_LDT(nr)]); /* LDTD */
/*
* Setup GDTD. Note: GDTD is the descrambled version of the pseudo-descriptor
@@ -164,9 +160,9 @@
current->thread.fcr = IA32_FCR_DEFAULT;
current->thread.fir = 0;
current->thread.fdr = 0;
- current->thread.csd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_CS >> 3]);
- current->thread.ssd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]);
- current->thread.tssd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_TSS(nr)]);
+ current->thread.csd = IA32_SEG_UNSCRAMBLE(ia32_gdt[__USER_CS >> 3]);
+ current->thread.ssd = IA32_SEG_UNSCRAMBLE(ia32_gdt[__USER_DS >> 3]);
+ current->thread.tssd = IA32_SEG_UNSCRAMBLE(ia32_gdt[_TSS(nr)]);
ia32_load_state(current);
}
@@ -189,6 +185,7 @@
if (!mpnt)
return -ENOMEM;
+ down_write(&current->mm->mmap_sem);
{
mpnt->vm_mm = current->mm;
mpnt->vm_start = PAGE_MASK & (unsigned long) bprm->p;
@@ -204,54 +201,32 @@
}
for (i = 0 ; i < MAX_ARG_PAGES ; i++) {
- if (bprm->page[i]) {
- put_dirty_page(current,bprm->page[i],stack_base);
+ struct page *page = bprm->page[i];
+ if (page) {
+ bprm->page[i] = NULL;
+ put_dirty_page(current, page, stack_base);
}
stack_base += PAGE_SIZE;
}
+ up_write(&current->mm->mmap_sem);
return 0;
}
-static unsigned long
-ia32_mm_addr (unsigned long addr)
+static void
+elf32_set_personality (void)
{
- struct vm_area_struct *vma;
-
- if ((vma = find_vma(current->mm, addr)) == NULL)
- return ELF_PAGESTART(addr);
- if (vma->vm_start > addr)
- return ELF_PAGESTART(addr);
- return ELF_PAGEALIGN(addr);
+ set_personality(PER_LINUX32);
+ current->thread.map_base = IA32_PAGE_OFFSET/3;
+ current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
+ set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
}
-/*
- * Normally we would do an `mmap' to map in the process's text section.
- * This doesn't work with IA32 processes as the ELF file might specify
- * a non page size aligned address. Instead we will just allocate
- * memory and read the data in from the file. Slightly less efficient
- * but it works.
- */
-extern long ia32_do_mmap (struct file *filep, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset);
-
static unsigned long
-elf_map32 (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
+elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
{
- unsigned long retval;
-
- if (eppnt->p_memsz >= (1UL<<32) || addr > (1UL<<32) - eppnt->p_memsz)
- return -EINVAL;
+ unsigned long pgoff = (eppnt->p_vaddr) & ~IA32_PAGE_MASK;
- /*
- * Make sure the elf interpreter doesn't get loaded at location 0
- * so that NULL pointers correctly cause segfaults.
- */
- if (addr == 0)
- addr += PAGE_SIZE;
- set_brk(ia32_mm_addr(addr), addr + eppnt->p_memsz);
- memset((char *) addr + eppnt->p_filesz, 0, eppnt->p_memsz - eppnt->p_filesz);
- kernel_read(filep, eppnt->p_offset, (char *) addr, eppnt->p_filesz);
- retval = (unsigned long) addr;
- return retval;
+ return ia32_do_mmap(filep, (addr & IA32_PAGE_MASK), eppnt->p_filesz + pgoff, prot, type,
+ eppnt->p_offset - pgoff);
}
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.10-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/ia32_entry.S Mon Sep 24 21:44:23 2001
@@ -2,7 +2,7 @@
#include <asm/offsets.h>
#include <asm/signal.h>
-#include "../kernel/entry.h"
+#include "../kernel/minstate.h"
/*
* execve() is special because in case of success, we need to
@@ -14,13 +14,13 @@
alloc loc1=ar.pfs,3,2,4,0
mov loc0=rp
.body
- mov out0=in0 // filename
+ zxt4 out0=in0 // filename
;; // stop bit between alloc and call
- mov out1=in1 // argv
- mov out2=in2 // envp
+ zxt4 out1=in1 // argv
+ zxt4 out2=in2 // envp
add out3=16,sp // regs
br.call.sptk.few rp=sys32_execve
-1: cmp4.ge p6,p0=r8,r0
+1: cmp.ge p6,p0=r8,r0
mov ar.pfs=loc1 // restore ar.pfs
;;
(p6) mov ar.pfs=r0 // clear ar.pfs in case of success
@@ -29,6 +29,25 @@
br.ret.sptk.few rp
END(ia32_execve)
+ENTRY(ia32_clone)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
+ alloc r16=ar.pfs,2,2,4,0
+ DO_SAVE_SWITCH_STACK
+ mov loc0=rp
+ mov loc1=r16 // save ar.pfs across do_fork
+ .body
+ zxt4 out1=in1 // newsp
+ mov out3=0 // stacksize
+ adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
+ zxt4 out0=in0 // out0 = clone_flags
+ br.call.sptk.many rp=do_fork
+.ret0: .restore sp
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
+ mov ar.pfs=loc1
+ mov rp=loc0
+ br.ret.sptk.many rp
+END(ia32_clone)
+
//
// Get possibly unaligned sigmask argument into an aligned
// kernel buffer
@@ -38,7 +57,8 @@
// r32 is still the first argument which is the signal mask.
// We copy this 4-byte aligned value to an 8-byte aligned buffer
// in the task structure and then jump to the IA64 code.
-
+ zxt4 r32=r32
+ ;;
EX(.Lfail, ld4 r2=[r32],4) // load low part of sigmask
;;
EX(.Lfail, ld4 r3=[r32]) // load high part of sigmask
@@ -54,6 +74,24 @@
.Lfail: br.ret.sptk.many rp // failed to read sigmask
END(ia32_rt_sigsuspend)
+GLOBAL_ENTRY(ia32_ret_from_clone)
+ PT_REGS_UNWIND_INFO(0)
+ /*
+ * We need to call schedule_tail() to complete the scheduling process.
+ * Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
+ * address of the previously executing task.
+ */
+ br.call.sptk.many rp=ia64_invoke_schedule_tail
+.ret1: adds r2=IA64_TASK_PTRACE_OFFSET,r13
+ ;;
+ ld8 r2=[r2]
+ ;;
+ mov r8=0
+ tbit.nz p6,p0=r2,PT_TRACESYS_BIT
+(p6) br.cond.spnt .ia32_strace_check_retval
+ ;; // prevent RAW on r8
+END(ia32_ret_from_clone)
+ // fall through
GLOBAL_ENTRY(ia32_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
@@ -77,15 +115,20 @@
//
GLOBAL_ENTRY(ia32_trace_syscall)
PT_REGS_UNWIND_INFO(0)
+ mov r3=-38
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp
+ ;;
+ st8 [r2]=r3 // initialize return code to -ENOSYS
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
-.ret0: br.call.sptk.few rp=b6 // do the syscall
-.ret1: cmp.lt p6,p0=r8,r0 // syscall failed?
+.ret2: br.call.sptk.few rp=b6 // do the syscall
+.ia32_strace_check_retval:
+ cmp.lt p6,p0=r8,r0 // syscall failed?
adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
;;
st8.spill [r2]=r8 // store return value in slot for r8
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.ret2: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
- br.cond.sptk.many ia64_leave_kernel // rp MUST be != ia64_leave_kernel!
+.ret4: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
+ br.cond.sptk.many ia64_leave_kernel
END(ia32_trace_syscall)
GLOBAL_ENTRY(sys32_vfork)
@@ -110,7 +153,7 @@
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
br.call.sptk.few rp=do_fork
-.ret3: mov ar.pfs=loc1
+.ret5: mov ar.pfs=loc1
.restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov rp=loc0
@@ -137,21 +180,21 @@
data8 sys32_time
data8 sys_mknod
data8 sys_chmod /* 15 */
- data8 sys_lchown
+ data8 sys_lchown /* 16-bit version */
data8 sys32_ni_syscall /* old break syscall holder */
data8 sys32_ni_syscall
data8 sys32_lseek
data8 sys_getpid /* 20 */
data8 sys_mount
data8 sys_oldumount
- data8 sys_setuid
- data8 sys_getuid
+ data8 sys_setuid /* 16-bit version */
+ data8 sys_getuid /* 16-bit version */
data8 sys32_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
data8 sys32_ptrace
data8 sys32_alarm
data8 sys32_ni_syscall
- data8 sys_pause
- data8 ia32_utime /* 30 */
+ data8 sys32_pause
+ data8 sys32_utime /* 30 */
data8 sys32_ni_syscall /* old stty syscall holder */
data8 sys32_ni_syscall /* old gtty syscall holder */
data8 sys_access
@@ -167,15 +210,15 @@
data8 sys32_times
data8 sys32_ni_syscall /* old prof syscall holder */
data8 sys_brk /* 45 */
- data8 sys_setgid
- data8 sys_getgid
+ data8 sys_setgid /* 16-bit version */
+ data8 sys_getgid /* 16-bit version */
data8 sys32_signal
- data8 sys_geteuid
- data8 sys_getegid /* 50 */
+ data8 sys_geteuid /* 16-bit version */
+ data8 sys_getegid /* 16-bit version */ /* 50 */
data8 sys_acct
data8 sys_umount /* recycled never used phys() */
data8 sys32_ni_syscall /* old lock syscall holder */
- data8 ia32_ioctl
+ data8 sys32_ioctl
data8 sys32_fcntl /* 55 */
data8 sys32_ni_syscall /* old mpx syscall holder */
data8 sys_setpgid
@@ -191,19 +234,19 @@
data8 sys32_sigaction
data8 sys32_ni_syscall
data8 sys32_ni_syscall
- data8 sys_setreuid /* 70 */
- data8 sys_setregid
+ data8 sys_setreuid /* 16-bit version */ /* 70 */
+ data8 sys_setregid /* 16-bit version */
data8 sys32_ni_syscall
- data8 sys_sigpending
+ data8 sys32_sigpending
data8 sys_sethostname
data8 sys32_setrlimit /* 75 */
- data8 sys32_getrlimit
+ data8 sys32_old_getrlimit
data8 sys32_getrusage
data8 sys32_gettimeofday
data8 sys32_settimeofday
- data8 sys_getgroups /* 80 */
- data8 sys_setgroups
- data8 old_select
+ data8 sys32_getgroups16 /* 80 */
+ data8 sys32_setgroups16
+ data8 sys32_old_select
data8 sys_symlink
data8 sys32_ni_syscall
data8 sys_readlink /* 85 */
@@ -212,17 +255,17 @@
data8 sys_reboot
data8 sys32_readdir
data8 sys32_mmap /* 90 */
- data8 sys_munmap
+ data8 sys32_munmap
data8 sys_truncate
data8 sys_ftruncate
data8 sys_fchmod
- data8 sys_fchown /* 95 */
+ data8 sys_fchown /* 16-bit version */ /* 95 */
data8 sys_getpriority
data8 sys_setpriority
data8 sys32_ni_syscall /* old profil syscall holder */
data8 sys32_statfs
data8 sys32_fstatfs /* 100 */
- data8 sys_ioperm
+ data8 sys32_ioperm
data8 sys32_socketcall
data8 sys_syslog
data8 sys32_setitimer
@@ -231,36 +274,36 @@
data8 sys32_newlstat
data8 sys32_newfstat
data8 sys32_ni_syscall
- data8 sys_iopl /* 110 */
+ data8 sys32_iopl /* 110 */
data8 sys_vhangup
data8 sys32_ni_syscall /* used to be sys_idle */
data8 sys32_ni_syscall
data8 sys32_wait4
data8 sys_swapoff /* 115 */
- data8 sys_sysinfo
+ data8 sys32_sysinfo
data8 sys32_ipc
data8 sys_fsync
data8 sys32_sigreturn
- data8 sys_clone /* 120 */
+ data8 ia32_clone /* 120 */
data8 sys_setdomainname
data8 sys32_newuname
data8 sys32_modify_ldt
- data8 sys_adjtimex
+ data8 sys32_ni_syscall /* adjtimex */
data8 sys32_mprotect /* 125 */
- data8 sys_sigprocmask
- data8 sys_create_module
- data8 sys_init_module
- data8 sys_delete_module
- data8 sys_get_kernel_syms /* 130 */
- data8 sys_quotactl
+ data8 sys32_sigprocmask
+ data8 sys32_ni_syscall /* create_module */
+ data8 sys32_ni_syscall /* init_module */
+ data8 sys32_ni_syscall /* delete_module */
+ data8 sys32_ni_syscall /* get_kernel_syms */ /* 130 */
+ data8 sys32_quotactl
data8 sys_getpgid
data8 sys_fchdir
- data8 sys_bdflush
- data8 sys_sysfs /* 135 */
- data8 sys_personality
+ data8 sys32_ni_syscall /* sys_bdflush */
+ data8 sys_sysfs /* 135 */
+ data8 sys32_personality
data8 sys32_ni_syscall /* for afs_syscall */
- data8 sys_setfsuid
- data8 sys_setfsgid
+ data8 sys_setfsuid /* 16-bit version */
+ data8 sys_setfsgid /* 16-bit version */
data8 sys_llseek /* 140 */
data8 sys32_getdents
data8 sys32_select
@@ -282,68 +325,68 @@
data8 sys_sched_yield
data8 sys_sched_get_priority_max
data8 sys_sched_get_priority_min /* 160 */
- data8 sys_sched_rr_get_interval
+ data8 sys32_sched_rr_get_interval
data8 sys32_nanosleep
data8 sys_mremap
- data8 sys_setresuid
- data8 sys32_getresuid /* 165 */
- data8 sys_vm86
- data8 sys_query_module
+ data8 sys_setresuid /* 16-bit version */
+ data8 sys32_getresuid16 /* 16-bit version */ /* 165 */
+ data8 sys32_ni_syscall /* vm86 */
+ data8 sys32_ni_syscall /* sys_query_module */
data8 sys_poll
- data8 sys_nfsservctl
+ data8 sys32_ni_syscall /* nfsservctl */
data8 sys_setresgid /* 170 */
- data8 sys32_getresgid
+ data8 sys32_getresgid16
data8 sys_prctl
data8 sys32_rt_sigreturn
data8 sys32_rt_sigaction
data8 sys32_rt_sigprocmask /* 175 */
data8 sys_rt_sigpending
- data8 sys_rt_sigtimedwait
- data8 sys_rt_sigqueueinfo
+ data8 sys32_rt_sigtimedwait
+ data8 sys32_rt_sigqueueinfo
data8 ia32_rt_sigsuspend
- data8 sys_pread /* 180 */
- data8 sys_pwrite
- data8 sys_chown
+ data8 sys32_pread /* 180 */
+ data8 sys32_pwrite
+ data8 sys_chown /* 16-bit version */
data8 sys_getcwd
data8 sys_capget
data8 sys_capset /* 185 */
data8 sys32_sigaltstack
- data8 sys_sendfile
+ data8 sys32_sendfile
data8 sys32_ni_syscall /* streams1 */
data8 sys32_ni_syscall /* streams2 */
data8 sys32_vfork /* 190 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 195 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 200 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 205 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 210 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 215 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 220 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
+ data8 sys32_getrlimit
+ data8 sys32_mmap2
+ data8 sys32_truncate64
+ data8 sys32_ftruncate64
+ data8 sys32_stat64 /* 195 */
+ data8 sys32_lstat64
+ data8 sys32_fstat64
+ data8 sys_lchown
+ data8 sys_getuid
+ data8 sys_getgid /* 200 */
+ data8 sys_geteuid
+ data8 sys_getegid
+ data8 sys_setreuid
+ data8 sys_setregid
+ data8 sys_getgroups /* 205 */
+ data8 sys_setgroups
+ data8 sys_fchown
+ data8 sys_setresuid
+ data8 sys_getresuid
+ data8 sys_setresgid /* 210 */
+ data8 sys_getresgid
+ data8 sys_chown
+ data8 sys_setuid
+ data8 sys_setgid
+ data8 sys_setfsuid /* 215 */
+ data8 sys_setfsgid
+ data8 sys_pivot_root
+ data8 sys_mincore
+ data8 sys_madvise
+ data8 sys_getdents64 /* 220 */
+ data8 sys32_fcntl64
+ data8 sys_ni_syscall /* reserved for TUX */
/*
* CAUTION: If any system calls are added beyond this point
* then the check in `arch/ia64/kernel/ivt.S' will have
diff -urN linux-davidm/arch/ia64/ia32/ia32_ioctl.c linux-2.4.10-lia/arch/ia64/ia32/ia32_ioctl.c
--- linux-davidm/arch/ia64/ia32/ia32_ioctl.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/ia32_ioctl.c Mon Sep 24 21:44:39 2001
@@ -36,12 +36,13 @@
_ret; \
})
-#define P(i) ((void *)(long)(i))
+#define P(i) ((void *)(unsigned long)(i))
asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg);
-asmlinkage long ia32_ioctl(unsigned int fd, unsigned int cmd, unsigned int arg)
+asmlinkage long
+sys32_ioctl (unsigned int fd, unsigned int cmd, unsigned int arg)
{
long ret;
@@ -81,7 +82,7 @@
if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
return -EFAULT;
}
- return(ret);
+ return ret;
}
case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
@@ -202,9 +203,9 @@
case IOCTL_NR(I2OHTML):
break;
default:
- return(sys_ioctl(fd, cmd, (unsigned long)arg));
+ return sys_ioctl(fd, cmd, (unsigned long)arg);
}
printk("%x:unimplemented IA32 ioctl system call\n", cmd);
- return(-EINVAL);
+ return -EINVAL;
}
diff -urN linux-davidm/arch/ia64/ia32/ia32_ldt.c linux-2.4.10-lia/arch/ia64/ia32/ia32_ldt.c
--- linux-davidm/arch/ia64/ia32/ia32_ldt.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/ia32_ldt.c Mon Sep 24 21:44:52 2001
@@ -16,6 +16,8 @@
#include <asm/uaccess.h>
#include <asm/ia32.h>
+#define P(p) ((void *) (unsigned long) (p))
+
/*
* read_ldt() is not really atomic - this is not a problem since synchronization of reads
* and writes done to the LDT has to be assured by user-space anyway. Writes are atomic,
@@ -101,19 +103,19 @@
}
asmlinkage int
-sys32_modify_ldt (int func, void *ptr, unsigned int bytecount)
+sys32_modify_ldt (int func, unsigned int ptr, unsigned int bytecount)
{
int ret = -ENOSYS;
switch (func) {
case 0:
- ret = read_ldt(ptr, bytecount);
+ ret = read_ldt(P(ptr), bytecount);
break;
case 1:
- ret = write_ldt(ptr, bytecount, 1);
+ ret = write_ldt(P(ptr), bytecount, 1);
break;
case 0x11:
- ret = write_ldt(ptr, bytecount, 0);
+ ret = write_ldt(P(ptr), bytecount, 0);
break;
}
return ret;
diff -urN linux-davidm/arch/ia64/ia32/ia32_signal.c linux-2.4.10-lia/arch/ia64/ia32/ia32_signal.c
--- linux-davidm/arch/ia64/ia32/ia32_signal.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/ia32_signal.c Mon Sep 24 21:45:00 2001
@@ -1,8 +1,8 @@
/*
* IA32 Architecture-specific signal handling support.
*
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
@@ -13,6 +13,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/personality.h>
#include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/signal.h>
@@ -31,6 +32,8 @@
#define DEBUG_SIG 0
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+#define __IA32_NR_sigreturn 119
+#define __IA32_NR_rt_sigreturn 173
struct sigframe_ia32
{
@@ -54,12 +57,51 @@
char retcode[8];
};
-static int
+int
+copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from)
+{
+ unsigned long tmp;
+ int err;
+
+ if (!access_ok(VERIFY_READ, from, sizeof(siginfo_t32)))
+ return -EFAULT;
+
+ err = __get_user(to->si_signo, &from->si_signo);
+ err |= __get_user(to->si_errno, &from->si_errno);
+ err |= __get_user(to->si_code, &from->si_code);
+
+ if (from->si_code < 0)
+ err |= __copy_from_user(&to->_sifields._pad, &from->_sifields._pad, SI_PAD_SIZE);
+ else {
+ switch (from->si_code >> 16) {
+ case __SI_CHLD >> 16:
+ err |= __get_user(to->si_utime, &from->si_utime);
+ err |= __get_user(to->si_stime, &from->si_stime);
+ err |= __get_user(to->si_status, &from->si_status);
+ default:
+ err |= __get_user(to->si_pid, &from->si_pid);
+ err |= __get_user(to->si_uid, &from->si_uid);
+ break;
+ case __SI_FAULT >> 16:
+ err |= __get_user(tmp, &from->si_addr);
+ to->si_addr = (void *) tmp;
+ break;
+ case __SI_POLL >> 16:
+ err |= __get_user(to->si_band, &from->si_band);
+ err |= __get_user(to->si_fd, &from->si_fd);
+ break;
+ /* case __SI_RT: This is not generated by the kernel as of now. */
+ }
+ }
+ return err;
+}
+
+int
copy_siginfo_to_user32 (siginfo_t32 *to, siginfo_t *from)
{
int err;
- if (!access_ok (VERIFY_WRITE, to, sizeof(siginfo_t32)))
+ if (!access_ok(VERIFY_WRITE, to, sizeof(siginfo_t32)))
return -EFAULT;
/* If you change siginfo_t structure, please be sure
@@ -100,80 +142,81 @@
static int
-setup_sigcontext_ia32(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
- struct pt_regs *regs, unsigned long mask)
+setup_sigcontext_ia32 (struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
+ struct pt_regs *regs, unsigned long mask)
{
- int err = 0;
- unsigned long flag;
+ int err = 0;
+ unsigned long flag;
- err |= __put_user((regs->r16 >> 32) & 0xffff , (unsigned int *)&sc->fs);
- err |= __put_user((regs->r16 >> 48) & 0xffff , (unsigned int *)&sc->gs);
+ err |= __put_user((regs->r16 >> 32) & 0xffff , (unsigned int *)&sc->fs);
+ err |= __put_user((regs->r16 >> 48) & 0xffff , (unsigned int *)&sc->gs);
- err |= __put_user((regs->r16 >> 56) & 0xffff, (unsigned int *)&sc->es);
- err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
- err |= __put_user(regs->r15, &sc->edi);
- err |= __put_user(regs->r14, &sc->esi);
- err |= __put_user(regs->r13, &sc->ebp);
- err |= __put_user(regs->r12, &sc->esp);
- err |= __put_user(regs->r11, &sc->ebx);
- err |= __put_user(regs->r10, &sc->edx);
- err |= __put_user(regs->r9, &sc->ecx);
- err |= __put_user(regs->r8, &sc->eax);
+ err |= __put_user((regs->r16 >> 56) & 0xffff, (unsigned int *)&sc->es);
+ err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
+ err |= __put_user(regs->r15, &sc->edi);
+ err |= __put_user(regs->r14, &sc->esi);
+ err |= __put_user(regs->r13, &sc->ebp);
+ err |= __put_user(regs->r12, &sc->esp);
+ err |= __put_user(regs->r11, &sc->ebx);
+ err |= __put_user(regs->r10, &sc->edx);
+ err |= __put_user(regs->r9, &sc->ecx);
+ err |= __put_user(regs->r8, &sc->eax);
#if 0
- err |= __put_user(current->tss.trap_no, &sc->trapno);
- err |= __put_user(current->tss.error_code, &sc->err);
+ err |= __put_user(current->tss.trap_no, &sc->trapno);
+ err |= __put_user(current->tss.error_code, &sc->err);
#endif
- err |= __put_user(regs->cr_iip, &sc->eip);
- err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
- /*
- * `eflags' is in an ar register for this context
- */
- asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
- err |= __put_user((unsigned int)flag, &sc->eflags);
-
- err |= __put_user(regs->r12, &sc->esp_at_signal);
- err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
+ err |= __put_user(regs->cr_iip, &sc->eip);
+ err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
+ /*
+ * `eflags' is in an ar register for this context
+ */
+ asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
+ err |= __put_user((unsigned int)flag, &sc->eflags);
+ err |= __put_user(regs->r12, &sc->esp_at_signal);
+ err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
#if 0
- tmp = save_i387(fpstate);
- if (tmp < 0)
- err = 1;
- else
- err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
+ tmp = save_i387(fpstate);
+ if (tmp < 0)
+ err = 1;
+ else
+ err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
- /* non-iBCS2 extensions.. */
+ /* non-iBCS2 extensions.. */
#endif
- err |= __put_user(mask, &sc->oldmask);
+ err |= __put_user(mask, &sc->oldmask);
#if 0
- err |= __put_user(current->tss.cr2, &sc->cr2);
+ err |= __put_user(current->tss.cr2, &sc->cr2);
#endif
-
- return err;
+ return err;
}
static int
-restore_sigcontext_ia32(struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
+restore_sigcontext_ia32 (struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
{
unsigned int err = 0;
-#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
+#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
-#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
-#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
-#define copyseg_cs(tmp) (regs->r17 |= tmp)
-#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
-#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
-#define copyseg_ds(tmp) (regs->r16 |= tmp)
-
-#define COPY_SEG(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
- copyseg_##seg(tmp); }
-
-#define COPY_SEG_STRICT(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
- copyseg_##seg(tmp|3); }
+#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
+#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
+#define copyseg_cs(tmp) (regs->r17 |= tmp)
+#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
+#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
+#define copyseg_ds(tmp) (regs->r16 |= tmp)
+
+#define COPY_SEG(seg) \
+ { \
+ unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp); \
+ }
+#define COPY_SEG_STRICT(seg) \
+ { \
+ unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp|3); \
+ }
/* To make COPY_SEGs easier, we zero r16, r17 */
regs->r16 = 0;
@@ -198,9 +241,8 @@
unsigned long flag;
/*
- * IA32 `eflags' is not part of `pt_regs', it's
- * in an ar register which is part of the thread
- * context. Fortunately, we are executing in the
+ * IA32 `eflags' is not part of `pt_regs', it's in an ar register which
+ * is part of the thread context. Fortunately, we are executing in the
* IA32 process's context.
*/
err |= __get_user(tmpflags, &sc->eflags);
@@ -227,7 +269,7 @@
err |= __get_user(*peax, &sc->eax);
return err;
-#if 0
+#if 0
badframe:
return 1;
#endif
@@ -238,158 +280,164 @@
* Determine which stack to use..
*/
static inline void *
-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+get_sigframe (struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
{
- unsigned long esp;
- unsigned int xss;
+ unsigned long esp;
- /* Default to using normal stack */
- esp = regs->r12;
- xss = regs->r16 >> 16;
-
- /* This is the X/Open sanctioned signal stack switching. */
- if (ka->sa.sa_flags & SA_ONSTACK) {
- if (! on_sig_stack(esp))
- esp = current->sas_ss_sp + current->sas_ss_size;
- }
- /* Legacy stack switching not supported */
-
- return (void *)((esp - frame_size) & -8ul);
+ /* Default to using normal stack (truncate off sign-extension of bit 31): */
+ esp = (unsigned int) regs->r12;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+ if (!on_sig_stack(esp))
+ esp = current->sas_ss_sp + current->sas_ss_size;
+ }
+ /* Legacy stack switching not supported */
+
+ return (void *)((esp - frame_size) & -8ul);
}
static int
-setup_frame_ia32(int sig, struct k_sigaction *ka, sigset_t *set,
- struct pt_regs * regs)
-{
- struct sigframe_ia32 *frame;
- int err = 0;
-
- frame = get_sigframe(ka, regs, sizeof(*frame));
-
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- err |= __put_user((current->exec_domain
- && current->exec_domain->signal_invmap
- && sig < 32
- ? (int)(current->exec_domain->signal_invmap[sig])
- : sig),
- &frame->sig);
-
- err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
-
- if (_IA32_NSIG_WORDS > 1) {
- err |= __copy_to_user(frame->extramask, (char *) &set->sig[1] + 4,
- sizeof(frame->extramask));
- }
+setup_frame_ia32 (int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs)
+{
+ struct sigframe_ia32 *frame;
+ int err = 0;
- /* Set up to return from userspace. If provided, use a stub
- already in userspace. */
- err |= __put_user((long)frame->retcode, &frame->pretcode);
- /* This is popl %eax ; movl $,%eax ; int $0x80 */
- err |= __put_user(0xb858, (short *)(frame->retcode+0));
-#define __IA32_NR_sigreturn 119
- err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
- err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
- err |= __put_user(0x80cd, (short *)(frame->retcode+6));
-
- if (err)
- goto give_sigsegv;
-
- /* Set up registers for signal handler */
- regs->r12 = (unsigned long) frame;
- regs->cr_iip = (unsigned long) ka->sa.sa_handler;
-
- set_fs(USER_DS);
- regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS;
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? (int)(current->exec_domain->signal_invmap[sig])
+ : sig),
+ &frame->sig);
+
+ err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
+
+ if (_IA32_NSIG_WORDS > 1)
+ err |= __copy_to_user(frame->extramask, (char *) &set->sig + 4,
+ sizeof(frame->extramask));
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ unsigned int restorer = IA32_SA_RESTORER(ka);
+ err |= __put_user(restorer, &frame->pretcode);
+ } else {
+ err |= __put_user((long)frame->retcode, &frame->pretcode);
+ /* This is popl %eax ; movl $,%eax ; int $0x80 */
+ err |= __put_user(0xb858, (short *)(frame->retcode+0));
+ err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
+ err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+6));
+ }
+
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = IA32_SA_HANDLER(ka);
+
+ set_fs(USER_DS);
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
#if 0
- regs->eflags &= ~TF_MASK;
+ regs->eflags &= ~TF_MASK;
#endif
#if 0
- printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
+ printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
current->comm, current->pid, sig, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
- return 1;
+ return 1;
-give_sigsegv:
- if (sig == SIGSEGV)
- ka->sa.sa_handler = SIG_DFL;
- force_sig(SIGSEGV, current);
- return 0;
+ give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+ return 0;
}
static int
-setup_rt_frame_ia32(int sig, struct k_sigaction *ka, siginfo_t *info,
- sigset_t *set, struct pt_regs * regs)
+setup_rt_frame_ia32 (int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs * regs)
{
- struct rt_sigframe_ia32 *frame;
- int err = 0;
+ struct rt_sigframe_ia32 *frame;
+ int err = 0;
- frame = get_sigframe(ka, regs, sizeof(*frame));
+ frame = get_sigframe(ka, regs, sizeof(*frame));
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- err |= __put_user((current->exec_domain
- && current->exec_domain->signal_invmap
- && sig < 32
- ? current->exec_domain->signal_invmap[sig]
- : sig),
- &frame->sig);
- err |= __put_user((long)&frame->info, &frame->pinfo);
- err |= __put_user((long)&frame->uc, &frame->puc);
- err |= copy_siginfo_to_user32(&frame->info, info);
-
- /* Create the ucontext. */
- err |= __put_user(0, &frame->uc.uc_flags);
- err |= __put_user(0, &frame->uc.uc_link);
- err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
- err |= __put_user(sas_ss_flags(regs->r12),
- &frame->uc.uc_stack.ss_flags);
- err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
- err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate,
- regs, set->sig[0]);
- err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
-
- err |= __put_user((long)frame->retcode, &frame->pretcode);
- /* This is movl $,%eax ; int $0x80 */
- err |= __put_user(0xb8, (char *)(frame->retcode+0));
-#define __IA32_NR_rt_sigreturn 173
- err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
- err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? current->exec_domain->signal_invmap[sig]
+ : sig),
+ &frame->sig);
+ err |= __put_user((long)&frame->info, &frame->pinfo);
+ err |= __put_user((long)&frame->uc, &frame->puc);
+ err |= copy_siginfo_to_user32(&frame->info, info);
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(regs->r12), &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate, regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ unsigned int restorer = IA32_SA_RESTORER(ka);
+ err |= __put_user(restorer, &frame->pretcode);
+ } else {
+ err |= __put_user((long)frame->retcode, &frame->pretcode);
+ /* This is movl $,%eax ; int $0x80 */
+ err |= __put_user(0xb8, (char *)(frame->retcode+0));
+ err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+ }
- if (err)
- goto give_sigsegv;
+ if (err)
+ goto give_sigsegv;
- /* Set up registers for signal handler */
- regs->r12 = (unsigned long) frame;
- regs->cr_iip = (unsigned long) ka->sa.sa_handler;
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = IA32_SA_HANDLER(ka);
- set_fs(USER_DS);
+ set_fs(USER_DS);
- regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS;
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
#if 0
- regs->eflags &= ~TF_MASK;
+ regs->eflags &= ~TF_MASK;
#endif
#if 0
- printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
+ printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
current->comm, current->pid, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
- return 1;
+ return 1;
give_sigsegv:
- if (sig == SIGSEGV)
- ka->sa.sa_handler = SIG_DFL;
- force_sig(SIGSEGV, current);
- return 0;
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+ return 0;
}
int
@@ -398,93 +446,79 @@
{
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
- return(setup_rt_frame_ia32(sig, ka, info, set, regs));
+ return setup_rt_frame_ia32(sig, ka, info, set, regs);
else
- return(setup_frame_ia32(sig, ka, set, regs));
+ return setup_frame_ia32(sig, ka, set, regs);
}
-asmlinkage int
-sys32_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
- struct pt_regs *regs = (struct pt_regs *) &stack;
- struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(regs->r12 - 8);
- sigset_t set;
- int eax;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
-
- if (__get_user(set.sig[0], &frame->sc.oldmask)
- || (_IA32_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
- sizeof(frame->extramask))))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
- spin_lock_irq(&current->sigmask_lock);
- current->blocked = (sigset_t) set;
- recalc_sigpending(current);
- spin_unlock_irq(&current->sigmask_lock);
-
- if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
- goto badframe;
- return eax;
-
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
-
-asmlinkage int
-sys32_rt_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
- struct pt_regs *regs = (struct pt_regs *) &stack;
- struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(regs->r12 - 4);
- sigset_t set;
- stack_t st;
- int eax;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
- if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
- spin_lock_irq(&current->sigmask_lock);
- current->blocked = set;
- recalc_sigpending(current);
- spin_unlock_irq(&current->sigmask_lock);
-
- if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
- goto badframe;
-
- if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
- goto badframe;
- /* It is more difficult to avoid calling this function than to
- call it and ignore errors. */
- do_sigaltstack(&st, NULL, regs->r12);
-
- return eax;
+asmlinkage long
+sys32_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long esp = (unsigned int) regs->r12;
+ struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(esp - 8);
+ sigset_t set;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_IA32_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = (sigset_t) set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
+ goto badframe;
+ return eax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
+asmlinkage long
+sys32_rt_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long esp = (unsigned int) regs->r12;
+ struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(esp - 4);
+ sigset_t set;
+ stack_t st;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
+ goto badframe;
+
+ if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+ goto badframe;
+ /* It is more difficult to avoid calling this function than to
+ call it and ignore errors. */
+ do_sigaltstack(&st, NULL, esp);
+
+ return eax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
diff -urN linux-davidm/arch/ia64/ia32/ia32_support.c linux-2.4.10-lia/arch/ia64/ia32/ia32_support.c
--- linux-davidm/arch/ia64/ia32/ia32_support.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/ia32_support.c Mon Sep 24 21:45:11 2001
@@ -4,15 +4,17 @@
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2000 Asit K. Mallick <asit.k.mallick@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 06/16/00 A. Mallick added csd/ssd/tssd for ia32 thread context
* 02/19/01 D. Mosberger dropped tssd; it's not needed
+ * 09/14/01 D. Mosberger fixed memory management for gdt/tss page
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>
+#include <linux/personality.h>
#include <linux/sched.h>
#include <asm/page.h>
@@ -21,10 +23,12 @@
#include <asm/processor.h>
#include <asm/ia32.h>
-extern unsigned long *ia32_gdt_table, *ia32_tss;
-
extern void die_if_kernel (char *str, struct pt_regs *regs, long err);
+struct exec_domain ia32_exec_domain;
+struct page *ia32_shared_page[(2*IA32_PAGE_SIZE + PAGE_SIZE - 1)/PAGE_SIZE];
+unsigned long *ia32_gdt;
+
void
ia32_save_state (struct task_struct *t)
{
@@ -85,36 +89,34 @@
void
ia32_gdt_init (void)
{
- unsigned long gdt_and_tss_page, ldt_size;
+ unsigned long *tss;
+ unsigned long ldt_size;
int nr;
- /* allocate two IA-32 pages of memory: */
- gdt_and_tss_page = __get_free_pages(GFP_KERNEL,
- (IA32_PAGE_SHIFT < PAGE_SHIFT)
- ? 0 : (IA32_PAGE_SHIFT + 1) - PAGE_SHIFT);
- ia32_gdt_table = (unsigned long *) gdt_and_tss_page;
- ia32_tss = (unsigned long *) (gdt_and_tss_page + IA32_PAGE_SIZE);
-
- /* Zero the gdt and tss */
- memset((void *) gdt_and_tss_page, 0, 2*IA32_PAGE_SIZE);
+ ia32_shared_page[0] = alloc_page(GFP_KERNEL);
+ ia32_gdt = page_address(ia32_shared_page[0]);
+ tss = ia32_gdt + IA32_PAGE_SIZE/sizeof(ia32_gdt[0]);
+
+ if (IA32_PAGE_SIZE == PAGE_SIZE) {
+ ia32_shared_page[1] = alloc_page(GFP_KERNEL);
+ tss = page_address(ia32_shared_page[1]);
+ }
/* CS descriptor in IA-32 (scrambled) format */
- ia32_gdt_table[__USER_CS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET - 1) >> IA32_PAGE_SHIFT,
- 0xb, 1, 3, 1, 1, 1, 1);
+ ia32_gdt[__USER_CS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) >> IA32_PAGE_SHIFT,
+ 0xb, 1, 3, 1, 1, 1, 1);
/* DS descriptor in IA-32 (scrambled) format */
- ia32_gdt_table[__USER_DS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET - 1) >> IA32_PAGE_SHIFT,
- 0x3, 1, 3, 1, 1, 1, 1);
+ ia32_gdt[__USER_DS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) >> IA32_PAGE_SHIFT,
+ 0x3, 1, 3, 1, 1, 1, 1);
/* We never change the TSS and LDT descriptors, so we can share them across all CPUs. */
ldt_size = PAGE_ALIGN(IA32_LDT_ENTRIES*IA32_LDT_ENTRY_SIZE);
for (nr = 0; nr < NR_CPUS; ++nr) {
- ia32_gdt_table[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235,
- 0xb, 0, 3, 1, 1, 1, 0);
- ia32_gdt_table[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1,
- 0x2, 0, 3, 1, 1, 1, 0);
+ ia32_gdt[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235,
+ 0xb, 0, 3, 1, 1, 1, 0);
+ ia32_gdt[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1,
+ 0x2, 0, 3, 1, 1, 1, 0);
}
}
@@ -133,3 +135,18 @@
siginfo.si_code = TRAP_BRKPT;
force_sig_info(SIGTRAP, &siginfo, current);
}
+
+static int __init
+ia32_init (void)
+{
+ ia32_exec_domain.name = "Linux/x86";
+ ia32_exec_domain.handler = NULL;
+ ia32_exec_domain.pers_low = PER_LINUX32;
+ ia32_exec_domain.pers_high = PER_LINUX32;
+ ia32_exec_domain.signal_map = default_exec_domain.signal_map;
+ ia32_exec_domain.signal_invmap = default_exec_domain.signal_invmap;
+ register_exec_domain(&ia32_exec_domain);
+ return 0;
+}
+
+__initcall(ia32_init);
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.4.10-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/ia32/sys_ia32.c Mon Sep 24 21:45:20 2001
@@ -1,14 +1,13 @@
/*
- * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on
- * sys_sparc32
+ * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Derived from sys_sparc32.c.
*
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
- * Copyright (C) 2000 Hewlett-Packard Co.
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* These routines maintain argument size conversion between 32bit and 64bit
* environment.
@@ -53,7 +52,6 @@
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
-#include <asm/ipc.h>
#include <net/scm.h>
#include <net/sock.h>
@@ -66,18 +64,23 @@
extern asmlinkage long sys_execve (char *, char **, char **, struct pt_regs *);
extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long);
+extern asmlinkage long sys_munmap (unsigned long, size_t);
+
+/* forward declaration: */
+asmlinkage long sys32_mprotect (unsigned int, unsigned int, int);
static int
nargs (unsigned int arg, char **ap)
{
- int n, err, addr;
+ unsigned int addr;
+ int n, err;
if (!arg)
return 0;
n = 0;
do {
- err = get_user(addr, (int *)A(arg));
+ err = get_user(addr, (unsigned int *)A(arg));
if (err)
return err;
if (ap)
@@ -144,30 +147,33 @@
}
static inline int
-putstat(struct stat32 *ubuf, struct stat *kbuf)
+putstat (struct stat32 *ubuf, struct stat *kbuf)
{
int err;
- err = put_user (kbuf->st_dev, &ubuf->st_dev);
- err |= __put_user (kbuf->st_ino, &ubuf->st_ino);
- err |= __put_user (kbuf->st_mode, &ubuf->st_mode);
- err |= __put_user (kbuf->st_nlink, &ubuf->st_nlink);
- err |= __put_user (kbuf->st_uid, &ubuf->st_uid);
- err |= __put_user (kbuf->st_gid, &ubuf->st_gid);
- err |= __put_user (kbuf->st_rdev, &ubuf->st_rdev);
- err |= __put_user (kbuf->st_size, &ubuf->st_size);
- err |= __put_user (kbuf->st_atime, &ubuf->st_atime);
- err |= __put_user (kbuf->st_mtime, &ubuf->st_mtime);
- err |= __put_user (kbuf->st_ctime, &ubuf->st_ctime);
- err |= __put_user (kbuf->st_blksize, &ubuf->st_blksize);
- err |= __put_user (kbuf->st_blocks, &ubuf->st_blocks);
+ if (clear_user(ubuf, sizeof(*ubuf)))
+ return 1;
+
+ err = __put_user(kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user(kbuf->st_ino, &ubuf->st_ino);
+ err |= __put_user(kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user(kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user(kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user(kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user(kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user(kbuf->st_size, &ubuf->st_size);
+ err |= __put_user(kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user(kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user(kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user(kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user(kbuf->st_blocks, &ubuf->st_blocks);
return err;
}
-extern asmlinkage long sys_newstat(char * filename, struct stat * statbuf);
+extern asmlinkage long sys_newstat (char * filename, struct stat * statbuf);
asmlinkage long
-sys32_newstat(char * filename, struct stat32 *statbuf)
+sys32_newstat (char *filename, struct stat32 *statbuf)
{
int ret;
struct stat s;
@@ -175,8 +181,8 @@
set_fs(KERNEL_DS);
ret = sys_newstat(filename, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
@@ -184,16 +190,16 @@
extern asmlinkage long sys_newlstat(char * filename, struct stat * statbuf);
asmlinkage long
-sys32_newlstat(char * filename, struct stat32 *statbuf)
+sys32_newlstat (char *filename, struct stat32 *statbuf)
{
- int ret;
- struct stat s;
mm_segment_t old_fs = get_fs();
+ struct stat s;
+ int ret;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newlstat(filename, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
@@ -201,112 +207,171 @@
extern asmlinkage long sys_newfstat(unsigned int fd, struct stat * statbuf);
asmlinkage long
-sys32_newfstat(unsigned int fd, struct stat32 *statbuf)
+sys32_newfstat (unsigned int fd, struct stat32 *statbuf)
{
- int ret;
- struct stat s;
mm_segment_t old_fs = get_fs();
+ struct stat s;
+ int ret;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newfstat(fd, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
#define OFFSET4K(a) ((a) & 0xfff)
-unsigned long
-do_mmap_fake(struct file *file, unsigned long addr, unsigned long len,
- unsigned long prot, unsigned long flags, loff_t off)
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+
+/*
+ * Determine whether address ADDR is readable. This must be done *without* actually
+ * touching memory because otherwise the stack auto-expansion may *make* the address
+ * readable, which is not at all what we want. --davidm 01/09/20
+ */
+static inline long
+is_readable (unsigned long addr)
+{
+ struct vm_area_struct *vma = find_vma(current->mm, addr);
+ return vma && (vma->vm_start <= addr) && (vma->vm_flags & VM_READ);
+}
+
+static unsigned long
+do_mmap_fake (struct file *file, unsigned long addr, unsigned long len, int prot, int flags,
+ loff_t off)
{
+ unsigned long faddr = (addr & PAGE_MASK), end, front_len, back_len, retval;
+ void *front = 0, *back = 0;
struct inode *inode;
- void *front, *back;
- unsigned long baddr;
- int r;
- char c;
- if (OFFSET4K(addr) || OFFSET4K(off))
- return -EINVAL;
+ /*
+ * Allow any kind of access: this lets us avoid having to figure out what the
+ * protection of the partial front and back pages is...
+ */
prot |= PROT_WRITE;
- front = NULL;
- back = NULL;
- if ((baddr = (addr & PAGE_MASK)) != addr && get_user(c, (char *)baddr) == 0) {
- front = kmalloc(addr - baddr, GFP_KERNEL);
- if (!front)
- return -ENOMEM;
- __copy_user(front, (void *)baddr, addr - baddr);
+
+ if (OFFSET4K(addr))
+ return -EINVAL;
+
+ end = addr + len;
+ front_len = addr - faddr;
+ back_len = (end & ~PAGE_MASK);
+
+ if (front_len && is_readable(faddr)) {
+ front = kmalloc(front_len, GFP_KERNEL);
+ if (!front) {
+ addr = -ENOMEM;
+ goto fail;
+ }
+ copy_from_user(front, (void *)faddr, front_len);
}
- if (addr && ((addr + len) & ~PAGE_MASK) && get_user(c, (char *)(addr + len)) == 0) {
- back = kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL);
+
+ if (addr && back_len && is_readable(end)) {
+ back = kmalloc(PAGE_SIZE - back_len, GFP_KERNEL);
if (!back) {
- if (front)
- kfree(front);
- return -ENOMEM;
+ addr = -ENOMEM;
+ goto fail;
}
- __copy_user(back, (char *)addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
+ copy_from_user(back, (char *)end, PAGE_SIZE - back_len);
}
+
down_write(&current->mm->mmap_sem);
- r = do_mmap(0, baddr, len + (addr - baddr), prot, flags | MAP_ANONYMOUS, 0);
+ {
+ retval = do_mmap(0, faddr, len + front_len, prot, flags | MAP_ANONYMOUS, 0);
+ }
up_write(&current->mm->mmap_sem);
- if (r < 0)
- return(r);
- if (addr == 0)
- addr = r;
+
+ if (IS_ERR((void *) retval)) {
+ addr = retval;
+ goto fail;
+ }
+
+ if (!addr)
+ addr = retval;
+
+ end = addr + len;
+
if (back) {
- __copy_user((char *)addr + len, back, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
+ if (copy_to_user((char *) end, back, PAGE_SIZE - back_len)) {
+ addr = -EINVAL;
+ goto fail;
+ }
kfree(back);
}
if (front) {
- __copy_user((void *)baddr, front, addr - baddr);
+ if (copy_to_user((void *) faddr, front, front_len)) {
+ addr = -EINVAL;
+ goto fail;
+ }
kfree(front);
}
- if (flags & MAP_ANONYMOUS) {
- clear_user((char *)addr, len);
- return(addr);
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ inode = file->f_dentry->d_inode;
+ if (!inode->i_fop
+ || !file->f_op->read
+ || (*file->f_op->read)(file, (char *)addr, len, &off) < 0)
+ {
+ sys_munmap(addr, len + front_len);
+ return -EINVAL;
+ }
}
- if (!file)
- return -EINVAL;
- inode = file->f_dentry->d_inode;
- if (!inode->i_fop)
- return -EINVAL;
- if (!file->f_op->read)
- return -EINVAL;
- r = file->f_op->read(file, (char *)addr, len, &off);
- return (r < 0) ? -EINVAL : addr;
+ return addr;
+
+ fail: if (front)
+ kfree(front);
+ if (back)
+ kfree(back);
+ return addr;
}
-long
-ia32_do_mmap (struct file *file, unsigned int addr, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset)
+#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */
+
+unsigned long
+ia32_do_mmap (struct file *file, unsigned long addr, unsigned long len, int prot, int flags,
+ loff_t offset)
{
- long error = -EFAULT;
- unsigned int poff;
+ if (file && (!file->f_op || !file->f_op->mmap))
+ return -ENODEV;
- flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
- prot |= PROT_EXEC;
+ len = IA32_PAGE_ALIGN(len);
+ if (len == 0)
+ return addr;
+
+ if (len > IA32_PAGE_OFFSET || addr > IA32_PAGE_OFFSET - len)
+ return -EINVAL;
+
+ if (OFFSET4K(offset))
+ return -EINVAL;
+
+ if (prot & (PROT_READ | PROT_WRITE))
+ prot |= PROT_EXEC; /* x86 has no "execute" permission bit... */
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
if ((flags & MAP_FIXED) && ((addr & ~PAGE_MASK) || (offset & ~PAGE_MASK)))
- error = do_mmap_fake(file, addr, len, prot, flags, (loff_t)offset);
- else {
- poff = offset & PAGE_MASK;
- len += offset - poff;
+ addr = do_mmap_fake(file, addr, len, prot, flags, offset);
+ else
+#endif
+ {
+ loff_t pgoff = offset & PAGE_MASK;
+ len += offset - pgoff;
down_write(&current->mm->mmap_sem);
- error = do_mmap_pgoff(file, addr, len, prot, flags, poff >> PAGE_SHIFT);
+ {
+ addr = do_mmap(file, addr, len, prot, flags, pgoff);
+ }
up_write(&current->mm->mmap_sem);
- if (!IS_ERR((void *) error))
- error += offset - poff;
+ if (!IS_ERR((void *) addr))
+ addr += offset - pgoff;
}
- return error;
+ return addr;
}
/*
- * Linux/i386 didn't use to be able to handle more than
- * 4 system call parameters, so these system calls used a memory
- * block for parameter passing..
+ * Linux/i386 didn't use to be able to handle more than 4 system call parameters, so these
+ * system calls used a memory block for parameter passing..
*/
struct mmap_arg_struct {
@@ -319,56 +384,147 @@
};
asmlinkage long
-sys32_mmap(struct mmap_arg_struct *arg)
+sys32_mmap (struct mmap_arg_struct *arg)
{
struct mmap_arg_struct a;
struct file *file = NULL;
- long retval;
+ unsigned long addr;
+ int flags;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
- if (PAGE_ALIGN(a.len) == 0)
- return a.addr;
+ if (OFFSET4K(a.offset))
+ return -EINVAL;
+
+ flags = a.flags;
- if (!(a.flags & MAP_ANONYMOUS)) {
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
file = fget(a.fd);
if (!file)
return -EBADF;
}
-#ifdef CONFIG_IA64_PAGE_SIZE_4KB
- if ((a.offset & ~PAGE_MASK) != 0)
- return -EINVAL;
- down_write(&current->mm->mmap_sem);
- retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, a.offset >> PAGE_SHIFT);
- up_write(&current->mm->mmap_sem);
-#else
- retval = ia32_do_mmap(file, a.addr, a.len, a.prot, a.flags, a.fd, a.offset);
-#endif
+ addr = ia32_do_mmap(file, a.addr, a.len, a.prot, flags, a.offset);
+
+ if (file)
+ fput(file);
+ return addr;
+}
+
+asmlinkage long
+sys32_mmap2 (unsigned int addr, unsigned int len, unsigned int prot, unsigned int flags,
+ unsigned int fd, unsigned int pgoff)
+{
+ struct file *file = NULL;
+ unsigned long retval;
+
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ return -EBADF;
+ }
+
+ retval = ia32_do_mmap(file, addr, len, prot, flags,
+ (unsigned long) pgoff << IA32_PAGE_SHIFT);
+
if (file)
fput(file);
return retval;
}
asmlinkage long
-sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
+sys32_munmap (unsigned int start, unsigned int len)
+{
+ unsigned int end = start + len;
+
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ if (start > end)
+ return -EINVAL;
+
+ start = PAGE_ALIGN(start);
+ end = end & PAGE_MASK;
+
+ if (start >= end)
+ return 0;
+#endif
+ return sys_munmap(start, end - start);
+}
+
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+
+/*
+ * When mprotect()ing a partial page, we set the permission to the union of the old
+ * settings and the new settings. In other words, it's only possible to make access to a
+ * partial page less restrictive.
+ */
+static long
+mprotect_partial_page (unsigned long address, int new_prot)
+{
+ int old_prot;
+
+ if (new_prot == PROT_NONE)
+ return 0; /* optimize case where nothing changes... */
+
+ /*
+ * We cannot easily determine the existing protection on this page because we have
+ * to relinquish the mmap-semaphore before calling sys_mprotect(), which would
+ * create a window during which another task could change the protection settings
+ * underneath us. To avoid this, we conservatively assume that the page is
+ * RWXable.
+ */
+ old_prot = PROT_READ | PROT_WRITE | PROT_EXEC;
+ return sys_mprotect(address, PAGE_SIZE, new_prot | old_prot);
+}
+
+#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */
+
+asmlinkage long
+sys32_mprotect (unsigned int start, unsigned int len, int prot)
{
+ unsigned long end = start + len;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ long retval;
+#endif
-#ifdef CONFIG_IA64_PAGE_SIZE_4KB
- return(sys_mprotect(start, len, prot));
-#else // CONFIG_IA64_PAGE_SIZE_4KB
- if (prot == 0)
- return(0);
- len += start & ~PAGE_MASK;
- if ((start & ~PAGE_MASK) && (prot & PROT_WRITE))
+ if (prot & (PROT_READ | PROT_WRITE))
+ /* on x86, PROT_WRITE implies PROT_READ and PROT_READ implies PROT_EXEC... */
prot |= PROT_EXEC;
- return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
-#endif // CONFIG_IA64_PAGE_SIZE_4KB
+
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ if (OFFSET4K(start))
+ return -EINVAL;
+
+ end = IA32_PAGE_ALIGN(end);
+ if (end < start)
+ return -EINVAL;
+
+ if (start & ~PAGE_MASK) {
+ /* start address is 4KB aligned but not page aligned. */
+ retval = mprotect_partial_page(start & PAGE_MASK, prot);
+ if (retval < 0)
+ return retval;
+
+ start = PAGE_ALIGN(start);
+ if (start >= end)
+ return 0;
+ }
+
+ if (end & ~PAGE_MASK) {
+ /* end address is 4KB aligned but not page aligned. */
+ retval = mprotect_partial_page(end & PAGE_MASK, prot);
+ if (retval < 0)
+ return retval;
+ end &= PAGE_MASK;
+ }
+#endif
+ return sys_mprotect(start, end - start, prot);
}
asmlinkage long
-sys32_pipe(int *fd)
+sys32_pipe (int *fd)
{
int retval;
int fds[2];
@@ -382,119 +538,154 @@
return retval;
}
+static inline void
+sigact_set_handler (struct k_sigaction *sa, unsigned int handler, unsigned int restorer)
+{
+ if (handler + 1 <= 2)
+ /* SIG_DFL, SIG_IGN, or SIG_ERR: must sign-extend to 64-bits */
+ sa->sa.sa_handler = (__sighandler_t) A((int) handler);
+ else
+ sa->sa.sa_handler = (__sighandler_t) (((unsigned long) restorer << 32) | handler);
+}
+
asmlinkage long
sys32_signal (int sig, unsigned int handler)
{
struct k_sigaction new_sa, old_sa;
int ret;
- new_sa.sa.sa_handler = (__sighandler_t) A(handler);
+ sigact_set_handler(&new_sa, handler, 0);
new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
ret = do_sigaction(sig, &new_sa, &old_sa);
- return ret ? ret : (unsigned long)old_sa.sa.sa_handler;
+ return ret ? ret : IA32_SA_HANDLER(&old_sa);
}
asmlinkage long
-sys32_rt_sigaction(int sig, struct sigaction32 *act,
- struct sigaction32 *oact, unsigned int sigsetsize)
+sys32_rt_sigaction (int sig, struct sigaction32 *act,
+ struct sigaction32 *oact, unsigned int sigsetsize)
{
struct k_sigaction new_ka, old_ka;
+ unsigned int handler, restorer;
int ret;
- sigset32_t set32;
/* XXX: Don't preclude handling different sized sigset_t's. */
if (sigsetsize != sizeof(sigset32_t))
return -EINVAL;
if (act) {
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __copy_from_user(&set32, &act->sa_mask,
- sizeof(sigset32_t));
- switch (_NSIG_WORDS) {
- case 4: new_ka.sa.sa_mask.sig[3] = set32.sig[6]
- | (((long)set32.sig[7]) << 32);
- case 3: new_ka.sa.sa_mask.sig[2] = set32.sig[4]
- | (((long)set32.sig[5]) << 32);
- case 2: new_ka.sa.sa_mask.sig[1] = set32.sig[2]
- | (((long)set32.sig[3]) << 32);
- case 1: new_ka.sa.sa_mask.sig[0] = set32.sig[0]
- | (((long)set32.sig[1]) << 32);
- }
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
-
+ ret = get_user(handler, &act->sa_handler);
+ ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= get_user(restorer, &act->sa_restorer);
+ ret |= copy_from_user(&new_ka.sa.sa_mask, &act->sa_mask, sizeof(sigset32_t));
if (ret)
return -EFAULT;
+
+ sigact_set_handler(&new_ka, handler, restorer);
}
ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
if (!ret && oact) {
- switch (_NSIG_WORDS) {
- case 4:
- set32.sig[7] = (old_ka.sa.sa_mask.sig[3] >> 32);
- set32.sig[6] = old_ka.sa.sa_mask.sig[3];
- case 3:
- set32.sig[5] = (old_ka.sa.sa_mask.sig[2] >> 32);
- set32.sig[4] = old_ka.sa.sa_mask.sig[2];
- case 2:
- set32.sig[3] = (old_ka.sa.sa_mask.sig[1] >> 32);
- set32.sig[2] = old_ka.sa.sa_mask.sig[1];
- case 1:
- set32.sig[1] = (old_ka.sa.sa_mask.sig[0] >> 32);
- set32.sig[0] = old_ka.sa.sa_mask.sig[0];
- }
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __copy_to_user(&oact->sa_mask, &set32,
- sizeof(sigset32_t));
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+ ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+ ret |= copy_to_user(&oact->sa_mask, &old_ka.sa.sa_mask, sizeof(sigset32_t));
}
-
return ret;
}
-extern asmlinkage long sys_rt_sigprocmask(int how, sigset_t *set, sigset_t *oset,
- size_t sigsetsize);
+extern asmlinkage long sys_rt_sigprocmask (int how, sigset_t *set, sigset_t *oset,
+ size_t sigsetsize);
asmlinkage long
-sys32_rt_sigprocmask(int how, sigset32_t *set, sigset32_t *oset,
- unsigned int sigsetsize)
+sys32_rt_sigprocmask (int how, sigset32_t *set, sigset32_t *oset, unsigned int sigsetsize)
{
- sigset_t s;
- sigset32_t s32;
- int ret;
mm_segment_t old_fs = get_fs();
+ sigset_t s;
+ long ret;
+
+ if (sigsetsize > sizeof(s))
+ return -EINVAL;
if (set) {
- if (copy_from_user (&s32, set, sizeof(sigset32_t)))
+ memset(&s, 0, sizeof(s));
+ if (copy_from_user(&s.sig, set, sigsetsize))
return -EFAULT;
- switch (_NSIG_WORDS) {
- case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
- case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
- case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
- case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
- }
}
- set_fs (KERNEL_DS);
- ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL,
- sigsetsize);
- set_fs (old_fs);
- if (ret) return ret;
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL, sizeof(s));
+ set_fs(old_fs);
+ if (ret)
+ return ret;
if (oset) {
- switch (_NSIG_WORDS) {
- case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
- case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
- case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
- case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
- }
- if (copy_to_user (oset, &s32, sizeof(sigset32_t)))
+ if (copy_to_user(oset, &s.sig, sigsetsize))
return -EFAULT;
}
return 0;
}
+asmlinkage long
+sys32_sigprocmask (int how, unsigned int *set, unsigned int *oset)
+{
+ return sys32_rt_sigprocmask(how, (sigset32_t *) set, (sigset32_t *) oset, sizeof(*set));
+}
+
+struct timespec32 {
+ int tv_sec;
+ int tv_nsec;
+};
+
+asmlinkage long
+sys32_rt_sigtimedwait (sigset32_t *uthese, siginfo_t32 *uinfo, struct timespec32 *uts,
+ unsigned int sigsetsize)
+{
+ extern asmlinkage long sys_rt_sigtimedwait (const sigset_t *, siginfo_t *,
+ const struct timespec *, size_t);
+ extern int copy_siginfo_to_user32 (siginfo_t32 *, siginfo_t *);
+ mm_segment_t old_fs = get_fs();
+ struct timespec t;
+ siginfo_t info;
+ sigset_t s;
+ int ret;
+
+ if (copy_from_user(&s.sig, uthese, sizeof(sigset32_t)))
+ return -EFAULT;
+ if (uts) {
+ ret = get_user(t.tv_sec, &uts->tv_sec);
+ ret |= get_user(t.tv_nsec, &uts->tv_nsec);
+ if (ret)
+ return -EFAULT;
+ }
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
+ set_fs(old_fs);
+ if (ret >= 0 && uinfo) {
+ if (copy_siginfo_to_user32(uinfo, &info))
+ return -EFAULT;
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_rt_sigqueueinfo (int pid, int sig, siginfo_t32 *uinfo)
+{
+ extern asmlinkage long sys_rt_sigqueueinfo (int, int, siginfo_t *);
+ extern int copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from);
+ mm_segment_t old_fs = get_fs();
+ siginfo_t info;
+ int ret;
+
+ if (copy_siginfo_from_user32(&info, uinfo))
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigqueueinfo(pid, sig, &info);
+ set_fs(old_fs);
+ return ret;
+}
+
static inline int
put_statfs (struct statfs32 *ubuf, struct statfs *kbuf)
{
@@ -516,15 +707,15 @@
extern asmlinkage long sys_statfs(const char * path, struct statfs * buf);
asmlinkage long
-sys32_statfs(const char * path, struct statfs32 *buf)
+sys32_statfs (const char *path, struct statfs32 *buf)
{
int ret;
struct statfs s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
- ret = sys_statfs((const char *)path, &s);
- set_fs (old_fs);
+ set_fs(KERNEL_DS);
+ ret = sys_statfs(path, &s);
+ set_fs(old_fs);
if (put_statfs(buf, &s))
return -EFAULT;
return ret;
@@ -533,15 +724,15 @@
extern asmlinkage long sys_fstatfs(unsigned int fd, struct statfs * buf);
asmlinkage long
-sys32_fstatfs(unsigned int fd, struct statfs32 *buf)
+sys32_fstatfs (unsigned int fd, struct statfs32 *buf)
{
int ret;
struct statfs s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_fstatfs(fd, &s);
- set_fs (old_fs);
+ set_fs(old_fs);
if (put_statfs(buf, &s))
return -EFAULT;
return ret;
@@ -559,7 +750,7 @@
};
static inline long
-get_tv32(struct timeval *o, struct timeval32 *i)
+get_tv32 (struct timeval *o, struct timeval32 *i)
{
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
(__get_user(o->tv_sec, &i->tv_sec) |
@@ -567,7 +758,7 @@
}
static inline long
-put_tv32(struct timeval32 *o, struct timeval *i)
+put_tv32 (struct timeval32 *o, struct timeval *i)
{
return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
(__put_user(i->tv_sec, &o->tv_sec) |
@@ -575,7 +766,7 @@
}
static inline long
-get_it32(struct itimerval *o, struct itimerval32 *i)
+get_it32 (struct itimerval *o, struct itimerval32 *i)
{
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
(__get_user(o->it_interval.tv_sec, &i->it_interval.tv_sec) |
@@ -585,7 +776,7 @@
}
static inline long
-put_it32(struct itimerval32 *o, struct itimerval *i)
+put_it32 (struct itimerval32 *o, struct itimerval *i)
{
return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
(__put_user(i->it_interval.tv_sec, &o->it_interval.tv_sec) |
@@ -594,10 +785,10 @@
__put_user(i->it_value.tv_usec, &o->it_value.tv_usec)));
}
-extern int do_getitimer(int which, struct itimerval *value);
+extern int do_getitimer (int which, struct itimerval *value);
asmlinkage long
-sys32_getitimer(int which, struct itimerval32 *it)
+sys32_getitimer (int which, struct itimerval32 *it)
{
struct itimerval kit;
int error;
@@ -609,10 +800,10 @@
return error;
}
-extern int do_setitimer(int which, struct itimerval *, struct itimerval *);
+extern int do_setitimer (int which, struct itimerval *, struct itimerval *);
asmlinkage long
-sys32_setitimer(int which, struct itimerval32 *in, struct itimerval32 *out)
+sys32_setitimer (int which, struct itimerval32 *in, struct itimerval32 *out)
{
struct itimerval kin, kout;
int error;
@@ -632,8 +823,9 @@
return 0;
}
+
asmlinkage unsigned long
-sys32_alarm(unsigned int seconds)
+sys32_alarm (unsigned int seconds)
{
struct itimerval it_new, it_old;
unsigned int oldalarm;
@@ -662,7 +854,7 @@
extern asmlinkage long sys_gettimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
-ia32_utime(char * filename, struct utimbuf_32 *times32)
+sys32_utime (char *filename, struct utimbuf_32 *times32)
{
mm_segment_t old_fs = get_fs();
struct timeval tv[2], *tvp;
@@ -675,20 +867,20 @@
if (get_user(tv[1].tv_sec, &times32->mtime))
return -EFAULT;
tv[1].tv_usec = 0;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
tvp = tv;
} else
tvp = NULL;
ret = sys_utimes(filename, tvp);
- set_fs (old_fs);
+ set_fs(old_fs);
return ret;
}
extern struct timezone sys_tz;
-extern int do_sys_settimeofday(struct timeval *tv, struct timezone *tz);
+extern int do_sys_settimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
-sys32_gettimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_gettimeofday (struct timeval32 *tv, struct timezone *tz)
{
if (tv) {
struct timeval ktv;
@@ -704,7 +896,7 @@
}
asmlinkage long
-sys32_settimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_settimeofday (struct timeval32 *tv, struct timezone *tz)
{
struct timeval ktv;
struct timezone ktz;
@@ -777,7 +969,7 @@
}
asmlinkage long
-sys32_getdents (unsigned int fd, void * dirent, unsigned int count)
+sys32_getdents (unsigned int fd, struct linux32_dirent *dirent, unsigned int count)
{
struct file * file;
struct linux32_dirent * lastdirent;
@@ -789,7 +981,7 @@
if (!file)
goto out;
- buf.current_dir = (struct linux32_dirent *) dirent;
+ buf.current_dir = dirent;
buf.previous = NULL;
buf.count = count;
buf.error = 0;
@@ -833,7 +1025,7 @@
}
asmlinkage long
-sys32_readdir (unsigned int fd, void * dirent, unsigned int count)
+sys32_readdir (unsigned int fd, void *dirent, unsigned int count)
{
int error;
struct file * file;
@@ -868,7 +1060,7 @@
#define ROUND_UP_TIME(x,y) (((x)+(y)-1)/(y))
asmlinkage long
-sys32_select(int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
+sys32_select (int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
{
fd_set_bits fds;
char *bits;
@@ -880,8 +1072,7 @@
time_t sec, usec;
ret = -EFAULT;
- if (get_user(sec, &tvp32->tv_sec)
- || get_user(usec, &tvp32->tv_usec))
+ if (get_user(sec, &tvp32->tv_sec) || get_user(usec, &tvp32->tv_usec))
goto out_nofds;
ret = -EINVAL;
@@ -935,9 +1126,7 @@
usec = timeout % HZ;
usec *= (1000000/HZ);
}
- if (put_user(sec, (int *)&tvp32->tv_sec)
- || put_user(usec, (int *)&tvp32->tv_usec))
- {
+ if (put_user(sec, &tvp32->tv_sec) || put_user(usec, &tvp32->tv_usec)) {
ret = -EFAULT;
goto out;
}
@@ -971,50 +1160,43 @@
};
asmlinkage long
-old_select(struct sel_arg_struct *arg)
+sys32_old_select (struct sel_arg_struct *arg)
{
struct sel_arg_struct a;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
- return sys32_select(a.n, (fd_set *)A(a.inp), (fd_set *)A(a.outp), (fd_set *)A(a.exp),
- (struct timeval32 *)A(a.tvp));
+ return sys32_select(a.n, (fd_set *) A(a.inp), (fd_set *) A(a.outp), (fd_set *) A(a.exp),
+ (struct timeval32 *) A(a.tvp));
}
-struct timespec32 {
- int tv_sec;
- int tv_nsec;
-};
-
-extern asmlinkage long sys_nanosleep(struct timespec *rqtp, struct timespec *rmtp);
+extern asmlinkage long sys_nanosleep (struct timespec *rqtp, struct timespec *rmtp);
asmlinkage long
-sys32_nanosleep(struct timespec32 *rqtp, struct timespec32 *rmtp)
+sys32_nanosleep (struct timespec32 *rqtp, struct timespec32 *rmtp)
{
struct timespec t;
int ret;
mm_segment_t old_fs = get_fs ();
- if (get_user (t.tv_sec, &rqtp->tv_sec) ||
- __get_user (t.tv_nsec, &rqtp->tv_nsec))
+ if (get_user (t.tv_sec, &rqtp->tv_sec) || get_user (t.tv_nsec, &rqtp->tv_nsec))
return -EFAULT;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_nanosleep(&t, rmtp ? &t : NULL);
- set_fs (old_fs);
+ set_fs(old_fs);
if (rmtp && ret == -EINTR) {
- if (__put_user (t.tv_sec, &rmtp->tv_sec) ||
- __put_user (t.tv_nsec, &rmtp->tv_nsec))
+ if (put_user(t.tv_sec, &rmtp->tv_sec) || put_user(t.tv_nsec, &rmtp->tv_nsec))
return -EFAULT;
}
return ret;
}
struct iovec32 { unsigned int iov_base; int iov_len; };
-asmlinkage ssize_t sys_readv(unsigned long,const struct iovec *,unsigned long);
-asmlinkage ssize_t sys_writev(unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_readv (unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_writev (unsigned long,const struct iovec *,unsigned long);
static struct iovec *
-get_iovec32(struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
+get_iovec32 (struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
{
int i;
u32 buf, len;
@@ -1024,24 +1206,23 @@
if (!count)
return 0;
- if(verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
- return(struct iovec *)0;
+ if (verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
+ return NULL;
if (count > UIO_MAXIOV)
- return(struct iovec *)0;
+ return NULL;
if (count > UIO_FASTIOV) {
iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL);
if (!iov)
- return((struct iovec *)0);
+ return NULL;
} else
iov = iov_buf;
ivp = iov;
for (i = 0; i < count; i++) {
- if (__get_user(len, &iov32->iov_len) ||
- __get_user(buf, &iov32->iov_base)) {
+ if (__get_user(len, &iov32->iov_len) || __get_user(buf, &iov32->iov_base)) {
if (iov != iov_buf)
kfree(iov);
- return((struct iovec *)0);
+ return NULL;
}
if (verify_area(type, (void *)A(buf), len)) {
if (iov != iov_buf)
@@ -1049,22 +1230,23 @@
return((struct iovec *)0);
}
ivp->iov_base = (void *)A(buf);
- ivp->iov_len = (__kernel_size_t)len;
+ ivp->iov_len = (__kernel_size_t) len;
iov32++;
ivp++;
}
- return(iov);
+ return iov;
}
asmlinkage long
-sys32_readv(int fd, struct iovec32 *vector, u32 count)
+sys32_readv (int fd, struct iovec32 *vector, u32 count)
{
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
- int ret;
+ long ret;
mm_segment_t old_fs = get_fs();
- if ((iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE)) == (struct iovec *)0)
+ iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE);
+ if (!iov)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_readv(fd, iov, count);
@@ -1075,14 +1257,15 @@
}
asmlinkage long
-sys32_writev(int fd, struct iovec32 *vector, u32 count)
+sys32_writev (int fd, struct iovec32 *vector, u32 count)
{
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
- int ret;
+ long ret;
mm_segment_t old_fs = get_fs();
- if ((iov = get_iovec32(vector, iovstack, count, VERIFY_READ)) == (struct iovec *)0)
+ iov = get_iovec32(vector, iovstack, count, VERIFY_READ);
+ if (!iov)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_writev(fd, iov, count);
@@ -1100,45 +1283,66 @@
int rlim_max;
};
-extern asmlinkage long sys_getrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_getrlimit (unsigned int resource, struct rlimit *rlim);
asmlinkage long
-sys32_getrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_old_getrlimit (unsigned int resource, struct rlimit32 *rlim)
{
+ mm_segment_t old_fs = get_fs();
struct rlimit r;
int ret;
- mm_segment_t old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
+ ret = sys_getrlimit(resource, &r);
+ set_fs(old_fs);
+ if (!ret) {
+ ret = put_user(RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
+ ret |= put_user(RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_getrlimit (unsigned int resource, struct rlimit32 *rlim)
+{
+ mm_segment_t old_fs = get_fs();
+ struct rlimit r;
+ int ret;
+
+ set_fs(KERNEL_DS);
ret = sys_getrlimit(resource, &r);
- set_fs (old_fs);
+ set_fs(old_fs);
if (!ret) {
- ret = put_user (RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
- ret |= __put_user (RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ if (r.rlim_cur >= 0xffffffff)
+ r.rlim_cur = 0xffffffff;
+ if (r.rlim_max >= 0xffffffff)
+ r.rlim_max = 0xffffffff;
+ ret = put_user(r.rlim_cur, &rlim->rlim_cur);
+ ret |= put_user(r.rlim_max, &rlim->rlim_max);
}
return ret;
}
-extern asmlinkage long sys_setrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_setrlimit (unsigned int resource, struct rlimit *rlim);
asmlinkage long
-sys32_setrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_setrlimit (unsigned int resource, struct rlimit32 *rlim)
{
struct rlimit r;
int ret;
- mm_segment_t old_fs = get_fs ();
+ mm_segment_t old_fs = get_fs();
- if (resource >= RLIM_NLIMITS) return -EINVAL;
- if (get_user (r.rlim_cur, &rlim->rlim_cur) ||
- __get_user (r.rlim_max, &rlim->rlim_max))
+ if (resource >= RLIM_NLIMITS)
+ return -EINVAL;
+ if (get_user(r.rlim_cur, &rlim->rlim_cur) || get_user(r.rlim_max, &rlim->rlim_max))
return -EFAULT;
if (r.rlim_cur == RLIM_INFINITY32)
r.rlim_cur = RLIM_INFINITY;
if (r.rlim_max == RLIM_INFINITY32)
r.rlim_max = RLIM_INFINITY;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_setrlimit(resource, &r);
- set_fs (old_fs);
+ set_fs(old_fs);
return ret;
}
@@ -1157,7 +1361,7 @@
};
static inline int
-shape_msg(struct msghdr *mp, struct msghdr32 *mp32)
+shape_msg (struct msghdr *mp, struct msghdr32 *mp32)
{
int ret;
unsigned int i;
@@ -1189,7 +1393,7 @@
*/
static inline int
-verify_iovec32(struct msghdr *m, struct iovec *iov, char *address, int mode)
+verify_iovec32 (struct msghdr *m, struct iovec *iov, char *address, int mode)
{
int size, err, ct;
struct iovec32 *iov32;
@@ -1224,8 +1428,8 @@
return err;
}
-extern __inline__ void
-sockfd_put(struct socket *sock)
+static inline void
+sockfd_put (struct socket *sock)
{
fput(sock->file);
}
@@ -1236,13 +1440,14 @@
24 for IPv6,
about 80 for AX.25 */
-extern struct socket *sockfd_lookup(int fd, int *err);
+extern struct socket *sockfd_lookup (int fd, int *err);
/*
* BSD sendmsg interface
*/
-int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
+int
+sys32_sendmsg (int fd, struct msghdr32 *msg, unsigned flags)
{
struct socket *sock;
char address[MAX_SOCK_ADDR];
@@ -1406,9 +1611,9 @@
/* Argument list sizes for sys_socketcall */
#define AL(x) ((x) * sizeof(u32))
-static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
- AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
- AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
+static const unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
+ AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
+ AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
#undef AL
extern asmlinkage long sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
@@ -1437,7 +1642,8 @@
extern asmlinkage long sys_shutdown(int fd, int how);
extern asmlinkage long sys_listen(int fd, int backlog);
-asmlinkage long sys32_socketcall(int call, u32 *args)
+asmlinkage long
+sys32_socketcall (int call, u32 *args)
{
int ret;
u32 a[6];
@@ -1465,16 +1671,13 @@
ret = sys_listen(a0, a1);
break;
case SYS_ACCEPT:
- ret = sys_accept(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_accept(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_GETSOCKNAME:
- ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_getsockname(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_GETPEERNAME:
- ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_getpeername(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_SOCKETPAIR:
ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
@@ -1502,12 +1705,10 @@
ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
break;
case SYS_SENDMSG:
- ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
- a[2]);
+ ret = sys32_sendmsg(a0, (struct msghdr32 *) A(a1), a[2]);
break;
case SYS_RECVMSG:
- ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
- a[2]);
+ ret = sys32_recvmsg(a0, (struct msghdr32 *) A(a1), a[2]);
break;
default:
ret = -EINVAL;
@@ -1574,10 +1775,27 @@
unsigned short shm_nattch;
};
+struct ipc_kludge {
+ struct msgbuf *msgp;
+ long msgtyp;
+};
+
+#define SEMOP 1
+#define SEMGET 2
+#define SEMCTL 3
+#define MSGSND 11
+#define MSGRCV 12
+#define MSGGET 13
+#define MSGCTL 14
+#define SHMAT 21
+#define SHMDT 22
+#define SHMGET 23
+#define SHMCTL 24
+
#define IPCOP_MASK(__x) (1UL << (__x))
static int
-do_sys32_semctl(int first, int second, int third, void *uptr)
+do_sys32_semctl (int first, int second, int third, void *uptr)
{
union semun fourth;
u32 pad;
@@ -1590,7 +1808,7 @@
return -EINVAL;
if (get_user(pad, (u32 *)uptr))
return -EFAULT;
- if(third == SETVAL)
+ if (third == SETVAL)
fourth.val = (int)pad;
else
fourth.__pad = (void *)A(pad);
@@ -1607,44 +1825,38 @@
case GETALL:
case SETVAL:
case SETALL:
- err = sys_semctl (first, second, third, fourth);
+ err = sys_semctl(first, second, third, fourth);
break;
case IPC_STAT:
case SEM_STAT:
usp = (struct semid_ds32 *)A(pad);
fourth.__pad = &s;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_semctl (first, second, third, fourth);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_semctl(first, second, third, fourth);
+ set_fs(old_fs);
err2 = put_user(s.sem_perm.key, &usp->sem_perm.key);
- err2 |= __put_user(s.sem_perm.uid, &usp->sem_perm.uid);
- err2 |= __put_user(s.sem_perm.gid, &usp->sem_perm.gid);
- err2 |= __put_user(s.sem_perm.cuid,
- &usp->sem_perm.cuid);
- err2 |= __put_user (s.sem_perm.cgid,
- &usp->sem_perm.cgid);
- err2 |= __put_user (s.sem_perm.mode,
- &usp->sem_perm.mode);
- err2 |= __put_user (s.sem_perm.seq, &usp->sem_perm.seq);
- err2 |= __put_user (s.sem_otime, &usp->sem_otime);
- err2 |= __put_user (s.sem_ctime, &usp->sem_ctime);
- err2 |= __put_user (s.sem_nsems, &usp->sem_nsems);
+ err2 |= put_user(s.sem_perm.uid, &usp->sem_perm.uid);
+ err2 |= put_user(s.sem_perm.gid, &usp->sem_perm.gid);
+ err2 |= put_user(s.sem_perm.cuid, &usp->sem_perm.cuid);
+ err2 |= put_user(s.sem_perm.cgid, &usp->sem_perm.cgid);
+ err2 |= put_user(s.sem_perm.mode, &usp->sem_perm.mode);
+ err2 |= put_user(s.sem_perm.seq, &usp->sem_perm.seq);
+ err2 |= put_user(s.sem_otime, &usp->sem_otime);
+ err2 |= put_user(s.sem_ctime, &usp->sem_ctime);
+ err2 |= put_user(s.sem_nsems, &usp->sem_nsems);
if (err2)
err = -EFAULT;
break;
-
}
-
return err;
}
static int
do_sys32_msgsnd (int first, int second, int third, void *uptr)
{
- struct msgbuf *p = kmalloc (second + sizeof (struct msgbuf)
- + 4, GFP_USER);
+ struct msgbuf *p = kmalloc (second + sizeof(struct msgbuf) + 4, GFP_USER);
struct msgbuf32 *up = (struct msgbuf32 *)uptr;
mm_segment_t old_fs;
int err;
@@ -1656,17 +1868,16 @@
if (err)
goto out;
old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
err = sys_msgsnd (first, p, second, third);
- set_fs (old_fs);
+ set_fs(old_fs);
out:
kfree (p);
return err;
}
static int
-do_sys32_msgrcv (int first, int second, int msgtyp, int third,
- int version, void *uptr)
+do_sys32_msgrcv (int first, int second, int msgtyp, int third, int version, void *uptr)
{
struct msgbuf32 *up;
struct msgbuf *p;
@@ -1691,9 +1902,9 @@
if (!p)
goto out;
old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
err = sys_msgrcv (first, p, second + 4, msgtyp, third);
- set_fs (old_fs);
+ set_fs(old_fs);
if (err < 0)
goto free_then_out;
up = (struct msgbuf32 *)uptr;
@@ -1720,49 +1931,47 @@
case IPC_INFO:
case IPC_RMID:
case MSG_INFO:
- err = sys_msgctl (first, second, (struct msqid_ds *)uptr);
+ err = sys_msgctl(first, second, (struct msqid_ds *)uptr);
break;
case IPC_SET:
- err = get_user (m.msg_perm.uid, &up->msg_perm.uid);
- err |= __get_user (m.msg_perm.gid, &up->msg_perm.gid);
- err |= __get_user (m.msg_perm.mode, &up->msg_perm.mode);
- err |= __get_user (m.msg_qbytes, &up->msg_qbytes);
+ err = get_user(m.msg_perm.uid, &up->msg_perm.uid);
+ err |= get_user(m.msg_perm.gid, &up->msg_perm.gid);
+ err |= get_user(m.msg_perm.mode, &up->msg_perm.mode);
+ err |= get_user(m.msg_qbytes, &up->msg_qbytes);
if (err)
break;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgctl (first, second, &m);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgctl(first, second, &m);
+ set_fs(old_fs);
break;
case IPC_STAT:
case MSG_STAT:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgctl (first, second, (void *) &m64);
- set_fs (old_fs);
- err2 = put_user (m64.msg_perm.key, &up->msg_perm.key);
- err2 |= __put_user(m64.msg_perm.uid, &up->msg_perm.uid);
- err2 |= __put_user(m64.msg_perm.gid, &up->msg_perm.gid);
- err2 |= __put_user(m64.msg_perm.cuid, &up->msg_perm.cuid);
- err2 |= __put_user(m64.msg_perm.cgid, &up->msg_perm.cgid);
- err2 |= __put_user(m64.msg_perm.mode, &up->msg_perm.mode);
- err2 |= __put_user(m64.msg_perm.seq, &up->msg_perm.seq);
- err2 |= __put_user(m64.msg_stime, &up->msg_stime);
- err2 |= __put_user(m64.msg_rtime, &up->msg_rtime);
- err2 |= __put_user(m64.msg_ctime, &up->msg_ctime);
- err2 |= __put_user(m64.msg_cbytes, &up->msg_cbytes);
- err2 |= __put_user(m64.msg_qnum, &up->msg_qnum);
- err2 |= __put_user(m64.msg_qbytes, &up->msg_qbytes);
- err2 |= __put_user(m64.msg_lspid, &up->msg_lspid);
- err2 |= __put_user(m64.msg_lrpid, &up->msg_lrpid);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgctl(first, second, (void *) &m64);
+ set_fs(old_fs);
+ err2 = put_user(m64.msg_perm.key, &up->msg_perm.key);
+ err2 |= put_user(m64.msg_perm.uid, &up->msg_perm.uid);
+ err2 |= put_user(m64.msg_perm.gid, &up->msg_perm.gid);
+ err2 |= put_user(m64.msg_perm.cuid, &up->msg_perm.cuid);
+ err2 |= put_user(m64.msg_perm.cgid, &up->msg_perm.cgid);
+ err2 |= put_user(m64.msg_perm.mode, &up->msg_perm.mode);
+ err2 |= put_user(m64.msg_perm.seq, &up->msg_perm.seq);
+ err2 |= put_user(m64.msg_stime, &up->msg_stime);
+ err2 |= put_user(m64.msg_rtime, &up->msg_rtime);
+ err2 |= put_user(m64.msg_ctime, &up->msg_ctime);
+ err2 |= put_user(m64.msg_cbytes, &up->msg_cbytes);
+ err2 |= put_user(m64.msg_qnum, &up->msg_qnum);
+ err2 |= put_user(m64.msg_qbytes, &up->msg_qbytes);
+ err2 |= put_user(m64.msg_lspid, &up->msg_lspid);
+ err2 |= put_user(m64.msg_lrpid, &up->msg_lrpid);
if (err2)
err = -EFAULT;
break;
-
}
-
return err;
}
@@ -1774,8 +1983,8 @@
int err;
if (version == 1)
- return -EINVAL;
- err = sys_shmat (first, uptr, second, &raddr);
+ return -EINVAL; /* iBCS2 emulator entry point: unsupported */
+ err = sys_shmat(first, uptr, second, &raddr);
if (err)
return err;
return put_user(raddr, uaddr);
@@ -1806,60 +2015,55 @@
break;
case IPC_SET:
err = get_user (s.shm_perm.uid, &up->shm_perm.uid);
- err |= __get_user (s.shm_perm.gid, &up->shm_perm.gid);
- err |= __get_user (s.shm_perm.mode, &up->shm_perm.mode);
+ err |= get_user(s.shm_perm.gid, &up->shm_perm.gid);
+ err |= get_user(s.shm_perm.mode, &up->shm_perm.mode);
if (err)
break;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, &s);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, &s);
+ set_fs(old_fs);
break;
case IPC_STAT:
case SHM_STAT:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
err = sys_shmctl (first, second, (void *) &s64);
- set_fs (old_fs);
+ set_fs(old_fs);
if (err < 0)
break;
- err2 = put_user (s64.shm_perm.key, &up->shm_perm.key);
- err2 |= __put_user (s64.shm_perm.uid, &up->shm_perm.uid);
- err2 |= __put_user (s64.shm_perm.gid, &up->shm_perm.gid);
- err2 |= __put_user (s64.shm_perm.cuid,
- &up->shm_perm.cuid);
- err2 |= __put_user (s64.shm_perm.cgid,
- &up->shm_perm.cgid);
- err2 |= __put_user (s64.shm_perm.mode,
- &up->shm_perm.mode);
- err2 |= __put_user (s64.shm_perm.seq, &up->shm_perm.seq);
- err2 |= __put_user (s64.shm_atime, &up->shm_atime);
- err2 |= __put_user (s64.shm_dtime, &up->shm_dtime);
- err2 |= __put_user (s64.shm_ctime, &up->shm_ctime);
- err2 |= __put_user (s64.shm_segsz, &up->shm_segsz);
- err2 |= __put_user (s64.shm_nattch, &up->shm_nattch);
- err2 |= __put_user (s64.shm_cpid, &up->shm_cpid);
- err2 |= __put_user (s64.shm_lpid, &up->shm_lpid);
+ err2 = put_user(s64.shm_perm.key, &up->shm_perm.key);
+ err2 |= put_user(s64.shm_perm.uid, &up->shm_perm.uid);
+ err2 |= put_user(s64.shm_perm.gid, &up->shm_perm.gid);
+ err2 |= put_user(s64.shm_perm.cuid, &up->shm_perm.cuid);
+ err2 |= put_user(s64.shm_perm.cgid, &up->shm_perm.cgid);
+ err2 |= put_user(s64.shm_perm.mode, &up->shm_perm.mode);
+ err2 |= put_user(s64.shm_perm.seq, &up->shm_perm.seq);
+ err2 |= put_user(s64.shm_atime, &up->shm_atime);
+ err2 |= put_user(s64.shm_dtime, &up->shm_dtime);
+ err2 |= put_user(s64.shm_ctime, &up->shm_ctime);
+ err2 |= put_user(s64.shm_segsz, &up->shm_segsz);
+ err2 |= put_user(s64.shm_nattch, &up->shm_nattch);
+ err2 |= put_user(s64.shm_cpid, &up->shm_cpid);
+ err2 |= put_user(s64.shm_lpid, &up->shm_lpid);
if (err2)
err = -EFAULT;
break;
case SHM_INFO:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, (void *)&si);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (void *)&si);
+ set_fs(old_fs);
if (err < 0)
break;
- err2 = put_user (si.used_ids, &uip->used_ids);
- err2 |= __put_user (si.shm_tot, &uip->shm_tot);
- err2 |= __put_user (si.shm_rss, &uip->shm_rss);
- err2 |= __put_user (si.shm_swp, &uip->shm_swp);
- err2 |= __put_user (si.swap_attempts,
- &uip->swap_attempts);
- err2 |= __put_user (si.swap_successes,
- &uip->swap_successes);
+ err2 = put_user(si.used_ids, &uip->used_ids);
+ err2 |= put_user(si.shm_tot, &uip->shm_tot);
+ err2 |= put_user(si.shm_rss, &uip->shm_rss);
+ err2 |= put_user(si.shm_swp, &uip->shm_swp);
+ err2 |= put_user(si.swap_attempts, &uip->swap_attempts);
+ err2 |= put_user(si.swap_successes, &uip->swap_successes);
if (err2)
err = -EFAULT;
break;
@@ -1877,53 +2081,48 @@
call &= 0xffff;
switch (call) {
-
- case SEMOP:
+ case SEMOP:
/* struct sembuf is the same on 32 and 64bit :)) */
- err = sys_semop (first, (struct sembuf *)AA(ptr),
- second);
- break;
- case SEMGET:
- err = sys_semget (first, second, third);
- break;
- case SEMCTL:
- err = do_sys32_semctl (first, second, third,
- (void *)AA(ptr));
- break;
+ return sys_semop(first, (struct sembuf *)AA(ptr), second);
+ case SEMGET:
+ return sys_semget(first, second, third);
+ case SEMCTL:
+ if (third & IPC_64) {
+ printk("sys32_ipc(SEMCTL): no IPC_64 version; please fix me.");
+ return -ENOSYS;
+ }
+ return do_sys32_semctl(first, second, third, (void *)AA(ptr));
- case MSGSND:
- err = do_sys32_msgsnd (first, second, third,
- (void *)AA(ptr));
- break;
- case MSGRCV:
- err = do_sys32_msgrcv (first, second, fifth, third,
- version, (void *)AA(ptr));
- break;
- case MSGGET:
- err = sys_msgget ((key_t) first, second);
- break;
- case MSGCTL:
- err = do_sys32_msgctl (first, second, (void *)AA(ptr));
- break;
+ case MSGSND:
+ return do_sys32_msgsnd(first, second, third, (void *)AA(ptr));
+ case MSGRCV:
+ return do_sys32_msgrcv(first, second, fifth, third, version, (void *)AA(ptr));
+ case MSGGET:
+ return sys_msgget((key_t) first, second);
+ case MSGCTL:
+ if (second & IPC_64) {
+ printk("sys32_ipc(MSGCTL): no IPC_64 version; please fix me.");
+ return -ENOSYS;
+ }
+ return do_sys32_msgctl(first, second, (void *)AA(ptr));
- case SHMAT:
- err = do_sys32_shmat (first, second, third, version, (void *)AA(ptr));
+ case SHMAT:
+ err = do_sys32_shmat(first, second, third, version, (void *)AA(ptr));
break;
- case SHMDT:
- err = sys_shmdt ((char *)AA(ptr));
- break;
- case SHMGET:
- err = sys_shmget (first, second, third);
- break;
- case SHMCTL:
- err = do_sys32_shmctl (first, second, (void *)AA(ptr));
- break;
- default:
- err = -EINVAL;
- break;
- }
+ case SHMDT:
+ return sys_shmdt((char *)AA(ptr));
+ case SHMGET:
+ return sys_shmget(first, second, third);
+ case SHMCTL:
+ if (second & IPC_64) {
+ printk("sys32_ipc(SHMCTL): no IPC_64 version; please fix me.");
+ return -ENOSYS;
+ }
+ return do_sys32_shmctl(first, second, (void *)AA(ptr));
- return err;
+ default:
+ return -EINVAL;
+ }
}
/*
@@ -1931,7 +2130,8 @@
* sys_gettimeofday(). IA64 did this but i386 Linux did not
* so we have to implement this system call here.
*/
-asmlinkage long sys32_time(int * tloc)
+asmlinkage long
+sys32_time (int *tloc)
{
int i;
@@ -1939,7 +2139,7 @@
stuff it to user space. No side effects */
i = CURRENT_TIME;
if (tloc) {
- if (put_user(i,tloc))
+ if (put_user(i, tloc))
i = -EFAULT;
}
return i;
@@ -1991,8 +2191,7 @@
}
asmlinkage long
-sys32_wait4(__kernel_pid_t32 pid, unsigned int *stat_addr, int options,
- struct rusage32 *ru)
+sys32_wait4 (int pid, unsigned int *stat_addr, int options, struct rusage32 *ru)
{
if (!ru)
return sys_wait4(pid, stat_addr, options, NULL);
@@ -2002,37 +2201,38 @@
unsigned int status;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_wait4(pid, stat_addr ? &status : NULL, options, &r);
- set_fs (old_fs);
- if (put_rusage (ru, &r)) return -EFAULT;
- if (stat_addr && put_user (status, stat_addr))
+ set_fs(old_fs);
+ if (put_rusage(ru, &r))
+ return -EFAULT;
+ if (stat_addr && put_user(status, stat_addr))
return -EFAULT;
return ret;
}
}
asmlinkage long
-sys32_waitpid(__kernel_pid_t32 pid, unsigned int *stat_addr, int options)
+sys32_waitpid (int pid, unsigned int *stat_addr, int options)
{
return sys32_wait4(pid, stat_addr, options, NULL);
}
-extern asmlinkage long
-sys_getrusage(int who, struct rusage *ru);
+extern asmlinkage long sys_getrusage (int who, struct rusage *ru);
asmlinkage long
-sys32_getrusage(int who, struct rusage32 *ru)
+sys32_getrusage (int who, struct rusage32 *ru)
{
struct rusage r;
int ret;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_getrusage(who, &r);
- set_fs (old_fs);
- if (put_rusage (ru, &r)) return -EFAULT;
+ set_fs(old_fs);
+ if (put_rusage (ru, &r))
+ return -EFAULT;
return ret;
}
@@ -2043,41 +2243,41 @@
__kernel_clock_t32 tms_cstime;
};
-extern asmlinkage long sys_times(struct tms * tbuf);
+extern asmlinkage long sys_times (struct tms * tbuf);
asmlinkage long
-sys32_times(struct tms32 *tbuf)
+sys32_times (struct tms32 *tbuf)
{
+ mm_segment_t old_fs = get_fs();
struct tms t;
long ret;
- mm_segment_t old_fs = get_fs ();
int err;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_times(tbuf ? &t : NULL);
- set_fs (old_fs);
+ set_fs(old_fs);
if (tbuf) {
err = put_user (IA32_TICK(t.tms_utime), &tbuf->tms_utime);
- err |= __put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
- err |= __put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
- err |= __put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
+ err |= put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
+ err |= put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
+ err |= put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
if (err)
ret = -EFAULT;
}
return IA32_TICK(ret);
}
-unsigned int
+static unsigned int
ia32_peek (struct pt_regs *regs, struct task_struct *child, unsigned long addr, unsigned int *val)
{
size_t copied;
unsigned int ret;
copied = access_process_vm(child, addr, val, sizeof(*val), 0);
- return(copied != sizeof(ret) ? -EIO : 0);
+ return (copied != sizeof(ret)) ? -EIO : 0;
}
-unsigned int
+static unsigned int
ia32_poke (struct pt_regs *regs, struct task_struct *child, unsigned long addr, unsigned int val)
{
@@ -2107,113 +2307,70 @@
#define PT_UESP 15
#define PT_SS 16
-unsigned int
-getreg(struct task_struct *child, int regno)
+static unsigned int
+getreg (struct task_struct *child, int regno)
{
struct pt_regs *child_regs;
child_regs = ia64_task_regs(child);
switch (regno / sizeof(int)) {
-
- case PT_EBX:
- return(child_regs->r11);
- case PT_ECX:
- return(child_regs->r9);
- case PT_EDX:
- return(child_regs->r10);
- case PT_ESI:
- return(child_regs->r14);
- case PT_EDI:
- return(child_regs->r15);
- case PT_EBP:
- return(child_regs->r13);
- case PT_EAX:
- case PT_ORIG_EAX:
- return(child_regs->r8);
- case PT_EIP:
- return(child_regs->cr_iip);
- case PT_UESP:
- return(child_regs->r12);
- case PT_EFL:
- return(child->thread.eflag);
- case PT_DS:
- case PT_ES:
- case PT_FS:
- case PT_GS:
- case PT_SS:
- return((unsigned int)__USER_DS);
- case PT_CS:
- return((unsigned int)__USER_CS);
- default:
- printk(KERN_ERR "getregs:unknown register %d\n", regno);
+ case PT_EBX: return child_regs->r11;
+ case PT_ECX: return child_regs->r9;
+ case PT_EDX: return child_regs->r10;
+ case PT_ESI: return child_regs->r14;
+ case PT_EDI: return child_regs->r15;
+ case PT_EBP: return child_regs->r13;
+ case PT_EAX: return child_regs->r8;
+ case PT_ORIG_EAX: return child_regs->r1; /* see dispatch_to_ia32_handler() */
+ case PT_EIP: return child_regs->cr_iip;
+ case PT_UESP: return child_regs->r12;
+ case PT_EFL: return child->thread.eflag;
+ case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS:
+ return __USER_DS;
+ case PT_CS: return __USER_CS;
+ default:
+ printk(KERN_ERR "getreg:unknown register %d\n", regno);
break;
-
}
- return(0);
+ return 0;
}
-void
-putreg(struct task_struct *child, int regno, unsigned int value)
+static void
+putreg (struct task_struct *child, int regno, unsigned int value)
{
struct pt_regs *child_regs;
child_regs = ia64_task_regs(child);
switch (regno / sizeof(int)) {
-
- case PT_EBX:
- child_regs->r11 = value;
- break;
- case PT_ECX:
- child_regs->r9 = value;
- break;
- case PT_EDX:
- child_regs->r10 = value;
- break;
- case PT_ESI:
- child_regs->r14 = value;
- break;
- case PT_EDI:
- child_regs->r15 = value;
- break;
- case PT_EBP:
- child_regs->r13 = value;
- break;
- case PT_EAX:
- case PT_ORIG_EAX:
- child_regs->r8 = value;
- break;
- case PT_EIP:
- child_regs->cr_iip = value;
- break;
- case PT_UESP:
- child_regs->r12 = value;
- break;
- case PT_EFL:
- child->thread.eflag = value;
- break;
- case PT_DS:
- case PT_ES:
- case PT_FS:
- case PT_GS:
- case PT_SS:
+ case PT_EBX: child_regs->r11 = value; break;
+ case PT_ECX: child_regs->r9 = value; break;
+ case PT_EDX: child_regs->r10 = value; break;
+ case PT_ESI: child_regs->r14 = value; break;
+ case PT_EDI: child_regs->r15 = value; break;
+ case PT_EBP: child_regs->r13 = value; break;
+ case PT_EAX: child_regs->r8 = value; break;
+ case PT_ORIG_EAX: child_regs->r1 = value; break;
+ case PT_EIP: child_regs->cr_iip = value; break;
+ case PT_UESP: child_regs->r12 = value; break;
+ case PT_EFL: child->thread.eflag = value; break;
+ case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS:
if (value != __USER_DS)
printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
regno, value);
break;
- case PT_CS:
+ case PT_CS:
if (value != __USER_CS)
printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
regno, value);
break;
- default:
- printk(KERN_ERR "getregs:unknown register %d\n", regno);
+ default:
+ printk(KERN_ERR "putreg:unknown register %d\n", regno);
break;
-
}
}
static inline void
-ia32f2ia64f(void *dst, void *src)
+ia32f2ia64f (void *dst, void *src)
{
__asm__ ("ldfe f6=[%1] ;;\n\t"
@@ -2224,7 +2381,7 @@
}
static inline void
-ia64f2ia32f(void *dst, void *src)
+ia64f2ia32f (void *dst, void *src)
{
__asm__ ("ldf.fill f6=[%1] ;;\n\t"
@@ -2234,8 +2391,9 @@
return;
}
-void
-put_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp, int tos)
+static void
+put_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp,
+ int tos)
{
struct _fpreg_ia32 *f;
char buf[32];
@@ -2264,8 +2422,9 @@
__copy_to_user(reg, f, sizeof(*reg));
}
-void
-get_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp, int tos)
+static void
+get_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp,
+ int tos)
{
if ((regno += tos) >= 8)
@@ -2291,8 +2450,8 @@
return;
}
-int
-save_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save)
+static int
+save_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save)
{
struct switch_stack *swp;
struct pt_regs *ptp;
@@ -2315,11 +2474,11 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
put_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(0);
+ return 0;
}
-int
-restore_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save)
+static int
+restore_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save)
{
struct switch_stack *swp;
struct pt_regs *ptp;
@@ -2342,10 +2501,11 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
get_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(ret ? -EFAULT : 0);
+ return ret ? -EFAULT : 0;
}
-asmlinkage long sys_ptrace(long, pid_t, unsigned long, unsigned long, long, long, long, long, long);
+extern asmlinkage long sys_ptrace (long, pid_t, unsigned long, unsigned long, long, long, long,
+ long, long);
/*
* Note that the IA32 version of `ptrace' calls the IA64 routine for
@@ -2360,13 +2520,12 @@
{
struct pt_regs *regs = (struct pt_regs *) &stack;
struct task_struct *child;
+ unsigned int value, tmp;
long i, ret;
- unsigned int value;
lock_kernel();
if (request == PTRACE_TRACEME) {
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
goto out;
}
@@ -2381,8 +2540,7 @@
goto out;
if (request == PTRACE_ATTACH) {
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
goto out;
}
ret = -ESRCH;
@@ -2400,21 +2558,32 @@
case PTRACE_PEEKDATA: /* read word at location addr */
ret = ia32_peek(regs, child, addr, &value);
if (ret == 0)
- ret = put_user(value, (unsigned int *)A(data));
+ ret = put_user(value, (unsigned int *) A(data));
else
ret = -EIO;
goto out;
case PTRACE_POKETEXT:
case PTRACE_POKEDATA: /* write the word at location addr */
- ret = ia32_poke(regs, child, addr, (unsigned int)data);
+ ret = ia32_poke(regs, child, addr, data);
goto out;
case PTRACE_PEEKUSR: /* read word at addr in USER area */
- ret = 0;
+ ret = -EIO;
+ if ((addr & 3) || addr > 17*sizeof(int))
+ break;
+
+ tmp = getreg(child, addr);
+ if (!put_user(tmp, (unsigned int *) A(data)))
+ ret = 0;
break;
case PTRACE_POKEUSR: /* write word at addr in USER area */
+ ret = -EIO;
+ if ((addr & 3) || addr > 17*sizeof(int))
+ break;
+
+ putreg(child, addr, data);
ret = 0;
break;
@@ -2423,28 +2592,25 @@
ret = -EIO;
break;
}
- for ( i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
- __put_user(getreg(child, i), (unsigned int *) A(data));
+ for (i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
+ put_user(getreg(child, i), (unsigned int *) A(data));
data += sizeof(int);
}
ret = 0;
break;
case IA32_PTRACE_SETREGS:
- {
- unsigned int tmp;
if (!access_ok(VERIFY_READ, (int *) A(data), 17*sizeof(int))) {
ret = -EIO;
break;
}
- for ( i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
- __get_user(tmp, (unsigned int *) A(data));
+ for (i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
+ get_user(tmp, (unsigned int *) A(data));
putreg(child, i, tmp);
data += sizeof(int);
}
ret = 0;
break;
- }
case IA32_PTRACE_GETFPREGS:
ret = save_ia32_fpstate(child, (struct _fpstate_ia32 *) A(data));
@@ -2459,10 +2625,8 @@
case PTRACE_KILL:
case PTRACE_SINGLESTEP: /* execute child for one instruction */
case PTRACE_DETACH: /* detach a process */
- unlock_kernel();
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
- return(ret);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
+ break;
default:
ret = -EIO;
@@ -2500,35 +2664,35 @@
return err;
}
-extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd,
- unsigned long arg);
+extern asmlinkage long sys_fcntl (unsigned int fd, unsigned int cmd, unsigned long arg);
asmlinkage long
-sys32_fcntl(unsigned int fd, unsigned int cmd, int arg)
+sys32_fcntl (unsigned int fd, unsigned int cmd, unsigned int arg)
{
- struct flock f;
mm_segment_t old_fs;
+ struct flock f;
long ret;
switch (cmd) {
- case F_GETLK:
- case F_SETLK:
- case F_SETLKW:
- if(get_flock32(&f, (struct flock32 *)((long)arg)))
+ case F_GETLK:
+ case F_SETLK:
+ case F_SETLKW:
+ if (get_flock32(&f, (struct flock32 *) A(arg)))
return -EFAULT;
old_fs = get_fs();
set_fs(KERNEL_DS);
- ret = sys_fcntl(fd, cmd, (unsigned long)&f);
+ ret = sys_fcntl(fd, cmd, (unsigned long) &f);
set_fs(old_fs);
- if(cmd == F_GETLK && put_flock32(&f, (struct flock32 *)((long)arg)))
+ if (cmd == F_GETLK && put_flock32(&f, (struct flock32 *) A(arg)))
return -EFAULT;
return ret;
- default:
+
+ default:
/*
* `sys_fcntl' lies about arg, for the F_SETOWN
* sub-function arg can have a negative value.
*/
- return sys_fcntl(fd, cmd, (unsigned long)((long)arg));
+ return sys_fcntl(fd, cmd, arg);
}
}
@@ -2536,25 +2700,30 @@
sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
{
struct k_sigaction new_ka, old_ka;
+ unsigned int handler, restorer;
int ret;
if (act) {
old_sigset32_t mask;
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
- ret |= __get_user(mask, &act->sa_mask);
+ ret = get_user(handler, &act->sa_handler);
+ ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= get_user(restorer, &act->sa_restorer);
+ ret |= get_user(mask, &act->sa_mask);
if (ret)
return ret;
+
+ sigact_set_handler(&new_ka, handler, restorer);
siginitset(&new_ka.sa.sa_mask, mask);
}
ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
if (!ret && oact) {
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
- ret |= __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+ ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+ ret |= put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
}
return ret;
@@ -2563,8 +2732,8 @@
asmlinkage long sys_ni_syscall(void);
asmlinkage long
-sys32_ni_syscall(int dummy0, int dummy1, int dummy2, int dummy3,
- int dummy4, int dummy5, int dummy6, int dummy7, int stack)
+sys32_ni_syscall (int dummy0, int dummy1, int dummy2, int dummy3, int dummy4, int dummy5,
+ int dummy6, int dummy7, int stack)
{
struct pt_regs *regs = (struct pt_regs *)&stack;
@@ -2579,7 +2748,7 @@
#define IOLEN ((65536 / 4) * 4096)
asmlinkage long
-sys_iopl (int level)
+sys32_iopl (int level)
{
extern unsigned long ia64_iobase;
int fd;
@@ -2624,7 +2793,7 @@
}
asmlinkage long
-sys_ioperm (unsigned int from, unsigned int num, int on)
+sys32_ioperm (unsigned int from, unsigned int num, int on)
{
/*
@@ -2637,7 +2806,7 @@
* XXX proper ioperm() support should be emulated by
* manipulating the page protections...
*/
- return sys_iopl(3);
+ return sys32_iopl(3);
}
typedef struct {
@@ -2647,10 +2816,8 @@
} ia32_stack_t;
asmlinkage long
-sys32_sigaltstack (const ia32_stack_t *uss32, ia32_stack_t *uoss32,
-long arg2, long arg3, long arg4,
-long arg5, long arg6, long arg7,
-long stack)
+sys32_sigaltstack (ia32_stack_t *uss32, ia32_stack_t *uoss32,
+ long arg2, long arg3, long arg4, long arg5, long arg6, long arg7, long stack)
{
struct pt_regs *pt = (struct pt_regs *) &stack;
stack_t uss, uoss;
@@ -2659,8 +2826,8 @@
mm_segment_t old_fs = get_fs();
if (uss32)
- if (copy_from_user(&buf32, (void *)A(uss32), sizeof(ia32_stack_t)))
- return(-EFAULT);
+ if (copy_from_user(&buf32, uss32, sizeof(ia32_stack_t)))
+ return -EFAULT;
uss.ss_sp = (void *) (long) buf32.ss_sp;
uss.ss_flags = buf32.ss_flags;
uss.ss_size = buf32.ss_size;
@@ -2673,34 +2840,34 @@
buf32.ss_sp = (long) uoss.ss_sp;
buf32.ss_flags = uoss.ss_flags;
buf32.ss_size = uoss.ss_size;
- if (copy_to_user((void*)A(uoss32), &buf32, sizeof(ia32_stack_t)))
- return(-EFAULT);
+ if (copy_to_user(uoss32, &buf32, sizeof(ia32_stack_t)))
+ return -EFAULT;
}
- return(ret);
+ return ret;
}
asmlinkage int
-sys_pause (void)
+sys32_pause (void)
{
current->state = TASK_INTERRUPTIBLE;
schedule();
return -ERESTARTNOHAND;
}
-asmlinkage long sys_msync(unsigned long start, size_t len, int flags);
+asmlinkage long sys_msync (unsigned long start, size_t len, int flags);
asmlinkage int
-sys32_msync(unsigned int start, unsigned int len, int flags)
+sys32_msync (unsigned int start, unsigned int len, int flags)
{
unsigned int addr;
if (OFFSET4K(start))
return -EINVAL;
addr = start & PAGE_MASK;
- return(sys_msync(addr, len + (start - addr), flags));
+ return sys_msync(addr, len + (start - addr), flags);
}
-struct sysctl_ia32 {
+struct sysctl32 {
unsigned int name;
int nlen;
unsigned int oldval;
@@ -2713,16 +2880,16 @@
extern asmlinkage long sys_sysctl(struct __sysctl_args *args);
asmlinkage long
-sys32_sysctl(struct sysctl_ia32 *args32)
+sys32_sysctl (struct sysctl32 *args)
{
- struct sysctl_ia32 a32;
+ struct sysctl32 a32;
mm_segment_t old_fs = get_fs ();
void *oldvalp, *newvalp;
size_t oldlen;
int *namep;
long ret;
- if (copy_from_user(&a32, args32, sizeof (a32)))
+ if (copy_from_user(&a32, args, sizeof(a32)))
return -EFAULT;
/*
@@ -2755,7 +2922,7 @@
}
asmlinkage long
-sys32_newuname(struct new_utsname * name)
+sys32_newuname (struct new_utsname *name)
{
extern asmlinkage long sys_newuname(struct new_utsname * name);
int ret = sys_newuname(name);
@@ -2766,10 +2933,10 @@
return ret;
}
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
+extern asmlinkage long sys_getresuid (uid_t *ruid, uid_t *euid, uid_t *suid);
asmlinkage long
-sys32_getresuid (u16 *ruid, u16 *euid, u16 *suid)
+sys32_getresuid16 (u16 *ruid, u16 *euid, u16 *suid)
{
uid_t a, b, c;
int ret;
@@ -2787,7 +2954,7 @@
extern asmlinkage long sys_getresgid (gid_t *rgid, gid_t *egid, gid_t *sgid);
asmlinkage long
-sys32_getresgid(u16 *rgid, u16 *egid, u16 *sgid)
+sys32_getresgid16 (u16 *rgid, u16 *egid, u16 *sgid)
{
gid_t a, b, c;
int ret;
@@ -2797,15 +2964,13 @@
ret = sys_getresgid(&a, &b, &c);
set_fs(old_fs);
- if (!ret) {
- ret = put_user(a, rgid);
- ret |= put_user(b, egid);
- ret |= put_user(c, sgid);
- }
- return ret;
+ if (ret)
+ return ret;
+
+ return put_user(a, rgid) | put_user(b, egid) | put_user(c, sgid);
}
-int
+asmlinkage long
sys32_lseek (unsigned int fd, int offset, unsigned int whence)
{
extern off_t sys_lseek (unsigned int fd, off_t offset, unsigned int origin);
@@ -2814,190 +2979,448 @@
return sys_lseek(fd, offset, whence);
}
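sys32_lseek deliberately declares `offset` as a signed int: a negative ia32 seek offset (SEEK_CUR/SEEK_END) must sign-extend when widened to the 64-bit off_t that sys_lseek expects. A small sketch of the difference (helper names are hypothetical):

```c
#include <assert.h>

/* Widening a 32-bit value to 64 bits: a signed declaration sign-extends,
 * an unsigned one zero-extends. sys32_lseek relies on the former so that
 * negative seek offsets survive the widening to off_t. */
long widen_signed(int off)
{
	return off;		/* -4 stays -4 */
}

long widen_unsigned(unsigned int off)
{
	return off;		/* 0xfffffffc becomes 4294967292 */
}
```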
-#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+extern asmlinkage long sys_getgroups (int gidsetsize, gid_t *grouplist);
-/* In order to reduce some races, while at the same time doing additional
- * checking and hopefully speeding things up, we copy filenames to the
- * kernel data space before using them..
- *
- * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
- */
-static inline int
-do_getname32(const char *filename, char *page)
+asmlinkage long
+sys32_getgroups16 (int gidsetsize, short *grouplist)
{
- int retval;
+ mm_segment_t old_fs = get_fs();
+ gid_t gl[NGROUPS];
+ int ret, i;
- /* 32bit pointer will be always far below TASK_SIZE :)) */
- retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE)
- return 0;
- return -ENAMETOOLONG;
- } else if (!retval)
- retval = -ENOENT;
- return retval;
+ set_fs(KERNEL_DS);
+ ret = sys_getgroups(gidsetsize, gl);
+ set_fs(old_fs);
+
+ if (gidsetsize && ret > 0 && ret <= NGROUPS)
+ for (i = 0; i < ret; i++, grouplist++)
+ if (put_user(gl[i], grouplist))
+ return -EFAULT;
+ return ret;
}
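sys32_getgroups16 copies each 32-bit gid into a 16-bit slot of the user array, silently dropping the high half. A userspace model of that per-entry narrowing (type names are illustrative, not the kernel's):

```c
#include <assert.h>

typedef unsigned int   gid32_t;	/* modern kernel gid_t */
typedef unsigned short gid16_t;	/* legacy 16-bit ia32 gid */

/* Sketch of the copy loop in sys32_getgroups16(): each 32-bit gid is
 * narrowed to 16 bits; values above 65535 are truncated silently. */
void copy_groups16(gid16_t *dst, const gid32_t *src, int n)
{
	int i;

	for (i = 0; i < n; i++)
		dst[i] = (gid16_t) src[i];
}
```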
-char *
-getname32(const char *filename)
-{
- char *tmp, *result;
+extern asmlinkage long sys_setgroups (int gidsetsize, gid_t *grouplist);
- result = ERR_PTR(-ENOMEM);
- tmp = (char *)__get_free_page(GFP_KERNEL);
- if (tmp) {
- int retval = do_getname32(filename, tmp);
+asmlinkage long
+sys32_setgroups16 (int gidsetsize, short *grouplist)
+{
+ mm_segment_t old_fs = get_fs();
+ gid_t gl[NGROUPS];
+ int ret, i;
- result = tmp;
- if (retval < 0) {
- putname(tmp);
- result = ERR_PTR(retval);
- }
- }
- return result;
+ if ((unsigned) gidsetsize > NGROUPS)
+ return -EINVAL;
+ for (i = 0; i < gidsetsize; i++, grouplist++)
+ if (get_user(gl[i], grouplist))
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_setgroups(gidsetsize, gl);
+ set_fs(old_fs);
+ return ret;
}
-/* 32-bit timeval and related flotsam. */
-
-extern asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int on);
+/*
+ * Unfortunately, the x86 compiler aligns variables of type "long long" to a 4 byte boundary
+ * only, which means that the x86 version of "struct flock64" doesn't match the ia64 version
+ * of struct flock.
+ */
-asmlinkage long
-sys32_ioperm(u32 from, u32 num, int on)
+static inline long
+ia32_put_flock (struct flock *l, unsigned long addr)
{
- return sys_ioperm((unsigned long)from, (unsigned long)num, on);
+ return (put_user(l->l_type, (short *) addr)
+ | put_user(l->l_whence, (short *) (addr + 2))
+ | put_user(l->l_start, (long *) (addr + 4))
+ | put_user(l->l_len, (long *) (addr + 12))
+ | put_user(l->l_pid, (int *) (addr + 20)));
}
-struct dqblk32 {
- __u32 dqb_bhardlimit;
- __u32 dqb_bsoftlimit;
- __u32 dqb_curblocks;
- __u32 dqb_ihardlimit;
- __u32 dqb_isoftlimit;
- __u32 dqb_curinodes;
- __kernel_time_t32 dqb_btime;
- __kernel_time_t32 dqb_itime;
-};
-
-extern asmlinkage long sys_quotactl(int cmd, const char *special, int id,
- caddr_t addr);
+static inline long
+ia32_get_flock (struct flock *l, unsigned long addr)
+{
+ unsigned int start_lo, start_hi, len_lo, len_hi;
+ int err = (get_user(l->l_type, (short *) addr)
+ | get_user(l->l_whence, (short *) (addr + 2))
+ | get_user(start_lo, (int *) (addr + 4))
+ | get_user(start_hi, (int *) (addr + 8))
+ | get_user(len_lo, (int *) (addr + 12))
+ | get_user(len_hi, (int *) (addr + 16))
+ | get_user(l->l_pid, (int *) (addr + 20)));
+ l->l_start = ((unsigned long) start_hi << 32) | start_lo;
+ l->l_len = ((unsigned long) len_hi << 32) | len_lo;
+ return err;
+}
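The comment above is the whole story: x86 aligns "long long" to only 4 bytes, so the ia32 struct flock64 puts l_start at offset 4 rather than 8, which is why ia32_put_flock()/ia32_get_flock() address the fields byte-by-byte. A hedged sketch of that layout, using a packed attribute to reproduce the x86 offsets on any host:

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of the ia32 "struct flock64" layout implied by the offsets
 * used in ia32_put_flock()/ia32_get_flock() above (0, 2, 4, 12, 20).
 * The packed attribute stands in for x86's 4-byte long long alignment. */
struct ia32_flock64 {
	short     l_type;	/* offset  0 */
	short     l_whence;	/* offset  2 */
	long long l_start;	/* offset  4, not 8 */
	long long l_len;	/* offset 12 */
	int       l_pid;	/* offset 20 */
} __attribute__((packed));
```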
asmlinkage long
-sys32_quotactl(int cmd, const char *special, int id, unsigned long addr)
+sys32_fcntl64 (unsigned int fd, unsigned int cmd, unsigned int arg)
{
- int cmds = cmd >> SUBCMDSHIFT;
- int err;
- struct dqblk d;
mm_segment_t old_fs;
- char *spec;
+ struct flock f;
+ long ret;
- switch (cmds) {
- case Q_GETQUOTA:
- break;
- case Q_SETQUOTA:
- case Q_SETUSE:
- case Q_SETQLIM:
- if (copy_from_user (&d, (struct dqblk32 *)addr,
- sizeof (struct dqblk32)))
+ switch (cmd) {
+ case F_GETLK64:
+ case F_SETLK64:
+ case F_SETLKW64:
+ if (ia32_get_flock(&f, arg))
return -EFAULT;
- d.dqb_itime = ((struct dqblk32 *)&d)->dqb_itime;
- d.dqb_btime = ((struct dqblk32 *)&d)->dqb_btime;
- break;
- default:
- return sys_quotactl(cmd, special,
- id, (caddr_t)addr);
- }
- spec = getname32 (special);
- err = PTR_ERR(spec);
- if (IS_ERR(spec)) return err;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_quotactl(cmd, (const char *)spec, id, (caddr_t)&d);
- set_fs (old_fs);
- putname (spec);
- if (cmds == Q_GETQUOTA) {
- __kernel_time_t b = d.dqb_btime, i = d.dqb_itime;
- ((struct dqblk32 *)&d)->dqb_itime = i;
- ((struct dqblk32 *)&d)->dqb_btime = b;
- if (copy_to_user ((struct dqblk32 *)addr, &d,
- sizeof (struct dqblk32)))
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ ret = sys_fcntl(fd, cmd, (unsigned long) &f);
+ set_fs(old_fs);
+ if (cmd == F_GETLK && ia32_put_flock(&f, arg))
return -EFAULT;
+ break;
+
+ default:
+ ret = sys32_fcntl(fd, cmd, arg);
+ break;
}
- return err;
+ return ret;
}
-extern asmlinkage long sys_utime(char * filename, struct utimbuf * times);
+asmlinkage long
+sys32_truncate64 (unsigned int path, unsigned int len_lo, unsigned int len_hi)
+{
+ extern asmlinkage long sys_truncate (const char *path, unsigned long length);
-struct utimbuf32 {
- __kernel_time_t32 actime, modtime;
-};
+ return sys_truncate((const char *) A(path), ((unsigned long) len_hi << 32) | len_lo);
+}
asmlinkage long
-sys32_utime(char * filename, struct utimbuf32 *times)
+sys32_ftruncate64 (int fd, unsigned int len_lo, unsigned int len_hi)
{
- struct utimbuf t;
- mm_segment_t old_fs;
- int ret;
- char *filenam;
+ extern asmlinkage long sys_ftruncate (int fd, unsigned long length);
- if (!times)
- return sys_utime(filename, NULL);
- if (get_user (t.actime, &times->actime) ||
- __get_user (t.modtime, &times->modtime))
- return -EFAULT;
- filenam = getname32 (filename);
- ret = PTR_ERR(filenam);
- if (!IS_ERR(filenam)) {
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- ret = sys_utime(filenam, &t);
- set_fs (old_fs);
- putname (filenam);
- }
- return ret;
+ return sys_ftruncate(fd, ((unsigned long) len_hi << 32) | len_lo);
}
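sys32_truncate64 and sys32_ftruncate64 receive the 64-bit length as two 32-bit arguments and reassemble it with the shift-and-or seen above. A minimal sketch of that recombination (helper name is hypothetical):

```c
#include <assert.h>

/* Rebuild a 64-bit length from its 32-bit halves, exactly as
 * ((unsigned long) len_hi << 32) | len_lo does in the syscalls above. */
unsigned long long join32(unsigned int lo, unsigned int hi)
{
	return ((unsigned long long) hi << 32) | lo;
}
```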
-/*
- * Ooo, nasty. We need here to frob 32-bit unsigned longs to
- * 64-bit unsigned longs.
- */
-
-static inline int
-get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset)
+static int
+putstat64 (struct stat64 *ubuf, struct stat *kbuf)
{
- if (ufdset) {
- unsigned long odd;
+ int err;
- if (verify_area(VERIFY_WRITE, ufdset, n*sizeof(u32)))
- return -EFAULT;
+ if (clear_user(ubuf, sizeof(*ubuf)))
+ return 1;
- odd = n & 1UL;
- n &= ~1UL;
- while (n) {
- unsigned long h, l;
- __get_user(l, ufdset);
- __get_user(h, ufdset+1);
- ufdset += 2;
- *fdset++ = h << 32 | l;
- n -= 2;
- }
- if (odd)
- __get_user(*fdset, ufdset);
- } else {
- /* Tricky, must clear full unsigned long in the
- * kernel fdset at the end, this makes sure that
- * actually happens.
- */
- memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32));
- }
- return 0;
+ err = __put_user(kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user(kbuf->st_ino, &ubuf->__st_ino);
+ err |= __put_user(kbuf->st_ino, &ubuf->st_ino_lo);
+ err |= __put_user(kbuf->st_ino >> 32, &ubuf->st_ino_hi);
+ err |= __put_user(kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user(kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user(kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user(kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user(kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user(kbuf->st_size, &ubuf->st_size_lo);
+ err |= __put_user((kbuf->st_size >> 32), &ubuf->st_size_hi);
+ err |= __put_user(kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user(kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user(kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user(kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user(kbuf->st_blocks, &ubuf->st_blocks);
+ return err;
}
-static inline void
-set_fd_set32(unsigned long n, u32 *ufdset, unsigned long *fdset)
+asmlinkage long
+sys32_stat64 (char *filename, struct stat64 *statbuf)
{
- unsigned long odd;
-
- if (!ufdset)
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newstat(filename, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_lstat64 (char *filename, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newlstat(filename, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_fstat64 (unsigned int fd, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newfstat(fd, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_sigpending (unsigned int *set)
+{
+ return do_sigpending(set, sizeof(*set));
+}
+
+struct sysinfo32 {
+ s32 uptime;
+ u32 loads[3];
+ u32 totalram;
+ u32 freeram;
+ u32 sharedram;
+ u32 bufferram;
+ u32 totalswap;
+ u32 freeswap;
+ unsigned short procs;
+ char _f[22];
+};
+
+asmlinkage long
+sys32_sysinfo (struct sysinfo32 *info)
+{
+ extern asmlinkage long sys_sysinfo (struct sysinfo *);
+ mm_segment_t old_fs = get_fs();
+ struct sysinfo s;
+ long ret, err;
+
+ set_fs(KERNEL_DS);
+ ret = sys_sysinfo(&s);
+ set_fs(old_fs);
+
+ if (!access_ok(VERIFY_WRITE, info, sizeof(*info)))
+ return -EFAULT;
+
+ err = __put_user(s.uptime, &info->uptime);
+ err |= __put_user(s.loads[0], &info->loads[0]);
+ err |= __put_user(s.loads[1], &info->loads[1]);
+ err |= __put_user(s.loads[2], &info->loads[2]);
+ err |= __put_user(s.totalram, &info->totalram);
+ err |= __put_user(s.freeram, &info->freeram);
+ err |= __put_user(s.sharedram, &info->sharedram);
+ err |= __put_user(s.bufferram, &info->bufferram);
+ err |= __put_user(s.totalswap, &info->totalswap);
+ err |= __put_user(s.freeswap, &info->freeswap);
+ err |= __put_user(s.procs, &info->procs);
+ if (err)
+ return -EFAULT;
+ return ret;
+}
+
+/* In order to reduce some races, while at the same time doing additional
+ * checking and hopefully speeding things up, we copy filenames to the
+ * kernel data space before using them..
+ *
+ * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
+ */
+static inline int
+do_getname32 (const char *filename, char *page)
+{
+ int retval;
+
+ /* 32bit pointer will be always far below TASK_SIZE :)) */
+ retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
+ if (retval > 0) {
+ if (retval < PAGE_SIZE)
+ return 0;
+ return -ENAMETOOLONG;
+ } else if (!retval)
+ retval = -ENOENT;
+ return retval;
+}
+
+static char *
+getname32 (const char *filename)
+{
+ char *tmp, *result;
+
+ result = ERR_PTR(-ENOMEM);
+ tmp = (char *)__get_free_page(GFP_KERNEL);
+ if (tmp) {
+ int retval = do_getname32(filename, tmp);
+
+ result = tmp;
+ if (retval < 0) {
+ putname(tmp);
+ result = ERR_PTR(retval);
+ }
+ }
+ return result;
+}
+
+struct dqblk32 {
+ __u32 dqb_bhardlimit;
+ __u32 dqb_bsoftlimit;
+ __u32 dqb_curblocks;
+ __u32 dqb_ihardlimit;
+ __u32 dqb_isoftlimit;
+ __u32 dqb_curinodes;
+ __kernel_time_t32 dqb_btime;
+ __kernel_time_t32 dqb_itime;
+};
+
+asmlinkage long
+sys32_quotactl (int cmd, unsigned int special, int id, struct dqblk32 *addr)
+{
+ extern asmlinkage long sys_quotactl (int, const char *, int, caddr_t);
+ int cmds = cmd >> SUBCMDSHIFT;
+ mm_segment_t old_fs;
+ struct dqblk d;
+ char *spec;
+ long err;
+
+ switch (cmds) {
+ case Q_GETQUOTA:
+ break;
+ case Q_SETQUOTA:
+ case Q_SETUSE:
+ case Q_SETQLIM:
+ if (copy_from_user (&d, addr, sizeof(struct dqblk32)))
+ return -EFAULT;
+ d.dqb_itime = ((struct dqblk32 *)&d)->dqb_itime;
+ d.dqb_btime = ((struct dqblk32 *)&d)->dqb_btime;
+ break;
+ default:
+ return sys_quotactl(cmd, (void *) A(special), id, (caddr_t) addr);
+ }
+ spec = getname32((void *) A(special));
+ err = PTR_ERR(spec);
+ if (IS_ERR(spec))
+ return err;
+ old_fs = get_fs ();
+ set_fs(KERNEL_DS);
+ err = sys_quotactl(cmd, (const char *)spec, id, (caddr_t)&d);
+ set_fs(old_fs);
+ putname(spec);
+ if (cmds == Q_GETQUOTA) {
+ __kernel_time_t b = d.dqb_btime, i = d.dqb_itime;
+ ((struct dqblk32 *)&d)->dqb_itime = i;
+ ((struct dqblk32 *)&d)->dqb_btime = b;
+ if (copy_to_user(addr, &d, sizeof(struct dqblk32)))
+ return -EFAULT;
+ }
+ return err;
+}
+
+asmlinkage long
+sys32_sched_rr_get_interval (pid_t pid, struct timespec32 *interval)
+{
+ extern asmlinkage long sys_sched_rr_get_interval (pid_t, struct timespec *);
+ mm_segment_t old_fs = get_fs();
+ struct timespec t;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_sched_rr_get_interval(pid, &t);
+ set_fs(old_fs);
+ if (put_user (t.tv_sec, &interval->tv_sec) || put_user (t.tv_nsec, &interval->tv_nsec))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_pread (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u32 pos_hi)
+{
+ extern asmlinkage long sys_pread (unsigned int, char *, size_t, loff_t);
+ return sys_pread(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo);
+}
+
+asmlinkage long
+sys32_pwrite (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u32 pos_hi)
+{
+ extern asmlinkage long sys_pwrite (unsigned int, const char *, size_t, loff_t);
+ return sys_pwrite(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo);
+}
+
+asmlinkage long
+sys32_sendfile (int out_fd, int in_fd, int *offset, unsigned int count)
+{
+ extern asmlinkage long sys_sendfile (int, int, off_t *, size_t);
+ mm_segment_t old_fs = get_fs();
+ long ret;
+ off_t of;
+
+ if (offset && get_user(of, offset))
+ return -EFAULT;
+
+ set_fs(KERNEL_DS);
+ ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
+ set_fs(old_fs);
+
+ if (!ret && offset && put_user(of, offset))
+ return -EFAULT;
+
+ return ret;
+}
+
+asmlinkage long
+sys32_personality (unsigned int personality)
+{
+ extern asmlinkage long sys_personality (unsigned long);
+ long ret;
+
+ if (current->personality == PER_LINUX32 && personality == PER_LINUX)
+ personality = PER_LINUX32;
+ ret = sys_personality(personality);
+ if (ret == PER_LINUX32)
+ ret = PER_LINUX;
+ return ret;
+}
+
+#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+
+/*
+ * Ooo, nasty. We need here to frob 32-bit unsigned longs to
+ * 64-bit unsigned longs.
+ */
+
+static inline int
+get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset)
+{
+ if (ufdset) {
+ unsigned long odd;
+
+ if (verify_area(VERIFY_WRITE, ufdset, n*sizeof(u32)))
+ return -EFAULT;
+
+ odd = n & 1UL;
+ n &= ~1UL;
+ while (n) {
+ unsigned long h, l;
+ __get_user(l, ufdset);
+ __get_user(h, ufdset+1);
+ ufdset += 2;
+ *fdset++ = h << 32 | l;
+ n -= 2;
+ }
+ if (odd)
+ __get_user(*fdset, ufdset);
+ } else {
+ /* Tricky, must clear full unsigned long in the
+ * kernel fdset at the end, this makes sure that
+ * actually happens.
+ */
+ memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32));
+ }
+ return 0;
+}
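get_fd_set32 merges pairs of 32-bit fd_set words into 64-bit kernel words, low word first, with a special case for an odd trailing word. A userspace model of that loop, working array-to-array instead of through __get_user():

```c
#include <assert.h>
#include <stdint.h>

/* Model of the merge loop in get_fd_set32(): fold n 32-bit words into
 * 64-bit words, low half first; an odd tail word is zero-extended. */
void fold_fd_set32(unsigned long n, uint64_t *fdset, const uint32_t *ufdset)
{
	unsigned long odd = n & 1UL;

	n &= ~1UL;
	while (n) {
		uint64_t l = *ufdset++;
		uint64_t h = *ufdset++;
		*fdset++ = h << 32 | l;
		n -= 2;
	}
	if (odd)
		*fdset = *ufdset;
}
```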
+
+static inline void
+set_fd_set32(unsigned long n, u32 *ufdset, unsigned long *fdset)
+{
+ unsigned long odd;
+
+ if (!ufdset)
return;
odd = n & 1UL;
@@ -3015,20 +3438,11 @@
__put_user(*fdset, ufdset);
}
-extern asmlinkage long sys_sysfs(int option, unsigned long arg1,
- unsigned long arg2);
-
-asmlinkage long
-sys32_sysfs(int option, u32 arg1, u32 arg2)
-{
- return sys_sysfs(option, arg1, arg2);
-}
-
struct ncp_mount_data32 {
int version;
unsigned int ncp_fd;
__kernel_uid_t32 mounted_uid;
- __kernel_pid_t32 wdog_pid;
+ int wdog_pid;
unsigned char mounted_vol[NCP_VOLNAME_LEN + 1];
unsigned int time_out;
unsigned int retry_count;
@@ -3175,269 +3589,7 @@
}
}
-struct sysinfo32 {
- s32 uptime;
- u32 loads[3];
- u32 totalram;
- u32 freeram;
- u32 sharedram;
- u32 bufferram;
- u32 totalswap;
- u32 freeswap;
- unsigned short procs;
- char _f[22];
-};
-
-extern asmlinkage long sys_sysinfo(struct sysinfo *info);
-
-asmlinkage long
-sys32_sysinfo(struct sysinfo32 *info)
-{
- struct sysinfo s;
- int ret, err;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_sysinfo(&s);
- set_fs (old_fs);
- err = put_user (s.uptime, &info->uptime);
- err |= __put_user (s.loads[0], &info->loads[0]);
- err |= __put_user (s.loads[1], &info->loads[1]);
- err |= __put_user (s.loads[2], &info->loads[2]);
- err |= __put_user (s.totalram, &info->totalram);
- err |= __put_user (s.freeram, &info->freeram);
- err |= __put_user (s.sharedram, &info->sharedram);
- err |= __put_user (s.bufferram, &info->bufferram);
- err |= __put_user (s.totalswap, &info->totalswap);
- err |= __put_user (s.freeswap, &info->freeswap);
- err |= __put_user (s.procs, &info->procs);
- if (err)
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_sched_rr_get_interval(pid_t pid,
- struct timespec *interval);
-
-asmlinkage long
-sys32_sched_rr_get_interval(__kernel_pid_t32 pid, struct timespec32 *interval)
-{
- struct timespec t;
- int ret;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_sched_rr_get_interval(pid, &t);
- set_fs (old_fs);
- if (put_user (t.tv_sec, &interval->tv_sec) ||
- __put_user (t.tv_nsec, &interval->tv_nsec))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_sigprocmask(int how, old_sigset_t *set,
- old_sigset_t *oset);
-
-asmlinkage long
-sys32_sigprocmask(int how, old_sigset_t32 *set, old_sigset_t32 *oset)
-{
- old_sigset_t s;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (set && get_user (s, set)) return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL);
- set_fs (old_fs);
- if (ret) return ret;
- if (oset && put_user (s, oset)) return -EFAULT;
- return 0;
-}
-
-extern asmlinkage long sys_sigpending(old_sigset_t *set);
-
-asmlinkage long
-sys32_sigpending(old_sigset_t32 *set)
-{
- old_sigset_t s;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_sigpending(&s);
- set_fs (old_fs);
- if (put_user (s, set)) return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_rt_sigpending(sigset_t *set, size_t sigsetsize);
-
-asmlinkage long
-sys32_rt_sigpending(sigset_t32 *set, __kernel_size_t32 sigsetsize)
-{
- sigset_t s;
- sigset_t32 s32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_rt_sigpending(&s, sigsetsize);
- set_fs (old_fs);
- if (!ret) {
- switch (_NSIG_WORDS) {
- case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
- case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
- case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
- case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
- }
- if (copy_to_user (set, &s32, sizeof(sigset_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-siginfo_t32 *
-siginfo64to32(siginfo_t32 *d, siginfo_t *s)
-{
- memset(d, 0, sizeof(siginfo_t32));
- d->si_signo = s->si_signo;
- d->si_errno = s->si_errno;
- d->si_code = s->si_code;
- if (s->si_signo >= SIGRTMIN) {
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- /* XXX: Ouch, how to find this out??? */
- d->si_int = s->si_int;
- } else switch (s->si_signo) {
- /* XXX: What about POSIX1.b timers */
- case SIGCHLD:
- d->si_pid = s->si_pid;
- d->si_status = s->si_status;
- d->si_utime = s->si_utime;
- d->si_stime = s->si_stime;
- break;
- case SIGSEGV:
- case SIGBUS:
- case SIGFPE:
- case SIGILL:
- d->si_addr = (long)(s->si_addr);
- /* XXX: Do we need to translate this from ia64 to ia32 traps? */
- d->si_trapno = s->si_trapno;
- break;
- case SIGPOLL:
- d->si_band = s->si_band;
- d->si_fd = s->si_fd;
- break;
- default:
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- break;
- }
- return d;
-}
-
-siginfo_t *
-siginfo32to64(siginfo_t *d, siginfo_t32 *s)
-{
- d->si_signo = s->si_signo;
- d->si_errno = s->si_errno;
- d->si_code = s->si_code;
- if (s->si_signo >= SIGRTMIN) {
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- /* XXX: Ouch, how to find this out??? */
- d->si_int = s->si_int;
- } else switch (s->si_signo) {
- /* XXX: What about POSIX1.b timers */
- case SIGCHLD:
- d->si_pid = s->si_pid;
- d->si_status = s->si_status;
- d->si_utime = s->si_utime;
- d->si_stime = s->si_stime;
- break;
- case SIGSEGV:
- case SIGBUS:
- case SIGFPE:
- case SIGILL:
- d->si_addr = (void *)A(s->si_addr);
- /* XXX: Do we need to translate this from ia32 to ia64 traps? */
- d->si_trapno = s->si_trapno;
- break;
- case SIGPOLL:
- d->si_band = s->si_band;
- d->si_fd = s->si_fd;
- break;
- default:
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- break;
- }
- return d;
-}
-
-extern asmlinkage long
-sys_rt_sigtimedwait(const sigset_t *uthese, siginfo_t *uinfo,
- const struct timespec *uts, size_t sigsetsize);
-
-asmlinkage long
-sys32_rt_sigtimedwait(sigset_t32 *uthese, siginfo_t32 *uinfo,
- struct timespec32 *uts, __kernel_size_t32 sigsetsize)
-{
- sigset_t s;
- sigset_t32 s32;
- struct timespec t;
- int ret;
- mm_segment_t old_fs = get_fs();
- siginfo_t info;
- siginfo_t32 info32;
-
- if (copy_from_user (&s32, uthese, sizeof(sigset_t32)))
- return -EFAULT;
- switch (_NSIG_WORDS) {
- case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
- case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
- case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
- case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
- }
- if (uts) {
- ret = get_user (t.tv_sec, &uts->tv_sec);
- ret |= __get_user (t.tv_nsec, &uts->tv_nsec);
- if (ret)
- return -EFAULT;
- }
- set_fs (KERNEL_DS);
- ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
- set_fs (old_fs);
- if (ret >= 0 && uinfo) {
- if (copy_to_user (uinfo, siginfo64to32(&info32, &info),
- sizeof(siginfo_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-extern asmlinkage long
-sys_rt_sigqueueinfo(int pid, int sig, siginfo_t *uinfo);
-
-asmlinkage long
-sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
-{
- siginfo_t info;
- siginfo_t32 info32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (copy_from_user (&info32, uinfo, sizeof(siginfo_t32)))
- return -EFAULT;
- /* XXX: Is this correct? */
- siginfo32to64(&info, &info32);
- set_fs (KERNEL_DS);
- ret = sys_rt_sigqueueinfo(pid, sig, &info);
- set_fs (old_fs);
- return ret;
-}
-
-extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
+extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
{
@@ -3462,24 +3614,6 @@
return sys_setresuid(sruid, seuid, ssuid);
}
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
-
-asmlinkage long
-sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
- __kernel_uid_t32 *suid)
-{
- uid_t a, b, c;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_getresuid(&a, &b, &c);
- set_fs (old_fs);
- if (put_user (a, ruid) || put_user (b, euid) || put_user (c, suid))
- return -EFAULT;
- return ret;
-}
-
extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
asmlinkage long
@@ -3506,46 +3640,6 @@
return sys_setresgid(srgid, segid, ssgid);
}
-extern asmlinkage long sys_getgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_getgroups(gidsetsize, gl);
- set_fs (old_fs);
- if (gidsetsize && ret > 0 && ret <= NGROUPS)
- for (i = 0; i < ret; i++, grouplist++)
- if (__put_user (gl[i], grouplist))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- if ((unsigned) gidsetsize > NGROUPS)
- return -EINVAL;
- for (i = 0; i < gidsetsize; i++, grouplist++)
- if (__get_user (gl[i], grouplist))
- return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_setgroups(gidsetsize, gl);
- set_fs (old_fs);
- return ret;
-}
-
-
/* XXX These as well... */
extern __inline__ struct socket *
socki_lookup(struct inode *inode)
@@ -3947,601 +4041,165 @@
kcmsg32->cmsg_len = clen32;
ucmsg = (struct cmsghdr *) (((char *)ucmsg) +
- CMSG_ALIGN(clen64));
- wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
- }
-
- /* Copy back fixed up data, and adjust pointers. */
- bufsz = (wp - workbuf);
- copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz);
-
- kmsg->msg_control = (struct cmsghdr *)
- (((char *)orig_cmsg_uptr) + bufsz);
- kmsg->msg_controllen = space_avail - bufsz;
-
- kfree(workbuf);
- return;
-
-fail:
- /* If we leave the 64-bit format CMSG chunks in there,
- * the application could get confused and crash. So to
- * ensure greater recovery, we report no CMSGs.
- */
- kmsg->msg_controllen += bufsz;
- kmsg->msg_control = (void *) orig_cmsg_uptr;
-}
-
-asmlinkage long
-sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
-{
- struct socket *sock;
- char address[MAX_SOCK_ADDR];
- struct iovec iov[UIO_FASTIOV];
- unsigned char ctl[sizeof(struct cmsghdr) + 20];
- unsigned char *ctl_buf = ctl;
- struct msghdr kern_msg;
- int err, total_len;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
- err = verify_iovec32(&kern_msg, iov, address, VERIFY_READ);
- if (err < 0)
- goto out;
- total_len = err;
-
- if(kern_msg.msg_controllen) {
- err = cmsghdr_from_user32_to_kern(&kern_msg, ctl, sizeof(ctl));
- if(err)
- goto out_freeiov;
- ctl_buf = kern_msg.msg_control;
- }
- kern_msg.msg_flags = user_flags;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- if (sock->file->f_flags & O_NONBLOCK)
- kern_msg.msg_flags |= MSG_DONTWAIT;
- err = sock_sendmsg(sock, &kern_msg, total_len);
- sockfd_put(sock);
- }
-
- /* N.B. Use kfree here, as kern_msg.msg_controllen might change? */
- if(ctl_buf != ctl)
- kfree(ctl_buf);
-out_freeiov:
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- return err;
-}
-
-asmlinkage long
-sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
-{
- struct iovec iovstack[UIO_FASTIOV];
- struct msghdr kern_msg;
- char addr[MAX_SOCK_ADDR];
- struct socket *sock;
- struct iovec *iov = iovstack;
- struct sockaddr *uaddr;
- int *uaddr_len;
- unsigned long cmsg_ptr;
- int err, total_len, len = 0;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
-
- uaddr = kern_msg.msg_name;
- uaddr_len = &user_msg->msg_namelen;
- err = verify_iovec32(&kern_msg, iov, addr, VERIFY_WRITE);
- if (err < 0)
- goto out;
- total_len = err;
-
- cmsg_ptr = (unsigned long) kern_msg.msg_control;
- kern_msg.msg_flags = 0;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- struct scm_cookie scm;
-
- if (sock->file->f_flags & O_NONBLOCK)
- user_flags |= MSG_DONTWAIT;
- memset(&scm, 0, sizeof(scm));
- lock_kernel();
- err = sock->ops->recvmsg(sock, &kern_msg, total_len,
- user_flags, &scm);
- if(err >= 0) {
- len = err;
- if(!kern_msg.msg_control) {
- if(sock->passcred || scm.fp)
- kern_msg.msg_flags |= MSG_CTRUNC;
- if(scm.fp)
- __scm_destroy(&scm);
- } else {
- /* If recvmsg processing itself placed some
- * control messages into user space, it is
- * using 64-bit CMSG processing, so we need
- * to fix it up before we tack on more stuff.
- */
- if((unsigned long) kern_msg.msg_control
- != cmsg_ptr)
- cmsg32_recvmsg_fixup(&kern_msg,
- cmsg_ptr);
-
- /* Wheee... */
- if(sock->passcred)
- put_cmsg32(&kern_msg,
- SOL_SOCKET, SCM_CREDENTIALS,
- sizeof(scm.creds),
- &scm.creds);
- if(scm.fp != NULL)
- scm_detach_fds32(&kern_msg, &scm);
- }
- }
- unlock_kernel();
- sockfd_put(sock);
- }
-
- if(uaddr != NULL && err >= 0)
- err = move_addr_to_user(addr, kern_msg.msg_namelen, uaddr,
- uaddr_len);
- if(cmsg_ptr != 0 && err >= 0) {
- unsigned long ucmsg_ptr = ((unsigned long)kern_msg.msg_control);
- __kernel_size_t32 uclen = (__kernel_size_t32) (ucmsg_ptr
- - cmsg_ptr);
- err |= __put_user(uclen, &user_msg->msg_controllen);
- }
- if(err >= 0)
- err = __put_user(kern_msg.msg_flags, &user_msg->msg_flags);
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- if(err < 0)
- return err;
- return len;
-}
-
-extern void check_pending(int signum);
-
-#ifdef CONFIG_MODULES
-
-extern asmlinkage unsigned long sys_create_module(const char *name_user,
- size_t size);
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, __kernel_size_t32 size)
-{
- return sys_create_module(name_user, (size_t)size);
-}
-
-extern asmlinkage long sys_init_module(const char *name_user,
- struct module *mod_user);
-
-/* Hey, when you're trying to init module, take time and prepare us a nice 64bit
- * module structure, even if from 32bit modutils... Why to pollute kernel... :))
- */
-asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
-{
- return sys_init_module(name_user, mod_user);
-}
-
-extern asmlinkage long sys_delete_module(const char *name_user);
-
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return sys_delete_module(name_user);
-}
-
-struct module_info32 {
- u32 addr;
- u32 size;
- u32 flags;
- s32 usecount;
-};
-
-/* Query various bits about modules. */
-
-static inline long
-get_mod_name(const char *user_name, char **buf)
-{
- unsigned long page;
- long retval;
-
- if ((unsigned long)user_name >= TASK_SIZE
- && !segment_eq(get_fs (), KERNEL_DS))
- return -EFAULT;
-
- page = __get_free_page(GFP_KERNEL);
- if (!page)
- return -ENOMEM;
-
- retval = strncpy_from_user((char *)page, user_name, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE) {
- *buf = (char *)page;
- return retval;
- }
- retval = -ENAMETOOLONG;
- } else if (!retval)
- retval = -EINVAL;
-
- free_page(page);
- return retval;
-}
-
-static inline void
-put_mod_name(char *buf)
-{
- free_page((unsigned long)buf);
-}
-
-static __inline__ struct module *
-find_module(const char *name)
-{
- struct module *mod;
-
- for (mod = module_list; mod ; mod = mod->next) {
- if (mod->flags & MOD_DELETED)
- continue;
- if (!strcmp(mod->name, name))
- break;
- }
-
- return mod;
-}
-
-static int
-qm_modules(char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- struct module *mod;
- size_t nmod, space, len;
-
- nmod = space = 0;
-
- for (mod = module_list; mod->next != NULL; mod = mod->next, ++nmod) {
- len = strlen(mod->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, mod->name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nmod, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((mod = mod->next)->next != NULL)
- space += strlen(mod->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_deps(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t i, space, len;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (i = 0; i < mod->ndeps; ++i) {
- const char *dep_name = mod->deps[i].dep->name;
-
- len = strlen(dep_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, dep_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while (++i < mod->ndeps)
- space += strlen(mod->deps[i].dep->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_refs(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t nrefs, space, len;
- struct module_ref *ref;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (nrefs = 0, ref = mod->refs; ref ; ++nrefs, ref = ref->next_ref) {
- const char *ref_name = ref->ref->name;
-
- len = strlen(ref_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, ref_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nrefs, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((ref = ref->next_ref) != NULL)
- space += strlen(ref->ref->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static inline int
-qm_symbols(struct module *mod, char *buf, size_t bufsize,
- __kernel_size_t32 *ret)
-{
- size_t i, space, len;
- struct module_symbol *s;
- char *strings;
- unsigned *vals;
-
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = mod->nsyms * 2*sizeof(u32);
-
- i = len = 0;
- s = mod->syms;
-
- if (space > bufsize)
- goto calc_space_needed;
-
- if (!access_ok(VERIFY_WRITE, buf, space))
- return -EFAULT;
-
- bufsize -= space;
- vals = (unsigned *)buf;
- strings = buf+space;
-
- for (; i < mod->nsyms ; ++i, ++s, vals += 2) {
- len = strlen(s->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
-
- if (copy_to_user(strings, s->name, len)
- || __put_user(s->value, vals+0)
- || __put_user(space, vals+1))
- return -EFAULT;
-
- strings += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- for (; i < mod->nsyms; ++i, ++s)
- space += strlen(s->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static inline int
-qm_info(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- int error = 0;
-
- if (mod->next == NULL)
- return -EINVAL;
+ CMSG_ALIGN(clen64));
+ wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
+ }
- if (sizeof(struct module_info32) <= bufsize) {
- struct module_info32 info;
- info.addr = (unsigned long)mod;
- info.size = mod->size;
- info.flags = mod->flags;
- info.usecount = ((mod_member_present(mod, can_unload)
- && mod->can_unload)
- ? -1 : atomic_read(&mod->uc.usecount));
+ /* Copy back fixed up data, and adjust pointers. */
+ bufsz = (wp - workbuf);
+ copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz);
- if (copy_to_user(buf, &info, sizeof(struct module_info32)))
- return -EFAULT;
- } else
- error = -ENOSPC;
+ kmsg->msg_control = (struct cmsghdr *)
+ (((char *)orig_cmsg_uptr) + bufsz);
+ kmsg->msg_controllen = space_avail - bufsz;
- if (put_user(sizeof(struct module_info32), ret))
- return -EFAULT;
+ kfree(workbuf);
+ return;
- return error;
+fail:
+ /* If we leave the 64-bit format CMSG chunks in there,
+ * the application could get confused and crash. So to
+ * ensure greater recovery, we report no CMSGs.
+ */
+ kmsg->msg_controllen += bufsz;
+ kmsg->msg_control = (void *) orig_cmsg_uptr;
}
asmlinkage long
-sys32_query_module(char *name_user, int which, char *buf,
- __kernel_size_t32 bufsize, u32 ret)
+sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
{
- struct module *mod;
- int err;
+ struct socket *sock;
+ char address[MAX_SOCK_ADDR];
+ struct iovec iov[UIO_FASTIOV];
+ unsigned char ctl[sizeof(struct cmsghdr) + 20];
+ unsigned char *ctl_buf = ctl;
+ struct msghdr kern_msg;
+ int err, total_len;
- lock_kernel();
- if (name_user == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list; mod->next != NULL; mod = mod->next)
- ;
- } else {
- long namelen;
- char *name;
+ if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
+ return -EFAULT;
+ if(kern_msg.msg_iovlen > UIO_MAXIOV)
+ return -EINVAL;
+ err = verify_iovec32(&kern_msg, iov, address, VERIFY_READ);
+ if (err < 0)
+ goto out;
+ total_len = err;
- if ((namelen = get_mod_name(name_user, &name)) < 0) {
- err = namelen;
- goto out;
- }
- err = -ENOENT;
- if (namelen == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list;
- mod->next != NULL;
- mod = mod->next) ;
- } else if ((mod = find_module(name)) == NULL) {
- put_mod_name(name);
- goto out;
- }
- put_mod_name(name);
+ if(kern_msg.msg_controllen) {
+ err = cmsghdr_from_user32_to_kern(&kern_msg, ctl, sizeof(ctl));
+ if(err)
+ goto out_freeiov;
+ ctl_buf = kern_msg.msg_control;
}
+ kern_msg.msg_flags = user_flags;
- switch (which)
- {
- case 0:
- err = 0;
- break;
- case QM_MODULES:
- err = qm_modules(buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_DEPS:
- err = qm_deps(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_REFS:
- err = qm_refs(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_SYMBOLS:
- err = qm_symbols(mod, buf, bufsize,
- (__kernel_size_t32 *)AA(ret));
- break;
- case QM_INFO:
- err = qm_info(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- default:
- err = -EINVAL;
- break;
+ sock = sockfd_lookup(fd, &err);
+ if (sock != NULL) {
+ if (sock->file->f_flags & O_NONBLOCK)
+ kern_msg.msg_flags |= MSG_DONTWAIT;
+ err = sock_sendmsg(sock, &kern_msg, total_len);
+ sockfd_put(sock);
}
+
+ /* N.B. Use kfree here, as kern_msg.msg_controllen might change? */
+ if(ctl_buf != ctl)
+ kfree(ctl_buf);
+out_freeiov:
+ if(kern_msg.msg_iov != iov)
+ kfree(kern_msg.msg_iov);
out:
- unlock_kernel();
return err;
}
-struct kernel_sym32 {
- u32 value;
- char name[60];
-};
-
-extern asmlinkage long sys_get_kernel_syms(struct kernel_sym *table);
-
asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym32 *table)
+sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
{
- int len, i;
- struct kernel_sym *tbl;
- mm_segment_t old_fs;
-
- len = sys_get_kernel_syms(NULL);
- if (!table) return len;
- tbl = kmalloc (len * sizeof (struct kernel_sym), GFP_KERNEL);
- if (!tbl) return -ENOMEM;
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- sys_get_kernel_syms(tbl);
- set_fs (old_fs);
- for (i = 0; i < len; i++, table += sizeof (struct kernel_sym32)) {
- if (put_user (tbl[i].value, &table->value) ||
- copy_to_user (table->name, tbl[i].name, 60))
- break;
- }
- kfree (tbl);
- return i;
-}
+ struct iovec iovstack[UIO_FASTIOV];
+ struct msghdr kern_msg;
+ char addr[MAX_SOCK_ADDR];
+ struct socket *sock;
+ struct iovec *iov = iovstack;
+ struct sockaddr *uaddr;
+ int *uaddr_len;
+ unsigned long cmsg_ptr;
+ int err, total_len, len = 0;
-#else /* CONFIG_MODULES */
+ if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
+ return -EFAULT;
+ if(kern_msg.msg_iovlen > UIO_MAXIOV)
+ return -EINVAL;
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, size_t size)
-{
- return -ENOSYS;
-}
+ uaddr = kern_msg.msg_name;
+ uaddr_len = &user_msg->msg_namelen;
+ err = verify_iovec32(&kern_msg, iov, addr, VERIFY_WRITE);
+ if (err < 0)
+ goto out;
+ total_len = err;
-asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
-{
- return -ENOSYS;
-}
+ cmsg_ptr = (unsigned long) kern_msg.msg_control;
+ kern_msg.msg_flags = 0;
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return -ENOSYS;
-}
+ sock = sockfd_lookup(fd, &err);
+ if (sock != NULL) {
+ struct scm_cookie scm;
-asmlinkage long
-sys32_query_module(const char *name_user, int which, char *buf, size_t bufsize,
- size_t *ret)
-{
- /* Let the program know about the new interface. Not that
- it'll do them much good. */
- if (which == 0)
- return 0;
+ if (sock->file->f_flags & O_NONBLOCK)
+ user_flags |= MSG_DONTWAIT;
+ memset(&scm, 0, sizeof(scm));
+ lock_kernel();
+ err = sock->ops->recvmsg(sock, &kern_msg, total_len,
+ user_flags, &scm);
+ if(err >= 0) {
+ len = err;
+ if(!kern_msg.msg_control) {
+ if(sock->passcred || scm.fp)
+ kern_msg.msg_flags |= MSG_CTRUNC;
+ if(scm.fp)
+ __scm_destroy(&scm);
+ } else {
+ /* If recvmsg processing itself placed some
+ * control messages into user space, it is
+ * using 64-bit CMSG processing, so we need
+ * to fix it up before we tack on more stuff.
+ */
+ if((unsigned long) kern_msg.msg_control
+ != cmsg_ptr)
+ cmsg32_recvmsg_fixup(&kern_msg,
+ cmsg_ptr);
- return -ENOSYS;
-}
+ /* Wheee... */
+ if(sock->passcred)
+ put_cmsg32(&kern_msg,
+ SOL_SOCKET, SCM_CREDENTIALS,
+ sizeof(scm.creds),
+ &scm.creds);
+ if(scm.fp != NULL)
+ scm_detach_fds32(&kern_msg, &scm);
+ }
+ }
+ unlock_kernel();
+ sockfd_put(sock);
+ }
-asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym *table)
-{
- return -ENOSYS;
+ if(uaddr != NULL && err >= 0)
+ err = move_addr_to_user(addr, kern_msg.msg_namelen, uaddr,
+ uaddr_len);
+ if(cmsg_ptr != 0 && err >= 0) {
+ unsigned long ucmsg_ptr = ((unsigned long)kern_msg.msg_control);
+ __kernel_size_t32 uclen = (__kernel_size_t32) (ucmsg_ptr
+ - cmsg_ptr);
+ err |= __put_user(uclen, &user_msg->msg_controllen);
+ }
+ if(err >= 0)
+ err = __put_user(kern_msg.msg_flags, &user_msg->msg_flags);
+ if(kern_msg.msg_iov != iov)
+ kfree(kern_msg.msg_iov);
+out:
+ if(err < 0)
+ return err;
+ return len;
}
-#endif /* CONFIG_MODULES */
-
/* Stuff for NFS server syscalls... */
struct nfsctl_svc32 {
u16 svc32_port;
@@ -4821,152 +4479,12 @@
return err;
}
-asmlinkage long sys_utimes(char *, struct timeval *);
-
-asmlinkage long
-sys32_utimes(char *filename, struct timeval32 *tvs)
-{
- char *kfilename;
- struct timeval ktvs[2];
- mm_segment_t old_fs;
- int ret;
-
- kfilename = getname32(filename);
- ret = PTR_ERR(kfilename);
- if (!IS_ERR(kfilename)) {
- if (tvs) {
- if (get_tv32(&ktvs[0], tvs) ||
- get_tv32(&ktvs[1], 1+tvs))
- return -EFAULT;
- }
-
- old_fs = get_fs();
- set_fs(KERNEL_DS);
- ret = sys_utimes(kfilename, &ktvs[0]);
- set_fs(old_fs);
-
- putname(kfilename);
- }
- return ret;
-}
-
-/* These are here just in case some old ia32 binary calls it. */
-asmlinkage long
-sys32_pause(void)
-{
- current->state = TASK_INTERRUPTIBLE;
- schedule();
- return -ERESTARTNOHAND;
-}
-
-/* PCI config space poking. */
-extern asmlinkage long sys_pciconfig_read(unsigned long bus,
- unsigned long dfn,
- unsigned long off,
- unsigned long len,
- unsigned char *buf);
-
-extern asmlinkage long sys_pciconfig_write(unsigned long bus,
- unsigned long dfn,
- unsigned long off,
- unsigned long len,
- unsigned char *buf);
-
-asmlinkage long
-sys32_pciconfig_read(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
-{
- return sys_pciconfig_read((unsigned long) bus,
- (unsigned long) dfn,
- (unsigned long) off,
- (unsigned long) len,
- (unsigned char *)AA(ubuf));
-}
-
-asmlinkage long
-sys32_pciconfig_write(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
-{
- return sys_pciconfig_write((unsigned long) bus,
- (unsigned long) dfn,
- (unsigned long) off,
- (unsigned long) len,
- (unsigned char *)AA(ubuf));
-}
-
-extern asmlinkage long sys_prctl(int option, unsigned long arg2,
- unsigned long arg3, unsigned long arg4,
- unsigned long arg5);
-
-asmlinkage long
-sys32_prctl(int option, u32 arg2, u32 arg3, u32 arg4, u32 arg5)
-{
- return sys_prctl(option,
- (unsigned long) arg2,
- (unsigned long) arg3,
- (unsigned long) arg4,
- (unsigned long) arg5);
-}
-
-
-extern asmlinkage ssize_t sys_pread(unsigned int fd, char * buf,
- size_t count, loff_t pos);
-
-extern asmlinkage ssize_t sys_pwrite(unsigned int fd, const char * buf,
- size_t count, loff_t pos);
-
-typedef __kernel_ssize_t32 ssize_t32;
-
-asmlinkage ssize_t32
-sys32_pread(unsigned int fd, char *ubuf, __kernel_size_t32 count,
- u32 poshi, u32 poslo)
-{
- return sys_pread(fd, ubuf, count,
- ((loff_t)AA(poshi) << 32) | AA(poslo));
-}
-
-asmlinkage ssize_t32
-sys32_pwrite(unsigned int fd, char *ubuf, __kernel_size_t32 count,
- u32 poshi, u32 poslo)
-{
- return sys_pwrite(fd, ubuf, count,
- ((loff_t)AA(poshi) << 32) | AA(poslo));
-}
-
-
-extern asmlinkage long sys_personality(unsigned long);
-
-asmlinkage long
-sys32_personality(unsigned long personality)
-{
- int ret;
- if (current->personality == PER_LINUX32 && personality == PER_LINUX)
- personality = PER_LINUX32;
- ret = sys_personality(personality);
- if (ret == PER_LINUX32)
- ret = PER_LINUX;
- return ret;
-}
-
extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offset,
size_t count);
asmlinkage long
sys32_sendfile(int out_fd, int in_fd, __kernel_off_t32 *offset, s32 count)
{
- mm_segment_t old_fs = get_fs();
- int ret;
- off_t of;
-
- if (offset && get_user(of, offset))
- return -EFAULT;
-
- set_fs(KERNEL_DS);
- ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
- set_fs(old_fs);
-
- if (!ret && offset && put_user(of, offset))
- return -EFAULT;
-
- return ret;
}
/* Handle adjtimex compatibility. */
diff -urN linux-davidm/arch/ia64/kernel/acpi.c linux-2.4.10-lia/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/acpi.c Mon Sep 24 21:45:43 2001
@@ -60,6 +60,8 @@
return "hpsim";
# elif defined (CONFIG_IA64_SGI_SN1)
return "sn1";
+# elif defined (CONFIG_IA64_SGI_SN2)
+ return "sn2";
# elif defined (CONFIG_IA64_DIG)
return "dig";
# else
diff -urN linux-davidm/arch/ia64/kernel/efi_stub.S linux-2.4.10-lia/arch/ia64/kernel/efi_stub.S
--- linux-davidm/arch/ia64/kernel/efi_stub.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/efi_stub.S Mon Sep 24 21:45:52 2001
@@ -1,8 +1,8 @@
/*
* EFI call stub.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * David Mosberger <davidm@hpl.hp.com>
*
* This stub allows us to make EFI calls in physical mode with interrupts
* turned off. We need this because we can't call SetVirtualMap() until
@@ -68,17 +68,17 @@
;;
andcm r16=loc3,r16 // get psr with IT, DT, and RT bits cleared
mov out3=in4
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret0: mov out4=in5
mov out5=in6
mov out6=in7
- br.call.sptk.few rp=b6 // call the EFI function
+ br.call.sptk.many rp=b6 // call the EFI function
.ret1: mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2: mov ar.rsc=loc4 // restore RSE configuration
mov ar.pfs=loc1
mov rp=loc0
mov gp=loc2
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(efi_call_phys)
diff -urN linux-davidm/arch/ia64/kernel/efivars.c linux-2.4.10-lia/arch/ia64/kernel/efivars.c
--- linux-davidm/arch/ia64/kernel/efivars.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/efivars.c Mon Sep 24 21:46:04 2001
@@ -276,21 +276,20 @@
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
- spin_lock(&efivars_lock);
MOD_INC_USE_COUNT;
var_data = kmalloc(size, GFP_KERNEL);
if (!var_data) {
MOD_DEC_USE_COUNT;
- spin_unlock(&efivars_lock);
return -ENOMEM;
}
if (copy_from_user(var_data, buffer, size)) {
MOD_DEC_USE_COUNT;
- spin_unlock(&efivars_lock);
+ kfree(var_data);
return -EFAULT;
}
+ spin_lock(&efivars_lock);
/* Since the data ptr we've currently got is probably for
a different variable find the right variable.
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.10-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/entry.S Mon Sep 24 21:46:12 2001
@@ -55,7 +55,7 @@
mov out1=in1 // argv
mov out2=in2 // envp
add out3=16,sp // regs
- br.call.sptk.few rp=sys_execve
+ br.call.sptk.many rp=sys_execve
.ret0: cmp4.ge p6,p7=r8,r0
mov ar.pfs=loc1 // restore ar.pfs
sxt4 r8=r8 // return 64-bit result
@@ -64,7 +64,7 @@
(p6) cmp.ne pKern,pUser=r0,r0 // a successful execve() lands us in user-mode...
mov rp=loc0
(p6) mov ar.pfs=r0 // clear ar.pfs on success
-(p7) br.ret.sptk.few rp
+(p7) br.ret.sptk.many rp
/*
* In theory, we'd have to zap this state only to prevent leaking of
@@ -85,7 +85,7 @@
ldf.fill f26=[sp]; ldf.fill f27=[sp]; mov f28=f0
ldf.fill f29=[sp]; ldf.fill f30=[sp]; mov f31=f0
mov ar.lc=0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_execve)
GLOBAL_ENTRY(sys_clone2)
@@ -99,7 +99,7 @@
mov out3=in2
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
- br.call.sptk.few rp=do_fork
+ br.call.sptk.many rp=do_fork
.ret1: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
@@ -118,7 +118,7 @@
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
- br.call.sptk.few rp=do_fork
+ br.call.sptk.many rp=do_fork
.ret2: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
@@ -151,12 +151,13 @@
* again.
*/
(p6) cmp.eq p7,p6=r26,r27
-(p6) br.cond.dpnt.few .map
+(p6) br.cond.dpnt .map
;;
-.done: ld8 sp=[r21] // load kernel stack pointer of new task
+.done:
(p6) ssm psr.ic // if we had to map, re-enable the psr.ic bit FIRST!!!
;;
(p6) srlz.d
+ ld8 sp=[r21] // load kernel stack pointer of new task
mov IA64_KR(CURRENT)=r20 // update "current" application register
mov r8=r13 // return pointer to previously running task
mov r13=in0 // set "current" pointer
@@ -167,7 +168,7 @@
#ifdef CONFIG_SMP
sync.i // ensure "fc"s done by this CPU are visible on other CPUs
#endif
- br.ret.sptk.few rp // boogie on out in new context
+ br.ret.sptk.many rp // boogie on out in new context
.map:
rsm psr.i | psr.ic
@@ -184,7 +185,7 @@
mov IA64_KR(CURRENT_STACK)=r26 // remember last page we mapped...
;;
itr.d dtr[r25]=r23 // wire in new mapping...
- br.cond.sptk.many .done
+ br.cond.sptk .done
END(ia64_switch_to)
/*
@@ -303,7 +304,7 @@
st8 [r2]=r20 // save ar.bspstore
st8 [r3]=r21 // save predicate registers
mov ar.rsc=3 // put RSE back into eager mode, pl 0
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
END(save_switch_stack)
/*
@@ -418,7 +419,7 @@
;;
(p6) st4 [r2]=r8
(p6) mov r8=-1
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_syscall)
/*
@@ -433,11 +434,11 @@
.body
mov loc2=b6
;;
- br.call.sptk.few rp=syscall_trace
+ br.call.sptk.many rp=syscall_trace
.ret3: mov rp=loc0
mov ar.pfs=loc1
mov b6=loc2
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(invoke_syscall_trace)
/*
@@ -454,21 +455,21 @@
GLOBAL_ENTRY(ia64_trace_syscall)
PT_REGS_UNWIND_INFO(0)
- br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
-.ret6: br.call.sptk.few rp=b6 // do the syscall
+ br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch syscall args
+.ret6: br.call.sptk.many rp=b6 // do the syscall
strace_check_retval:
cmp.lt p6,p0=r8,r0 // syscall failed?
adds r2=PT(R8)+16,sp // r2 = &pt_regs.r8
adds r3=PT(R10)+16,sp // r3 = &pt_regs.r10
mov r10=0
-(p6) br.cond.sptk.few strace_error // syscall failed ->
+(p6) br.cond.sptk strace_error // syscall failed ->
;; // avoid RAW on r10
strace_save_retval:
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
ia64_strace_leave_kernel:
- br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.rety: br.cond.sptk.many ia64_leave_kernel
+ br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch return value
+.rety: br.cond.sptk ia64_leave_kernel
strace_error:
ld8 r3=[r2] // load pt_regs.r8
@@ -479,7 +480,7 @@
;;
(p6) mov r10=-1
(p6) mov r8=r9
- br.cond.sptk.few strace_save_retval
+ br.cond.sptk strace_save_retval
END(ia64_trace_syscall)
GLOBAL_ENTRY(ia64_ret_from_clone)
@@ -489,7 +490,7 @@
* Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
* address of the previously executing task.
*/
- br.call.sptk.few rp=invoke_schedule_tail
+ br.call.sptk.many rp=ia64_invoke_schedule_tail
.ret8:
adds r2=IA64_TASK_PTRACE_OFFSET,r13
;;
@@ -497,7 +498,7 @@
;;
mov r8=0
tbit.nz p6,p0=r2,PT_TRACESYS_BIT
-(p6) br strace_check_retval
+(p6) br.cond.spnt strace_check_retval
;; // added stop bits to prevent r8 dependency
END(ia64_ret_from_clone)
// fall through
@@ -511,7 +512,7 @@
(p6) st8.spill [r2]=r8 // store return value in slot for r8 and set unat bit
.mem.offset 8,0
(p6) st8.spill [r3]=r0 // clear error indication in slot for r10 and set unat bit
-(p7) br.cond.spnt.few handle_syscall_error // handle potential syscall failure
+(p7) br.cond.spnt handle_syscall_error // handle potential syscall failure
END(ia64_ret_from_syscall)
// fall through
GLOBAL_ENTRY(ia64_leave_kernel)
@@ -524,17 +525,17 @@
adds r17=IA64_TASK_NEED_RESCHED_OFFSET,r13
adds r18=IA64_TASK_SIGPENDING_OFFSET,r13
#ifdef CONFIG_PERFMON
- adds r19=IA64_TASK_PFM_NOTIFY_OFFSET,r13
+ adds r19=IA64_TASK_PFM_MUST_BLOCK_OFFSET,r13
#endif
;;
#ifdef CONFIG_PERFMON
-(pUser) ld8 r19=[r19] // load current->task.pfm_notify
+(pUser) ld8 r19=[r19] // load current->thread.pfm_must_block
#endif
(pUser) ld8 r17=[r17] // load current->need_resched
(pUser) ld4 r18=[r18] // load current->sigpending
;;
#ifdef CONFIG_PERFMON
-(pUser) cmp.ne.unc p9,p0=r19,r0 // current->task.pfm_notify != 0?
+(pUser) cmp.ne.unc p9,p0=r19,r0 // current->thread.pfm_must_block != 0?
#endif
(pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0?
(pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0?
@@ -542,7 +543,7 @@
adds r2=PT(R8)+16,r12
adds r3=PT(R9)+16,r12
#ifdef CONFIG_PERFMON
-(p9) br.call.spnt.many b7=pfm_overflow_notify
+(p9) br.call.spnt.many b7=pfm_block_on_overflow
#endif
#if __GNUC__ < 3
(p7) br.call.spnt.many b7=invoke_schedule
@@ -642,13 +643,13 @@
movl r17=PERCPU_ADDR+IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET
;;
ld4 r17=[r17] // r17 = cpu_data->phys_stacked_size_p8
-(pKern) br.cond.dpnt.few skip_rbs_switch
+(pKern) br.cond.dpnt skip_rbs_switch
/*
* Restore user backing store.
*
* NOTE: alloc, loadrs, and cover can't be predicated.
*/
-(pNonSys) br.cond.dpnt.few dont_preserve_current_frame
+(pNonSys) br.cond.dpnt dont_preserve_current_frame
cover // add current frame into dirty partition
;;
mov r19=ar.bsp // get new backing store pointer
@@ -698,7 +699,7 @@
}{ .mib
mov loc3=0
mov loc4=0
-(pRecurse) br.call.sptk.few b6=rse_clear_invalid
+(pRecurse) br.call.sptk.many b6=rse_clear_invalid
}{ .mfi // cycle 2
mov loc5=0
@@ -707,7 +708,7 @@
}{ .mib
mov loc6=0
mov loc7=0
-(pReturn) br.ret.sptk.few b6
+(pReturn) br.ret.sptk.many b6
}
# undef pRecurse
# undef pReturn
@@ -753,24 +754,24 @@
;;
.mem.offset 0,0; st8.spill [r2]=r9 // store errno in pt_regs.r8 and set unat bit
.mem.offset 8,0; st8.spill [r3]=r10 // store error indication in pt_regs.r10 and set unat bit
- br.cond.sptk.many ia64_leave_kernel
+ br.cond.sptk ia64_leave_kernel
END(handle_syscall_error)
/*
* Invoke schedule_tail(task) while preserving in0-in7, which may be needed
* in case a system call gets restarted.
*/
-ENTRY(invoke_schedule_tail)
+GLOBAL_ENTRY(ia64_invoke_schedule_tail)
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,1,0
mov loc0=rp
mov out0=r8 // Address of previous task
;;
- br.call.sptk.few rp=schedule_tail
+ br.call.sptk.many rp=schedule_tail
.ret11: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
-END(invoke_schedule_tail)
+END(ia64_invoke_schedule_tail)
#if __GNUC__ < 3
@@ -789,7 +790,7 @@
mov loc0=rp
;;
.body
- br.call.sptk.few rp=schedule
+ br.call.sptk.many rp=schedule
.ret14: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
@@ -816,7 +817,7 @@
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
.body
- br.call.sptk.few rp=ia64_do_signal
+ br.call.sptk.many rp=ia64_do_signal
.ret15: .restore sp
adds sp=16,sp // pop scratch stack space
;;
@@ -841,7 +842,7 @@
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
.body
- br.call.sptk.few rp=ia64_rt_sigsuspend
+ br.call.sptk.many rp=ia64_rt_sigsuspend
.ret17: .restore sp
adds sp=16,sp // pop scratch stack space
;;
@@ -863,7 +864,7 @@
cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
;;
adds out0=16,sp // out0 = &sigscratch
- br.call.sptk.few rp=ia64_rt_sigreturn
+ br.call.sptk.many rp=ia64_rt_sigreturn
.ret19: .restore sp 0
adds sp=16,sp
;;
@@ -871,7 +872,7 @@
mov.sptk b7=r8,ia64_leave_kernel
;;
mov ar.unat=r9
- br b7
+ br.many b7
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
@@ -882,7 +883,7 @@
mov r16=r0
.prologue
DO_SAVE_SWITCH_STACK
- br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
+ br.call.sptk.many rp=ia64_handle_unaligned // stack frame setup in ivt
.ret21: .body
DO_LOAD_SWITCH_STACK
br.cond.sptk.many rp // goes to ia64_leave_kernel
@@ -912,14 +913,14 @@
adds out0=16,sp // &info
mov out1=r13 // current
adds out2=16+EXTRA_FRAME_SIZE,sp // &switch_stack
- br.call.sptk.few rp=unw_init_frame_info
+ br.call.sptk.many rp=unw_init_frame_info
1: adds out0=16,sp // &info
mov b6=loc2
mov loc2=gp // save gp across indirect function call
;;
ld8 gp=[in0]
mov out1=in1 // arg
- br.call.sptk.few rp=b6 // invoke the callback function
+ br.call.sptk.many rp=b6 // invoke the callback function
1: mov gp=loc2 // restore gp
// For now, we don't allow changing registers from within
diff -urN linux-davidm/arch/ia64/kernel/fw-emu.c linux-2.4.10-lia/arch/ia64/kernel/fw-emu.c
--- linux-davidm/arch/ia64/kernel/fw-emu.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/fw-emu.c Mon Sep 24 21:46:40 2001
@@ -416,11 +416,6 @@
strcpy(sal_systab->product_id, "SDV");
#endif
-#ifdef CONFIG_IA64_SGI_SN1_SIM
- strcpy(sal_systab->oem_id, "SGI");
- strcpy(sal_systab->product_id, "SN1");
-#endif
-
/* fill in an entry point: */
sal_ed->type = SAL_DESC_ENTRY_POINT;
sal_ed->pal_proc = __pa(pal_desc[0]);
diff -urN linux-davidm/arch/ia64/kernel/gate.S linux-2.4.10-lia/arch/ia64/kernel/gate.S
--- linux-davidm/arch/ia64/kernel/gate.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/gate.S Mon Sep 24 21:46:49 2001
@@ -3,7 +3,7 @@
* region. For now, it contains the signal trampoline code only.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -18,7 +18,6 @@
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
# define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET)
# define ARG2_OFF (16 + IA64_SIGFRAME_ARG2_OFFSET)
-# define RBS_BASE_OFF (16 + IA64_SIGFRAME_RBS_BASE_OFFSET)
# define SIGHANDLER_OFF (16 + IA64_SIGFRAME_HANDLER_OFFSET)
# define SIGCONTEXT_OFF (16 + IA64_SIGFRAME_SIGCONTEXT_OFFSET)
@@ -32,6 +31,8 @@
# define PR_OFF IA64_SIGCONTEXT_PR_OFFSET
# define RP_OFF IA64_SIGCONTEXT_B0_OFFSET
# define SP_OFF IA64_SIGCONTEXT_R12_OFFSET
+# define RBS_BASE_OFF IA64_SIGCONTEXT_RBS_BASE_OFFSET
+# define LOADRS_OFF IA64_SIGCONTEXT_LOADRS_OFFSET
# define base0 r2
# define base1 r3
/*
@@ -73,34 +74,37 @@
.vframesp SP_OFF+SIGCONTEXT_OFF
.body
- .prologue
+ .label_state 1
+
adds base0=SIGHANDLER_OFF,sp
- adds base1=RBS_BASE_OFF,sp
+ adds base1=RBS_BASE_OFF+SIGCONTEXT_OFF,sp
br.call.sptk.many rp=1f
1:
ld8 r17=[base0],(ARG0_OFF-SIGHANDLER_OFF) // get pointer to signal handler's plabel
- ld8 r15=[base1],(ARG1_OFF-RBS_BASE_OFF) // get address of new RBS base (or NULL)
+ ld8 r15=[base1] // get address of new RBS base (or NULL)
cover // push args in interrupted frame onto backing store
;;
+ cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
+ mov.m r9=ar.bsp // fetch ar.bsp
+ .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+(p8) br.cond.spnt setup_rbs // yup -> (clobbers r14, r15, and r16)
+back_from_setup_rbs:
+
.save ar.pfs, r8
alloc r8=ar.pfs,0,0,3,0 // get CFM0, EC0, and CPL0 into r8
ld8 out0=[base0],16 // load arg0 (signum)
+ adds base1=(ARG1_OFF-(RBS_BASE_OFF+SIGCONTEXT_OFF)),base1
;;
ld8 out1=[base1] // load arg1 (siginfop)
ld8 r10=[r17],8 // get signal handler entry point
;;
ld8 out2=[base0] // load arg2 (sigcontextp)
ld8 gp=[r17] // get signal handler's global pointer
- cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
- mov.m r17=ar.bsp // fetch ar.bsp
- .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
-(p8) br.cond.spnt.few setup_rbs // yup -> (clobbers r14 and r16)
-back_from_setup_rbs:
adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
;;
.spillsp ar.bsp, BSP_OFF+SIGCONTEXT_OFF
- st8 [base0]=r17,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
+ st8 [base0]=r9,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
dep r8=0,r8,38,26 // clear EC0, CPL0 and reserved bits
adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
;;
@@ -123,7 +127,7 @@
;;
stf.spill [base0]=f14,32
stf.spill [base1]=f15,32
- br.call.sptk.few rp=b6 // call the signal handler
+ br.call.sptk.many rp=b6 // call the signal handler
.ret0: adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
;;
ld8 r15=[base0],(CFM_OFF-BSP_OFF) // fetch sc_ar_bsp and advance to CFM_OFF
@@ -131,7 +135,7 @@
;;
ld8 r8=[base0] // restore (perhaps modified) CFM0, EC0, and CPL0
cmp.ne p8,p0=r14,r15 // do we need to restore the rbs?
-(p8) br.cond.spnt.few restore_rbs // yup -> (clobbers r14 and r16)
+(p8) br.cond.spnt restore_rbs // yup -> (clobbers r14 and r16)
;;
back_from_restore_rbs:
adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
@@ -154,30 +158,52 @@
mov r15=__NR_rt_sigreturn
break __BREAK_SYSCALL
+ .body
+ .copy_state 1
setup_rbs:
- flushrs // must be first in insn
mov ar.rsc=0 // put RSE into enforced lazy mode
- adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
- mov r14=ar.rnat // get rnat as updated by flushrs
- mov ar.bspstore=r15 // set new register backing store area
+ .save ar.rnat, r16
+ mov r16=ar.rnat // save RNaT before switching backing store area
+ adds r14=(RNAT_OFF+SIGCONTEXT_OFF),sp
+
+ mov ar.bspstore=r15 // switch over to new register backing store area
;;
.spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
- st8 [r16]=r14 // save sc_ar_rnat
+ st8 [r14]=r16 // save sc_ar_rnat
+ adds r14=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+
+ mov.m r16=ar.bsp // sc_loadrs <- (new bsp - new bspstore) << 16
+ ;;
+ invala
+ sub r15=r16,r15
+ ;;
+ shl r15=r15,16
+ ;;
+ st8 [r14]=r15 // save sc_loadrs
mov ar.rsc=0xf // set RSE into eager mode, pl 3
- invala // invalidate ALAT
- br.cond.sptk.many back_from_setup_rbs
+ br.cond.sptk back_from_setup_rbs
+ .prologue
+ .copy_state 1
+ .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+ .body
restore_rbs:
- flushrs
- mov ar.rsc=0 // put RSE into enforced lazy mode
+ alloc r2=ar.pfs,0,0,0,0 // alloc null frame
+ adds r16=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ ld8 r14=[r16]
adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
+ mov ar.rsc=r14 // put RSE into enforced lazy mode
ld8 r14=[r16] // get new rnat
- mov ar.bspstore=r15 // set old register backing store area
;;
- mov ar.rnat=r14 // establish new rnat
+ loadrs // restore dirty partition
+ ;;
+ mov ar.bspstore=r15 // switch back to old register backing store area
+ ;;
+ mov ar.rnat=r14 // restore RNaT
mov ar.rsc=0xf // (will be restored later on from sc_ar_rsc)
// invala not necessary as that will happen when returning to user-mode
- br.cond.sptk.many back_from_restore_rbs
+ br.cond.sptk back_from_restore_rbs
END(ia64_sigtramp)
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.4.10-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/head.S Mon Sep 24 21:47:23 2001
@@ -6,8 +6,8 @@
* entry point.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 2001 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Intel Corp.
@@ -86,7 +86,8 @@
/*
* Switch into virtual mode:
*/
- movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN)
+ movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN \
+ |IA64_PSR_DI)
;;
mov cr.ipsr=r16
movl r17=1f
@@ -183,31 +184,31 @@
alloc r2=ar.pfs,0,0,2,0
movl out0=alive_msg
;;
- br.call.sptk.few rp=early_printk
+ br.call.sptk.many rp=early_printk
1: // force new bundle
#endif /* CONFIG_IA64_EARLY_PRINTK */
#ifdef CONFIG_SMP
-(isAP) br.call.sptk.few rp=start_secondary
+(isAP) br.call.sptk.many rp=start_secondary
.ret0:
-(isAP) br.cond.sptk.few self
+(isAP) br.cond.sptk self
#endif
// This is executed by the bootstrap processor (bsp) only:
#ifdef CONFIG_IA64_FW_EMU
// initialize PAL & SAL emulator:
- br.call.sptk.few rp=sys_fw_init
+ br.call.sptk.many rp=sys_fw_init
.ret1:
#endif
- br.call.sptk.few rp=start_kernel
+ br.call.sptk.many rp=start_kernel
.ret2: addl r3=@ltoff(halt_msg),gp
;;
alloc r2=ar.pfs,8,0,2,0
;;
ld8 out0=[r3]
- br.call.sptk.few b0=console_print
-self: br.sptk.few self // endless loop
+ br.call.sptk.many b0=console_print
+self: br.sptk.many self // endless loop
END(_start)
GLOBAL_ENTRY(ia64_save_debug_regs)
@@ -227,10 +228,10 @@
;;
st8.nta [in0]=r16,8
st8.nta [r19]=r17,8
- br.cloop.sptk.few 1b
+ br.cloop.sptk.many 1b
;;
mov ar.lc=r20 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_save_debug_regs)
GLOBAL_ENTRY(ia64_load_debug_regs)
@@ -251,10 +252,10 @@
srlz.d // Errata 132 (NoFix status)
#endif
mov ibr[r18]=r17
- br.cloop.sptk.few 1b
+ br.cloop.sptk.many 1b
;;
mov ar.lc=r20 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_load_debug_regs)
GLOBAL_ENTRY(__ia64_save_fpu)
@@ -404,7 +405,7 @@
;;
stf.spill.nta [in0]=f126,32
stf.spill.nta [ r3]=f127,32
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_save_fpu)
GLOBAL_ENTRY(__ia64_load_fpu)
@@ -554,7 +555,7 @@
;;
ldf.fill.nta f126=[in0],32
ldf.fill.nta f127=[ r3],32
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_load_fpu)
GLOBAL_ENTRY(__ia64_init_fpu)
@@ -688,7 +689,7 @@
;;
ldf.fill f126=[sp]
mov f127=f0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_init_fpu)
/*
@@ -736,7 +737,7 @@
rfi // must be last insn in group
;;
1: mov rp=r14
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_switch_mode)
#ifdef CONFIG_IA64_BRL_EMU
@@ -750,7 +751,7 @@
alloc r16=ar.pfs,1,0,0,0; \
mov reg=r32; \
;; \
- br.ret.sptk rp; \
+ br.ret.sptk.many rp; \
END(ia64_set_##reg)
SET_REG(b1);
@@ -814,7 +815,7 @@
;;
cmp.ne p15,p0=tmp,r0
mov tmp=ar.itc
-(p15) br.cond.sptk.few .retry // lock is still busy
+(p15) br.cond.sptk .retry // lock is still busy
;;
// try acquiring lock (we know ar.ccv is still zero!):
mov tmp=1
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.10-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/ia64_ksyms.c Mon Sep 24 21:47:44 2001
@@ -145,4 +145,3 @@
#include <linux/proc_fs.h>
extern struct proc_dir_entry *efi_dir;
EXPORT_SYMBOL(efi_dir);
-
diff -urN linux-davidm/arch/ia64/kernel/iosapic.c linux-2.4.10-lia/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/iosapic.c Mon Sep 24 21:47:52 2001
@@ -343,6 +343,54 @@
}
/*
+ * ACPI can describe IOSAPIC interrupts via static tables and namespace
+ * methods. This provides an interface to register those interrupts and
+ * program the IOSAPIC RTE.
+ */
+int
+iosapic_register_irq (u32 global_vector, unsigned long polarity, unsigned long
+ edge_triggered, u32 base_irq, char *iosapic_address)
+{
+ irq_desc_t *idesc;
+ struct hw_interrupt_type *irq_type;
+ int vector;
+
+ vector = iosapic_irq_to_vector(global_vector);
+ if (vector < 0)
+ vector = ia64_alloc_irq();
+
+ /* fill in information from this vector's IOSAPIC */
+ iosapic_irq[vector].addr = iosapic_address;
+ iosapic_irq[vector].base_irq = base_irq;
+ iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq;
+ iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+
+ if (edge_triggered) {
+ iosapic_irq[vector].trigger = IOSAPIC_EDGE;
+ irq_type = &irq_type_iosapic_edge;
+ } else {
+ iosapic_irq[vector].trigger = IOSAPIC_LEVEL;
+ irq_type = &irq_type_iosapic_level;
+ }
+
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type) {
+ if (idesc->handler != &no_irq_type)
printk("iosapic_register_irq(): changing vector 0x%02x from "
"%s to %s\n", vector, idesc->handler->typename, irq_type->typename);
+ idesc->handler = irq_type;
+ }
+
+ printk("IOSAPIC %x(%s,%s) -> Vector %x\n", global_vector,
+ (polarity ? "high" : "low"), (edge_triggered ? "edge" : "level"), vector);
+
+ /* program the IOSAPIC routing table */
+ set_rte(vector, (ia64_get_lid() >> 16) & 0xffff);
+ return vector;
+}
+
+/*
* ACPI calls this when it finds an entry for a platform interrupt.
* Note that the irq_base and IOSAPIC address must be set in iosapic_init().
*/
@@ -392,7 +440,7 @@
}
printk("PLATFORM int %x: IOSAPIC %x(%s,%s) -> Vector %x CPU %.02u:%.02u\n",
- int_type, global_vector, (polarity ? "hign" : "low"),
+ int_type, global_vector, (polarity ? "high" : "low"),
(edge_triggered ? "edge" : "level"), vector, eid, id);
/* program the IOSAPIC routing table */
@@ -496,7 +544,7 @@
/* the interrupt route is for another controller... */
continue;
- if (irq < 16)
+ if (pcat_compat && (irq < 16))
vector = isa_irq_to_vector(irq);
else {
vector = iosapic_irq_to_vector(irq);
@@ -575,6 +623,23 @@
printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> 0x%02x\n",
dev->bus->number, PCI_SLOT(dev->devfn), pin, vector);
dev->irq = vector;
+
+#ifdef CONFIG_SMP
+ /*
+ * For platforms that do not support interrupt redirect
+ * via the XTP interface, we can round-robin the PCI
+ * device interrupts to the processors
+ */
+ if (!(smp_int_redirect & SMP_IRQ_REDIRECTION)) {
+ static int cpu_index = 0;
+
+ set_rte(vector, cpu_physical_id(cpu_index) & 0xffff);
+
+ cpu_index++;
+ if (cpu_index >= smp_num_cpus)
+ cpu_index = 0;
+ }
+#endif
}
}
/*
diff -urN linux-davidm/arch/ia64/kernel/irq.c linux-2.4.10-lia/arch/ia64/kernel/irq.c
--- linux-davidm/arch/ia64/kernel/irq.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/irq.c Mon Sep 24 21:48:11 2001
@@ -33,6 +33,7 @@
#include <linux/irq.h>
#include <linux/proc_fs.h>
+#include <asm/atomic.h>
#include <asm/io.h>
#include <asm/smp.h>
#include <asm/system.h>
@@ -121,7 +122,10 @@
end_none
};
-volatile unsigned long irq_err_count;
+atomic_t irq_err_count;
+#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG)
+atomic_t irq_mis_count;
+#endif
/*
* Generic, controller-independent functions:
@@ -164,14 +168,17 @@
p += sprintf(p, "%10u ",
nmi_count(cpu_logical_map(j)));
p += sprintf(p, "\n");
-#if defined(CONFIG_SMP) && defined(__i386__)
+#if defined(CONFIG_SMP) && defined(CONFIG_X86)
p += sprintf(p, "LOC: ");
for (j = 0; j < smp_num_cpus; j++)
p += sprintf(p, "%10u ",
apic_timer_irqs[cpu_logical_map(j)]);
p += sprintf(p, "\n");
#endif
- p += sprintf(p, "ERR: %10lu\n", irq_err_count);
+ p += sprintf(p, "ERR: %10u\n", atomic_read(&irq_err_count));
+#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG)
+ p += sprintf(p, "MIS: %10u\n", atomic_read(&irq_mis_count));
+#endif
return p - buf;
}
@@ -183,7 +190,7 @@
#ifdef CONFIG_SMP
unsigned int global_irq_holder = NO_PROC_ID;
-volatile unsigned long global_irq_lock; /* long for set_bit --RR */
+unsigned volatile long global_irq_lock; /* pedantic: long for set_bit --RR */
extern void show_stack(unsigned long* esp);
@@ -201,14 +208,14 @@
printk(" %d",bh_count(i));
printk(" ]\nStack dumps:");
-#if defined(__ia64__)
+#if defined(CONFIG_IA64)
/*
* We can't unwind the stack of another CPU without access to
* the registers of that CPU. And sending an IPI when we're
* in a potentially wedged state doesn't sound like a smart
* idea.
*/
-#elif defined(__i386__)
+#elif defined(CONFIG_X86)
for(i=0;i< smp_num_cpus;i++) {
unsigned long esp;
if(i==cpu)
@@ -261,7 +268,7 @@
/*
* We have to allow irqs to arrive between __sti and __cli
*/
-# ifdef __ia64__
+# ifdef CONFIG_IA64
# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop 0")
# else
# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop")
@@ -331,6 +338,9 @@
/* Uhhuh.. Somebody else got it. Wait.. */
do {
do {
+#ifdef CONFIG_X86
+ rep_nop();
+#endif
} while (test_bit(0,&global_irq_lock));
} while (test_and_set_bit(0,&global_irq_lock));
}
@@ -364,7 +374,7 @@
{
unsigned int flags;
-#ifdef __ia64__
+#ifdef CONFIG_IA64
__save_flags(flags);
if (flags & IA64_PSR_I) {
__cli();
@@ -403,7 +413,7 @@
int cpu = smp_processor_id();
__save_flags(flags);
-#ifdef __ia64__
+#ifdef CONFIG_IA64
local_enabled = (flags & IA64_PSR_I) != 0;
#else
local_enabled = (flags >> EFLAGS_IF_SHIFT) & 1;
@@ -476,13 +486,19 @@
return status;
}
-/*
- * Generic enable/disable code: this just calls
- * down into the PIC-specific version for the actual
- * hardware disable after having gotten the irq
- * controller lock.
+/**
+ * disable_irq_nosync - disable an irq without waiting
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables and Enables are
+ * nested.
+ * Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
+ *
+ * This function may be called from IRQ context.
*/
-void inline disable_irq_nosync(unsigned int irq)
+
+inline void disable_irq_nosync(unsigned int irq)
{
irq_desc_t *desc = irq_desc(irq);
unsigned long flags;
@@ -495,10 +511,19 @@
spin_unlock_irqrestore(&desc->lock, flags);
}
-/*
- * Synchronous version of the above, making sure the IRQ is
- * no longer running on any other IRQ..
+/**
+ * disable_irq - disable an irq and wait for completion
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Enables and Disables are
+ * nested.
+ * This function waits for any pending IRQ handlers for this interrupt
+ * to complete before returning. If you use this function while
+ * holding a resource the IRQ handler may need you will deadlock.
+ *
+ * This function may be called - with care - from IRQ context.
*/
+
void disable_irq(unsigned int irq)
{
disable_irq_nosync(irq);
@@ -512,6 +537,17 @@
#endif
}
+/**
+ * enable_irq - enable handling of an irq
+ * @irq: Interrupt to enable
+ *
+ * Undoes the effect of one call to disable_irq(). If this
+ * matches the last disable, processing of interrupts on this
+ * IRQ line is re-enabled.
+ *
+ * This function may be called from IRQ context.
+ */
+
void enable_irq(unsigned int irq)
{
irq_desc_t *desc = irq_desc(irq);
@@ -533,7 +569,8 @@
desc->depth--;
break;
case 0:
- printk("enable_irq() unbalanced from %p\n", (void *) __builtin_return_address(0));
+ printk("enable_irq(%u) unbalanced from %p\n",
+ irq, (void *) __builtin_return_address(0));
}
spin_unlock_irqrestore(&desc->lock, flags);
}
@@ -626,11 +663,41 @@
desc->handler->end(irq);
spin_unlock(&desc->lock);
}
- if (local_softirq_pending())
- do_softirq();
return 1;
}
+/**
+ * request_irq - allocate an interrupt line
+ * @irq: Interrupt line to allocate
+ * @handler: Function to be called when the IRQ occurs
+ * @irqflags: Interrupt type flags
+ * @devname: An ascii name for the claiming device
+ * @dev_id: A cookie passed back to the handler function
+ *
+ * This call allocates interrupt resources and enables the
+ * interrupt line and IRQ handling. From the point this
+ * call is made your handler function may be invoked. Since
+ * your handler function must clear any interrupt the board
+ * raises, you must take care both to initialise your hardware
+ * and to set up the interrupt handler in the right order.
+ *
+ * Dev_id must be globally unique. Normally the address of the
+ * device data structure is used as the cookie. Since the handler
+ * receives this value it makes sense to use it.
+ *
+ * If your interrupt is shared you must pass a non NULL dev_id
+ * as this is required when freeing the interrupt.
+ *
+ * Flags:
+ *
+ * SA_SHIRQ Interrupt is shared
+ *
+ * SA_INTERRUPT Disable local interrupts while processing
+ *
+ * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ *
+ */
+
int request_irq(unsigned int irq,
void (*handler)(int, void *, struct pt_regs *),
unsigned long irqflags,
@@ -676,6 +743,24 @@
return retval;
}
+/**
+ * free_irq - free an interrupt
+ * @irq: Interrupt line to free
+ * @dev_id: Device identity to free
+ *
+ * Remove an interrupt handler. The handler is removed and if the
+ * interrupt line is no longer in use by any driver it is disabled.
+ * On a shared IRQ the caller must ensure the interrupt is disabled
+ * on the card it drives before calling this function. The function
+ * does not return until any executing interrupts for this IRQ
+ * have completed.
+ *
+ * This function may be called from interrupt context.
+ *
+ * Bugs: Attempting to free an irq in a handler for the same irq hangs
+ * the machine.
+ */
+
void free_irq(unsigned int irq, void *dev_id)
{
irq_desc_t *desc;
@@ -726,6 +811,17 @@
* with "IRQ_WAITING" cleared and the interrupt
* disabled.
*/
+
+static DECLARE_MUTEX(probe_sem);
+
+/**
+ * probe_irq_on - begin an interrupt autodetect
+ *
+ * Commence probing for an interrupt. The interrupts are scanned
+ * and a mask of potential interrupt lines is returned.
+ *
+ */
+
unsigned long probe_irq_on(void)
{
unsigned int i;
@@ -733,6 +829,7 @@
unsigned long val;
unsigned long delay;
+ down(&probe_sem);
/*
* something may have generated an irq long ago and we want to
* flush such a longstanding irq before considering it as spurious.
@@ -799,10 +896,19 @@
return val;
}
-/*
- * Return a mask of triggered interrupts (this
- * can handle only legacy ISA interrupts).
+/**
+ * probe_irq_mask - scan a bitmap of interrupt lines
+ * @val: mask of interrupts to consider
+ *
+ * Scan the ISA bus interrupt lines and return a bitmap of
+ * active interrupts. The interrupt probe logic state is then
+ * returned to its previous value.
+ *
+ * Note: we need to scan all the irq's even though we will
+ * only return ISA irq numbers - just so that we reset them
+ * all to a known state.
*/
+
unsigned int probe_irq_mask(unsigned long val)
{
int i;
@@ -825,14 +931,29 @@
}
spin_unlock_irq(&desc->lock);
}
+ up(&probe_sem);
return mask & val;
}
-/*
- * Return the one interrupt that triggered (this can
- * handle any interrupt source)
+/**
+ * probe_irq_off - end an interrupt autodetect
+ * @val: mask of potential interrupts (unused)
+ *
+ * Scans the unused interrupt lines and returns the line which
+ * appears to have triggered the interrupt. If no interrupt was
+ * found then zero is returned. If more than one interrupt is
+ * found then minus the first candidate is returned to indicate
+ * there is doubt.
+ *
+ * The interrupt probe logic state is returned to its previous
+ * value.
+ *
+ * BUGS: When used in a module (which arguably shouldn't happen)
+ * nothing prevents two IRQ probe callers from overlapping. The
+ * results of this are non-optimal.
*/
+
int probe_irq_off(unsigned long val)
{
int i, irq_found, nr_irqs;
@@ -857,6 +978,7 @@
}
spin_unlock_irq(&desc->lock);
}
+ up(&probe_sem);
if (nr_irqs > 1)
irq_found = -irq_found;
@@ -911,7 +1033,7 @@
if (!shared) {
desc->depth = 0;
- desc->status &= ~IRQ_DISABLED;
+ desc->status &= ~(IRQ_DISABLED | IRQ_AUTODETECT | IRQ_WAITING);
desc->handler->startup(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
@@ -922,20 +1044,9 @@
static struct proc_dir_entry * root_irq_dir;
static struct proc_dir_entry * irq_dir [NR_IRQS];
-static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
-
-static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
#define HEX_DIGITS 8
-static int irq_affinity_read_proc (char *page, char **start, off_t off,
- int count, int *eof, void *data)
-{
- if (count < HEX_DIGITS+1)
- return -EINVAL;
- return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
-}
-
static unsigned int parse_hex_value (const char *buffer,
unsigned long count, unsigned long *ret)
{
@@ -973,6 +1084,20 @@
return 0;
}
+#if CONFIG_SMP
+
+static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
+
+static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
+
+static int irq_affinity_read_proc (char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+{
+ if (count < HEX_DIGITS+1)
+ return -EINVAL;
+ return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
+}
+
static int irq_affinity_write_proc (struct file *file, const char *buffer,
unsigned long count, void *data)
{
@@ -984,7 +1109,6 @@
err = parse_hex_value(buffer, count, &new_value);
-#if CONFIG_SMP
/*
* Do not allow disabling IRQs completely - it's a too easy
* way to make the system unusable accidentally :-) At least
@@ -992,7 +1116,6 @@
*/
if (!(new_value & cpu_online_map))
return -EINVAL;
-#endif
irq_affinity[irq] = new_value;
irq_desc(irq)->handler->set_affinity(irq, new_value);
@@ -1000,6 +1123,8 @@
return full_count;
}
+#endif /* CONFIG_SMP */
+
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
@@ -1027,7 +1152,6 @@
static void register_irq_proc (unsigned int irq)
{
- struct proc_dir_entry *entry;
char name [MAX_NAMELEN];
if (!root_irq_dir || (irq_desc(irq)->handler == &no_irq_type))
@@ -1039,15 +1163,22 @@
/* create /proc/irq/1234 */
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
- /* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
-
- entry->nlink = 1;
- entry->data = (void *)(long)irq;
- entry->read_proc = irq_affinity_read_proc;
- entry->write_proc = irq_affinity_write_proc;
+#if CONFIG_SMP
+ {
+ struct proc_dir_entry *entry;
+ /* create /proc/irq/1234/smp_affinity */
+ entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
+
+ if (entry) {
+ entry->nlink = 1;
+ entry->data = (void *)(long)irq;
+ entry->read_proc = irq_affinity_read_proc;
+ entry->write_proc = irq_affinity_write_proc;
+ }
- smp_affinity_entry[irq] = entry;
+ smp_affinity_entry[irq] = entry;
+ }
+#endif
}
unsigned long prof_cpu_mask = -1;
@@ -1062,6 +1193,9 @@
/* create /proc/irq/prof_cpu_mask */
entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
+
+ if (!entry)
+ return;
entry->nlink = 1;
entry->data = (void *)&prof_cpu_mask;
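[Aside: the read/write handlers moved under CONFIG_SMP above rely on parse_hex_value() to turn the user's buffer into an affinity mask. For readers following along, here is a minimal userspace sketch of that eight-hex-digit parse; parse_hex_mask and its exact error handling are illustrative, not the kernel's code.]

```c
#include <assert.h>
#include <stdio.h>

/* Sketch of an 8-hex-digit mask parser, loosely modeled on the
 * kernel's parse_hex_value(). Stops at a newline, rejects non-hex
 * characters and anything longer than HEX_DIGITS digits. */
#define HEX_DIGITS 8

static int parse_hex_mask(const char *buf, unsigned long *ret)
{
	unsigned long value = 0;
	int i;

	for (i = 0; buf[i] && buf[i] != '\n'; i++) {
		int c = buf[i];
		unsigned long digit;

		if (i >= HEX_DIGITS)
			return -1;	/* too many digits */
		if (c >= '0' && c <= '9')
			digit = c - '0';
		else if (c >= 'a' && c <= 'f')
			digit = c - 'a' + 10;
		else
			return -1;	/* not a hex digit */
		value = (value << 4) | digit;
	}
	*ret = value;
	return 0;
}
```

[The write handler then checks the parsed mask against cpu_online_map before applying it, which is the "do not allow disabling IRQs completely" test in the hunk above.]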
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.4.10-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/irq_ia64.c Mon Sep 24 21:48:36 2001
@@ -1,9 +1,9 @@
/*
* linux/arch/ia64/kernel/irq.c
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 6/10/99: Updated to bring in sync with x86 version to facilitate
* support for SMP and different interrupt controllers.
@@ -131,6 +131,13 @@
ia64_eoi();
vector = ia64_get_ivr();
}
+ /*
+ * This must be done *after* the ia64_eoi(). For example, the keyboard softirq
+ * handler needs to be able to wait for further keyboard interrupts, which can't
+ * come through until ia64_eoi() has been done.
+ */
+ if (local_softirq_pending())
+ do_softirq();
}
#ifdef CONFIG_SMP
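[Aside: the comment added above explains the ordering constraint: softirqs raised during interrupt handling must only be drained after ia64_eoi(), since a softirq handler may need to wait for further interrupts. A contrived userspace analog of that ordering follows; every name here is invented for the sketch, the real kernel paths are far more involved.]

```c
#include <assert.h>

/* Deferred work raised while handling an event is drained only
 * after the acknowledge step, mirroring the do_softirq()-after-
 * ia64_eoi() ordering in the hunk above. */
static int pending;		/* analog of local_softirq_pending() */
static int acked;		/* set once the "EOI" has been issued */
static int drained_after_ack;	/* records the ordering we care about */

static void raise_softirq(void) { pending = 1; }
static void fake_eoi(void)      { acked = 1; }

static void handle_interrupt(void)
{
	raise_softirq();		/* e.g. the keyboard bottom half */
	fake_eoi();			/* acknowledge first ... */
	if (pending) {			/* ... then drain deferred work */
		pending = 0;
		drained_after_ack = acked;
	}
}
```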
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.10-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/ivt.S Mon Sep 24 21:48:47 2001
@@ -2,8 +2,8 @@
* arch/ia64/kernel/ivt.S
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1998-2001 David Mosberger <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger <davidm@hpl.hp.com>
*
* 00/08/23 Asit Mallick <asit.k.mallick@intel.com> TLB handling for SMP
* 00/12/20 David Mosberger-Tang <davidm@hpl.hp.com> DTLB/ITLB handler now uses virtual PT.
@@ -157,7 +157,7 @@
;;
(p10) itc.i r18 // insert the instruction TLB entry
(p11) itc.d r18 // insert the data TLB entry
-(p6) br.spnt.many page_fault // handle bad address/page not present (page fault)
+(p6) br.cond.spnt.many page_fault // handle bad address/page not present (page fault)
mov cr.ifa=r22
/*
@@ -213,7 +213,7 @@
;;
mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
;;
itc.i r18
;;
@@ -251,7 +251,7 @@
;;
mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
;;
itc.d r18
;;
@@ -286,7 +286,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk.many itlb_fault
+(p8) br.cond.dptk itlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
shr.u r18=r16,57 // move address bit 61 to bit 4
@@ -297,7 +297,7 @@
dep r19=r17,r19,0,12 // insert PTE control bits into r19
;;
or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6
-(p8) br.cond.spnt.many page_fault
+(p8) br.cond.spnt page_fault
;;
itc.i r19 // insert the TLB entry
mov pr=r31,-1
@@ -324,7 +324,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk.many dtlb_fault
+(p8) br.cond.dptk dtlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on?
@@ -333,7 +333,7 @@
;;
andcm r18=0x10,r18 // bit 4=~address-bit(61)
cmp.ne p8,p0=r0,r23
-(p8) br.cond.spnt.many page_fault
+(p8) br.cond.spnt page_fault
dep r21=-1,r21,IA64_PSR_ED_BIT,1
dep r19=r17,r19,0,12 // insert PTE control bits into r19
@@ -429,7 +429,7 @@
;;
(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
mov b0=r30
br.sptk.many b0 // return to continuation point
END(nested_dtlb_miss)
@@ -622,7 +622,7 @@
mov r31=pr // prepare to save predicates
;;
cmp.eq p0,p7=r16,r17 // is this a system call? (p7 <- false, if so)
-(p7) br.cond.spnt.many non_syscall
+(p7) br.cond.spnt non_syscall
SAVE_MIN // uses r31; defines r2:
@@ -638,7 +638,7 @@
adds r3=8,r2 // set up second base pointer for SAVE_REST
;;
SAVE_REST
- br.call.sptk rp=demine_args // clear NaT bits in (potential) syscall args
+ br.call.sptk.many rp=demine_args // clear NaT bits in (potential) syscall args
mov r3=255
adds r15=-1024,r15 // r15 contains the syscall number---subtract 1024
@@ -680,7 +680,7 @@
st8 [r16]=r18 // store new value for cr.isr
(p8) br.call.sptk.many b6=b6 // ignore this return addr
- br.cond.sptk.many ia64_trace_syscall
+ br.cond.sptk ia64_trace_syscall
// NOT REACHED
END(break_fault)
@@ -793,8 +793,8 @@
mov b6=r8
;;
cmp.ne p6,p0=0,r8
-(p6) br.call.dpnt b6=b6 // call returns to ia64_leave_kernel
- br.sptk ia64_leave_kernel
+(p6) br.call.dpnt.many b6=b6 // call returns to ia64_leave_kernel
+ br.sptk.many ia64_leave_kernel
END(dispatch_illegal_op_fault)
.align 1024
@@ -837,12 +837,12 @@
adds r15=IA64_PT_REGS_R1_OFFSET + 16,sp
;;
cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
- st8 [r15]=r8 // save orignal EAX in r1 (IA32 procs don't use the GP)
+ st8 [r15]=r8 // save original EAX in r1 (IA32 procs don't use the GP)
;;
alloc r15=ar.pfs,0,0,6,0 // must first in an insn group
;;
ld4 r8=[r14],8 // r8 = EAX (syscall number)
- mov r15=222 // sys_vfork - last implemented system call
+ mov r15=222 // last entry in system call table
;;
cmp.leu.unc p6,p7=r8,r15
ld4 out1=[r14],8 // r9 = ecx
@@ -871,7 +871,7 @@
;;
mov rp=r15
(p8) br.call.sptk.many b6=b6
- br.cond.sptk.many ia32_trace_syscall
+ br.cond.sptk ia32_trace_syscall
non_ia32_syscall:
alloc r15=ar.pfs,0,0,2,0
@@ -1067,7 +1067,7 @@
mov r31=pr
;;
cmp4.eq p6,p0=0,r16
-(p6) br.sptk dispatch_illegal_op_fault
+(p6) br.sptk.many dispatch_illegal_op_fault
;;
mov r19=24 // fault number
br.sptk.many dispatch_to_fault_handler
diff -urN linux-davidm/arch/ia64/kernel/mca.c linux-2.4.10-lia/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/mca.c Mon Sep 24 21:49:34 2001
@@ -125,7 +125,7 @@
void
ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs)
{
- IA64_MCA_DEBUG("ia64_mca_cpe_int_handler : received interrupt. vector = %#x\n", cpe_irq);
+ IA64_MCA_DEBUG("ia64_mca_cpe_int_handler: received interrupt. vector = %#x\n", cpe_irq);
/* Get the CMC error record and log it */
ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE);
@@ -200,12 +200,12 @@
{
/* Register the CPE interrupt vector with SAL */
if (ia64_sal_mc_set_params(SAL_MC_PARAM_CPE_INT, SAL_MC_PARAM_MECHANISM_INT, cpev, 0, 0)) {
- printk("ia64_mca_platform_init : failed to register Corrected "
+ printk("ia64_mca_platform_init: failed to register Corrected "
"Platform Error interrupt vector with SAL.\n");
return;
}
- IA64_MCA_DEBUG("ia64_mca_platform_init : corrected platform error "
+ IA64_MCA_DEBUG("ia64_mca_platform_init: corrected platform error "
"vector %#x setup and enabled\n", cpev);
}
@@ -279,11 +279,11 @@
cmcv.cmcv_vector = IA64_CMC_VECTOR;
ia64_set_cmcv(cmcv.cmcv_regval);
- IA64_MCA_DEBUG("ia64_mca_platform_init : CPU %d corrected "
+ IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d corrected "
"machine check vector %#x setup and enabled.\n",
smp_processor_id(), IA64_CMC_VECTOR);
- IA64_MCA_DEBUG("ia64_mca_platform_init : CPU %d CMCV = %#016lx\n",
+ IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d CMCV = %#016lx\n",
smp_processor_id(), ia64_get_cmcv());
}
@@ -331,7 +331,7 @@
int rc;
if ((rc = memcmp((void *)test, (void *)target, sizeof(efi_guid_t)))) {
- IA64_MCA_DEBUG("ia64_mca_print : invalid guid = "
+ IA64_MCA_DEBUG("ia64_mca_print: invalid guid = "
"{ %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, "
"%#02x, %#02x, %#02x, %#02x, } } \n ",
test->data1, test->data2, test->data3, test->data4[0],
@@ -372,10 +372,10 @@
int i;
s64 rc;
- IA64_MCA_DEBUG("ia64_mca_init : begin\n");
+ IA64_MCA_DEBUG("ia64_mca_init: begin\n");
/* Clear the Rendez checkin flag for all cpus */
- for(i = 0 ; i < IA64_MAXCPUS; i++)
+ for(i = 0 ; i < NR_CPUS; i++)
ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
/*
@@ -389,7 +389,7 @@
IA64_MCA_RENDEZ_TIMEOUT,
0)))
{
- printk("ia64_mca_init : Failed to register rendezvous interrupt "
+ printk("ia64_mca_init: Failed to register rendezvous interrupt "
"with SAL. rc = %ld\n", rc);
return;
}
@@ -400,12 +400,12 @@
IA64_MCA_WAKEUP_VECTOR,
0, 0)))
{
- printk("ia64_mca_init : Failed to register wakeup interrupt with SAL. rc = %ld\n",
+ printk("ia64_mca_init: Failed to register wakeup interrupt with SAL. rc = %ld\n",
rc);
return;
}
- IA64_MCA_DEBUG("ia64_mca_init : registered mca rendezvous spinloop and wakeup mech.\n");
+ IA64_MCA_DEBUG("ia64_mca_init: registered mca rendezvous spinloop and wakeup mech.\n");
ia64_mc_info.imi_mca_handler = __pa(mca_hldlr_ptr->fp);
/*
@@ -421,12 +421,12 @@
ia64_mc_info.imi_mca_handler_size,
0, 0, 0)))
{
- printk("ia64_mca_init : Failed to register os mca handler with SAL. rc = %ld\n",
+ printk("ia64_mca_init: Failed to register os mca handler with SAL. rc = %ld\n",
rc);
return;
}
- IA64_MCA_DEBUG("ia64_mca_init : registered os mca handler with SAL at 0x%lx, gp = 0x%lx\n",
+ IA64_MCA_DEBUG("ia64_mca_init: registered os mca handler with SAL at 0x%lx, gp = 0x%lx\n",
ia64_mc_info.imi_mca_handler, mca_hldlr_ptr->gp);
/*
@@ -438,7 +438,7 @@
ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp);
ia64_mc_info.imi_slave_init_handler_size = 0;
- IA64_MCA_DEBUG("ia64_mca_init : os init handler at %lx\n",
+ IA64_MCA_DEBUG("ia64_mca_init: os init handler at %lx\n",
ia64_mc_info.imi_monarch_init_handler);
/* Register the os init handler with SAL */
@@ -450,12 +450,12 @@
__pa(ia64_get_gp()),
ia64_mc_info.imi_slave_init_handler_size)))
{
- printk("ia64_mca_init : Failed to register m/s init handlers with SAL. rc = %ld\n",
+ printk("ia64_mca_init: Failed to register m/s init handlers with SAL. rc = %ld\n",
rc);
return;
}
- IA64_MCA_DEBUG("ia64_mca_init : registered os init handler with SAL\n");
+ IA64_MCA_DEBUG("ia64_mca_init: registered os init handler with SAL\n");
/*
* Configure the CMCI vector and handler. Interrupts for CMC are
@@ -486,7 +486,7 @@
}
ia64_mca_register_cpev(cpev);
} else
- printk("ia64_mca_init : Failed to get routed CPEI vector from ACPI.\n");
+ printk("ia64_mca_init: Failed to get routed CPEI vector from ACPI.\n");
}
/* Initialize the areas set aside by the OS to buffer the
@@ -732,7 +732,7 @@
void
ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs)
{
- IA64_MCA_DEBUG("ia64_mca_cmc_int_handler : received interrupt vector = %#x on CPU %d\n",
+ IA64_MCA_DEBUG("ia64_mca_cmc_int_handler: received interrupt vector = %#x on CPU %d\n",
cmc_irq, smp_processor_id());
/* Get the CMC error record and log it */
@@ -922,7 +922,7 @@
return total_len;
} else {
IA64_LOG_UNLOCK(sal_info_type);
- prfunc("ia64_log_get : Failed to retrieve SAL error record type %d\n",
+ prfunc("ia64_log_get: Failed to retrieve SAL error record type %d\n",
sal_info_type);
return 0;
}
@@ -1082,7 +1082,7 @@
u64 target_addr;
if (!cache_check_info->valid.check_info) {
- IA64_MCA_DEBUG("ia64_mca_log_print : invalid cache_check_info[%d]\n",i);
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid cache_check_info[%d]\n",i);
return; /* If check info data not valid, skip it */
}
@@ -1132,7 +1132,7 @@
pal_tlb_check_info_t *info;
if (!tlb_check_info->valid.check_info) {
- IA64_MCA_DEBUG("ia64_mca_log_print : invalid tlb_check_info[%d]\n", i);
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid tlb_check_info[%d]\n", i);
return; /* If check info data not valid, skip it */
}
@@ -1177,7 +1177,7 @@
/* or obtained from */
if (!bus_check_info->valid.check_info) {
- IA64_MCA_DEBUG("ia64_mca_log_print : invalid bus_check_info[%d]\n", i);
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid bus_check_info[%d]\n", i);
return; /* If check info data not valid, skip it */
}
@@ -1696,7 +1696,7 @@
#endif // MCA_PRT_XTRA_DATA for test only @FVL
if ((ercd_pos = sizeof(sal_log_record_header_t)) >= lh->len) {
- IA64_MCA_DEBUG("ia64_mca_log_print : "
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
"truncated SAL CMC error record. len = %d\n",
lh->len);
return;
@@ -1714,7 +1714,7 @@
#endif // MCA_PRT_XTRA_DATA for test only @FVL
if (verify_guid((void *)&slsh->guid, (void *)&(SAL_PROC_DEV_ERR_SECT_GUID))) {
- IA64_MCA_DEBUG("ia64_mca_log_print : unsupported record section\n");
+ IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n");
continue;
}
@@ -1725,7 +1725,7 @@
printk);
}
- IA64_MCA_DEBUG("ia64_mca_log_print : "
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
"found %d sections in SAL CMC error record. len = %d\n",
n_sects, lh->len);
if (!n_sects) {
@@ -1759,7 +1759,7 @@
#endif // MCA_PRT_XTRA_DATA for test only @FVL
if ((ercd_pos = sizeof(sal_log_record_header_t)) >= lh->len) {
- IA64_MCA_DEBUG("ia64_mca_log_print : "
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
"truncated SAL error record. len = %d\n",
lh->len);
return;
@@ -1826,12 +1826,12 @@
ia64_log_plat_bus_err_info_print((sal_log_plat_bus_err_info_t *)slsh,
prfunc);
} else {
- IA64_MCA_DEBUG("ia64_mca_log_print : unsupported record section\n");
+ IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n");
continue;
}
}
- IA64_MCA_DEBUG("ia64_mca_log_print : found %d sections in SAL error record. len = %d\n",
+ IA64_MCA_DEBUG("ia64_mca_log_print: found %d sections in SAL error record. len = %d\n",
n_sects, lh->len);
if (!n_sects) {
prfunc("No Platform Error Info Sections found\n");
diff -urN linux-davidm/arch/ia64/kernel/mca_asm.S linux-2.4.10-lia/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/mca_asm.S Mon Sep 24 21:49:52 2001
@@ -116,7 +116,7 @@
// call our handler
movl r2=ia64_mca_ucmc_handler;;
mov b6=r2;;
- br.call.sptk.few b0=b6;;
+ br.call.sptk.many b0=b6;;
.ret0:
// Revert back to physical mode before going back to SAL
PHYSICAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_end, r4)
@@ -724,7 +724,7 @@
adds out0=16,sp // out0 = pointer to pt_regs
;;
- br.call.sptk.few rp=ia64_init_handler
+ br.call.sptk.many rp=ia64_init_handler
.ret1:
return_from_init:
diff -urN linux-davidm/arch/ia64/kernel/pal.S linux-2.4.10-lia/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/pal.S Mon Sep 24 21:50:01 2001
@@ -4,8 +4,9 @@
*
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * David Mosberger <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 05/22/2000 eranian Added support for stacked register calls
* 05/24/2000 eranian Added support for physical mode static calls
@@ -31,7 +32,7 @@
movl r2=pal_entry_point
;;
st8 [r2]=in0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_pal_handler_init)
/*
@@ -41,7 +42,7 @@
*/
GLOBAL_ENTRY(ia64_pal_default_handler)
mov r8=-1
- br.cond.sptk.few rp
+ br.cond.sptk.many rp
END(ia64_pal_default_handler)
/*
@@ -79,13 +80,13 @@
;;
(p6) srlz.i
mov rp = r8
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
1: mov psr.l = loc3
mov ar.pfs = loc1
mov rp = loc0
;;
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_static)
/*
@@ -120,7 +121,7 @@
mov rp = loc0
;;
srlz.d // serialize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_stacked)
/*
@@ -173,13 +174,13 @@
or loc3=loc3,r17 // add in psr the bits to set
;;
andcm r16=loc3,r16 // removes bits to clear from psr
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret1: mov rp = r8 // install return address (physical)
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
1:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2:
mov psr.l = loc3 // restore init PSR
@@ -188,7 +189,7 @@
;;
mov ar.rsc=loc4 // restore RSE configuration
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_phys_static)
/*
@@ -227,13 +228,13 @@
mov b7 = loc2 // install target to branch reg
;;
andcm r16=loc3,r16 // removes bits to clear from psr
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret6:
br.call.sptk.many rp=b7 // now make the call
.ret7:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret8: mov psr.l = loc3 // restore init PSR
mov ar.pfs = loc1
@@ -241,6 +242,6 @@
;;
mov ar.rsc=loc4 // restore RSE configuration
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_phys_stacked)
diff -urN linux-davidm/arch/ia64/kernel/perfmon.c linux-2.4.10-lia/arch/ia64/kernel/perfmon.c
--- linux-davidm/arch/ia64/kernel/perfmon.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/perfmon.c Mon Sep 24 21:50:18 2001
@@ -33,13 +33,12 @@
#include <asm/signal.h>
#include <asm/system.h>
#include <asm/system.h>
-#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/delay.h> /* for ia64_get_itc() */
#ifdef CONFIG_PERFMON
-#define PFM_VERSION "0.2"
+#define PFM_VERSION "0.3"
#define PFM_SMPL_HDR_VERSION 1
#define PMU_FIRST_COUNTER 4 /* first generic counter */
@@ -53,6 +52,7 @@
#define PFM_DISABLE 0xa6 /* freeze only */
#define PFM_RESTART 0xcf
#define PFM_CREATE_CONTEXT 0xa7
+#define PFM_DESTROY_CONTEXT 0xa8
/*
* Those 2 are just meant for debugging. I considered using sysctl() for
* that but it is a little bit too pervasive. This solution is at least
@@ -61,6 +61,8 @@
#define PFM_DEBUG_ON 0xe0
#define PFM_DEBUG_OFF 0xe1
+#define PFM_DEBUG_BASE PFM_DEBUG_ON
+
/*
* perfmon API flags
@@ -69,7 +71,8 @@
#define PFM_FL_INHERIT_ONCE 0x01 /* clone pfm_context only once across fork() */
#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */
#define PFM_FL_SMPL_OVFL_NOBLOCK 0x04 /* do not block on sampling buffer overflow */
-#define PFM_FL_SYSTEMWIDE 0x08 /* create a systemwide context */
+#define PFM_FL_SYSTEM_WIDE 0x08 /* create a system wide context */
+#define PFM_FL_EXCL_INTR 0x10 /* exclude interrupt from system wide monitoring */
/*
* PMC API flags
@@ -88,7 +91,7 @@
#endif
#define PMC_IS_IMPL(i) (i < pmu_conf.num_pmcs && pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1))))
-#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
+#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
#define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
#define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
@@ -198,7 +201,8 @@
unsigned int noblock:1; /* block/don't block on overflow with notification */
unsigned int system:1; /* do system wide monitoring */
unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */
- unsigned int reserved:27;
+ unsigned int exclintr:1;/* exlcude interrupts from system wide monitoring */
+ unsigned int reserved:26;
} pfm_context_flags_t;
typedef struct pfm_context {
@@ -208,26 +212,33 @@
unsigned long ctx_iear_counter; /* which PMD holds I-EAR */
unsigned long ctx_btb_counter; /* which PMD holds BTB */
- pid_t ctx_notify_pid; /* who to notify on overflow */
- int ctx_notify_sig; /* XXX: SIGPROF or other */
- pfm_context_flags_t ctx_flags; /* block/noblock */
- pid_t ctx_creator; /* pid of creator (debug) */
- unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
- unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+ spinlock_t ctx_notify_lock;
+ pfm_context_flags_t ctx_flags; /* block/noblock */
+ int ctx_notify_sig; /* XXX: SIGPROF or other */
+ struct task_struct *ctx_notify_task; /* who to notify on overflow */
+ struct task_struct *ctx_creator; /* pid of creator (debug) */
+
+ unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
+ unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+
+ struct semaphore ctx_restart_sem; /* use for blocking notification mode */
- struct semaphore ctx_restart_sem; /* use for blocking notification mode */
+ unsigned long ctx_used_pmds[4]; /* bitmask of used PMD (speedup ctxsw) */
+ unsigned long ctx_used_pmcs[4]; /* bitmask of used PMC (speedup ctxsw) */
pfm_counter_t ctx_pmds[IA64_NUM_PMD_COUNTERS]; /* XXX: size should be dynamic */
+
} pfm_context_t;
+#define CTX_USED_PMD(ctx,n) (ctx)->ctx_used_pmds[(n)>>6] |= 1<< ((n) % 64)
+#define CTX_USED_PMC(ctx,n) (ctx)->ctx_used_pmcs[(n)>>6] |= 1<< ((n) % 64)
+
#define ctx_fl_inherit ctx_flags.inherit
#define ctx_fl_noblock ctx_flags.noblock
#define ctx_fl_system ctx_flags.system
#define ctx_fl_frozen ctx_flags.frozen
+#define ctx_fl_exclintr ctx_flags.exclintr
-#define CTX_IS_DEAR(c,n) ((c)->ctx_dear_counter == (n))
-#define CTX_IS_IEAR(c,n) ((c)->ctx_iear_counter == (n))
-#define CTX_IS_BTB(c,n) ((c)->ctx_btb_counter == (n))
#define CTX_OVFL_NOBLOCK(c) ((c)->ctx_fl_noblock == 1)
#define CTX_INHERIT_MODE(c) ((c)->ctx_fl_inherit)
#define CTX_HAS_SMPL(c) ((c)->ctx_smpl_buf != NULL)
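[Aside: the CTX_USED_PMD()/CTX_USED_PMC() macros introduced in the hunk above record which performance registers a context has actually touched, so the context-switch path can save and restore only those. A standalone sketch of the same bitmap scheme follows, with hypothetical names but the patch's layout of 4 x 64-bit words (up to 256 registers).]

```c
#include <assert.h>

/* Used-register bitmap: bit n set means register n was written and
 * therefore must be saved/restored on context switch. */
static unsigned long used[4];

static void mark_used(int n)
{
	used[n >> 6] |= 1UL << (n % 64);
}

static int is_used(int n)
{
	return (used[n >> 6] >> (n % 64)) & 1;
}
```

[The save/restore loop can then skip any register whose bit is clear, which is the "speedup ctxsw" noted in the ctx_used_pmds/ctx_used_pmcs field comments.]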
@@ -235,17 +246,15 @@
static pmu_config_t pmu_conf;
/* for debug only */
-static unsigned long pfm_debug=0; /* 0= nodebug, >0= debug output on */
+static int pfm_debug=0; /* 0= nodebug, >0= debug output on */
+
#define DBprintk(a) \
do { \
- if (pfm_debug >0) { printk(__FUNCTION__" "); printk a; } \
+ if (pfm_debug >0) { printk(__FUNCTION__" %d: ", __LINE__); printk a; } \
} while (0);
-static void perfmon_softint(unsigned long ignored);
static void ia64_reset_pmu(void);
-DECLARE_TASKLET(pfm_tasklet, perfmon_softint, 0);
-
/*
* structure used to pass information between the interrupt handler
* and the tasklet.
@@ -257,26 +266,42 @@
unsigned long bitvect; /* which counters have overflowed */
} notification_info_t;
-#define notification_is_invalid(i) (i->to_pid < 2)
-/* will need to be cache line padded */
-static notification_info_t notify_info[NR_CPUS];
+typedef struct {
+ unsigned long pfs_proc_sessions;
+ unsigned long pfs_sys_session; /* can only be 0/1 */
+ unsigned long pfs_dfl_dcr; /* XXX: hack */
+ unsigned int pfs_pp;
+} pfm_session_t;
-/*
- * We force cache line alignment to avoid false sharing
- * given that we have one entry per CPU.
- */
-static struct {
+struct {
struct task_struct *owner;
} ____cacheline_aligned pmu_owners[NR_CPUS];
-/* helper macros */
+
+
+/*
+ * helper macros
+ */
#define SET_PMU_OWNER(t) do { pmu_owners[smp_processor_id()].owner = (t); } while(0);
#define PMU_OWNER() pmu_owners[smp_processor_id()].owner
+#ifdef CONFIG_SMP
+#define PFM_CAN_DO_LAZY() (smp_num_cpus==1 && pfs_info.pfs_sys_session==0)
+#else
+#define PFM_CAN_DO_LAZY() (pfs_info.pfs_sys_session==0)
+#endif
+
+static void pfm_lazy_save_regs (struct task_struct *ta);
+
/* for debug only */
static struct proc_dir_entry *perfmon_dir;
/*
+ * XXX: hack to indicate that a system wide monitoring session is active
+ */
+static pfm_session_t pfs_info;
+
+/*
* finds the number of PM(C|D) registers given
* the bitvector returned by PAL
*/
@@ -340,8 +365,7 @@
static inline unsigned long
kvirt_to_pa(unsigned long adr)
{
- __u64 pa;
- __asm__ __volatile__ ("tpa %0 = %1" : "=r"(pa) : "r"(adr) : "memory");
+ __u64 pa = ia64_tpa(adr);
DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa));
return pa;
}
@@ -569,25 +593,44 @@
static int
pfx_is_sane(pfreq_context_t *pfx)
{
+ int ctx_flags;
+
/* valid signal */
- if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return 0;
+ //if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return -EINVAL;
+ if (pfx->notify_sig !=0 && pfx->notify_sig != SIGPROF) return -EINVAL;
/* cannot send to process 1, 0 means do not notify */
- if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return 0;
+ if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return -EINVAL;
+
+ ctx_flags = pfx->flags;
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+#ifdef CONFIG_SMP
+ if (smp_num_cpus > 1) {
+ printk("perfmon: system wide monitoring on SMP not yet supported\n");
+ return -EINVAL;
+ }
+#endif
+ if ((ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) == 0) {
+ printk("perfmon: system wide monitoring cannot use blocking notification mode\n");
+ return -EINVAL;
+ }
+ }
/* probably more to add here */
- return 1;
+ return 0;
}
static int
-pfm_context_create(struct task_struct *task, int flags, perfmon_req_t *req)
+pfm_context_create(int flags, perfmon_req_t *req)
{
pfm_context_t *ctx;
+ struct task_struct *task = NULL;
perfmon_req_t tmp;
void *uaddr = NULL;
- int ret = -EFAULT;
+ int ret;
int ctx_flags;
+ pid_t pid;
/* to go away */
if (flags) {
@@ -596,48 +639,156 @@
if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
+ ret = pfx_is_sane(&tmp.pfr_ctx);
+ if (ret < 0) return ret;
+
ctx_flags = tmp.pfr_ctx.flags;
- /* not yet supported */
- if (ctx_flags & PFM_FL_SYSTEMWIDE) return -EINVAL;
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+ /*
+ * XXX: This is not AT ALL SMP safe
+ */
+ if (pfs_info.pfs_proc_sessions > 0) return -EBUSY;
+ if (pfs_info.pfs_sys_session > 0) return -EBUSY;
+
+ pfs_info.pfs_sys_session = 1;
- if (!pfx_is_sane(&tmp.pfr_ctx)) return -EINVAL;
+ } else if (pfs_info.pfs_sys_session >0) {
+ /* no per-process monitoring while there is a system wide session */
+ return -EBUSY;
+ } else
+ pfs_info.pfs_proc_sessions++;
ctx = pfm_context_alloc();
- if (!ctx) return -ENOMEM;
+ if (!ctx) goto error;
+
+ /* record the creator (debug only) */
+ ctx->ctx_creator = current;
+
+ pid = tmp.pfr_ctx.notify_pid;
+
+ spin_lock_init(&ctx->ctx_notify_lock);
+
+ if (pid == current->pid) {
+ ctx->ctx_notify_task = task = current;
+ current->thread.pfm_context = ctx;
+
+ atomic_set(&current->thread.pfm_notifiers_check, 1);
- /* record who the creator is (for debug) */
- ctx->ctx_creator = task->pid;
+ } else if (pid!=0) {
+ read_lock(&tasklist_lock);
+
+ task = find_task_by_pid(pid);
+ if (task) {
+ /*
+ * record who to notify
+ */
+ ctx->ctx_notify_task = task;
+
+ /*
+ * make visible
+ * must be done inside critical section
+ *
+ * if the initialization does not go through it is still
+ * okay because child will do the scan for nothing which
+ * won't hurt.
+ */
+ current->thread.pfm_context = ctx;
+
+ /*
+ * will cause task to check on exit for monitored
+ * processes that would notify it. see release_thread()
+ * Note: the scan MUST be done in release thread, once the
+ * task has been detached from the tasklist otherwise you are
+ * exposed to race conditions.
+ */
+ atomic_add(1, &task->thread.pfm_notifiers_check);
+ }
+ read_unlock(&tasklist_lock);
+ }
+
+ /*
+ * notification process does not exist
+ */
+ if (pid != 0 && task == NULL) {
+ ret = -EINVAL;
+ goto buffer_error;
+ }
- ctx->ctx_notify_pid = tmp.pfr_ctx.notify_pid;
ctx->ctx_notify_sig = SIGPROF; /* siginfo imposes a fixed signal */
if (tmp.pfr_ctx.smpl_entries) {
DBprintk((" sampling entries=%ld\n",tmp.pfr_ctx.smpl_entries));
- if ((ret=pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs, tmp.pfr_ctx.smpl_entries, &uaddr)) ) goto buffer_error;
+
+ ret = pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs,
+ tmp.pfr_ctx.smpl_entries, &uaddr);
+ if (ret<0) goto buffer_error;
+
tmp.pfr_ctx.smpl_vaddr = uaddr;
}
/* initialization of context's flags */
- ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
- ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
- ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEMWIDE) ? 1: 0;
- ctx->ctx_fl_frozen = 0;
+ ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
+ ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
+ ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0;
+ ctx->ctx_fl_exclintr = (ctx_flags & PFM_FL_EXCL_INTR) ? 1: 0;
+ ctx->ctx_fl_frozen = 0;
+
+ /*
+ * Keep track of the pmds we want to sample
+ * XXX: may be we don't need to save/restore the DEAR/IEAR pmds
+ * but we do need the BTB for sure. This is because of a hardware
+ * buffer of 1 only for non-BTB pmds.
+ */
+ ctx->ctx_used_pmds[0] = tmp.pfr_ctx.smpl_regs;
+ ctx->ctx_used_pmcs[0] = 1; /* always save/restore PMC[0] */
sema_init(&ctx->ctx_restart_sem, 0); /* init this semaphore to locked */
- if (copy_to_user(req, &tmp, sizeof(tmp))) goto buffer_error;
- DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",(void *)ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
- DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+ if (copy_to_user(req, &tmp, sizeof(tmp))) {
+ ret = -EFAULT;
+ goto buffer_error;
+ }
+
+ DBprintk((" context=%p, pid=%d notify_sig %d notify_task=%p\n",(void *)ctx, current->pid, ctx->ctx_notify_sig, ctx->ctx_notify_task));
+ DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, current->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+
+ /*
+ * when no notification is required, we can make this visible at the last moment
+ */
+ if (pid == 0) current->thread.pfm_context = ctx;
+
+ /*
+ * by default, we always include interrupts for system wide
+ * DCR.pp is set by default to zero by kernel in cpu_init()
+ */
+ if (ctx->ctx_fl_system) {
+ if (ctx->ctx_fl_exclintr == 0) {
+ unsigned long dcr = ia64_get_dcr();
+
+ ia64_set_dcr(dcr|IA64_DCR_PP);
+ /*
+ * keep track of the kernel default value
+ */
+ pfs_info.pfs_dfl_dcr = dcr;
- /* link with task */
- task->thread.pfm_context = ctx;
+ DBprintk((" dcr.pp is set\n"));
+ }
+ }
return 0;
buffer_error:
- vfree(ctx);
-
+ pfm_context_free(ctx);
+error:
+ /*
+ * undo session reservation
+ */
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+ pfs_info.pfs_sys_session = 0;
+ } else {
+ pfs_info.pfs_proc_sessions--;
+ }
return ret;
}
@@ -658,10 +809,10 @@
/* upper part is ignored on rval */
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
- /*
+ /*
* we must reset BTB index (clears pmd16.full to make
* sure we do not report the same branches twice.
- * The non-blocking case in handled in update_counters().
+ * The non-blocking case in handled in update_counters()
*/
if (cnum == ctx->ctx_btb_counter) {
DBprintk(("reseting PMD16\n"));
@@ -669,6 +820,8 @@
}
}
}
+ /* just in case ! */
+ ctx->ctx_ovfl_regs = 0;
}
static int
@@ -706,20 +859,23 @@
} else if (PMC_IS_BTB(&tmp.pfr_reg.reg_value)) {
ctx->ctx_btb_counter = cnum;
}
-
+#if 0
if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
+#endif
}
-
+ /* keep track of what we use */
+ CTX_USED_PMC(ctx, cnum);
ia64_set_pmc(cnum, tmp.pfr_reg.reg_value);
- DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags));
+
+ DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x used_pmcs=0%lx\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags, ctx->ctx_used_pmcs[0]));
}
/*
* we have to set this here event hough we haven't necessarily started monitoring
* because we may be context switched out
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;
return 0;
}
@@ -752,25 +908,32 @@
ctx->ctx_pmds[k].val = tmp.pfr_reg.reg_value & ~pmu_conf.perf_ovfl_val;
ctx->ctx_pmds[k].smpl_rval = tmp.pfr_reg.reg_smpl_reset;
ctx->ctx_pmds[k].ovfl_rval = tmp.pfr_reg.reg_ovfl_reset;
+
+ if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
+ ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
}
+ /* keep track of what we use */
+ CTX_USED_PMD(ctx, cnum);
/* writes to unimplemented part is ignored, so this is safe */
ia64_set_pmd(cnum, tmp.pfr_reg.reg_value);
/* to go away */
ia64_srlz_d();
- DBprintk((" setting PMD[%ld]: pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx\n",
+ DBprintk((" setting PMD[%ld]: ovfl_notify=%d pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx used_pmds=0%lx\n",
cnum,
+ PMD_OVFL_NOTIFY(ctx, cnum - PMU_FIRST_COUNTER),
ctx->ctx_pmds[k].val,
ctx->ctx_pmds[k].ovfl_rval,
ctx->ctx_pmds[k].smpl_rval,
- ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+ ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val,
+ ctx->ctx_used_pmds[0]));
}
/*
* we have to set this here even though we haven't necessarily started monitoring
* because we may be context switched out
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
return 0;
}
@@ -821,7 +984,8 @@
}
tmp.pfr_reg.reg_value = val;
- DBprintk((" reading PMD[%ld]=0x%lx reg=0x%lx ctx_val=0x%lx pmc=0x%lx\n", tmp.pfr_reg.reg_num, val, reg_val, ctx_val, ia64_get_pmc(tmp.pfr_reg.reg_num)));
+ DBprintk((" reading PMD[%ld]=0x%lx reg=0x%lx ctx_val=0x%lx pmc=0x%lx\n",
+ tmp.pfr_reg.reg_num, val, reg_val, ctx_val, ia64_get_pmc(tmp.pfr_reg.reg_num)));
if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
}
@@ -836,7 +1000,7 @@
void *sem = &ctx->ctx_restart_sem;
if (task == current) {
- DBprintk((" restartig self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
+ DBprintk((" restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
pfm_reset_regs(ctx);
@@ -885,6 +1049,23 @@
return 0;
}
+/*
+ * system-wide mode: propagate activation/deactivation throughout the tasklist
+ *
+ * XXX: does not work for SMP, of course
+ */
+static void
+pfm_process_tasklist(int cmd)
+{
+ struct task_struct *p;
+ struct pt_regs *regs;
+
+ for_each_task(p) {
+ regs = (struct pt_regs *)((unsigned long)p + IA64_STK_OFFSET);
+ regs--;
+ ia64_psr(regs)->pp = cmd;
+ }
+}
static int
do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs)
@@ -895,19 +1076,26 @@
memset(&tmp, 0, sizeof(tmp));
+ if (ctx == NULL && cmd != PFM_CREATE_CONTEXT && cmd < PFM_DEBUG_BASE) {
+ DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
+ return -EINVAL;
+ }
+
switch (cmd) {
case PFM_CREATE_CONTEXT:
/* a context has already been defined */
if (ctx) return -EBUSY;
- /* may be a temporary limitation */
+ /*
+ * cannot directly create a context in another process
+ */
if (task != current) return -EINVAL;
if (req == NULL || count != 1) return -EINVAL;
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- return pfm_context_create(task, flags, req);
+ return pfm_context_create(flags, req);
case PFM_WRITE_PMCS:
/* we don't quite support this right now */
@@ -915,10 +1103,6 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_write_pmcs(task, req, count);
case PFM_WRITE_PMDS:
@@ -927,45 +1111,41 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_WRITE_PMDS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_write_pmds(task, req, count);
case PFM_START:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- if (!ctx) {
- DBprintk((" PFM_START: no context for task %d\n", task->pid));
- return -EINVAL;
- }
+ if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());
SET_PMU_OWNER(current);
/* will start monitoring right after rfi */
ia64_psr(regs)->up = 1;
+ ia64_psr(regs)->pp = 1;
+
+ if (ctx->ctx_fl_system) {
+ pfm_process_tasklist(1);
+ pfs_info.pfs_pp = 1;
+ }
/*
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
ia64_set_pmc(0, 0);
ia64_srlz_d();
- break;
+ break;
case PFM_ENABLE:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- if (!ctx) {
- DBprintk((" PFM_ENABLE: no context for task %d\n", task->pid));
- return -EINVAL;
- }
+ if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());
/* reset all registers to stable quiet state */
ia64_reset_pmu();
@@ -983,7 +1163,7 @@
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
/* simply unfreeze */
ia64_set_pmc(0, 0);
@@ -997,54 +1177,41 @@
/* simply freeze */
ia64_set_pmc(0, 1);
ia64_srlz_d();
+ /*
+ * XXX: cannot really toggle IA64_THREAD_PM_VALID
+ * but context is still considered valid, so any
+ * read request would return something valid. Same
+ * thing when this task terminates (pfm_flush_regs()).
+ */
break;
case PFM_READ_PMDS:
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_READ_PMDS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_read_pmds(task, req, count);
case PFM_STOP:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- ia64_set_pmc(0, 1);
- ia64_srlz_d();
-
+ /* simply stop monitors, not PMU */
ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
- th->flags &= ~IA64_THREAD_PM_VALID;
-
- SET_PMU_OWNER(NULL);
-
- /* we probably will need some more cleanup here */
- break;
-
- case PFM_DEBUG_ON:
- printk(" debugging on\n");
- pfm_debug = 1;
- break;
+ if (ctx->ctx_fl_system) {
+ pfm_process_tasklist(0);
+ pfs_info.pfs_pp = 0;
+ }
- case PFM_DEBUG_OFF:
- printk(" debugging off\n");
- pfm_debug = 0;
break;
case PFM_RESTART: /* temporary, will most likely end up as a PFM_ENABLE */
- if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
printk(" PFM_RESTART not monitoring\n");
return -EINVAL;
}
- if (!ctx) {
- printk(" PFM_RESTART no ctx for %d\n", task->pid);
- return -EINVAL;
- }
if (CTX_OVFL_NOBLOCK(ctx) == 0 && ctx->ctx_fl_frozen == 0) {
printk("task %d without pmu_frozen set\n", task->pid);
return -EINVAL;
@@ -1052,6 +1219,37 @@
return pfm_do_restart(task); /* we only look at first entry */
+ case PFM_DESTROY_CONTEXT:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
+ /* first stop monitors */
+ ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
+
+ /* then freeze PMU */
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
+
+ /* don't save/restore on context switch */
+ if (ctx->ctx_fl_system == 0) task->thread.flags &= ~IA64_THREAD_PM_VALID;
+
+ SET_PMU_OWNER(NULL);
+
+ /* now free context and related state */
+ pfm_context_exit(task);
+ break;
+
+ case PFM_DEBUG_ON:
+ printk("perfmon debugging on\n");
+ pfm_debug = 1;
+ break;
+
+ case PFM_DEBUG_OFF:
+ printk("perfmon debugging off\n");
+ pfm_debug = 0;
+ break;
+
default:
DBprintk((" Unknown command 0x%x\n", cmd));
return -EINVAL;
@@ -1088,11 +1286,8 @@
/* XXX: pid interface is going away in favor of pfm context */
if (pid != current->pid) {
read_lock(&tasklist_lock);
- {
- child = find_task_by_pid(pid);
- if (child)
- get_task_struct(child);
- }
+
+ child = find_task_by_pid(pid);
if (!child) goto abort_call;
@@ -1115,93 +1310,44 @@
return ret;
}
-
-/*
- * This function is invoked on the exit path of the kernel. Therefore it must make sure
- * it does does modify the caller's input registers (in0-in7) in case of entry by system call
- * which can be restarted. That's why it's declared as a system call and all 8 possible args
- * are declared even though not used.
- */
#if __GNUC__ >= 3
void asmlinkage
-pfm_overflow_notify(void)
+pfm_block_on_overflow(void)
#else
void asmlinkage
-pfm_overflow_notify(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
+pfm_block_on_overflow(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
#endif
{
- struct task_struct *task;
struct thread_struct *th = &current->thread;
pfm_context_t *ctx = current->thread.pfm_context;
- struct siginfo si;
int ret;
/*
- * do some sanity checks first
- */
- if (!ctx) {
- printk("perfmon: process %d has no PFM context\n", current->pid);
- return;
- }
- if (ctx->ctx_notify_pid < 2) {
- printk("perfmon: process %d invalid notify_pid=%d\n", current->pid, ctx->ctx_notify_pid);
- return;
- }
-
- DBprintk((" current=%d ctx=%p bv=0%lx\n", current->pid, (void *)ctx, ctx->ctx_ovfl_regs));
- /*
* No matter what notify_pid is,
* we clear overflow, won't notify again
*/
- th->pfm_pend_notify = 0;
+ th->pfm_must_block = 0;
/*
- * When measuring in kernel mode and non-blocking fashion, it is possible to
- * get an overflow while executing this code. Therefore the state of pend_notify
- * and ovfl_regs can be altered. The important point is not to loose any notification.
- * It is fine to get called for nothing. To make sure we do collect as much state as
- * possible, update_counters() always uses |= to add bit to the ovfl_regs field.
- *
- * In certain cases, it is possible to come here, with ovfl_regs = 0;
- *
- * XXX: pend_notify and ovfl_regs could be merged maybe !
+ * do some sanity checks first
*/
- if (ctx->ctx_ovfl_regs == 0) {
- DBprintk(("perfmon: spurious overflow notification from pid %d\n", current->pid));
+ if (!ctx) {
+ printk("perfmon: process %d has no PFM context\n", current->pid);
return;
}
- read_lock(&tasklist_lock);
-
- task = find_task_by_pid(ctx->ctx_notify_pid);
-
- if (task) {
- si.si_signo = ctx->ctx_notify_sig;
- si.si_errno = 0;
- si.si_code = PROF_OVFL; /* goes to user */
- si.si_addr = NULL;
- si.si_pid = current->pid; /* who is sending */
- si.si_pfm_ovfl = ctx->ctx_ovfl_regs;
-
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
-
- /* must be done with tasklist_lock locked */
- ret = send_sig_info(ctx->ctx_notify_sig, &si, task);
- if (ret != 0) {
- DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_pid, ret));
- task = NULL; /* will cause return */
- }
- } else {
- printk("perfmon: notify_pid %d not found\n", ctx->ctx_notify_pid);
+ if (ctx->ctx_notify_task == 0) {
+ printk("perfmon: process %d has no task to notify\n", current->pid);
+ return;
}
- read_unlock(&tasklist_lock);
+ DBprintk((" current=%d task=%d\n", current->pid, ctx->ctx_notify_task->pid));
- /* now that we have released the lock handle error condition */
- if (!task || CTX_OVFL_NOBLOCK(ctx)) {
- /* we clear all pending overflow bits in noblock mode */
- ctx->ctx_ovfl_regs = 0;
+ /* should not happen */
+ if (CTX_OVFL_NOBLOCK(ctx)) {
+ printk("perfmon: process %d non-blocking ctx should not be here\n", current->pid);
return;
}
+
DBprintk((" CPU%d %d before sleep\n", smp_processor_id(), current->pid));
/*
@@ -1225,9 +1371,6 @@
pfm_reset_regs(ctx);
- /* now we can clear this mask */
- ctx->ctx_ovfl_regs = 0;
-
/*
* Unlock sampling buffer and reset index atomically
* XXX: not really needed when blocking
@@ -1246,84 +1389,14 @@
}
}
-static void
-perfmon_softint(unsigned long ignored)
-{
- notification_info_t *info;
- int my_cpu = smp_processor_id();
- struct task_struct *task;
- struct siginfo si;
-
- info = notify_info+my_cpu;
-
- DBprintk((" CPU%d current=%d to_pid=%d from_pid=%d bv=0x%lx\n", \
- smp_processor_id(), current->pid, info->to_pid, info->from_pid, info->bitvect));
-
- /* assumption check */
- if (info->from_pid == info->to_pid) {
- DBprintk((" Tasklet assumption error: from=%d tor=%d\n", info->from_pid, info->to_pid));
- return;
- }
-
- if (notification_is_invalid(info)) {
- DBprintk((" invalid notification information\n"));
- return;
- }
-
- /* sanity check */
- if (info->to_pid == 1) {
- DBprintk((" cannot notify init\n"));
- return;
- }
- /*
- * XXX: needs way more checks here to make sure we send to a task we have control over
- */
- read_lock(&tasklist_lock);
-
- task = find_task_by_pid(info->to_pid);
-
- DBprintk((" after find %p\n", (void *)task));
-
- if (task) {
- int ret;
-
- si.si_signo = SIGPROF;
- si.si_errno = 0;
- si.si_code = PROF_OVFL; /* goes to user */
- si.si_addr = NULL;
- si.si_pid = info->from_pid; /* who is sending */
- si.si_pfm_ovfl = info->bitvect;
-
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
-
- /* must be done with tasklist_lock locked */
- ret = send_sig_info(SIGPROF, &si, task);
- if (ret != 0)
- DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", info->to_pid, ret));
-
- /* invalidate notification */
- info->to_pid = info->from_pid = 0;
- info->bitvect = 0;
- }
-
- read_unlock(&tasklist_lock);
-
- DBprintk((" after unlock %p\n", (void *)task));
-
- if (!task) {
- printk("perfmon: CPU%d cannot find process %d\n", smp_processor_id(), info->to_pid);
- }
-}
-
/*
* main overflow processing routine.
* it can be called from the interrupt path or explicitly during the context switch code
* Return:
- * 0 : do not unfreeze the PMU
- * 1 : PMU can be unfrozen
+ * new value of pmc[0]. if 0x0 then unfreeze, else keep frozen
*/
-static unsigned long
-update_counters (struct task_struct *ta, u64 pmc0, struct pt_regs *regs)
+unsigned long
+update_counters (struct task_struct *task, u64 pmc0, struct pt_regs *regs)
{
unsigned long mask, i, cnum;
struct thread_struct *th;
@@ -1331,7 +1404,9 @@
unsigned long bv = 0;
int my_cpu = smp_processor_id();
int ret = 1, buffer_is_full = 0;
- int ovfl_is_smpl, can_notify, need_reset_pmd16=0;
+ int ovfl_has_long_recovery, can_notify, need_reset_pmd16=0;
+ struct siginfo si;
+
/*
* It is never safe to access the task for which the overflow interrupt is destined
* using the current variable as the interrupt may occur in the middle of a context switch
@@ -1345,25 +1420,23 @@
* valid one, i.e. the one that caused the interrupt.
*/
- if (ta == NULL) {
+ if (task == NULL) {
DBprintk((" owners[%d]=NULL\n", my_cpu));
return 0x1;
}
- th = &ta->thread;
+ th = &task->thread;
ctx = th->pfm_context;
/*
* XXX: debug test
* Don't think this could happen given upfront tests
*/
- if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
- DBprintk(("perfmon: Spurious overflow interrupt: process %d not using perfmon\n",
- ta->pid));
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
+ printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", task->pid);
return 0x1;
}
if (!ctx) {
- DBprintk(("perfmon: Spurious overflow interrupt: process %d has no PFM context\n",
- ta->pid));
+ printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", task->pid);
return 0;
}
@@ -1371,16 +1444,21 @@
* sanity test. Should never happen
*/
if ((pmc0 & 0x1) == 0) {
- printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", ta->pid, pmc0);
+ printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", task->pid, pmc0);
return 0x0;
}
mask = pmc0 >> PMU_FIRST_COUNTER;
- DBprintk(("pmc0=0x%lx pid=%d\n", pmc0, ta->pid));
-
- DBprintk(("ctx is in %s mode\n", CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK"));
+ DBprintk(("pmc0=0x%lx pid=%d owner=%d iip=0x%lx, ctx is in %s mode used_pmds=0x%lx used_pmcs=0x%lx\n",
+ pmc0, task->pid, PMU_OWNER()->pid, regs->cr_iip,
+ CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK",
+ ctx->ctx_used_pmds[0],
+ ctx->ctx_used_pmcs[0]));
+ /*
+ * XXX: need to record sample only when an EAR/BTB has overflowed
+ */
if (CTX_HAS_SMPL(ctx)) {
pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
unsigned long *e, m, idx=0;
@@ -1388,11 +1466,15 @@
int j;
idx = ia64_fetch_and_add(1, &psb->psb_index);
- DBprintk((" trying to record index=%ld entries=%ld\n", idx, psb->psb_entries));
+ DBprintk((" recording index=%ld entries=%ld\n", idx, psb->psb_entries));
/*
* XXX: there is a small chance that we could run out on index before resetting
* but index is unsigned long, so it will take some time.....
+ * We use > instead of == because fetch_and_add() is off by one (see below)
+ *
+ * This case can happen in non-blocking mode or with multiple processes.
+ * For non-blocking, we need to reload and continue.
*/
if (idx > psb->psb_entries) {
buffer_is_full = 1;
@@ -1404,7 +1486,7 @@
h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size));
- h->pid = ta->pid;
+ h->pid = task->pid;
h->cpu = my_cpu;
h->rate = 0;
h->ip = regs ? regs->cr_iip : 0x0; /* where the fault happened */
@@ -1414,6 +1496,7 @@
h->stamp = perfmon_get_stamp();
e = (unsigned long *)(h+1);
+
/*
* selectively store PMDs in increasing index number
*/
@@ -1422,35 +1505,66 @@
if (PMD_IS_COUNTER(j))
*e = ctx->ctx_pmds[j-PMU_FIRST_COUNTER].val
+ (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val);
- else
+ else {
*e = ia64_get_pmd(j); /* slow */
+ }
DBprintk((" e=%p pmd%d =0x%lx\n", (void *)e, j, *e));
e++;
}
}
- /* make the new entry visible to user, needs to be atomic */
+ /*
+ * make the new entry visible to user, needs to be atomic
+ */
ia64_fetch_and_add(1, &psb->psb_hdr->hdr_count);
DBprintk((" index=%ld entries=%ld hdr_count=%ld\n", idx, psb->psb_entries, psb->psb_hdr->hdr_count));
-
- /* sampling buffer full ? */
+ /*
+ * sampling buffer full ?
+ */
if (idx == (psb->psb_entries-1)) {
- bv = mask;
+ /*
+ * will cause notification, cannot be 0
+ */
+ bv = mask << PMU_FIRST_COUNTER;
+
buffer_is_full = 1;
DBprintk((" sampling buffer full must notify bv=0x%lx\n", bv));
- if (!CTX_OVFL_NOBLOCK(ctx)) goto buffer_full;
+ /*
+ * we do not reload here, when context is blocking
+ */
+ if (!CTX_OVFL_NOBLOCK(ctx)) goto no_reload;
+
/*
* here, we have a full buffer but we are in non-blocking mode
- * so we need to reloads overflowed PMDs with sampling reset values
- * and restart
+ * so we need to reload overflowed PMDs with sampling reset values
+ * and restart right away.
*/
}
+ /* FALL THROUGH */
}
reload_pmds:
- ovfl_is_smpl = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
- can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_pid;
+
+ /*
+ * in the case of a non-blocking context, we reload
+ * with the ovfl_rval when no user notification is taking place (short recovery)
+ * otherwise, when the buffer is full (which requires user interaction), we use
+ * smpl_rval which is the long recovery path (disturbance introduced by user execution).
+ *
+ * XXX: implies that when buffer is full then there is always notification.
+ */
+ ovfl_has_long_recovery = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
+
+ /*
+ * XXX: CTX_HAS_SMPL() should really be something like CTX_HAS_SMPL() and is activated, i.e.,
+ * one of the PMC is configured for EAR/BTB.
+ *
+ * When sampling, we can only notify when the sampling buffer is full.
+ */
+ can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_task;
+
+ DBprintk((" ovfl_has_long_recovery=%d can_notify=%d\n", ovfl_has_long_recovery, can_notify));
for (i = 0, cnum = PMU_FIRST_COUNTER; mask ; cnum++, i++, mask >>= 1) {
@@ -1472,7 +1586,7 @@
DBprintk((" pmod[%ld].val=0x%lx pmd=0x%lx\n", i, ctx->ctx_pmds[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));
if (can_notify && PMD_OVFL_NOTIFY(ctx, i)) {
- DBprintk((" CPU%d should notify process %d with signal %d\n", my_cpu, ctx->ctx_notify_pid, ctx->ctx_notify_sig));
+ DBprintk((" CPU%d should notify task %p with signal %d\n", my_cpu, ctx->ctx_notify_task, ctx->ctx_notify_sig));
bv |= 1 << i;
} else {
DBprintk((" CPU%d PMD[%ld] overflow, no notification\n", my_cpu, cnum));
@@ -1483,93 +1597,150 @@
*/
/* writes to upper part are ignored, so this is safe */
- if (ovfl_is_smpl) {
- DBprintk((" CPU%d PMD[%ld] reloaded with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ if (ovfl_has_long_recovery) {
+ DBprintk((" CPU%d PMD[%ld] reload with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
} else {
- DBprintk((" CPU%d PMD[%ld] reloaded with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ DBprintk((" CPU%d PMD[%ld] reload with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
ia64_set_pmd(cnum, ctx->ctx_pmds[i].ovfl_rval);
}
}
if (cnum == ctx->ctx_btb_counter) need_reset_pmd16=1;
}
/*
- * In case of BTB, overflow
- * we need to reset the BTB index.
+ * In case of BTB overflow we need to reset the BTB index.
*/
if (need_reset_pmd16) {
DBprintk(("reset PMD16\n"));
ia64_set_pmd(16, 0);
}
-buffer_full:
- /* see pfm_overflow_notify() on details for why we use |= here */
- ctx->ctx_ovfl_regs |= bv;
- /* nobody to notify, return and unfreeze */
+no_reload:
+
+ /*
+ * some counters overflowed, but they did not require
+ * user notification, so after having reloaded them above
+ * we simply restart
+ */
if (!bv) return 0x0;
+ ctx->ctx_ovfl_regs = bv; /* keep track of what to reset when unblocking */
+ /*
+ * Now we know that:
+ * - we have some counters which overflowed (contains in bv)
+ * - someone has asked to be notified on overflow.
+ */
+
+
+ /*
+ * If the notification task is still present, then notify_task is non
+ * null. It is cleared by that task if it ever exits before we do.
+ */
- if (ctx->ctx_notify_pid == ta->pid) {
- struct siginfo si;
+ if (ctx->ctx_notify_task) {
si.si_errno = 0;
si.si_addr = NULL;
- si.si_pid = ta->pid; /* who is sending */
-
+ si.si_pid = task->pid; /* who is sending */
si.si_signo = ctx->ctx_notify_sig; /* is SIGPROF */
si.si_code = PROF_OVFL; /* goes to user */
si.si_pfm_ovfl = bv;
+
/*
- * in this case, we don't stop the task, we let it go on. It will
- * necessarily go to the signal handler (if any) when it goes back to
- * user mode.
+ * when the target of the signal is not ourself, we have to be more
+ * careful. The notify_task may be cleared by the target task itself
+ * in release_thread(). We must ensure mutual exclusion here such that
+ * the signal is delivered (even to a dying task) safely.
*/
- DBprintk((" sending %d notification to self %d\n", si.si_signo, ta->pid));
-
- /* this call is safe in an interrupt handler */
- ret = send_sig_info(ctx->ctx_notify_sig, &si, ta);
- if (ret != 0)
- printk(" send_sig_info(process %d, SIGPROF)=%d\n", ta->pid, ret);
- /*
- * no matter if we block or not, we keep PMU frozen and do not unfreeze on ctxsw
- */
- ctx->ctx_fl_frozen = 1;
+ if (ctx->ctx_notify_task != current) {
+ /*
+ * grab the notification lock for this task
+ */
+ spin_lock(&ctx->ctx_notify_lock);
- } else {
-#if 0
/*
- * The tasklet is guaranteed to be scheduled for this CPU only
+ * now notify_task cannot be modified until we're done
+ * if NULL, then it got modified while we were in the handler
*/
- notify_info[my_cpu].to_pid = ctx->notify_pid;
- notify_info[my_cpu].from_pid = ta->pid; /* for debug only */
- notify_info[my_cpu].bitvect = bv;
- /* tasklet is inserted and active */
- tasklet_schedule(&pfm_tasklet);
-#endif
+ if (ctx->ctx_notify_task == NULL) {
+ spin_unlock(&ctx->ctx_notify_lock);
+ goto lost_notify;
+ }
/*
- * stored the vector of overflowed registers for use in notification
- * mark that a notification/blocking is pending (arm the trap)
+ * required by send_sig_info() to make sure the target
+ * task does not disappear on us.
*/
- th->pfm_pend_notify = 1;
+ read_lock(&tasklist_lock);
+ }
+ /*
+ * in this case, we don't stop the task, we let it go on. It will
+ * necessarily go to the signal handler (if any) when it goes back to
+ * user mode.
+ */
+ DBprintk((" %d sending %d notification to %d\n", task->pid, si.si_signo, ctx->ctx_notify_task->pid));
+
+ /*
+ * this call is safe in an interrupt handler, as is read_lock() on tasklist_lock
+ */
+ ret = send_sig_info(ctx->ctx_notify_sig, &si, ctx->ctx_notify_task);
+ if (ret != 0) printk(" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_task->pid, ret);
/*
- * if we do block, then keep PMU frozen until restart
+ * now undo the protections in order
*/
- if (!CTX_OVFL_NOBLOCK(ctx)) ctx->ctx_fl_frozen = 1;
+ if (ctx->ctx_notify_task != current) {
+ read_unlock(&tasklist_lock);
+ spin_unlock(&ctx->ctx_notify_lock);
+ }
+
+ /*
+ * if we block set the pfm_must_block bit
+ * when in block mode, we can effectively block only when the notified
+ * task is not self, otherwise we would deadlock.
+ * in this configuration, the notification is sent, the task will not
+ * block on the way back to user mode, but the PMU will be kept frozen
+ * until PFM_RESTART.
+ * Note that here there is still a race condition with notify_task
+ * possibly being nullified behind our back, but this is fine because
+ * it can only be changed to NULL which by construction, can only be
+ * done when notify_task != current. So if it was already different
+ * before, changing it to NULL will still maintain this invariant.
+ * Of course, when it is equal to current it cannot change at this point.
+ */
+ if (!CTX_OVFL_NOBLOCK(ctx) && ctx->ctx_notify_task != current) {
+ th->pfm_must_block = 1; /* will cause blocking */
+ }
+ } else {
+lost_notify:
+ DBprintk((" notification task has disappeared !\n"));
+ /*
+ * for a non-blocking context, we make sure we do not fall into the pfm_overflow_notify()
+ * trap. Also in the case of a blocking context with a lost notify process, we do not
+ * want to block either (even though it is interruptible). In this case, the PMU will be kept
+ * frozen and the process will run to completion without monitoring enabled.
+ *
+ * Of course, we cannot lose the notify process when self-monitoring.
+ */
+ th->pfm_must_block = 0;
- DBprintk((" process %d notify ovfl_regs=0x%lx\n", ta->pid, bv));
}
/*
- * keep PMU frozen (and overflowed bits cleared) when we have to stop,
- * otherwise return a resume 'value' for PMC[0]
- *
- * XXX: maybe that's enough to get rid of ctx_fl_frozen ?
+ * if we block, we keep the PMU frozen. If non-blocking we restart.
+ * in the case of non-blocking where the notify process is lost, we also
+ * restart.
*/
- DBprintk((" will return pmc0=0x%x\n",ctx->ctx_fl_frozen ? 0x1 : 0x0));
+ if (!CTX_OVFL_NOBLOCK(ctx))
+ ctx->ctx_fl_frozen = 1;
+ else
+ ctx->ctx_fl_frozen = 0;
+
+ DBprintk((" reload pmc0=0x%x must_block=%ld\n",
+ ctx->ctx_fl_frozen ? 0x1 : 0x0, th->pfm_must_block));
+
return ctx->ctx_fl_frozen ? 0x1 : 0x0;
}
@@ -1599,8 +1770,7 @@
ia64_set_pmc(0, pmc0);
ia64_srlz_d();
} else {
- DBprintk(("perfmon: Spurious PMU overflow interrupt: pmc0=0x%lx owner=%p\n",
- pmc0, (void *)PMU_OWNER()));
+ printk("perfmon: Spurious PMU overflow interrupt: pmc0=0x%lx owner=%p\n", pmc0, (void *)PMU_OWNER());
}
}
@@ -1612,10 +1782,17 @@
u64 pmc0 = ia64_get_pmc(0);
int i;
- p += sprintf(p, "PMC[0]=%lx\nPerfmon debug: %s\n", pmc0, pfm_debug ? "On" : "Off");
+ p += sprintf(p, "CPU%d.pmc[0]=%lx\nPerfmon debug: %s\n", smp_processor_id(), pmc0, pfm_debug ? "On" : "Off");
+ p += sprintf(p, "proc_sessions=%lu sys_sessions=%lu\n",
+ pfs_info.pfs_proc_sessions,
+ pfs_info.pfs_sys_session);
+
for(i=0; i < NR_CPUS; i++) {
- if (cpu_is_online(i))
- p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i].owner ? pmu_owners[i].owner->pid: 0);
+ if (cpu_is_online(i)) {
+ p += sprintf(p, "CPU%d.pmu_owner: %-6d\n",
+ i,
+ pmu_owners[i].owner ? pmu_owners[i].owner->pid: -1);
+ }
}
return p - page;
}
@@ -1698,21 +1875,19 @@
ia64_srlz_d();
}
-/*
- * XXX: for system wide this function MUST never be called
- */
void
pfm_save_regs (struct task_struct *ta)
{
struct task_struct *owner;
+ pfm_context_t *ctx;
struct thread_struct *t;
u64 pmc0, psr;
+ unsigned long mask;
int i;
- if (ta == NULL) {
- panic(__FUNCTION__" task is NULL\n");
- }
- t = &ta->thread;
+ t = &ta->thread;
+ ctx = ta->thread.pfm_context;
+
/*
* We must make sure that we don't lose any potential overflow
* interrupt while saving PMU context. In this code, external
@@ -1732,7 +1907,7 @@
* in kernel.
* By now, we could still have an overflow interrupt in-flight.
*/
- __asm__ __volatile__ ("rum psr.up;;"::: "memory");
+ __asm__ __volatile__ ("rsm psr.up|psr.pp;;"::: "memory");
/*
* Mark the PMU as not owned
@@ -1761,7 +1936,6 @@
* next process does not start with monitoring on if not requested
*/
ia64_set_pmc(0, 1);
- ia64_srlz_d();
/*
* Check for overflow bits and proceed manually if needed
@@ -1772,87 +1946,104 @@
* next time the task exits from the kernel.
*/
if (pmc0 & ~0x1) {
- if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", (void *)owner, (void *)ta);
- DBprintk((__FUNCTION__" Warning: pmc[0]=0x%lx explicit call\n", pmc0));
-
- pmc0 = update_counters(owner, pmc0, NULL);
+ update_counters(owner, pmc0, NULL);
/* we will save the updated version of pmc0 */
}
-
/*
* restore PSR for context switch to save
*/
__asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory");
+ /*
+ * we do not save registers if we can do lazy
+ */
+ if (PFM_CAN_DO_LAZY()) {
+ SET_PMU_OWNER(owner);
+ return;
+ }
/*
* XXX needs further optimization.
* Also must take holes into account
*/
- for (i=0; i< pmu_conf.num_pmds; i++) {
- t->pmd[i] = ia64_get_pmd(i);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
}
/* skip PMC[0], we handle it separately */
- for (i=1; i< pmu_conf.num_pmcs; i++) {
- t->pmc[i] = ia64_get_pmc(i);
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
}
-
/*
* Throughout this code we could have gotten an overflow interrupt. It is transformed
* into a spurious interrupt as soon as we give up pmu ownership.
*/
}
-void
-pfm_load_regs (struct task_struct *ta)
+static void
+pfm_lazy_save_regs (struct task_struct *ta)
{
- struct thread_struct *t = &ta->thread;
- pfm_context_t *ctx = ta->thread.pfm_context;
+ pfm_context_t *ctx;
+ struct thread_struct *t;
+ unsigned long mask;
int i;
+ DBprintk((" on [%d] by [%d]\n", ta->pid, current->pid));
+
+ t = &ta->thread;
+ ctx = ta->thread.pfm_context;
/*
* XXX needs further optimization.
* Also must take holes into account
*/
- for (i=0; i< pmu_conf.num_pmds; i++) {
- ia64_set_pmd(i, t->pmd[i]);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
}
-
- /* skip PMC[0] to avoid side effects */
- for (i=1; i< pmu_conf.num_pmcs; i++) {
- ia64_set_pmc(i, t->pmc[i]);
+
+ /* skip PMC[0], we handle it separately */
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
}
+ SET_PMU_OWNER(NULL);
+}
+
+void
+pfm_load_regs (struct task_struct *ta)
+{
+ struct thread_struct *t = &ta->thread;
+ pfm_context_t *ctx = ta->thread.pfm_context;
+ struct task_struct *owner;
+ unsigned long mask;
+ int i;
+
+ owner = PMU_OWNER();
+ if (owner == ta) goto skip_restore;
+ if (owner) pfm_lazy_save_regs(owner);
- /*
- * we first restore ownership of the PMU to the 'soon to be current'
- * context. This way, if, as soon as we unfreeze the PMU at the end
- * of this function, we get an interrupt, we attribute it to the correct
- * task
- */
SET_PMU_OWNER(ta);
-#if 0
- /*
- * check if we had pending overflow before context switching out
- * If so, we invoke the handler manually, i.e. simulate interrupt.
- *
- * XXX: given that we do not use the tasklet anymore to stop, we can
- * move this back to the pfm_save_regs() routine.
- */
- if (t->pmc[0] & ~0x1) {
- /* freeze set in pfm_save_regs() */
- DBprintk((" pmc[0]=0x%lx manual interrupt\n",t->pmc[0]));
- update_counters(ta, t->pmc[0], NULL);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) ia64_set_pmd(i, t->pmd[i]);
}
-#endif
+ /* skip PMC[0] to avoid side effects */
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) ia64_set_pmc(i, t->pmc[i]);
+ }
+skip_restore:
/*
* unfreeze only when possible
*/
if (ctx->ctx_fl_frozen == 0) {
ia64_set_pmc(0, 0);
ia64_srlz_d();
+ /* place where we potentially (kernel level) start monitoring again */
}
}
@@ -2002,7 +2193,7 @@
/* clears all PMD registers */
for(i=0;i< pmu_conf.num_pmds; i++) {
- if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
+ if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
}
ia64_srlz_d();
}
@@ -2011,7 +2202,7 @@
* task is the newly created task
*/
int
-pfm_inherit(struct task_struct *task)
+pfm_inherit(struct task_struct *task, struct pt_regs *regs)
{
pfm_context_t *ctx = current->thread.pfm_context;
pfm_context_t *nctx;
@@ -2019,12 +2210,22 @@
int i, cnum;
/*
+ * bypass completely for system wide
+ */
+ if (pfs_info.pfs_sys_session) {
+ DBprintk((" enabling psr.pp for %d\n", task->pid));
+ ia64_psr(regs)->pp = pfs_info.pfs_pp;
+ return 0;
+ }
+
+ /*
* takes care of easiest case first
*/
if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_NONE) {
DBprintk((" removing PFM context for %d\n", task->pid));
task->thread.pfm_context = NULL;
- task->thread.pfm_pend_notify = 0;
+ task->thread.pfm_must_block = 0;
+ atomic_set(&task->thread.pfm_notifiers_check, 0);
/* copy_thread() clears IA64_THREAD_PM_VALID */
return 0;
}
@@ -2034,9 +2235,11 @@
/* copy content */
*nctx = *ctx;
- if (ctx->ctx_fl_inherit == PFM_FL_INHERIT_ONCE) {
+ if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_ONCE) {
nctx->ctx_fl_inherit = PFM_FL_INHERIT_NONE;
+ atomic_set(&task->thread.pfm_notifiers_check, 0);
DBprintk((" downgrading to INHERIT_NONE for %d\n", task->pid));
+ pfs_info.pfs_proc_sessions++;
}
/* initialize counters in new context */
@@ -2058,7 +2261,7 @@
sema_init(&nctx->ctx_restart_sem, 0); /* reset this semaphore to locked */
/* clear pending notification */
- th->pfm_pend_notify = 0;
+ th->pfm_must_block = 0;
/* link with new task */
th->pfm_context = nctx;
@@ -2077,7 +2280,10 @@
return 0;
}
-/* called from exit_thread() */
+/*
+ * called from release_thread(), at this point this task is not in the
+ * tasklist anymore
+ */
void
pfm_context_exit(struct task_struct *task)
{
@@ -2093,16 +2299,126 @@
pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
/* if only user left, then remove */
- DBprintk((" pid %d: task %d sampling psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
+ DBprintk((" [%d] [%d] psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
if (atomic_dec_and_test(&psb->psb_refcnt) ) {
rvfree(psb->psb_hdr, psb->psb_size);
vfree(psb);
- DBprintk((" pid %d: cleaning task %d sampling buffer\n", current->pid, task->pid ));
+ DBprintk((" [%d] cleaning [%d] sampling buffer\n", current->pid, task->pid ));
+ }
+ }
+ DBprintk((" [%d] cleaning [%d] pfm_context @%p\n", current->pid, task->pid, (void *)ctx));
+
+ /*
+ * To avoid having the notified task scan the entire process list
+ * when it exits (because it would have pfm_notifiers_check set), we
+ * decrease the count by 1 to inform the task that one less task is
+ * going to send it a notification. Each new notifier increases this
+ * field by 1 in pfm_context_create(). Of course, there is a race
+ * condition between decreasing the value and the notified task exiting.
+ * The danger comes from the fact that we have a direct pointer to its
+ * task structure, thereby bypassing the tasklist. We must make sure
+ * that if we have notify_task != NULL, the target task is still somewhat
+ * present. It may already be detached from the tasklist, but that's okay.
+ * Note that it is okay if we 'miss the deadline' and the task scans the
+ * list for nothing; it will affect performance but not correctness.
+ * Correctness is ensured by using the notify_lock, which prevents the
+ * notify_task from changing on us. Once holding this lock, if we see
+ * notify_task != NULL, then it will stay like that until we release the
+ * lock. If it is NULL already, then we came too late.
+ */
+ spin_lock(&ctx->ctx_notify_lock);
+
+ if (ctx->ctx_notify_task) {
+ DBprintk((" [%d] [%d] atomic_sub on [%d] notifiers=%u\n", current->pid, task->pid,
+ ctx->ctx_notify_task->pid,
+ atomic_read(&ctx->ctx_notify_task->thread.pfm_notifiers_check)));
+
+ atomic_sub(1, &ctx->ctx_notify_task->thread.pfm_notifiers_check);
+ }
+
+ spin_unlock(&ctx->ctx_notify_lock);
+
+ if (ctx->ctx_fl_system) {
+ /*
+ * if interrupts were included (true by default), then restore
+ * the default value
+ */
+ if (ctx->ctx_fl_exclintr == 0) {
+ /*
+ * reload kernel default DCR value
+ */
+ ia64_set_dcr(pfs_info.pfs_dfl_dcr);
+ DBprintk((" restored dcr to 0x%lx\n", pfs_info.pfs_dfl_dcr));
}
+ /*
+ * free system wide session slot
+ */
+ pfs_info.pfs_sys_session = 0;
+ } else {
+ pfs_info.pfs_proc_sessions--;
}
- DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, task->pid, (void *)ctx));
+
pfm_context_free(ctx);
+ /*
+ * clean up pfm state in the thread structure:
+ */
+ task->thread.pfm_context = NULL;
+ task->thread.pfm_must_block = 0;
+ /* pfm_notifiers is cleaned in pfm_cleanup_notifiers() */
+
+}
+
+void
+pfm_cleanup_notifiers(struct task_struct *task)
+{
+ struct task_struct *p;
+ pfm_context_t *ctx;
+
+ DBprintk((" [%d] called\n", task->pid));
+
+ read_lock(&tasklist_lock);
+
+ for_each_task(p) {
+ /*
+ * It is safe to do the 2-step test here, because thread.ctx
+ * is cleaned up only in release_thread() and at that point
+ * the task has been detached from the tasklist which is an
+ * operation which uses the write_lock() on the tasklist_lock
+ * so it cannot run concurrently with this loop. So we have the
+ * guarantee that if we find p and it has a perfmon ctx then
+ * it is going to stay like this for the entire execution of this
+ * loop.
+ */
+ ctx = p->thread.pfm_context;
+
+ DBprintk((" [%d] scanning task [%d] ctx=%p\n", task->pid, p->pid, ctx));
+
+ if (ctx && ctx->ctx_notify_task == task) {
+ DBprintk((" trying for notifier %d in %d\n", task->pid, p->pid));
+ /*
+ * the spinlock is required to take care of a race condition
+ * with the send_sig_info() call. We must make sure that
+ * either the send_sig_info() completes using a valid task,
+ * or the notify_task is cleared before the send_sig_info()
+ * can pick up a stale value. Note that by the time this
+ * function is executed the 'task' is already detached from the
+ * tasklist. The problem is that the notifiers have a direct
+ * pointer to it. It is okay to send a signal to a task in this
+ * stage, it simply will have no effect. But it is better than sending
+ * to a completely destroyed task or worse to a new task using the same
+ * task_struct address.
+ */
+ spin_lock(&ctx->ctx_notify_lock);
+
+ ctx->ctx_notify_task = NULL;
+
+ spin_unlock(&ctx->ctx_notify_lock);
+
+ DBprintk((" done for notifier %d in %d\n", task->pid, p->pid));
+ }
+ }
+ read_unlock(&tasklist_lock);
+
}
#else /* !CONFIG_PERFMON */
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.10-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/process.c Mon Sep 24 23:22:59 2001
@@ -63,7 +63,8 @@
{
unsigned long ip = regs->cr_iip + ia64_psr(regs)->ri;
- printk("\npsr : %016lx ifs : %016lx ip : [<%016lx>]\n",
+ printk("\nPid: %d, comm: %20s\n", current->pid, current->comm);
+ printk("psr : %016lx ifs : %016lx ip : [<%016lx>]\n",
regs->cr_ipsr, regs->cr_ifs, ip);
printk("unat: %016lx pfs : %016lx rsc : %016lx\n",
regs->ar_unat, regs->ar_pfs, regs->ar_rsc);
@@ -201,7 +202,7 @@
{
unsigned long rbs, child_rbs, rbs_size, stack_offset, stack_top, stack_used;
struct switch_stack *child_stack, *stack;
- extern char ia64_ret_from_clone;
+ extern char ia64_ret_from_clone, ia32_ret_from_clone;
struct pt_regs *child_ptregs;
int retval = 0;
@@ -250,7 +251,10 @@
child_ptregs->r12 = (unsigned long) (child_ptregs + 1); /* kernel sp */
child_ptregs->r13 = (unsigned long) p; /* set `current' pointer */
}
- child_stack->b0 = (unsigned long) &ia64_ret_from_clone;
+ if (IS_IA32_PROCESS(regs))
+ child_stack->b0 = (unsigned long) &ia32_ret_from_clone;
+ else
+ child_stack->b0 = (unsigned long) &ia64_ret_from_clone;
child_stack->ar_bspstore = child_rbs + rbs_size;
/* copy parts of thread_struct: */
@@ -285,9 +289,8 @@
ia32_save_state(p);
#endif
#ifdef CONFIG_PERFMON
- p->thread.pfm_pend_notify = 0;
if (p->thread.pfm_context)
- retval = pfm_inherit(p);
+ retval = pfm_inherit(p, child_ptregs);
#endif
return retval;
}
@@ -441,11 +444,24 @@
}
#ifdef CONFIG_PERFMON
+/*
+ * By the time we get here, the task is detached from the tasklist. This is important
+ * because it means that no other task can ever find it as a notified task, therefore
+ * there is no race condition between this code and, say, a pfm_context_create().
+ * Conversely, pfm_cleanup_notifiers() cannot try to access a task's pfm context while
+ * that other task is in the middle of its own pfm_context_exit(), because it would
+ * already be out of the task list. Note that this case is very unlikely between a direct
+ * child and its parent (if the parent is the notified process) because of the way the
+ * exit is notified via SIGCHLD.
+ */
void
release_thread (struct task_struct *task)
{
if (task->thread.pfm_context)
pfm_context_exit(task);
+
+ if (atomic_read(&task->thread.pfm_notifiers_check) > 0)
+ pfm_cleanup_notifiers(task);
}
#endif
@@ -524,6 +540,10 @@
void
machine_halt (void)
{
+#if 0
+ while (1)
+ ia64_pal_halt(0);
+#endif
}
void
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.10-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/ptrace.c Mon Sep 24 21:51:40 2001
@@ -2,7 +2,7 @@
* Kernel support for the ptrace() and syscall tracing interfaces.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from the x86 and Alpha versions. Most of the code in here
* could actually be factored into a common set of routines.
@@ -794,11 +794,14 @@
*
* Make sure the single step bit is not set.
*/
-void ptrace_disable(struct task_struct *child)
+void
+ptrace_disable (struct task_struct *child)
{
+ struct ia64_psr *child_psr = ia64_psr(ia64_task_regs(child));
+
/* make sure the single step/take-branch tra bits are not set: */
- ia64_psr(pt)->ss = 0;
- ia64_psr(pt)->tb = 0;
+ child_psr->ss = 0;
+ child_psr->tb = 0;
/* Turn off flag indicating that the KRBS is sync'd with child's VM: */
child->thread.flags &= ~IA64_THREAD_KRBS_SYNCED;
@@ -855,6 +858,19 @@
if (child->p_pptr != current)
goto out_tsk;
+ if (request != PTRACE_KILL) {
+ if (child->state != TASK_STOPPED)
+ goto out_tsk;
+
+#ifdef CONFIG_SMP
+ while (child->has_cpu) {
+ if (child->state != TASK_STOPPED)
+ goto out_tsk;
+ barrier();
+ }
+#endif
+ }
+
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
@@ -925,7 +941,7 @@
child->ptrace &= ~PT_TRACESYS;
child->exit_code = data;
- /* make sure the single step/take-branch tra bits are not set: */
+ /* make sure the single step/taken-branch trap bits are not set: */
ia64_psr(pt)->ss = 0;
ia64_psr(pt)->tb = 0;
diff -urN linux-davidm/arch/ia64/kernel/sal.c linux-2.4.10-lia/arch/ia64/kernel/sal.c
--- linux-davidm/arch/ia64/kernel/sal.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/sal.c Mon Sep 24 21:51:59 2001
@@ -2,7 +2,7 @@
* System Abstraction Layer (SAL) interface routines.
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
*/
@@ -18,8 +18,6 @@
#include <asm/sal.h>
#include <asm/pal.h>
-#define SAL_DEBUG
-
spinlock_t sal_lock = SPIN_LOCK_UNLOCKED;
static struct {
@@ -122,10 +120,8 @@
switch (*p) {
case SAL_DESC_ENTRY_POINT:
ep = (struct ia64_sal_desc_entry_point *) p;
-#ifdef SAL_DEBUG
- printk("sal[%d] - entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
- i, ep->pal_proc, ep->sal_proc);
-#endif
+ printk("SAL: entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
+ ep->pal_proc, ep->sal_proc);
ia64_pal_handler_init(__va(ep->pal_proc));
ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp));
break;
@@ -138,17 +134,12 @@
#ifdef CONFIG_SMP
{
struct ia64_sal_desc_ap_wakeup *ap = (void *) p;
-# ifdef SAL_DEBUG
- printk("sal[%d] - wakeup type %x, 0x%lx\n",
- i, ap->mechanism, ap->vector);
-# endif
+
switch (ap->mechanism) {
case IA64_SAL_AP_EXTERNAL_INT:
ap_wakeup_vector = ap->vector;
-# ifdef SAL_DEBUG
printk("SAL: AP wakeup using external interrupt "
"vector 0x%lx\n", ap_wakeup_vector);
-# endif
break;
default:
@@ -169,7 +160,7 @@
if (pf->feature_mask & (1 << 1)) {
printk("IRQ_Redirection ");
#ifdef CONFIG_SMP
- if (no_int_routing)
+ if (no_int_routing)
smp_int_redirect &= ~SMP_IRQ_REDIRECTION;
else
smp_int_redirect |= SMP_IRQ_REDIRECTION;
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.4.10-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/setup.c Mon Sep 24 21:52:16 2001
@@ -534,10 +534,13 @@
/*
* Initialize default control register to defer all speculative faults. The
* kernel MUST NOT depend on a particular setting of these bits (in other words,
- * the kernel must have recovery code for all speculative accesses).
+ * the kernel must have recovery code for all speculative accesses). Turn on
+ * dcr.lc as per recommendation by the architecture team. Most IA-32 apps
+ * shouldn't be affected by this (moral: keep your ia32 locks aligned and you'll
+ * be fine).
*/
ia64_set_dcr( IA64_DCR_DM | IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_DR
- | IA64_DCR_DA | IA64_DCR_DD);
+ | IA64_DCR_DA | IA64_DCR_DD | IA64_DCR_LC);
#ifndef CONFIG_SMP
ia64_set_fpu_owner(0);
#endif
diff -urN linux-davidm/arch/ia64/kernel/sigframe.h linux-2.4.10-lia/arch/ia64/kernel/sigframe.h
--- linux-davidm/arch/ia64/kernel/sigframe.h Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/sigframe.h Mon Sep 24 21:52:26 2001
@@ -7,10 +7,11 @@
unsigned long arg0; /* signum */
unsigned long arg1; /* siginfo pointer */
unsigned long arg2; /* sigcontext pointer */
+ /*
+ * End of architected state.
+ */
- unsigned long rbs_base; /* base of new register backing store (or NULL) */
void *handler; /* pointer to the plabel of the signal handler */
-
struct siginfo info;
struct sigcontext sc;
};
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.4.10-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/signal.c Mon Sep 24 21:52:38 2001
@@ -139,10 +139,9 @@
struct ia64_psr *psr = ia64_psr(&scr->pt);
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
- if (!psr->dfh) {
- psr->mfh = 0;
+ psr->mfh = 0; /* drop signal handler's fph contents... */
+ if (!psr->dfh)
__ia64_load_fpu(current->thread.fph);
- }
}
return err;
}
@@ -380,7 +379,8 @@
err = __put_user(sig, &frame->arg0);
err |= __put_user(&frame->info, &frame->arg1);
err |= __put_user(&frame->sc, &frame->arg2);
- err |= __put_user(new_rbs, &frame->rbs_base);
+ err |= __put_user(new_rbs, &frame->sc.sc_rbs_base);
+ err |= __put_user(0, &frame->sc.sc_loadrs); /* initialize to zero */
err |= __put_user(ka->sa.sa_handler, &frame->handler);
err |= copy_siginfo_to_user(&frame->info, info);
@@ -460,6 +460,7 @@
long
ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
{
+ struct signal_struct *sig;
struct k_sigaction *ka;
siginfo_t info;
long restart = in_syscall;
@@ -571,8 +572,8 @@
case SIGSTOP:
current->state = TASK_STOPPED;
current->exit_code = signr;
- if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags
- & SA_NOCLDSTOP))
+ sig = current->p_pptr->sig;
+ if (sig && !(sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP))
notify_parent(current, SIGCHLD);
schedule();
continue;
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.4.10-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/smp.c Mon Sep 24 21:53:18 2001
@@ -222,8 +222,9 @@
atomic_set(&data.finished, 0);
spin_lock_bh(&call_lock);
- call_data = &data;
+ call_data = &data;
+ mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */
send_IPI_single(cpuid, IPI_CALL_FUNC);
/* Wait for response */
@@ -275,8 +276,9 @@
atomic_set(&data.finished, 0);
spin_lock_bh(&call_lock);
- call_data = &data;
+ call_data = &data;
+ mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */
send_IPI_allbutself(IPI_CALL_FUNC);
/* Wait for response */
diff -urN linux-davidm/arch/ia64/kernel/smpboot.c linux-2.4.10-lia/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/smpboot.c Mon Sep 24 21:53:33 2001
@@ -33,6 +33,7 @@
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/machvec.h>
+#include <asm/mca.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
@@ -42,6 +43,8 @@
#include <asm/system.h>
#include <asm/unistd.h>
+#define SMP_DEBUG 0
+
#if SMP_DEBUG
#define Dprintk(x...) printk(x)
#else
@@ -310,7 +313,7 @@
}
-void __init
+static void __init
smp_callin (void)
{
int cpuid, phys_id;
@@ -369,14 +372,15 @@
{
extern int cpu_idle (void);
+ Dprintk("start_secondary: starting CPU 0x%x\n", hard_smp_processor_id());
efi_map_pal_code();
cpu_init();
smp_callin();
- Dprintk("CPU %d is set to go. \n", smp_processor_id());
+ Dprintk("CPU %d is set to go.\n", smp_processor_id());
while (!atomic_read(&smp_commenced))
;
- Dprintk("CPU %d is starting idle. \n", smp_processor_id());
+ Dprintk("CPU %d is starting idle.\n", smp_processor_id());
return cpu_idle();
}
@@ -420,7 +424,7 @@
unhash_process(idle);
init_tasks[cpu] = idle;
- Dprintk("Sending Wakeup Vector to AP 0x%x/0x%x.\n", cpu, sapicid);
+ Dprintk("Sending wakeup vector %u to AP 0x%x/0x%x.\n", ap_wakeup_vector, cpu, sapicid);
platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
@@ -429,7 +433,6 @@
*/
Dprintk("Waiting on callin_map ...");
for (timeout = 0; timeout < 100000; timeout++) {
- Dprintk(".");
if (test_bit(cpu, &cpu_callin_map))
break; /* It has booted */
udelay(100);
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c linux-2.4.10-lia/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/sys_ia64.c Mon Sep 24 21:53:53 2001
@@ -19,24 +19,29 @@
#include <asm/shmparam.h>
#include <asm/uaccess.h>
-#define COLOR_ALIGN(addr) (((addr) + SHMLBA - 1) & ~(SHMLBA - 1))
-
unsigned long
arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
{
- struct vm_area_struct * vmm;
long map_shared = (flags & MAP_SHARED);
+ unsigned long align_mask = PAGE_SIZE - 1;
+ struct vm_area_struct * vmm;
if (len > RGN_MAP_LIMIT)
return -ENOMEM;
if (!addr)
addr = TASK_UNMAPPED_BASE;
- if (map_shared)
- addr = COLOR_ALIGN(addr);
- else
- addr = PAGE_ALIGN(addr);
+ if (map_shared && (TASK_SIZE > 0xfffffffful))
+ /*
+ * For 64-bit tasks, align shared segments to 1MB to avoid potential
+ * performance penalty due to virtual aliasing (see ASDM). For 32-bit
+ * tasks, we prefer to avoid exhausting the address space too quickly by
+ * limiting alignment to a single page.
+ */
+ align_mask = SHMLBA - 1;
+
+ addr = (addr + align_mask) & ~align_mask;
for (vmm = find_vma(current->mm, addr); ; vmm = vmm->vm_next) {
/* At this point: (!vmm || addr < vmm->vm_end). */
@@ -46,9 +51,7 @@
return -ENOMEM;
if (!vmm || addr + len <= vmm->vm_start)
return addr;
- addr = vmm->vm_end;
- if (map_shared)
- addr = COLOR_ALIGN(addr);
+ addr = (vmm->vm_end + align_mask) & ~align_mask;
}
}
@@ -184,8 +187,10 @@
if (!file)
return -EBADF;
- if (!file->f_op || !file->f_op->mmap)
- return -ENODEV;
+ if (!file->f_op || !file->f_op->mmap) {
+ addr = -ENODEV;
+ goto out;
+ }
}
/*
@@ -194,22 +199,26 @@
*/
len = PAGE_ALIGN(len);
if (len == 0)
- return addr;
+ goto out;
/* don't permit mappings into unmapped space or the virtual page table of a region: */
roff = rgn_offset(addr);
- if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT)
- return -EINVAL;
+ if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT) {
+ addr = -EINVAL;
+ goto out;
+ }
/* don't permit mappings that would cross a region boundary: */
- if (rgn_index(addr) != rgn_index(addr + len))
- return -EINVAL;
+ if (rgn_index(addr) != rgn_index(addr + len)) {
+ addr = -EINVAL;
+ goto out;
+ }
down_write(&current->mm->mmap_sem);
addr = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
up_write(&current->mm->mmap_sem);
- if (file)
+out: if (file)
fput(file);
return addr;
}
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.10-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/traps.c Mon Sep 24 21:54:49 2001
@@ -2,7 +2,7 @@
* Architecture-specific trap handling.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
*/
@@ -32,13 +32,17 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
+#include <linux/vt_kern.h> /* For unblank_screen() */
+#include <asm/hardirq.h>
#include <asm/ia32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/fpswa.h>
+extern spinlock_t timerlist_lock;
+
static fpswa_interface_t *fpswa_interface;
void __init
@@ -50,22 +54,48 @@
fpswa_interface = __va(ia64_boot_param->fpswa);
}
+/*
+ * Unlock any spinlocks which will prevent us from getting the message out (timerlist_lock
+ * is acquired through the console unblank code)
+ */
void
-die_if_kernel (char *str, struct pt_regs *regs, long err)
+bust_spinlocks (int yes)
{
- if (user_mode(regs)) {
-#if 0
- /* XXX for debugging only */
- printk ("!!die_if_kernel: %s(%d): %s %ld\n",
- current->comm, current->pid, str, err);
- show_regs(regs);
+ spin_lock_init(&timerlist_lock);
+ if (yes) {
+ oops_in_progress = 1;
+#ifdef CONFIG_SMP
+ global_irq_lock = 0; /* Many serial drivers do __global_cli() */
#endif
- return;
+ } else {
+ int loglevel_save = console_loglevel;
+#ifdef CONFIG_VT
+ unblank_screen();
+#endif
+ oops_in_progress = 0;
+ /*
+ * OK, the message is on the console. Now we call printk() without
+ * oops_in_progress set so that printk will give klogd a poke. Hold onto
+ * your hats...
+ */
+ console_loglevel = 15; /* NMI oopser may have shut the console up */
+ printk(" ");
+ console_loglevel = loglevel_save;
}
+}
- printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
+void
+die (const char *str, struct pt_regs *regs, long err)
+{
+ static spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+ console_verbose();
+ spin_lock_irq(&die_lock);
+ bust_spinlocks(1);
+ printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
show_regs(regs);
+ bust_spinlocks(0);
+ spin_unlock_irq(&die_lock);
if (current->thread.flags & IA64_KERNEL_DEATH) {
printk("die_if_kernel recursion detected.\n");
@@ -77,6 +107,13 @@
}
void
+die_if_kernel (char *str, struct pt_regs *regs, long err)
+{
+ if (!user_mode(regs))
+ die(str, regs, err);
+}
+
+void
ia64_bad_break (unsigned long break_num, struct pt_regs *regs)
{
siginfo_t siginfo;
@@ -168,14 +205,12 @@
}
/*
- * disabled_fph_fault() is called when a user-level process attempts
- * to access one of the registers f32..f127 when it doesn't own the
- * fp-high register partition. When this happens, we save the current
- * fph partition in the task_struct of the fpu-owner (if necessary)
- * and then load the fp-high partition of the current task (if
- * necessary). Note that the kernel has access to fph by the time we
- * get here, as the IVT's "Diabled FP-Register" handler takes care of
- * clearing psr.dfh.
+ * disabled_fph_fault() is called when a user-level process attempts to access f32..f127
+ * and it doesn't own the fp-high register partition. When this happens, we save the
+ * current fph partition in the task_struct of the fpu-owner (if necessary) and then load
+ * the fp-high partition of the current task (if necessary). Note that the kernel has
+ * access to fph by the time we get here, as the IVT's "Disabled FP-Register" handler takes
+ * care of clearing psr.dfh.
*/
static inline void
disabled_fph_fault (struct pt_regs *regs)
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.10-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/kernel/unwind.c Mon Sep 24 21:55:01 2001
@@ -504,7 +504,7 @@
return 0;
}
-inline int
+int
unw_access_pr (struct unw_frame_info *info, unsigned long *val, int write)
{
unsigned long *addr;
diff -urN linux-davidm/arch/ia64/lib/clear_page.S linux-2.4.10-lia/arch/ia64/lib/clear_page.S
--- linux-davidm/arch/ia64/lib/clear_page.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/lib/clear_page.S Mon Sep 24 21:55:10 2001
@@ -47,5 +47,5 @@
br.cloop.dptk.few 1b
;;
mov ar.lc = r2 // restore lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(clear_page)
diff -urN linux-davidm/arch/ia64/lib/clear_user.S linux-2.4.10-lia/arch/ia64/lib/clear_user.S
--- linux-davidm/arch/ia64/lib/clear_user.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/lib/clear_user.S Mon Sep 24 21:55:19 2001
@@ -8,7 +8,7 @@
* r8: number of bytes that didn't get cleared due to a fault
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -62,11 +62,11 @@
;; // avoid WAW on CFM
adds tmp=-1,len // br.ctop is repeat/until
mov ret0=len // return value is length at this point
-(p6) br.ret.spnt.few rp
+(p6) br.ret.spnt.many rp
;;
cmp.lt p6,p0=16,len // if len > 16 then long memset
mov ar.lc=tmp // initialize lc for small count
-(p6) br.cond.dptk.few long_do_clear
+(p6) br.cond.dptk .long_do_clear
;; // WAR on ar.lc
//
// worst case 16 iterations, avg 8 iterations
@@ -79,7 +79,7 @@
1:
EX( .Lexit1, st1 [buf]=r0,1 )
adds len=-1,len // countdown length using len
- br.cloop.dptk.few 1b
+ br.cloop.dptk 1b
;; // avoid RAW on ar.lc
//
// .Lexit4: comes from byte by byte loop
@@ -87,7 +87,7 @@
.Lexit1:
mov ret0=len // faster than using ar.lc
mov ar.lc=saved_lc
- br.ret.sptk.few rp // end of short clear_user
+ br.ret.sptk.many rp // end of short clear_user
//
@@ -98,7 +98,7 @@
// instead of ret0 is due to the fact that the exception code
// changes the values of r8.
//
-long_do_clear:
+.long_do_clear:
tbit.nz p6,p0=buf,0 // odd alignment (for long_do_clear)
;;
EX( .Lexit3, (p6) st1 [buf]=r0,1 ) // 1-byte aligned
@@ -119,7 +119,7 @@
;;
cmp.eq p6,p0=r0,cnt
adds tmp=-1,cnt
-(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p6) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds buf2=8,buf // setup second base pointer
mov ar.lc=tmp
@@ -148,7 +148,7 @@
;; // needed to get len correct when error
st8 [buf2]=r0,16
adds len=-16,len
- br.cloop.dptk.few 2b
+ br.cloop.dptk 2b
;;
mov ar.lc=saved_lc
//
@@ -178,7 +178,7 @@
;;
EX( .Lexit2, (p7) st1 [buf]=r0 ) // only 1 byte left
mov ret0=r0 // success
- br.ret.dptk.few rp // end of most likely path
+ br.ret.sptk.many rp // end of most likely path
//
// Outlined error handling code
@@ -205,5 +205,5 @@
.Lexit3:
mov ret0=len
mov ar.lc=saved_lc
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(__do_clear_user)
diff -urN linux-davidm/arch/ia64/lib/copy_page.S linux-2.4.10-lia/arch/ia64/lib/copy_page.S
--- linux-davidm/arch/ia64/lib/copy_page.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/lib/copy_page.S Mon Sep 24 21:55:37 2001
@@ -90,5 +90,5 @@
mov pr=saved_pr,0xffffffffffff0000 // restore predicates
mov ar.pfs=saved_pfs
mov ar.lc=saved_lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(copy_page)
diff -urN linux-davidm/arch/ia64/lib/copy_user.S linux-2.4.10-lia/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.10-lia/arch/ia64/lib/copy_user.S Mon Sep 24 21:55:46 2001
@@ -19,8 +19,8 @@
* ret0 0 in case of success. The number of bytes NOT copied in
* case of error.
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* Fixme:
* - handle the case where we have more than 16 bytes and the alignment
@@ -85,7 +85,7 @@
cmp.eq p8,p0=r0,len // check for zero length
.save ar.lc, saved_lc
mov saved_lc=ar.lc // preserve ar.lc (slow)
-(p8) br.ret.spnt.few rp // empty mempcy()
+(p8) br.ret.spnt.many rp // empty mempcy()
;;
add enddst=dst,len // first byte after end of source
add endsrc=src,len // first byte after end of destination
@@ -103,26 +103,26 @@
cmp.lt p10,p7=COPY_BREAK,len // if len > COPY_BREAK then long copy
xor tmp=src,dst // same alignment test prepare
-(p10) br.cond.dptk.few long_copy_user
+(p10) br.cond.dptk .long_copy_user
;; // RAW pr.rot/p16 ?
//
// Now we do the byte by byte loop with software pipeline
//
// p7 is necessarily false by now
1:
- EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 1b
;;
mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs // restore ar.ec
- br.ret.sptk.few rp // end of short memcpy
+ br.ret.sptk.many rp // end of short memcpy
//
// Not 8-byte aligned
//
-diff_align_copy_user:
+.diff_align_copy_user:
// At this point we know we have more than 16 bytes to copy
// and also that src and dest do _not_ have the same alignment.
and src2=0x7,src1 // src offset
@@ -153,7 +153,7 @@
// We know src1 is not 8-byte aligned in this case.
//
cmp.eq p14,p15=r0,dst2
-(p15) br.cond.spnt.few 1f
+(p15) br.cond.spnt 1f
;;
sub t1=8,src2
mov t2=src2
@@ -163,7 +163,7 @@
;;
sub lshift=64,rshift
;;
- br.cond.spnt.few word_copy_user
+ br.cond.spnt .word_copy_user
;;
1:
cmp.leu p14,p15=src2,dst2
@@ -192,15 +192,15 @@
mov ar.lc=cnt
;;
2:
- EX(failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 2b
;;
clrrrb
;;
-word_copy_user:
+.word_copy_user:
cmp.gtu p9,p0=16,len1
-(p9) br.cond.spnt.few 4f // if (16 > len1) skip 8-byte copy
+(p9) br.cond.spnt 4f // if (16 > len1) skip 8-byte copy
;;
shr.u cnt=len1,3 // number of 64-bit words
;;
@@ -232,24 +232,24 @@
#define EPI_1 p[PIPE_DEPTH-2]
#define SWITCH(pred, shift) cmp.eq pred,p0=shift,rshift
#define CASE(pred, shift) \
- (pred) br.cond.spnt.few copy_user_bit##shift
+ (pred) br.cond.spnt .copy_user_bit##shift
#define BODY(rshift) \
-copy_user_bit##rshift: \
+.copy_user_bit##rshift: \
1: \
- EX(failure_out,(EPI) st8 [dst1]=tmp,8); \
+ EX(.failure_out,(EPI) st8 [dst1]=tmp,8); \
(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
EX(3f,(p16) ld8 val1[0]=[src1],8); \
- br.ctop.dptk.few 1b; \
+ br.ctop.dptk 1b; \
;; \
- br.cond.sptk.few .diff_align_do_tail; \
+ br.cond.sptk.many .diff_align_do_tail; \
2: \
(EPI) st8 [dst1]=tmp,8; \
(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
3: \
(p16) mov val1[0]=r0; \
- br.ctop.dptk.few 2b; \
+ br.ctop.dptk 2b; \
;; \
- br.cond.sptk.few failure_in2
+ br.cond.sptk.many .failure_in2
//
// Since the instruction 'shrp' requires a fixed 128-bit value
@@ -301,25 +301,25 @@
mov ar.lc=len1
;;
5:
- EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// Beginning of long memcpy (i.e. > 16 bytes)
//
-long_copy_user:
+.long_copy_user:
tbit.nz p6,p7=src1,0 // odd alignment
and tmp=7,tmp
;;
cmp.eq p10,p8=r0,tmp
mov len1=len // copy because of rotation
-(p8) br.cond.dpnt.few diff_align_copy_user
+(p8) br.cond.dpnt .diff_align_copy_user
;;
// At this point we know we have more than 16 bytes to copy
// and also that both src and dest have the same alignment
@@ -327,11 +327,11 @@
// forward slowly until we reach 16byte alignment: no need to
// worry about reaching the end of buffer.
//
- EX(failure_in1,(p6) ld1 val1[0]=[src1],1) // 1-byte aligned
+ EX(.failure_in1,(p6) ld1 val1[0]=[src1],1) // 1-byte aligned
(p6) adds len1=-1,len1;;
tbit.nz p7,p0=src1,1
;;
- EX(failure_in1,(p7) ld2 val1[1]=[src1],2) // 2-byte aligned
+ EX(.failure_in1,(p7) ld2 val1[1]=[src1],2) // 2-byte aligned
(p7) adds len1=-2,len1;;
tbit.nz p8,p0=src1,2
;;
@@ -339,28 +339,28 @@
// Stop bit not required after ld4 because if we fail on ld4
// we have never executed the ld1, therefore st1 is not executed.
//
- EX(failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
+ EX(.failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
;;
- EX(failure_out,(p6) st1 [dst1]=val1[0],1)
+ EX(.failure_out,(p6) st1 [dst1]=val1[0],1)
tbit.nz p9,p0=src1,3
;;
//
// Stop bit not required after ld8 because if we fail on ld8
// we have never executed the ld2, therefore st2 is not executed.
//
- EX(failure_in1,(p9) ld8 val2[1]=[src1],8) // 8-byte aligned
- EX(failure_out,(p7) st2 [dst1]=val1[1],2)
+ EX(.failure_in1,(p9) ld8 val2[1]=[src1],8) // 8-byte aligned
+ EX(.failure_out,(p7) st2 [dst1]=val1[1],2)
(p8) adds len1=-4,len1
;;
- EX(failure_out, (p8) st4 [dst1]=val2[0],4)
+ EX(.failure_out, (p8) st4 [dst1]=val2[0],4)
(p9) adds len1=-8,len1;;
shr.u cnt=len1,4 // number of 128-bit (2x64bit) words
;;
- EX(failure_out, (p9) st8 [dst1]=val2[1],8)
+ EX(.failure_out, (p9) st8 [dst1]=val2[1],8)
tbit.nz p6,p0=len1,3
cmp.eq p7,p0=r0,cnt
adds tmp=-1,cnt // br.ctop is repeat/until
-(p7) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p7) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds src2=8,src1
adds dst2=8,dst1
@@ -370,12 +370,12 @@
// 16bytes/iteration
//
2:
- EX(failure_in3,(p16) ld8 val1[0]=[src1],16)
+ EX(.failure_in3,(p16) ld8 val1[0]=[src1],16)
(p16) ld8 val2[0]=[src2],16
- EX(failure_out, (EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16)
+ EX(.failure_out, (EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16)
(EPI) st8 [dst2]=val2[PIPE_DEPTH-1],16
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;; // RAW on src1 when fall through from loop
//
// Tail correction based on len only
@@ -384,29 +384,28 @@
// is 16 byte aligned AND we have less than 16 bytes to copy.
//
.dotail:
- EX(failure_in1,(p6) ld8 val1[0]=[src1],8) // at least 8 bytes
+ EX(.failure_in1,(p6) ld8 val1[0]=[src1],8) // at least 8 bytes
tbit.nz p7,p0=len1,2
;;
- EX(failure_in1,(p7) ld4 val1[1]=[src1],4) // at least 4 bytes
+ EX(.failure_in1,(p7) ld4 val1[1]=[src1],4) // at least 4 bytes
tbit.nz p8,p0=len1,1
;;
- EX(failure_in1,(p8) ld2 val2[0]=[src1],2) // at least 2 bytes
+ EX(.failure_in1,(p8) ld2 val2[0]=[src1],2) // at least 2 bytes
tbit.nz p9,p0=len1,0
;;
- EX(failure_out, (p6) st8 [dst1]=val1[0],8)
+ EX(.failure_out, (p6) st8 [dst1]=val1[0],8)
;;
- EX(failure_in1,(p9) ld1 val2[1]=[src1]) // only 1 byte left
+ EX(.failure_in1,(p9) ld1 val2[1]=[src1]) // only 1 byte left
mov ar.lc=saved_lc
;;
- EX(failure_out,(p7) st4 [dst1]=val1[1],4)
+ EX(.failure_out,(p7) st4 [dst1]=val1[1],4)
mov pr=saved_pr,0xffffffffffff0000
;;
- EX(failure_out, (p8) st2 [dst1]=val2[0],2)
+ EX(.failure_out, (p8) st2 [dst1]=val2[0],2)
mov ar.pfs=saved_pfs
;;
- EX(failure_out, (p9) st1 [dst1]=val2[1])
- br.ret.dptk.few rp
-
+ EX(.failure_out, (p9) st1 [dst1]=val2[1])
+ br.ret.sptk.many rp
//
@@ -433,32 +432,32 @@
// pipeline going. We can't really do this inline because
// p16 is always reset to 1 when lc > 0.
//
-failure_in_pipe1:
+.failure_in_pipe1:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
1:
(p16) mov val1[0]=r0
(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
- br.ctop.dptk.few 1b
+ br.ctop.dptk 1b
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// This is the case where the byte by byte copy fails on the load
// when we copy the head. We need to finish the pipeline and copy
// zeros for the rest of the destination. Since this happens
// at the top we still need to fill the body and tail.
-failure_in_pipe2:
+.failure_in_pipe2:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
2:
(p16) mov val1[0]=r0
(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;;
sub len=enddst,dst1,1 // precompute len
- br.cond.dptk.few failure_in1bis
+ br.cond.dptk.many .failure_in1bis
;;
//
@@ -533,9 +532,7 @@
// This means that we are in a situation similar to a fault in the
// head part. That's nice!
//
-failure_in1:
-// sub ret0=enddst,dst1 // number of bytes to zero, i.e. not copied
-// sub len=enddst,dst1,1
+.failure_in1:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
sub len=endsrc,src1,1
//
@@ -546,18 +543,17 @@
// calling side.
//
;;
-failure_in1bis: // from (failure_in3)
+.failure_in1bis: // from (.failure_in3)
mov ar.lc=len // Continue with a stupid byte store.
;;
5:
st1 [dst1]=r0,1
- br.cloop.dptk.few 5b
+ br.cloop.dptk 5b
;;
-skip_loop:
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// Here we simply restart the loop but instead
@@ -569,7 +565,7 @@
// we MUST use src1/endsrc here and not dst1/enddst because
// of the pipeline effect.
//
-failure_in3:
+.failure_in3:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
;;
2:
@@ -577,36 +573,36 @@
(p16) mov val2[0]=r0
(EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16
(EPI) st8 [dst2]=val2[PIPE_DEPTH-1],16
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;;
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
-(p6) br.cond.dptk.few failure_in1bis
+(p6) br.cond.dptk .failure_in1bis
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
-failure_in2:
+.failure_in2:
sub ret0=endsrc,src1
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
-(p6) br.cond.dptk.few failure_in1bis
+(p6) br.cond.dptk .failure_in1bis
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// handling of failures on stores: that's the easy part
//
-failure_out:
+.failure_out:
sub ret0=enddst,dst1
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(__copy_user)
diff -urN linux-davidm/arch/ia64/lib/do_csum.S linux-2.4.10-lia/arch/ia64/lib/do_csum.S
--- linux-davidm/arch/ia64/lib/do_csum.S Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/lib/do_csum.S Mon Sep 24 21:56:08 2001
@@ -129,7 +129,7 @@
;; // avoid WAW on CFM
mov tmp3=0x7 // a temporary mask/value
add tmp1=buf,len // last byte's address
-(p6) br.ret.spnt.few rp // return if true (hope we can avoid that)
+(p6) br.ret.spnt.many rp // return if true (hope we can avoid that)
and firstoff=7,buf // how many bytes off for first1 element
tbit.nz p15,p0=buf,0 // is buf an odd address ?
@@ -180,9 +180,9 @@
cmp.ltu p6,p0=result1[0],word1[0] // check the carry
;;
(p6) adds result1[0]=1,result1[0]
-(p8) br.cond.dptk.few do_csum_exit // if (within an 8-byte word)
+(p8) br.cond.dptk .do_csum_exit // if (within an 8-byte word)
;;
-(p11) br.cond.dptk.few do_csum16 // if (count is even)
+(p11) br.cond.dptk .do_csum16 // if (count is even)
;;
// Here count is odd.
ld8 word1[1]=[first1],8 // load an 8-byte word
@@ -195,14 +195,14 @@
;;
(p6) adds result1[0]=1,result1[0]
;;
-(p9) br.cond.sptk.few do_csum_exit // if (count = 1) exit
+(p9) br.cond.sptk .do_csum_exit // if (count = 1) exit
// Fall through to calculate the checksum, feeding result1[0] as
// the initial value in result1[0].
;;
//
// Calculate the checksum loading two 8-byte words per loop.
//
-do_csum16:
+.do_csum16:
mov saved_lc=ar.lc
shr.u count=count,1 // we do 16 bytes per loop
;;
@@ -224,7 +224,7 @@
;;
add first2=8,first1
;;
-(p9) br.cond.sptk.few do_csum_exit
+(p9) br.cond.sptk .do_csum_exit
;;
nop.m 0
nop.i 0
@@ -240,7 +240,7 @@
2:
(p16) ld8 word1[0]=[first1],16
(p16) ld8 word2[0]=[first2],16
- br.ctop.sptk.few 1b
+ br.ctop.sptk 1b
;;
// Since len is a 32-bit value, carry cannot be larger than
// a 64-bit value.
@@ -262,7 +262,7 @@
;;
(p6) adds result1[0]=1,result1[0]
;;
-do_csum_exit:
+.do_csum_exit:
movl tmp3=0xffffffff
;;
// XXX Fixme
@@ -298,7 +298,7 @@
;;
mov ar.lc=saved_lc
(p15) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
// I (Jun Nakajima) wrote an equivalent code (see below), but it was
// not much better than the original. So keep the original there so that
@@ -330,6 +330,6 @@
//(p15) mux1 ret0=ret0,@rev // reverse word
// ;;
//(p15) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
-// br.ret.sptk.few rp
+// br.ret.sptk.many rp
END(do_csum)
diff -urN linux-davidm/arch/ia64/lib/idiv32.S linux-2.4.10-lia/arch/ia64/lib/idiv32.S
--- linux-davidm/arch/ia64/lib/idiv32.S Mon Oct 9 17:54:56 2000
+++ linux-2.4.10-lia/arch/ia64/lib/idiv32.S Mon Sep 24 21:56:20 2001
@@ -79,5 +79,5 @@
;;
#endif
getf.sig r8 = f6 // transfer result to result register
- br.ret.sptk rp
+ br.ret.sptk.many rp
END(NAME)
diff -urN linux-davidm/arch/ia64/lib/idiv64.S linux-2.4.10-lia/arch/ia64/lib/idiv64.S
--- linux-davidm/arch/ia64/lib/idiv64.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/lib/idiv64.S Mon Sep 24 21:56:27 2001
@@ -89,5 +89,5 @@
#endif
getf.sig r8 = f17 // transfer result to result register
ldf.fill f17 = [sp]
- br.ret.sptk rp
+ br.ret.sptk.many rp
END(NAME)
diff -urN linux-davidm/arch/ia64/lib/memcpy.S linux-2.4.10-lia/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/lib/memcpy.S Mon Sep 24 21:56:35 2001
@@ -9,9 +9,9 @@
* Output:
* no return value
*
- * Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
@@ -97,8 +97,8 @@
cmp.ne p6,p0=t0,r0
mov src=in1 // copy because of rotation
-(p7) br.cond.spnt.few memcpy_short
-(p6) br.cond.spnt.few memcpy_long
+(p7) br.cond.spnt.few .memcpy_short
+(p6) br.cond.spnt.few .memcpy_long
;;
nop.m 0
;;
@@ -133,7 +133,7 @@
* issues, we want to avoid read-modify-write of entire words.
*/
.align 32
-memcpy_short:
+.memcpy_short:
adds cnt=-1,in2 // br.ctop is repeat/until
mov ar.ec=MEM_LAT
brp.loop.imp 1f, 2f
@@ -196,7 +196,7 @@
#define LOG_LOOP_SIZE 6
-memcpy_long:
+.memcpy_long:
alloc t3=ar.pfs,3,Nrot,0,Nrot // resize register frame
and t0=-8,src // t0 = src & ~7
and t2=7,src // t2 = src & 7
@@ -241,7 +241,7 @@
mov t4=ip
} ;;
and src2=-8,src // align source pointer
- adds t4=memcpy_loops-1b,t4
+ adds t4=.memcpy_loops-1b,t4
mov ar.ec=N
and t0=7,src // t0 = src & 7
@@ -260,7 +260,7 @@
mov pr=cnt,0x38 // set (p5,p4,p3) to # of bytes last-word bytes to copy
mov ar.lc=t2
;;
- nop.m 0
+ nop.m 0
;;
nop.m 0
nop.i 0
@@ -272,7 +272,7 @@
br.sptk.few b6
;;
-memcpy_tail:
+.memcpy_tail:
// At this point, (p5,p4,p3) are set to the number of bytes left to copy (which is
// less than 8) and t0 contains the last few bytes of the src buffer:
(p5) st4 [dst]=t0,4
@@ -305,8 +305,8 @@
ld8 val[N-1]=[src_end]; /* load last word (may be same as val[N]) */ \
;; \
shrp t0=val[N-1],val[N-index],shift; \
- br memcpy_tail
-memcpy_loops:
+ br .memcpy_tail
+.memcpy_loops:
COPY(0, 1) /* no point special casing this---it doesn't go any faster without shrp */
COPY(8, 0)
COPY(16, 0)
diff -urN linux-davidm/arch/ia64/lib/memset.S linux-2.4.10-lia/arch/ia64/lib/memset.S
--- linux-davidm/arch/ia64/lib/memset.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/lib/memset.S Mon Sep 24 21:56:50 2001
@@ -43,11 +43,11 @@
adds tmp=-1,len // br.ctop is repeat/until
tbit.nz p6,p0=buf,0 // odd alignment
-(p8) br.ret.spnt.few rp
+(p8) br.ret.spnt.many rp
cmp.lt p7,p0=16,len // if len > 16 then long memset
mux1 val=val,@brcst // prepare value
-(p7) br.cond.dptk.few long_memset
+(p7) br.cond.dptk .long_memset
;;
mov ar.lc=tmp // initialize lc for small count
;; // avoid RAW and WAW on ar.lc
@@ -57,11 +57,11 @@
;; // avoid RAW on ar.lc
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.sptk.few rp // end of short memset
+ br.ret.sptk.many rp // end of short memset
// at this point we know we have more than 16 bytes to copy
// so we focus on alignment
-long_memset:
+.long_memset:
(p6) st1 [buf]=val,1 // 1-byte aligned
(p6) adds len=-1,len;; // sync because buf is modified
tbit.nz p6,p0=buf,1
@@ -80,7 +80,7 @@
;;
cmp.eq p6,p0=r0,cnt
adds tmp=-1,cnt
-(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p6) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds buf2=8,buf // setup second base pointer
mov ar.lc=tmp
@@ -104,5 +104,5 @@
mov ar.lc=saved_lc
;;
(p6) st1 [buf]=val // only 1 byte left
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(memset)
diff -urN linux-davidm/arch/ia64/lib/strlen.S linux-2.4.10-lia/arch/ia64/lib/strlen.S
--- linux-davidm/arch/ia64/lib/strlen.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/lib/strlen.S Mon Sep 24 21:56:58 2001
@@ -11,7 +11,7 @@
* does not count the \0
*
* Copyright (C) 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 09/24/99 S.Eranian add speculation recovery code
*/
@@ -116,7 +116,7 @@
ld8.s w[0]=[src],8 // speculatively load next to next
cmp.eq.and p6,p0=8,val1 // p6 = p6 and val1=8
cmp.eq.and p6,p0=8,val2 // p6 = p6 and mask=8
-(p6) br.wtop.dptk.few 1b // loop until p6 = 0
+(p6) br.wtop.dptk 1b // loop until p6 = 0
;;
//
// We must try the recovery code iff
@@ -127,14 +127,14 @@
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
tnat.nz p6,p7=val1 // test NaT on val1
-(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT
;;
//
// if we come here p7 is true, i.e., initialized for // cmp
//
cmp.eq.and p7,p0=8,val1// val1=8?
tnat.nz.and p7,p0=val2 // test NaT if val2
-(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT
;;
(p8) mov val1=val2 // the other test got us out of the loop
(p8) adds src=-16,src // correct position when 3 ahead
@@ -146,7 +146,7 @@
;;
sub ret0=ret0,tmp // adjust
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of normal execution
+ br.ret.sptk.many rp // end of normal execution
//
// Outlined recovery code when speculation failed
@@ -165,7 +165,7 @@
// - today we restart from the beginning of the string instead
// of trying to continue where we left off.
//
-recover:
+.recover:
ld8 val=[base],8 // will fail if unrecoverable fault
;;
or val=val,mask // remask first bytes
@@ -180,7 +180,7 @@
czx1.r val1=val // search 0 byte from right
;;
cmp.eq p6,p0=8,val1 // val1=8 ?
-(p6) br.wtop.dptk.few 2b // loop until p6 = 0
+(p6) br.wtop.dptk 2b // loop until p6 = 0
;; // (avoid WAW on p63)
sub ret0=base,orig // distance from base
sub tmp=8,val1
@@ -188,5 +188,5 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of successful recovery code
+ br.ret.sptk.many rp // end of successful recovery code
END(strlen)
diff -urN linux-davidm/arch/ia64/lib/strlen_user.S linux-2.4.10-lia/arch/ia64/lib/strlen_user.S
--- linux-davidm/arch/ia64/lib/strlen_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/lib/strlen_user.S Mon Sep 24 21:57:08 2001
@@ -8,8 +8,8 @@
* ret0 0 in case of fault, strlen(buffer)+1 otherwise
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 01/19/99 S.Eranian heavily enhanced version (see details below)
* 09/24/99 S.Eranian added speculation recovery code
@@ -108,7 +108,7 @@
mov ar.ec=r0 // clear epilogue counter (saved in ar.pfs)
;;
add base=-16,src // keep track of aligned base
- chk.s v[1], recover // if already NaT, then directly skip to recover
+ chk.s v[1], .recover // if already NaT, then directly skip to recover
or v[1]=v[1],mask // now we have a safe initial byte pattern
;;
1:
@@ -130,14 +130,14 @@
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
tnat.nz p6,p7=val1 // test NaT on val1
-(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT
;;
//
// if we come here p7 is true, i.e., initialized for // cmp
//
cmp.eq.and p7,p0=8,val1// val1=8?
tnat.nz.and p7,p0=val2 // test NaT if val2
-(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT
;;
(p8) mov val1=val2 // val2 contains the value
(p8) adds src=-16,src // correct position when 3 ahead
@@ -149,7 +149,7 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of normal execution
+ br.ret.sptk.many rp // end of normal execution
//
// Outlined recovery code when speculation failed
@@ -162,7 +162,7 @@
// - today we restart from the beginning of the string instead
// of trying to continue where we left off.
//
-recover:
+.recover:
EX(.Lexit1, ld8 val=[base],8) // load the initial bytes
;;
or val=val,mask // remask first bytes
@@ -185,7 +185,7 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of successful recovery code
+ br.ret.sptk.many rp // end of successful recovery code
//
// We failed even on the normal load (called from exception handler)
@@ -194,5 +194,5 @@
mov ret0=0
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strlen_user)
diff -urN linux-davidm/arch/ia64/lib/strncpy_from_user.S linux-2.4.10-lia/arch/ia64/lib/strncpy_from_user.S
--- linux-davidm/arch/ia64/lib/strncpy_from_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/lib/strncpy_from_user.S Mon Sep 24 21:57:19 2001
@@ -40,5 +40,5 @@
(p6) mov r8=in2 // buffer filled up---return buffer length
(p7) sub r8=in1,r9,1 // return string length (excluding NUL character)
[.Lexit:]
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strncpy_from_user)
diff -urN linux-davidm/arch/ia64/lib/strnlen_user.S linux-2.4.10-lia/arch/ia64/lib/strnlen_user.S
--- linux-davidm/arch/ia64/lib/strnlen_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/lib/strnlen_user.S Mon Sep 24 21:57:28 2001
@@ -33,7 +33,7 @@
add r9=1,r9
;;
cmp.eq p6,p0=r8,r0
-(p6) br.dpnt.few .Lexit
+(p6) br.cond.dpnt .Lexit
br.cloop.dptk.few .Loop1
add r9=1,in1 // NUL not found---return N+1
@@ -41,5 +41,5 @@
.Lexit:
mov r8=r9
mov ar.lc=r16 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strnlen_user)
diff -urN linux-davidm/arch/ia64/mm/fault.c linux-2.4.10-lia/arch/ia64/mm/fault.c
--- linux-davidm/arch/ia64/mm/fault.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/arch/ia64/mm/fault.c Mon Sep 24 21:58:13 2001
@@ -1,8 +1,8 @@
/*
* MMU fault handling support.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/sched.h>
#include <linux/kernel.h>
@@ -16,7 +16,7 @@
#include <asm/uaccess.h>
#include <asm/hardirq.h>
-extern void die_if_kernel (char *, struct pt_regs *, long);
+extern void die (char *, struct pt_regs *, long);
/*
* This routine is analogous to expand_stack() but instead grows the
@@ -46,16 +46,15 @@
void
ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
{
+ int signal = SIGSEGV, code = SEGV_MAPERR;
+ struct vm_area_struct *vma, *prev_vma;
struct mm_struct *mm = current->mm;
struct exception_fixup fix;
- struct vm_area_struct *vma, *prev_vma;
struct siginfo si;
- int signal = SIGSEGV;
unsigned long mask;
/*
- * If we're in an interrupt or have no user
- * context, we must not take the fault..
+ * If we're in an interrupt or have no user context, we must not take the fault..
*/
if (in_interrupt() || !mm)
goto no_context;
@@ -71,6 +70,8 @@
goto check_expansion;
good_area:
+ code = SEGV_ACCERR;
+
/* OK, we've got a good vm_area for this memory area. Check the access permissions: */
# define VM_READ_BIT 0
@@ -89,12 +90,13 @@
if ((vma->vm_flags & mask) != mask)
goto bad_area;
+ survive:
/*
* If for any reason at all we couldn't handle the fault, make
* sure we exit gracefully rather than endlessly redo the
* fault.
*/
- switch (handle_mm_fault(mm, vma, address, mask) != 0) {
+ switch (handle_mm_fault(mm, vma, address, mask)) {
case 1:
++current->min_flt;
break;
@@ -147,7 +149,7 @@
if (user_mode(regs)) {
si.si_signo = signal;
si.si_errno = 0;
- si.si_code = SI_KERNEL;
+ si.si_code = code;
si.si_addr = (void *) address;
force_sig_info(signal, &si, current);
return;
@@ -174,17 +176,29 @@
}
/*
- * Oops. The kernel tried to access some bad page. We'll have
- * to terminate things with extreme prejudice.
+ * Oops. The kernel tried to access some bad page. We'll have to terminate things
+ * with extreme prejudice.
*/
- printk(KERN_ALERT "Unable to handle kernel paging request at "
- "virtual address %016lx\n", address);
- die_if_kernel("Oops", regs, isr);
+ bust_spinlocks(1);
+
+ if (address < PAGE_SIZE)
+ printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk(KERN_ALERT "Unable to handle kernel paging request at "
+ "virtual address %016lx\n", address);
+ die("Oops", regs, isr);
+ bust_spinlocks(0);
do_exit(SIGKILL);
return;
out_of_memory:
up_read(&mm->mmap_sem);
+	if (current->pid == 1) {
+ current->policy |= SCHED_YIELD;
+ schedule();
+ down_read(&mm->mmap_sem);
+ goto survive;
+ }
printk("VM: killing process %s\n", current->comm);
if (user_mode(regs))
do_exit(SIGKILL);
diff -urN linux-davidm/arch/ia64/mm/tlb.c linux-2.4.10-lia/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/arch/ia64/mm/tlb.c Mon Sep 24 21:59:16 2001
@@ -2,7 +2,7 @@
* TLB support routines.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 08/02/00 A. Mallick <asit.k.mallick@intel.com>
* Modified RID allocation for SMP
@@ -41,11 +41,6 @@
};
/*
- * Seralize usage of ptc.g
- */
-spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED; /* see <asm/pgtable.h> */
-
-/*
* Acquire the ia64_ctx.lock before calling this function!
*/
void
@@ -84,6 +79,26 @@
flush_tlb_all();
}
+static void
+ia64_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits)
+{
+ static spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;
+
+ /* HW requires global serialization of ptc.ga. */
+ spin_lock(&ptcg_lock);
+ {
+ do {
+ /*
+ * Flush ALAT entries also.
+ */
+ asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2)
+ : "memory");
+ start += (1UL << nbits);
+ } while (start < end);
+ }
+ spin_unlock(&ptcg_lock);
+}
+
void
__flush_tlb_all (void)
{
@@ -144,19 +159,15 @@
}
start &= ~((1UL << nbits) - 1);
- spin_lock(&ptcg_lock);
do {
# ifdef CONFIG_SMP
- /*
- * Flush ALAT entries also.
- */
- asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2) : "memory");
+ platform_global_tlb_purge(start, end, nbits);
# else
asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
# endif
start += (1UL << nbits);
} while (start < end);
- spin_unlock(&ptcg_lock);
+
ia64_insn_group_barrier();
ia64_srlz_i(); /* srlz.i implies srlz.d */
ia64_insn_group_barrier();
diff -urN linux-davidm/arch/ia64/tools/print_offsets.c linux-2.4.10-lia/arch/ia64/tools/print_offsets.c
--- linux-davidm/arch/ia64/tools/print_offsets.c Tue Jul 31 10:30:09 2001
+++ linux-2.4.10-lia/arch/ia64/tools/print_offsets.c Mon Sep 24 21:59:30 2001
@@ -61,7 +61,7 @@
{ "IA64_TASK_THREAD_SIGMASK_OFFSET",offsetof (struct task_struct, thread.un.sigmask) },
#endif
#ifdef CONFIG_PERFMON
- { "IA64_TASK_PFM_NOTIFY_OFFSET", offsetof(struct task_struct, thread.pfm_pend_notify) },
+ { "IA64_TASK_PFM_MUST_BLOCK_OFFSET",offsetof(struct task_struct, thread.pfm_must_block) },
#endif
{ "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
{ "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) },
@@ -165,17 +165,18 @@
{ "IA64_SIGCONTEXT_FR6_OFFSET", offsetof (struct sigcontext, sc_fr[6]) },
{ "IA64_SIGCONTEXT_PR_OFFSET", offsetof (struct sigcontext, sc_pr) },
{ "IA64_SIGCONTEXT_R12_OFFSET", offsetof (struct sigcontext, sc_gr[12]) },
+ { "IA64_SIGCONTEXT_RBS_BASE_OFFSET",offsetof (struct sigcontext, sc_rbs_base) },
+ { "IA64_SIGCONTEXT_LOADRS_OFFSET", offsetof (struct sigcontext, sc_loadrs) },
{ "IA64_SIGFRAME_ARG0_OFFSET", offsetof (struct sigframe, arg0) },
{ "IA64_SIGFRAME_ARG1_OFFSET", offsetof (struct sigframe, arg1) },
{ "IA64_SIGFRAME_ARG2_OFFSET", offsetof (struct sigframe, arg2) },
- { "IA64_SIGFRAME_RBS_BASE_OFFSET", offsetof (struct sigframe, rbs_base) },
{ "IA64_SIGFRAME_HANDLER_OFFSET", offsetof (struct sigframe, handler) },
{ "IA64_SIGFRAME_SIGCONTEXT_OFFSET", offsetof (struct sigframe, sc) },
{ "IA64_CLONE_VFORK", CLONE_VFORK },
{ "IA64_CLONE_VM", CLONE_VM },
{ "IA64_CPU_IRQ_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.irq_count) },
{ "IA64_CPU_BH_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.bh_count) },
- { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET", offsetof (struct cpuinfo_ia64, phys_stacked_size_p8) },
+ { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET",offsetof (struct cpuinfo_ia64, phys_stacked_size_p8)},
};
static const char *tabs = "\t\t\t\t\t\t\t\t\t\t";
diff -urN linux-davidm/drivers/acpi/Makefile linux-2.4.10-lia/drivers/acpi/Makefile
--- linux-davidm/drivers/acpi/Makefile Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/Makefile Mon Sep 24 21:59:51 2001
@@ -39,7 +39,7 @@
obj-$(CONFIG_ACPI) += os.o acpi_ksyms.o
obj-$(CONFIG_ACPI) += $(foreach dir,$(acpi-subdirs),$(dir)/$(dir).o)
ifdef CONFIG_ACPI_KERNEL_CONFIG
- obj-$(CONFIG_ACPI) += acpiconf.o osconf.o
+ obj-$(CONFIG_ACPI) += acpiconf.o osconf.o driver.o
endif
obj-$(CONFIG_ACPI) += driver.o
diff -urN linux-davidm/drivers/acpi/acpiconf.c linux-2.4.10-lia/drivers/acpi/acpiconf.c
--- linux-davidm/drivers/acpi/acpiconf.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/acpiconf.c Mon Sep 24 22:00:01 2001
@@ -30,12 +30,12 @@
static int acpi_cf_initialized __initdata = 0;
-ACPI_STATUS __init
+acpi_status __init
acpi_cf_init (
void * rsdp
)
{
- ACPI_STATUS status;
+ acpi_status status;
acpi_os_bind_osd(ACPI_CF_PHASE_BOOTTIME);
@@ -44,9 +44,9 @@
printk ("Acpi cfg:initialize_subsystem error=0x%x\n", status);
return status;
}
- dprintk(("Acpi cfg:initializ_subsysteme pass\n"));
+ dprintk(("Acpi cfg:initialize_subsystem pass\n"));
- status = acpi_load_tables ((u64)rsdp);
+ status = acpi_load_tables ();
if (ACPI_FAILURE(status)) {
printk ("Acpi cfg:load firmware tables error=0x%x\n", status);
acpi_terminate();
@@ -68,13 +68,15 @@
}
-ACPI_STATUS __init
+acpi_status __init
acpi_cf_terminate ( void )
{
- ACPI_STATUS status;
+ acpi_status status;
- if (! ACPI_CF_INITIALIZED())
+ if (! ACPI_CF_INITIALIZED()) {
+ acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
return AE_ERROR;
+ }
status = acpi_disable ();
if (ACPI_FAILURE(status)) {
@@ -97,13 +99,13 @@
}
-ACPI_STATUS __init
+acpi_status __init
acpi_cf_get_pci_vectors (
struct pci_vector_struct **vectors,
int *num_pci_vectors
)
{
- ACPI_STATUS status;
+ acpi_status status;
void *prts;
if (! ACPI_CF_INITIALIZED()) {
@@ -134,28 +136,28 @@
}
-static PCI_ROUTING_TABLE *pci_routing_tables[PCI_MAX_BUS] __initdata = {NULL};
+static pci_routing_table *pci_routing_tables[PCI_MAX_BUS] __initdata = {NULL};
typedef struct _acpi_rpb {
NATIVE_UINT rpb_busnum;
NATIVE_UINT lastbusnum;
- ACPI_HANDLE rpb_handle;
+ acpi_handle rpb_handle;
} acpi_rpb_t;
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_evaluate_method (
- ACPI_HANDLE handle,
+ acpi_handle handle,
UINT8 *method_name,
NATIVE_UINT *nuint
)
{
UINT32 tnuint = 0;
- ACPI_STATUS status;
+ acpi_status status;
- ACPI_BUFFER ret_buf;
- ACPI_OBJECT *ext_obj;
+ acpi_buffer ret_buf;
+ acpi_object *ext_obj;
UINT8 buf[PATHNAME_MAX];
@@ -170,7 +172,7 @@
printk("Acpi cfg: %s fail=0x%x\n", method_name, status);
}
} else {
- ext_obj = (ACPI_OBJECT *) ret_buf.pointer;
+ ext_obj = (acpi_object *) ret_buf.pointer;
switch (ext_obj->type) {
case ACPI_TYPE_INTEGER:
@@ -188,14 +190,14 @@
}
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_evaluate_PRT (
- ACPI_HANDLE handle,
- PCI_ROUTING_TABLE **prt
+ acpi_handle handle,
+ pci_routing_table **prt
)
{
- ACPI_BUFFER acpi_buffer;
- ACPI_STATUS status;
+ acpi_buffer acpi_buffer;
+ acpi_status status;
acpi_buffer.length = 0;
acpi_buffer.pointer = NULL;
@@ -213,7 +215,7 @@
return status;
}
- *prt = (PCI_ROUTING_TABLE *) acpi_os_callocate (acpi_buffer.length);
+ *prt = (pci_routing_table *) acpi_os_callocate (acpi_buffer.length);
if (!*prt) {
printk("Acpi cfg: callocate %d bytes for _PRT fail\n",
acpi_buffer.length);
@@ -230,23 +232,22 @@
return status;
}
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_get_root_pci_callback (
- ACPI_HANDLE handle,
+ acpi_handle handle,
UINT32 Level,
void *context,
void **retval
)
{
NATIVE_UINT busnum = 0;
- ACPI_STATUS status;
+ acpi_status status;
acpi_rpb_t rpb;
- PCI_ROUTING_TABLE *prt;
+ pci_routing_table *prt;
UINT8 path_name[PATHNAME_MAX];
-
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
- ACPI_BUFFER ret_buf;
+ acpi_buffer ret_buf;
ret_buf.length = PATHNAME_MAX;
ret_buf.pointer = (void *) path_name;
@@ -313,23 +314,24 @@
/*
* handle _PRTs of immediate P2Ps of root pci.
*/
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_associate_prt_to_bus (
- ACPI_HANDLE handle,
+ acpi_handle handle,
acpi_rpb_t *rpb,
NATIVE_UINT *retbusnum,
NATIVE_UINT depth
)
{
- ACPI_STATUS status;
+ acpi_status status;
UINT32 segbus;
NATIVE_UINT devfn;
UINT8 bn;
UINT8 path_name[PATHNAME_MAX];
+ acpi_pci_id pci_id;
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
- ACPI_BUFFER ret_buf;
+ acpi_buffer ret_buf;
ret_buf.length = PATHNAME_MAX;
ret_buf.pointer = (void *) path_name;
@@ -356,10 +358,13 @@
* access pci config space for bus number
* segbus = from rpb, devfn = from _ADR
*/
- segbus = (UINT32)(rpb->rpb_busnum & 0xFFFFFFFF);
+ pci_id.segment = 0;
+ pci_id.bus = (u16)(rpb->rpb_busnum & 0xffffffff);
+ pci_id.device = (u16)((devfn >> 16) & 0xffff);
+ pci_id.function = (u16)(devfn & 0xffff);
- status = acpi_os_read_pci_cfg_byte(segbus,
- (UINT32)devfn, PCI_PRIMARY_BUS, &bn);
+ status = acpi_os_read_pci_configuration(&pci_id, PCI_PRIMARY_BUS,
+ &bn, 1);
if (ACPI_FAILURE(status)) {
*retbusnum = rpb->rpb_busnum + 1;
printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
@@ -372,8 +377,8 @@
dprintk(("Acpi cfg:%s pribus %d\n", path_name, bn));
- status = acpi_os_read_pci_cfg_byte(segbus,
- (UINT32)devfn, PCI_SECONDARY_BUS, &bn);
+ status = acpi_os_read_pci_configuration(&pci_id, PCI_SECONDARY_BUS,
+ &bn, 1);
if (ACPI_FAILURE(status)) {
*retbusnum = rpb->rpb_busnum + 1;
printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
@@ -390,12 +395,12 @@
}
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_get_prt (
void **prts
)
{
- ACPI_STATUS status;
+ acpi_status status;
status = acpi_get_devices ( PCI_ROOT_HID_STRING,
acpi_cf_get_root_pci_callback,
@@ -411,23 +416,23 @@
return status;
}
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_get_prt_callback (
- ACPI_HANDLE handle,
+ acpi_handle handle,
UINT32 Level,
void *context,
void **retval
)
{
- PCI_ROUTING_TABLE *prt;
+ pci_routing_table *prt;
NATIVE_UINT busnum = 0;
NATIVE_UINT temp = 0x0F;
- ACPI_STATUS status;
+ acpi_status status;
UINT8 path_name[PATHNAME_MAX];
#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
- ACPI_BUFFER ret_buf;
+ acpi_buffer ret_buf;
ret_buf.length = PATHNAME_MAX;
ret_buf.pointer = (void *) path_name;
@@ -487,7 +492,7 @@
static void __init
acpi_cf_add_to_pci_routing_tables (
NATIVE_UINT busnum,
- PCI_ROUTING_TABLE *prt
+ pci_routing_table *prt
)
{
if ( busnum >= PCI_MAX_BUS ) {
@@ -507,7 +512,7 @@
#define DUMPVECTOR(pv) printk("PCI bus=0x%x id=0x%x pin=0x%x irq=0x%x\n", pv->bus, pv->pci_id, pv->pin, pv->irq);
-static ACPI_STATUS __init
+static acpi_status __init
acpi_cf_convert_prt_to_vectors (
void *prts,
struct pci_vector_struct **vectors,
@@ -515,18 +520,18 @@
)
{
struct pci_vector_struct *pvec;
- PCI_ROUTING_TABLE **pprts, *prt, *prtf;
+ pci_routing_table **pprts, *prt, *prtf;
int nvec = 0;
int i;
- pprts = (PCI_ROUTING_TABLE **)prts;
+ pprts = (pci_routing_table **)prts;
for ( i = 0; i < PCI_MAX_BUS; i++) {
prt = *pprts++;
if (prt) {
for ( ; prt->length > 0; nvec++) {
- prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+ prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
}
}
}
@@ -539,7 +544,7 @@
}
pvec = *vectors;
- pprts = (PCI_ROUTING_TABLE **)prts;
+ pprts = (pci_routing_table **)prts;
for ( i = 0; i < PCI_MAX_BUS; i++) {
prt = prtf = *pprts++;
@@ -550,7 +555,7 @@
pvec->pin = (UINT8)prt->pin;
pvec->irq = (UINT8)prt->source_index;
- prt = (PCI_ROUTING_TABLE *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+ prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
}
acpi_os_free((void *)prtf);
}
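For readers skimming the patch, here is a minimal user-space sketch of the interface change above: the old per-width helpers (`acpi_os_read_pci_cfg_byte` etc.) collapse into one width-dispatched call taking an `acpi_pci_id`. Only the field names mirror the patch; the fake config space and `demo_read_secondary_bus` are illustrative, not kernel code.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the ACPI CA acpi_pci_id the patch switches to. */
typedef struct {
	uint16_t segment, bus, device, function;
} acpi_pci_id;

/* Fake 256-byte config space for a single device, for the sketch only. */
static uint8_t cfg_space[256];

/* One width-dispatched entry point, in the spirit of
 * acpi_os_read_pci_configuration(), replacing the byte/word/dword triplet. */
static int read_pci_configuration(const acpi_pci_id *id, uint32_t reg,
				  void *value, uint32_t width)
{
	(void)id;		/* a real version would address the device */
	switch (width) {
	case 8:  *(uint8_t *)value = cfg_space[reg]; break;
	case 16: memcpy(value, &cfg_space[reg], 2);  break;
	case 32: memcpy(value, &cfg_space[reg], 4);  break;
	default: return -1;	/* unsupported width */
	}
	return 0;
}

/* Plant a secondary-bus number and read it back, as the P2P walk does. */
static int demo_read_secondary_bus(void)
{
	acpi_pci_id id = { 0, 0, 3, 1 };
	uint8_t bn = 0;

	cfg_space[0x19] = 0x42;	/* 0x19 is PCI_SECONDARY_BUS */
	if (read_pci_configuration(&id, 0x19, &bn, 8) != 0)
		return -1;
	return bn;
}
```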
diff -urN linux-davidm/drivers/acpi/acpiconf.h linux-2.4.10-lia/drivers/acpi/acpiconf.h
--- linux-davidm/drivers/acpi/acpiconf.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/acpiconf.h Mon Sep 24 22:21:57 2001
@@ -27,14 +27,14 @@
static
-ACPI_STATUS __init
+acpi_status __init
acpi_cf_get_prt (void **prts);
static
-ACPI_STATUS __init
+acpi_status __init
acpi_cf_get_prt_callback (
- ACPI_HANDLE handle,
+ acpi_handle handle,
UINT32 level,
void *context,
void **retval
@@ -45,12 +45,12 @@
void __init
acpi_cf_add_to_pci_routing_tables (
NATIVE_UINT busnum,
- PCI_ROUTING_TABLE *prt
+ pci_routing_table *prt
);
static
-ACPI_STATUS __init
+acpi_status __init
acpi_cf_convert_prt_to_vectors (
void *prts,
struct pci_vector_struct **vectors,
diff -urN linux-davidm/drivers/acpi/driver.c linux-2.4.10-lia/drivers/acpi/driver.c
--- linux-davidm/drivers/acpi/driver.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/driver.c Mon Sep 24 21:30:52 2001
@@ -112,9 +112,7 @@
printk(KERN_INFO "ACPI: Subsystem enabled\n");
-#ifdef CONFIG_PM
pm_active = 1;
-#endif
return 0;
}
@@ -127,9 +125,7 @@
{
acpi_terminate();
-#ifdef CONFIG_PM
pm_active = 0;
-#endif
printk(KERN_ERR "ACPI: Subsystem disabled\n");
}
diff -urN linux-davidm/drivers/acpi/hardware/hwacpi.c linux-2.4.10-lia/drivers/acpi/hardware/hwacpi.c
--- linux-davidm/drivers/acpi/hardware/hwacpi.c Mon Sep 24 15:06:41 2001
+++ linux-2.4.10-lia/drivers/acpi/hardware/hwacpi.c Mon Sep 24 22:00:26 2001
@@ -196,6 +196,7 @@
{
acpi_status status = AE_NO_HARDWARE_RESPONSE;
+ u32 retries = 20;
FUNCTION_TRACE ("Hw_set_mode");
@@ -220,11 +221,14 @@
/* Give the platform some time to react */
- acpi_os_stall (5000);
+ while (retries-- > 0) {
+ acpi_os_stall (5000);
- if (acpi_hw_get_mode () == mode) {
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
- status = AE_OK;
+ if (acpi_hw_get_mode () == mode) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
+ status = AE_OK;
+ break;
+ }
}
return_ACPI_STATUS (status);
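The hwacpi.c change above turns a single fixed 5 ms stall into a bounded poll. A hedged user-space sketch of the same pattern follows; `hw_get_mode`, the poll counter, and the status codes are stand-ins, not the ACPI CA symbols.

```c
#include <stdint.h>

/* Illustrative status codes; the real ones live in the ACPI CA headers. */
enum { AE_OK = 0, AE_NO_HARDWARE_RESPONSE = 1 };

static uint32_t current_mode;
static int polls_until_switch;

/* Stand-in hardware: reports the requested mode only after a few polls. */
static uint32_t hw_get_mode(void)
{
	if (polls_until_switch > 0) {
		polls_until_switch--;
		return 0;
	}
	return current_mode;
}

/* Bounded-retry poll, as in the patched acpi_hw_set_mode(). */
static int hw_set_mode(uint32_t mode)
{
	int status = AE_NO_HARDWARE_RESPONSE;
	uint32_t retries = 20;

	current_mode = mode;		/* pretend we hit the SMI command port */
	polls_until_switch = 3;		/* platform "reacts" on the 4th poll */

	while (retries-- > 0) {
		/* acpi_os_stall(5000) would go here */
		if (hw_get_mode() == mode) {
			status = AE_OK;
			break;
		}
	}
	return status;
}
```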
diff -urN linux-davidm/drivers/acpi/include/actypes.h linux-2.4.10-lia/drivers/acpi/include/actypes.h
--- linux-davidm/drivers/acpi/include/actypes.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/include/actypes.h Mon Sep 24 22:00:40 2001
@@ -60,7 +60,7 @@
typedef int INT32;
typedef unsigned int UINT32;
typedef COMPILER_DEPENDENT_UINT64 UINT64;
-typedef long INT64;
+typedef long INT64;
typedef UINT64 NATIVE_UINT;
typedef INT64 NATIVE_INT;
diff -urN linux-davidm/drivers/acpi/os.c linux-2.4.10-lia/drivers/acpi/os.c
--- linux-davidm/drivers/acpi/os.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/os.c Mon Sep 24 21:30:48 2001
@@ -39,13 +39,10 @@
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/kmod.h>
-#include <linux/irq.h>
#include <linux/delay.h>
#include <asm/io.h>
#include <acpi.h>
-#ifndef CONFIG_ACPI_KERNEL_CONFIG_ONLY
#include "driver.h"
-#endif
#ifdef CONFIG_ACPI_EFI
#include <asm/efi.h>
@@ -53,7 +50,8 @@
#ifdef _IA64
#include <asm/hw_irq.h>
-#endif
+#include <asm/delay.h>
+#endif
#define _COMPONENT ACPI_OS_SERVICES
MODULE_NAME ("os")
@@ -66,7 +64,7 @@
/*****************************************************************************
- * Debugger Stuff
+ * Function Binding
*****************************************************************************/
#ifdef CONFIG_ACPI_KERNEL_CONFIG
@@ -78,26 +76,24 @@
acpi_os_callocate_rt,
acpi_os_free_rt,
acpi_os_queue_for_execution_rt,
- acpi_os_read_pci_cfg_byte_rt,
- acpi_os_read_pci_cfg_word_rt,
- acpi_os_read_pci_cfg_dword_rt,
- acpi_os_write_pci_cfg_byte_rt,
- acpi_os_write_pci_cfg_word_rt,
- acpi_os_write_pci_cfg_dword_rt
+ acpi_os_read_pci_configuration_rt,
+ acpi_os_write_pci_configuration_rt,
+ acpi_os_stall_rt
};
#else
#define acpi_os_allocate_rt acpi_os_allocate
#define acpi_os_callocate_rt acpi_os_callocate
#define acpi_os_free_rt acpi_os_free
#define acpi_os_queue_for_execution_rt acpi_os_queue_for_execution
-#define acpi_os_read_pci_cfg_byte_rt acpi_os_read_pci_cfg_byte
-#define acpi_os_read_pci_cfg_word_rt acpi_os_read_pci_cfg_word
-#define acpi_os_read_pci_cfg_dword_rt acpi_os_read_pci_cfg_dword
-#define acpi_os_write_pci_cfg_byte_rt acpi_os_write_pci_cfg_byte
-#define acpi_os_write_pci_cfg_word_rt acpi_os_write_pci_cfg_word
-#define acpi_os_write_pci_cfg_dword_rt acpi_os_write_pci_cfg_dword
+#define acpi_os_read_pci_configuration_rt acpi_os_read_pci_configuration
+#define acpi_os_write_pci_configuration_rt acpi_os_write_pci_configuration
+#define acpi_os_stall_rt acpi_os_stall
#endif
+/*****************************************************************************
+ * Debugger Stuff
+ *****************************************************************************/
+
#ifdef ENABLE_DEBUGGER
#include <linux/kdb.h>
@@ -267,12 +263,105 @@
(*acpi_irq_handler)(acpi_irq_context);
}
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+struct irqaction acpiirqaction;
+/*
+ * codes from request_irq and free_irq.
+ */
acpi_status
acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
{
-#ifdef _IA64
+ struct irqaction *act;
+ int retval;
+
+ if (irq >= NR_IRQS) {
+ printk("ACPI: install SCI handler fail: invalid irq%d\n", irq);
+ return AE_ERROR;
+ }
+
+ if (!handler) {
+ printk("ACPI: install SCI handler fail: invalid handler\n");
+ return AE_ERROR;
+ }
+
+ act = & acpiirqaction;
+
irq = isa_irq_to_vector(irq);
-#endif /*_IA64*/
+ acpi_irq_irq = irq;
+ acpi_irq_handler = handler;
+ acpi_irq_context = context;
+
+ act->handler = acpi_irq;
+ act->flags = SA_INTERRUPT | SA_SHIRQ;
+ act->mask = 0;
+ act->name = "acpi";
+ act->next = NULL;
+ act->dev_id = acpi_irq;
+
+ retval = setup_irq(irq, act);
+ if (retval) {
+ printk("ACPI: install SCI handler fail: setup_irq\n");
+ acpi_irq_handler = NULL;
+ return AE_ERROR;
+ }
+ printk("ACPI: install SCI %d handler pass\n", irq);
+
+ return AE_OK;
+}
+
+acpi_status
+acpi_os_remove_interrupt_handler(u32 irq, OSD_HANDLER handler)
+{
+ irq_desc_t *desc;
+ struct irqaction **p;
+ unsigned long flags;
+
+ if (!acpi_irq_handler)
+ return AE_OK;
+
+ irq = isa_irq_to_vector(irq);
+ if (irq != acpi_irq_irq) return AE_ERROR;
+
+ acpi_irq_handler = NULL;
+
+ desc = irq_desc(irq);
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ for (;;) {
+ struct irqaction * action = *p;
+ if (action) {
+ struct irqaction **pp = p;
+ p = &action->next;
+ if (action->dev_id != acpi_irq)
+ continue;
+
+ /* Found it - now remove it from the list of entries */
+ *pp = action->next;
+ if (!desc->action) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->shutdown(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+
+#ifdef CONFIG_SMP
+ /* Wait to make sure it's not being used on another CPU */
+ while (desc->status & IRQ_INPROGRESS)
+ barrier();
+#endif
+ return AE_OK;
+ }
+ printk("ACPI: Trying to free free IRQ%d\n",irq);
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return AE_OK;
+ }
+
+ return AE_OK;
+}
+
+#else
+acpi_status
+acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
+{
acpi_irq_irq = irq;
acpi_irq_handler = handler;
acpi_irq_context = context;
@@ -315,7 +404,7 @@
}
void
-acpi_os_stall(u32 us)
+acpi_os_stall_rt(u32 us)
{
if (us > 10000) {
mdelay(us / 1000);
@@ -357,7 +446,7 @@
acpi_status
acpi_os_write_port(
ACPI_IO_ADDRESS port,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
switch (width)
@@ -410,7 +499,7 @@
acpi_status
acpi_os_write_memory(
ACPI_PHYSICAL_ADDRESS phys_addr,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
switch (width)
@@ -470,27 +559,6 @@
return (result ? AE_ERROR : AE_OK);
}
-#ifdef CONFIG_ACPI_KERNEL_CONFIG
-/*
- * Queue for interpreter thread
- */
-
-ACPI_STATUS
-acpi_os_queue_for_execution_rt(
- u32 priority,
- OSD_EXECUTION_CALLBACK callback,
- void *context)
-{
-# ifndef CONFIG_ACPI_KERNEL_CONFIG_ONLY
- if (acpi_run(callback, context))
- return AE_ERROR;
-# else
- (*callback)(context);
-# endif
- return AE_OK;
-}
-#endif
-
acpi_status
acpi_os_write_pci_configuration (
acpi_pci_id *pci_id,
@@ -524,7 +592,7 @@
#else /*CONFIG_ACPI_PCI*/
acpi_status
-acpi_os_read_pci_configuration (
+acpi_os_read_pci_configuration_rt (
acpi_pci_id *pci_id,
u32 reg,
void *value,
@@ -558,10 +626,10 @@
}
acpi_status
-acpi_os_write_pci_configuration (
+acpi_os_write_pci_configuration_rt (
acpi_pci_id *pci_id,
u32 reg,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
int devfn = PCI_DEVFN(pci_id->device, pci_id->function);
@@ -676,6 +744,22 @@
acpi_os_free(dpc);
}
}
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+/*
+ * Queue for interpreter thread
+ */
+
+acpi_status
+acpi_os_queue_for_execution_rt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context)
+{
+ (*callback)(context);
+ return AE_OK;
+}
+#endif
acpi_status
acpi_os_queue_for_execution(
diff -urN linux-davidm/drivers/acpi/osconf.c linux-2.4.10-lia/drivers/acpi/osconf.c
--- linux-davidm/drivers/acpi/osconf.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/osconf.c Mon Sep 24 22:00:48 2001
@@ -16,6 +16,7 @@
#include <asm/system.h>
#include <asm/io.h>
#include <asm/sal.h>
+#include <asm/delay.h>
#include "acpi.h"
#include "osconf.h"
@@ -24,34 +25,22 @@
static void * __init acpi_os_allocate_bt(u32 size);
static void * __init acpi_os_callocate_bt(u32 size);
static void __init acpi_os_free_bt(void *ptr);
+static void __init acpi_os_stall_bt(u32 us);
-static ACPI_STATUS __init
+static acpi_status __init
acpi_os_queue_for_execution_bt(
u32 priority,
OSD_EXECUTION_CALLBACK callback,
void *context
);
-static ACPI_STATUS __init
-acpi_os_read_pci_cfg_byte_bt( u32 segbus, u32 func, u32 addr, u8 * val);
+static acpi_status __init
+acpi_os_read_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
-static ACPI_STATUS __init
-acpi_os_read_pci_cfg_word_bt( u32 segbus, u32 func, u32 addr, u16 * val);
+static acpi_status __init
+acpi_os_write_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
-static ACPI_STATUS __init
-acpi_os_read_pci_cfg_dword_bt( u32 segbus, u32 func, u32 addr, u32 * val);
-static ACPI_STATUS __init
-acpi_os_write_pci_cfg_byte_bt( u32 segbus, u32 func, u32 addr, u8 val);
-
-static ACPI_STATUS __init
-acpi_os_write_pci_cfg_word_bt( u32 segbus, u32 func, u32 addr, u16 val);
-
-static ACPI_STATUS __init
-acpi_os_write_pci_cfg_dword_bt( u32 segbus, u32 func, u32 addr, u32 val);
-
-
-static struct acpi_osd *acpi_osd;
extern struct acpi_osd acpi_osd_rt;
static struct acpi_osd acpi_osd_bt __initdata = {
/* these are boottime osd entries that differ from runtime entries */
@@ -59,13 +48,11 @@
acpi_os_callocate_bt,
acpi_os_free_bt,
acpi_os_queue_for_execution_bt,
- acpi_os_read_pci_cfg_byte_bt,
- acpi_os_read_pci_cfg_word_bt,
- acpi_os_read_pci_cfg_dword_bt,
- acpi_os_write_pci_cfg_byte_bt,
- acpi_os_write_pci_cfg_word_bt,
- acpi_os_write_pci_cfg_dword_bt
+ acpi_os_read_pci_configuration_bt,
+ acpi_os_write_pci_configuration_bt,
+ acpi_os_stall_bt
};
+static struct acpi_osd *acpi_osd = &acpi_osd_rt;
#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
static void __init
@@ -110,48 +97,27 @@
return;
}
-
-ACPI_STATUS
-acpi_os_read_pci_cfg_byte( u32 segbus, u32 func, u32 addr, u8 * val)
-{
- return acpi_osd->read_pci_cfg_byte(segbus, func, addr, val);
-}
-
-
-ACPI_STATUS
-acpi_os_read_pci_cfg_word( u32 segbus, u32 func, u32 addr, u16 * val)
-{
- return acpi_osd->read_pci_cfg_word(segbus, func, addr, val);
-}
-
-
-ACPI_STATUS
-acpi_os_read_pci_cfg_dword( u32 segbus, u32 func, u32 addr, u32 * val)
+void
+acpi_os_stall(u32 us)
{
- return acpi_osd->read_pci_cfg_dword(segbus, func, addr, val);
+ acpi_osd->stall(us);
+ return;
}
-
-ACPI_STATUS
-acpi_os_write_pci_cfg_byte( u32 segbus, u32 func, u32 addr, u8 val)
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width)
{
- return acpi_osd->write_pci_cfg_byte(segbus, func, addr, val);
+ return acpi_osd->read_pci_configuration(pci_id, reg, value, width);
}
-ACPI_STATUS
-acpi_os_write_pci_cfg_word( u32 segbus, u32 func, u32 addr, u16 val)
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width)
{
- return acpi_osd->write_pci_cfg_word(segbus, func, addr, val);
+ return acpi_osd->write_pci_configuration(pci_id, reg, value, width);
}
-ACPI_STATUS
-acpi_os_write_pci_cfg_dword( u32 segbus, u32 func, u32 addr, u32 val)
-{
- return acpi_osd->write_pci_cfg_dword(segbus, func, addr, val);
-}
-
#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
/*
* Let's profile bootmem usage to see how much we consume. J.I.
@@ -230,7 +196,17 @@
}
-static ACPI_STATUS __init
+static void __init
+acpi_os_stall_bt(u32 us)
+{
+ unsigned long start = ia64_get_itc();
+ unsigned long cycles = us*733; /* XXX: 733 or 800 */
+ while (ia64_get_itc() - start < cycles)
+ /* skip */;
+}
+
+
+static acpi_status __init
acpi_os_queue_for_execution_bt(
u32 priority,
OSD_EXECUTION_CALLBACK callback,
@@ -244,78 +220,67 @@
}
-static ACPI_STATUS __init
-acpi_os_read_pci_cfg_byte_bt( u32 segbus, u32 func, u32 addr, u8 * val)
+static acpi_status __init
+acpi_os_read_pci_configuration_bt (
+ acpi_pci_id *pci_id,
+ u32 reg,
+ void *value,
+ u32 width)
{
unsigned int devfn;
s64 status;
u64 lval;
- devfn = PCI_DEVFN((func >> 16) & 0xffff, func & 0xffff);
- status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((segbus & 0xffff), devfn, addr), 1, &lval);
- *val = lval;
-
- return status;
-}
-
-
-static ACPI_STATUS __init
-acpi_os_read_pci_cfg_word_bt( u32 segbus, u32 func, u32 addr, u16 * val)
-{
- unsigned int devfn;
- s64 status;
- u64 lval;
+ devfn = PCI_DEVFN(pci_id->device, pci_id->function);
- devfn = PCI_DEVFN((func >> 16) & 0xffff, func & 0xffff);
- status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((segbus & 0xffff), devfn, addr), 2, &lval);
- *val = lval;
+ switch (width)
+ {
+ case 8:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, &lval);
+ *(u8*)value = (u8)lval;
+ break;
+ case 16:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, &lval);
+ *(u16*)value = (u16)lval;
+ break;
+ case 32:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, &lval);
+ *(u32*)value = (u32)lval;
+ break;
+ default:
+ BUG();
+ }
return status;
}
-static ACPI_STATUS __init
-acpi_os_read_pci_cfg_dword_bt( u32 segbus, u32 func, u32 addr, u32 * val)
+static acpi_status __init
+acpi_os_write_pci_configuration_bt (
+ acpi_pci_id *pci_id,
+ u32 reg,
+ NATIVE_UINT value,
+ u32 width)
{
unsigned int devfn;
s64 status;
- u64 lval;
-
- devfn = PCI_DEVFN((func >> 16) & 0xffff, func & 0xffff);
- status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((segbus & 0xffff), devfn, addr), 4, &lval);
- *val = lval;
-
- return status;
-}
-
-
-static ACPI_STATUS __init
-acpi_os_write_pci_cfg_byte_bt( u32 segbus, u32 func, u32 addr, u8 val)
-{
- unsigned int devfn;
-
- devfn = PCI_DEVFN((func >> 16) & 0xffff, func & 0xffff);
- return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((segbus & 0xffff), devfn, addr), 1, val);
-}
+ devfn = PCI_DEVFN(pci_id->device, pci_id->function);
-static ACPI_STATUS __init
-acpi_os_write_pci_cfg_word_bt( u32 segbus, u32 func, u32 addr, u16 val)
-{
- unsigned int devfn;
-
- devfn = PCI_DEVFN((func >> 16) & 0xffff, func & 0xffff);
- return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((segbus & 0xffff), devfn, addr), 2, val);
-}
-
-
-static ACPI_STATUS __init
-acpi_os_write_pci_cfg_dword_bt( u32 segbus, u32 func, u32 addr, u32 val)
-{
- unsigned int devfn;
+ switch (width)
+ {
+ case 8:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, value);
+ break;
+ case 16:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, value);
+ break;
+ case 32:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, value);
+ break;
+ default:
+ BUG();
+ }
- devfn = PCI_DEVFN((func >> 16) & 0xffff, func & 0xffff);
- return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((segbus & 0xffff), devfn, addr), 4, val);
+ return status;
}
-
-
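The osconf.c rework above keeps the boot-time/runtime split via a single table of function pointers that is swapped once kernel services are up. A minimal sketch of that dispatch shape, with invented names and trivial bodies standing in for the real `_bt`/`_rt` entries:

```c
#include <stdint.h>

/* Miniature of the acpi_osd dispatch idea; only the shape matches the patch. */
typedef struct {
	uint32_t (*stall)(uint32_t us);
} osd_table;

static uint32_t stall_bt(uint32_t us) { return us; }		/* itc busy-wait stand-in */
static uint32_t stall_rt(uint32_t us) { return us + 1; }	/* mdelay/udelay stand-in */

static osd_table osd_boot = { stall_bt };
static osd_table osd_run  = { stall_rt };
static osd_table *osd = &osd_boot;	/* default to boot-time entries */

/* Analogous to flipping the table over to acpi_osd_rt after init. */
static void osd_switch_to_runtime(void)
{
	osd = &osd_run;
}

/* Callers always go through the pointer, never a concrete _bt/_rt name. */
static uint32_t os_stall(uint32_t us)
{
	return osd->stall(us);
}
```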
diff -urN linux-davidm/drivers/acpi/osconf.h linux-2.4.10-lia/drivers/acpi/osconf.h
--- linux-davidm/drivers/acpi/osconf.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/acpi/osconf.h Mon Sep 24 22:01:49 2001
@@ -10,13 +10,10 @@
void * (*allocate)(u32 size);
void * (*callocate)(u32 size);
void (*free)(void *ptr);
- ACPI_STATUS (*queue_for_exec)(u32 pri, OSD_EXECUTION_CALLBACK cb, void *context);
- ACPI_STATUS (*read_pci_cfg_byte)(u32 bus, u32 func, u32 addr, u8 *val);
- ACPI_STATUS (*read_pci_cfg_word)(u32 bus, u32 func, u32 addr, u16 *val);
- ACPI_STATUS (*read_pci_cfg_dword)(u32 bus, u32 func, u32 addr, u32 *val);
- ACPI_STATUS (*write_pci_cfg_byte)(u32 bus, u32 func, u32 addr, u8 val);
- ACPI_STATUS (*write_pci_cfg_word)(u32 bus, u32 func, u32 addr, u16 val);
- ACPI_STATUS (*write_pci_cfg_dword)(u32 bus, u32 func, u32 addr, u32 val);
+ acpi_status (*queue_for_exec)(u32 pri, OSD_EXECUTION_CALLBACK cb, void *context);
+ acpi_status (*read_pci_configuration)(acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+ acpi_status (*write_pci_configuration)(acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+ void (*stall)(u32 us);
};
@@ -31,54 +28,30 @@
void * acpi_os_allocate(u32 size);
void * acpi_os_callocate(u32 size);
void acpi_os_free(void *ptr);
+void acpi_os_stall(u32 us);
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
-ACPI_STATUS
-acpi_os_read_pci_cfg_byte( u32 segbus, u32 func, u32 addr, u8 * val);
-
-ACPI_STATUS
-acpi_os_read_pci_cfg_word( u32 segbus, u32 func, u32 addr, u16 * val);
-
-ACPI_STATUS
-acpi_os_read_pci_cfg_dword( u32 segbus, u32 func, u32 addr, u32 * val);
-
-ACPI_STATUS
-acpi_os_write_pci_cfg_byte( u32 segbus, u32 func, u32 addr, u8 val);
-
-ACPI_STATUS
-acpi_os_write_pci_cfg_word( u32 segbus, u32 func, u32 addr, u16 val);
-
-ACPI_STATUS
-acpi_os_write_pci_cfg_dword( u32 segbus, u32 func, u32 addr, u32 val);
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
/* acpi_osd_rt functions */
extern void * acpi_os_allocate_rt(u32 size);
extern void * acpi_os_callocate_rt(u32 size);
extern void acpi_os_free_rt(void *ptr);
+extern void acpi_os_stall_rt(u32 us);
-extern ACPI_STATUS
+extern acpi_status
acpi_os_queue_for_execution_rt(
u32 priority,
OSD_EXECUTION_CALLBACK callback,
void *context
);
-extern ACPI_STATUS
-acpi_os_read_pci_cfg_byte_rt( u32 segbus, u32 func, u32 addr, u8 * val);
-
-extern ACPI_STATUS
-acpi_os_read_pci_cfg_word_rt( u32 segbus, u32 func, u32 addr, u16 * val);
-
-extern ACPI_STATUS
-acpi_os_read_pci_cfg_dword_rt( u32 segbus, u32 func, u32 addr, u32 * val);
-
-extern ACPI_STATUS
-acpi_os_write_pci_cfg_byte_rt( u32 segbus, u32 func, u32 addr, u8 val);
-
-extern ACPI_STATUS
-acpi_os_write_pci_cfg_word_rt( u32 segbus, u32 func, u32 addr, u16 val);
-
-extern ACPI_STATUS
-acpi_os_write_pci_cfg_dword_rt( u32 segbus, u32 func, u32 addr, u32 val);
+extern acpi_status
+acpi_os_read_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+extern acpi_status
+acpi_os_write_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
diff -urN linux-davidm/drivers/acpi/ospm/include/ec.h linux-2.4.10-lia/drivers/acpi/ospm/include/ec.h
--- linux-davidm/drivers/acpi/ospm/include/ec.h Mon Sep 24 15:06:44 2001
+++ linux-2.4.10-lia/drivers/acpi/ospm/include/ec.h Mon Sep 24 22:03:30 2001
@@ -167,14 +167,14 @@
acpi_status
ec_io_read (
EC_CONTEXT *ec,
- u32 io_port,
+ ACPI_IO_ADDRESS io_port,
u8 *data,
EC_EVENT wait_event);
acpi_status
ec_io_write (
EC_CONTEXT *ec,
- u32 io_port,
+ ACPI_IO_ADDRESS io_port,
u8 data,
EC_EVENT wait_event);
diff -urN linux-davidm/drivers/acpi/ospm/system/sm_osl.c linux-2.4.10-lia/drivers/acpi/ospm/system/sm_osl.c
--- linux-davidm/drivers/acpi/ospm/system/sm_osl.c Mon Sep 24 15:06:44 2001
+++ linux-2.4.10-lia/drivers/acpi/ospm/system/sm_osl.c Mon Sep 24 22:03:42 2001
@@ -33,7 +33,9 @@
#include <asm/uaccess.h>
#include <linux/acpi.h>
#include <asm/io.h>
+#ifndef __ia64__
#include <linux/mc146818rtc.h>
+#endif
#include <linux/delay.h>
#include <acpi.h>
@@ -278,6 +280,7 @@
int *eof,
void *context)
{
+#ifndef _IA64
char *str = page;
int len;
u32 sec,min,hr;
@@ -351,6 +354,9 @@
*start = page;
return len;
+#else
+ return 0;
+#endif
}
static int get_date_field(char **str, u32 *value)
@@ -381,6 +387,7 @@
unsigned long count,
void *data)
{
+#ifndef _IA64
char buf[30];
char *str = buf;
u32 sec,min,hr;
@@ -520,6 +527,9 @@
error = 0;
out:
return error ? error : count;
+#else
+ return 0;
+#endif
}
static int
diff -urN linux-davidm/drivers/char/agp/agpgart_be.c linux-2.4.10-lia/drivers/char/agp/agpgart_be.c
--- linux-davidm/drivers/char/agp/agpgart_be.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/agp/agpgart_be.c Mon Sep 24 16:11:04 2001
@@ -466,9 +466,9 @@
/*
* Driver routines - start
* Currently this module supports the following chipsets:
- * i810, i815, 440lx, 440bx, 440gx, i840, i850, via vp3, via mvp3,
- * via kx133, via kt133, amd irongate, amd 761, amd 762, ALi M1541,
- * and generic support for the SiS chipsets.
+ * i810, 440lx, 440bx, 440gx, 460gx, i840, i850, via vp3, via mvp3, via kx133,
+ * via kt133, amd irongate, ALi M1541, and generic support for the SiS
+ * chipsets.
*/
/* Generic Agp routines - Start */
@@ -1200,6 +1200,633 @@
#endif /* CONFIG_AGP_I810 */
+#ifdef CONFIG_AGP_I460
+
+/* BIOS configures the chipset so that one of two apbase registers are used */
+static u8 intel_i460_dynamic_apbase = 0x10;
+
+/* 460 supports multiple GART page sizes, so GART pageshift is dynamic */
+static u8 intel_i460_pageshift = 12;
+
+/* Keep track of which is larger, chipset or kernel page size. */
+static u32 intel_i460_cpk = 1;
+
+/* Structure for tracking partial use of 4MB GART pages */
+static u32 **i460_pg_detail = NULL;
+static u32 *i460_pg_count = NULL;
+
+#define I460_CPAGES_PER_KPAGE (PAGE_SIZE >> intel_i460_pageshift)
+#define I460_KPAGES_PER_CPAGE ((1 << intel_i460_pageshift) >> PAGE_SHIFT)
+
+#define I460_SRAM_IO_DISABLE (1 << 4)
+#define I460_BAPBASE_ENABLE (1 << 3)
+#define I460_AGPSIZ_MASK 0x7
+#define I460_4M_PS (1 << 1)
+
+#define log2(x) ffz(~(x))
+
+static int intel_i460_fetch_size(void)
+{
+ int i;
+ u8 temp;
+ aper_size_info_8 *values;
+
+ /* Determine the GART page size */
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &temp);
+ intel_i460_pageshift = (temp & I460_4M_PS) ? 22 : 12;
+
+ values = A_SIZE_8(agp_bridge.aperture_sizes);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+
+ /* Exit now if the IO drivers for the GART SRAMS are turned off */
+ if(temp & I460_SRAM_IO_DISABLE) {
+ printk("[agpgart] GART SRAMS disabled on 460GX chipset\n");
+ printk("[agpgart] AGPGART operation not possible\n");
+ return 0;
+ }
+
+ /* Make sure we don't try to create a 2 ^ 23 entry GATT */
+ if((intel_i460_pageshift == 0) && ((temp & I460_AGPSIZ_MASK) == 4)) {
+ printk("[agpgart] We can't have a 32GB aperture with 4KB"
+ " GART pages\n");
+ return 0;
+ }
+
+ /* Determine the proper APBASE register */
+ if(temp & I460_BAPBASE_ENABLE)
+ intel_i460_dynamic_apbase = INTEL_I460_BAPBASE;
+ else intel_i460_dynamic_apbase = INTEL_I460_APBASE;
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+
+ /*
+ * Dynamically calculate the proper num_entries and page_order
+ * values for the define aperture sizes. Take care not to
+ * shift off the end of values[i].size.
+ */
+ values[i].num_entries = (values[i].size << 8) >>
+ (intel_i460_pageshift - 12);
+ values[i].page_order = log2((sizeof(u32)*values[i].num_entries)
+ >> PAGE_SHIFT);
+ }
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+ /* Neglect control bits when matching up size_value */
+ if ((temp & I460_AGPSIZ_MASK) == values[i].size_value) {
+ agp_bridge.previous_size = agp_bridge.current_size = (void *) (values + i);
+ agp_bridge.aperture_size_idx = i;
+ return values[i].size;
+ }
+ }
+
+ return 0;
+}
+
+/* There isn't anything to do here since 460 has no GART TLB. */
+static void intel_i460_tlb_flush(agp_memory * mem)
+{
+ return;
+}
+
+/*
+ * This utility function is needed to prevent corruption of the control bits
+ * which are stored along with the aperture size in 460's AGPSIZ register
+ */
+static void intel_i460_write_agpsiz(u8 size_value)
+{
+ u8 temp;
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ,
+ ((temp & ~I460_AGPSIZ_MASK) | size_value));
+}
+
+static void intel_i460_cleanup(void)
+{
+ aper_size_info_8 *previous_size;
+
+ previous_size = A_SIZE_8(agp_bridge.previous_size);
+ intel_i460_write_agpsiz(previous_size->size_value);
+
+ if(intel_i460_cpk == 0)
+ {
+ vfree(i460_pg_detail);
+ vfree(i460_pg_count);
+ }
+}
+
+
+/* Control bits for Out-Of-GART coherency and Burst Write Combining */
+#define I460_GXBCTL_OOG (1UL << 0)
+#define I460_GXBCTL_BWC (1UL << 2)
+
+static int intel_i460_configure(void)
+{
+ union {
+ u32 small[2];
+ u64 large;
+ } temp;
+ u8 scratch;
+ int i;
+
+ aper_size_info_8 *current_size;
+
+ temp.large = 0;
+
+ current_size = A_SIZE_8(agp_bridge.current_size);
+ intel_i460_write_agpsiz(current_size->size_value);
+
+ /*
+ * Do the necessary rigmarole to read all eight bytes of APBASE.
+ * This has to be done since the AGP aperture can be above 4GB on
+ * 460 based systems.
+ */
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase,
+ &(temp.small[0]));
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase + 4,
+ &(temp.small[1]));
+
+ /* Clear BAR control bits */
+ agp_bridge.gart_bus_addr = temp.large & ~((1UL << 3) - 1);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &scratch);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL,
+ (scratch & 0x02) | I460_GXBCTL_OOG | I460_GXBCTL_BWC);
+
+ /*
+ * Initialize partial allocation trackers if a GART page is bigger than
+ * a kernel page.
+ */
+ if(I460_CPAGES_PER_KPAGE >= 1) {
+ intel_i460_cpk = 1;
+ } else {
+ intel_i460_cpk = 0;
+
+ i460_pg_detail = (void *) vmalloc(sizeof(*i460_pg_detail) *
+ current_size->num_entries);
+ i460_pg_count = (void *) vmalloc(sizeof(*i460_pg_count) *
+ current_size->num_entries);
+
+ for (i = 0; i < current_size->num_entries; i++) {
+ i460_pg_count[i] = 0;
+ i460_pg_detail[i] = NULL;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_create_gatt_table(void) {
+
+ char *table;
+ int i;
+ int page_order;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ /*
+ * Load up the fixed address of the GART SRAMS which hold our
+ * GATT table.
+ */
+ table = (char *) __va(INTEL_I460_ATTBASE);
+
+ temp = agp_bridge.current_size;
+ page_order = A_SIZE_8(temp)->page_order;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ agp_bridge.gatt_table_real = (u32 *) table;
+ agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
+ (PAGE_SIZE * (1 << page_order)));
+ agp_bridge.gatt_bus_addr = virt_to_phys(agp_bridge.gatt_table_real);
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ return 0;
+}
+
+static int intel_i460_free_gatt_table(void)
+{
+ int num_entries;
+ int i;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ iounmap(agp_bridge.gatt_table);
+
+ return 0;
+}
+
+/* These functions are called when PAGE_SIZE exceeds the GART page size */
+
+static int intel_i460_insert_memory_cpk(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, j, k, num_entries;
+ void *temp;
+ unsigned int hold;
+ unsigned int read_back;
+
+ /*
+ * The rest of the kernel will compute page offsets in terms of
+ * PAGE_SIZE.
+ */
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ if((pg_start + I460_CPAGES_PER_KPAGE * mem->page_count) > num_entries) {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ j = pg_start;
+ while (j < (pg_start + I460_CPAGES_PER_KPAGE * mem->page_count)) {
+ if (!PGE_EMPTY(agp_bridge.gatt_table[j])) {
+ return -EBUSY;
+ }
+ j++;
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for (i = 0, j = pg_start; i < mem->page_count; i++) {
+
+ hold = (unsigned int) (mem->memory[i]);
+
+ for (k = 0; k < I460_CPAGES_PER_KPAGE; k++, j++, hold++)
+ agp_bridge.gatt_table[j] = hold;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[j - 1];
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_cpk(agp_memory * mem, off_t pg_start,
+ int type)
+{
+ int i;
+ unsigned int read_back;
+
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ for (i = pg_start; i < (pg_start + I460_CPAGES_PER_KPAGE *
+ mem->page_count); i++)
+ agp_bridge.gatt_table[i] = 0;
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ return 0;
+}
+
+/*
+ * These functions are called when the GART page size exceeds PAGE_SIZE.
+ *
+ * This situation is interesting since AGP memory allocations that are
+ * smaller than a single GART page are possible. The structures i460_pg_count
+ * and i460_pg_detail track partial allocation of the large GART pages to
+ * work around this issue.
+ *
+ * i460_pg_count[pg_num] tracks the number of kernel pages in use within
+ * GART page pg_num. i460_pg_detail[pg_num] is an array containing a
+ * pseudo-GART entry for each of the aforementioned kernel pages. The whole
+ * of i460_pg_detail is equivalent to a giant GATT with page size equal to
+ * that of the kernel.
+ */
+
+static void *intel_i460_alloc_large_page(int pg_num)
+{
+ int i;
+ void *bp, *bp_end;
+ struct page *page;
+
+ i460_pg_detail[pg_num] = (void *) vmalloc(sizeof(u32) *
+ I460_KPAGES_PER_CPAGE);
+ if(i460_pg_detail[pg_num] == NULL) {
+ printk("[agpgart] Out of memory, we're in trouble...\n");
+ return NULL;
+ }
+
+ for(i = 0; i < I460_KPAGES_PER_CPAGE; i++)
+ i460_pg_detail[pg_num][i] = 0;
+
+ bp = (void *) __get_free_pages(GFP_KERNEL,
+ intel_i460_pageshift - PAGE_SHIFT);
+ if(bp == NULL) {
+ printk("[agpgart] Couldn't alloc 4M GART page...\n");
+ return NULL;
+ }
+
+ bp_end = bp + ((PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT))) - 1);
+
+ for (page = virt_to_page(bp); page <= virt_to_page(bp_end); page++)
+ {
+ atomic_inc(&page->count);
+ set_bit(PG_locked, &page->flags);
+ atomic_inc(&agp_bridge.current_memory_agp);
+ }
+
+ return bp;
+}
+
+static void intel_i460_free_large_page(int pg_num, unsigned long addr)
+{
+ struct page *page;
+ void *bp, *bp_end;
+
+ bp = (void *) __va(addr);
+ bp_end = bp + (PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT)));
+
+ vfree(i460_pg_detail[pg_num]);
+ i460_pg_detail[pg_num] = NULL;
+
+ for (page = virt_to_page(bp); page < virt_to_page(bp_end); page++)
+ {
+ atomic_dec(&page->count);
+ clear_bit(PG_locked, &page->flags);
+ wake_up(&page->wait);
+ atomic_dec(&agp_bridge.current_memory_agp);
+ }
+
+ free_pages((unsigned long) bp, intel_i460_pageshift - PAGE_SHIFT);
+}
+
+static int intel_i460_insert_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ if(end_pg > num_entries)
+ {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ /* Check if the requested region of the aperture is free */
+ for(pg = start_pg; pg <= end_pg; pg++)
+ {
+ /* Allocate new GART pages if necessary */
+ if(i460_pg_detail[pg] == NULL) {
+ temp = intel_i460_alloc_large_page(pg);
+ if(temp == NULL)
+ return -ENOMEM;
+ agp_bridge.gatt_table[pg] = agp_bridge.mask_memory(
+ (unsigned long) temp, 0);
+ read_back = agp_bridge.gatt_table[pg];
+ }
+
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++)
+ {
+ if(i460_pg_detail[pg][idx] != 0)
+ return -EBUSY;
+ }
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for(pg = start_pg, i = 0; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ i460_pg_detail[pg][idx] = agp_bridge.gatt_table[pg] +
+ ((idx * PAGE_SIZE) >> 12);
+ i460_pg_count[pg]++;
+
+ /* Finally we fill in mem->memory... */
+ mem->memory[i] = ((unsigned long) (0xffffff &
+ i460_pg_detail[pg][idx])) << 12;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+ unsigned long addr;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ for(i = 0, pg = start_pg; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ mem->memory[i] = 0;
+ i460_pg_detail[pg][idx] = 0;
+ i460_pg_count[pg]--;
+ }
+
+ /* Free GART pages if they are unused */
+ if(i460_pg_count[pg] == 0) {
+ addr = (0xffffffUL & (unsigned long)
+ (agp_bridge.gatt_table[pg])) << 12;
+
+ agp_bridge.gatt_table[pg] = 0;
+ read_back = agp_bridge.gatt_table[pg];
+
+ intel_i460_free_large_page(pg, addr);
+ }
+ }
+
+ return 0;
+}
+
+/* Dummy routines to call the appropriate {cpk,kpc} function */
+
+static int intel_i460_insert_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_insert_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_insert_memory_kpc(mem, pg_start, type);
+}
+
+static int intel_i460_remove_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_remove_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_remove_memory_kpc(mem, pg_start, type);
+}
+
+/*
+ * If the kernel page size is smaller than the chipset page size, we don't
+ * want to allocate memory until we know where it is to be bound in the
+ * aperture (a multi-kernel-page alloc might fit inside of an already
+ * allocated GART page). Consequently, don't allocate or free anything
+ * if i460_cpk (meaning chipset pages per kernel page) isn't set.
+ *
+ * Let's just hope nobody counts on the allocated AGP memory being there
+ * before bind time (I don't think current drivers do)...
+ */
+static unsigned long intel_i460_alloc_page(void)
+{
+ if(intel_i460_cpk)
+ return agp_generic_alloc_page();
+
+ /* Returning NULL would cause problems */
+ return ((unsigned long) ~0UL);
+}
+
+static void intel_i460_destroy_page(unsigned long page)
+{
+ if(intel_i460_cpk)
+ agp_generic_destroy_page(page);
+}
+
+static gatt_mask intel_i460_masks[] = {
+ {
+ INTEL_I460_GATT_VALID,
+ 0
+ }
+};
+
+static unsigned long intel_i460_mask_memory(unsigned long addr, int type)
+{
+ /* Make sure the returned address is a valid GATT entry */
+ return (agp_bridge.masks[0].mask | (((addr &
+ ~((1 << intel_i460_pageshift) - 1)) & 0xffffff000) >> 12));
+}
+
+static unsigned long intel_i460_unmask_memory(unsigned long addr)
+{
+ /* Turn a GATT entry into a physical address */
+ return ((addr & 0xffffff) << 12);
+}
+
+static aper_size_info_8 intel_i460_sizes[3] = {
+ /*
+ * The 32GB aperture is only available with a 4M GART page size.
+ * Due to the dynamic GART page size, we can't figure out page_order
+ * or num_entries until runtime.
+ */
+ {32768, 0, 0, 4},
+ {1024, 0, 0, 2},
+ {256, 0, 0, 1}
+};
+
+static int __init intel_i460_setup (struct pci_dev *pdev)
+{
+
+ agp_bridge.masks = intel_i460_masks;
+ agp_bridge.num_of_masks = 1;
+ agp_bridge.aperture_sizes = (void *) intel_i460_sizes;
+ agp_bridge.size_type = U8_APER_SIZE;
+ agp_bridge.num_aperture_sizes = 3;
+ agp_bridge.dev_private_data = NULL;
+ agp_bridge.needs_scratch_page = FALSE;
+ agp_bridge.configure = intel_i460_configure;
+ agp_bridge.fetch_size = intel_i460_fetch_size;
+ agp_bridge.cleanup = intel_i460_cleanup;
+ agp_bridge.tlb_flush = intel_i460_tlb_flush;
+ agp_bridge.mask_memory = intel_i460_mask_memory;
+ agp_bridge.unmask_memory = intel_i460_unmask_memory;
+ agp_bridge.agp_enable = agp_generic_agp_enable;
+ agp_bridge.cache_flush = global_cache_flush;
+ agp_bridge.create_gatt_table = intel_i460_create_gatt_table;
+ agp_bridge.free_gatt_table = intel_i460_free_gatt_table;
+ agp_bridge.insert_memory = intel_i460_insert_memory;
+ agp_bridge.remove_memory = intel_i460_remove_memory;
+ agp_bridge.alloc_by_type = agp_generic_alloc_by_type;
+ agp_bridge.free_by_type = agp_generic_free_by_type;
+ agp_bridge.agp_alloc_page = intel_i460_alloc_page;
+ agp_bridge.agp_destroy_page = intel_i460_destroy_page;
+#if 0
+ agp_bridge.suspend = ??;
+ agp_bridge.resume = ??;
+#endif
+ agp_bridge.cant_use_aperture = 1;
+
+ return 0;
+
+ (void) pdev; /* unused */
+}
+
+#endif /* CONFIG_AGP_I460 */
+
#ifdef CONFIG_AGP_INTEL
static int intel_fetch_size(void)
@@ -1415,7 +2042,6 @@
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
agp_bridge.unmask_memory = agp_generic_unmask_memory;
- agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1449,6 +2075,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
diff -urN linux-davidm/drivers/char/drm/ati_pcigart.h linux-2.4.10-lia/drivers/char/drm/ati_pcigart.h
--- linux-davidm/drivers/char/drm/ati_pcigart.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/ati_pcigart.h Mon Sep 24 15:57:21 2001
@@ -106,6 +106,7 @@
goto done;
}
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
if ( !dev->pdev ) {
DRM_ERROR( "PCI device unknown!\n" );
goto done;
@@ -120,6 +121,9 @@
address = 0;
goto done;
}
+#else
+ bus_address = virt_to_bus( (void *)address );
+#endif
pci_gart = (u32 *)address;
@@ -129,6 +133,7 @@
memset( pci_gart, 0, ATI_MAX_PCIGART_PAGES * sizeof(u32) );
for ( i = 0 ; i < pages ; i++ ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
/* we need to support large memory configurations */
entry->busaddr[i] = pci_map_single(dev->pdev,
page_address( entry->pagelist[i] ),
@@ -142,7 +147,9 @@
goto done;
}
page_base = (u32) entry->busaddr[i];
-
+#else
+ page_base = page_to_bus( entry->pagelist[i] );
+#endif
for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) {
*pci_gart++ = cpu_to_le32( page_base );
page_base += ATI_PCIGART_PAGE_SIZE;
@@ -167,6 +174,7 @@
unsigned long addr,
dma_addr_t bus_addr)
{
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
drm_sg_mem_t *entry = dev->sg;
unsigned long pages;
int i;
@@ -191,6 +199,8 @@
PAGE_SIZE, PCI_DMA_TODEVICE);
}
}
+
+#endif
if ( addr ) {
DRM(ati_free_pcigart_table)( addr );
diff -urN linux-davidm/drivers/char/drm/drmP.h linux-2.4.10-lia/drivers/char/drm/drmP.h
--- linux-davidm/drivers/char/drm/drmP.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/drmP.h Mon Sep 24 16:13:22 2001
@@ -87,17 +87,6 @@
#define pci_unmap_single(hwdev, dma_addr, size, direction)
#endif
-/* page_to_bus for earlier kernels, not optimal in all cases */
-#ifndef page_to_bus
-#define page_to_bus(page) ((unsigned int)(virt_to_bus(page_address(page))))
-#endif
-
-/* We just use virt_to_bus for pci_map_single on older kernels */
-#if LINUX_VERSION_CODE < 0x020400
-#define pci_map_single(hwdev, ptr, size, direction) virt_to_bus(ptr)
-#define pci_unmap_single(hwdev, dma_addr, size, direction)
-#endif
-
/* DRM template customization defaults
*/
#ifndef __HAVE_AGP
@@ -377,14 +366,13 @@
if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
/* Mapping helper macros */
-#define DRM_IOREMAP(map, dev) \
+#define DRM_IOREMAP(map) \
(map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
-#define DRM_IOREMAPFREE(map, dev) \
+#define DRM_IOREMAPFREE(map) \
do { \
if ( (map)->handle && (map)->size ) \
- DRM(ioremapfree)( (map)->handle, \
- (map)->size, (dev) ); \
+ DRM(ioremapfree)( (map)->handle, (map)->size, (dev) ); \
} while (0)
#define DRM_FIND_MAP(_map, _o) \
@@ -838,10 +826,8 @@
extern unsigned long DRM(alloc_pages)(int order, int area);
extern void DRM(free_pages)(unsigned long address, int order,
int area);
-extern void *DRM(ioremap)(unsigned long offset,
- unsigned long size, drm_device_t *dev);
-extern void DRM(ioremapfree)(void *pt,
- unsigned long size, drm_device_t *dev);
+extern void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev);
+extern void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev);
#if __REALLY_HAVE_AGP
extern agp_memory *DRM(alloc_agp)(int pages, u32 type);
diff -urN linux-davidm/drivers/char/drm/drm_memory.h linux-2.4.10-lia/drivers/char/drm/drm_memory.h
--- linux-davidm/drivers/char/drm/drm_memory.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/drm_memory.h Wed Aug 22 11:05:01 2001
@@ -309,6 +309,11 @@
void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
{
void *pt;
+#if __REALLY_HAVE_AGP
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+#endif
if (!size) {
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
@@ -316,52 +321,51 @@
return NULL;
}
- if(dev->agp->cant_use_aperture == 0) {
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 0)
goto standard_ioremap;
- } else {
- drm_map_t *map = NULL;
- drm_map_list_t *r_list;
- struct list_head *list, *head;
-
- list_for_each(list, &dev->maplist->head) {
- r_list = (drm_map_list_t *)list;
- map = r_list->map;
- if (!map) continue;
- if (map->offset <= offset &&
- (map->offset + map->size) >= (offset + size))
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *)list;
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
break;
}
-
- if(map && map->type == _DRM_AGP) {
- struct drm_agp_mem *agpmem;
-
- for(agpmem = dev->agp->memory; agpmem;
- agpmem = agpmem->next) {
- if(agpmem->bound <= offset &&
- (agpmem->bound + (agpmem->pages
- << PAGE_SHIFT)) >= (offset + size))
- break;
- }
-
- if(agpmem == NULL)
- goto standard_ioremap;
-
- pt = agpmem->memory->vmptr + (offset - agpmem->bound);
- goto ioremap_success;
- } else {
- goto standard_ioremap;
- }
+
+ if(agpmem == NULL)
+ goto ioremap_failure;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
}
standard_ioremap:
+#endif
if (!(pt = ioremap(offset, size))) {
+#if __REALLY_HAVE_AGP
+ioremap_failure:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
return NULL;
}
-
+#if __REALLY_HAVE_AGP
ioremap_success:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count;
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -377,7 +381,11 @@
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
+#if __REALLY_HAVE_AGP
else if(dev->agp->cant_use_aperture == 0)
+#else
+ else
+#endif
iounmap(pt);
spin_lock(&DRM(mem_lock));
diff -urN linux-davidm/drivers/char/drm/drm_scatter.h linux-2.4.10-lia/drivers/char/drm/drm_scatter.h
--- linux-davidm/drivers/char/drm/drm_scatter.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/drm_scatter.h Mon Sep 24 15:25:55 2001
@@ -47,9 +47,11 @@
vfree( entry->virtual );
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
DRM(free)( entry->busaddr,
entry->pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
+#endif
DRM(free)( entry->pagelist,
entry->pages * sizeof(*entry->pagelist),
DRM_MEM_PAGES );
@@ -97,6 +99,7 @@
return -ENOMEM;
}
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
entry->busaddr = DRM(alloc)( pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
if ( !entry->busaddr ) {
@@ -109,12 +112,15 @@
return -ENOMEM;
}
memset( (void *)entry->busaddr, 0, pages * sizeof(*entry->busaddr) );
+#endif
entry->virtual = vmalloc_32( pages << PAGE_SHIFT );
if ( !entry->virtual ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
DRM(free)( entry->busaddr,
entry->pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
+#endif
DRM(free)( entry->pagelist,
entry->pages * sizeof(*entry->pagelist),
DRM_MEM_PAGES );
diff -urN linux-davidm/drivers/char/drm/drm_vm.h linux-2.4.10-lia/drivers/char/drm/drm_vm.h
--- linux-davidm/drivers/char/drm/drm_vm.h Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/drm_vm.h Mon Sep 24 16:16:20 2001
@@ -89,7 +89,7 @@
if (map && map->type == _DRM_AGP) {
unsigned long offset = address - vma->vm_start;
- unsigned long baddr = VM_OFFSET(vma) + offset;
+ unsigned long baddr = VM_OFFSET(vma) + offset, paddr;
struct drm_agp_mem *agpmem;
struct page *page;
@@ -115,8 +115,19 @@
* Get the page, inc the use count, and return it
*/
offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
- agpmem->memory->memory[offset] &= dev->agp->page_mask;
- page = virt_to_page(__va(agpmem->memory->memory[offset]));
+
+ /*
+ * This is bad. What we really want to do here is unmask
+ * the GART table entry held in the agp_memory structure.
+ * There isn't a convenient way to call agp_bridge.unmask_
+ * memory from here, so hard code it for now.
+ */
+#if defined(__ia64__)
+ paddr = (agpmem->memory->memory[offset] & 0xffffff) << 12;
+#else
+ paddr = agpmem->memory->memory[offset] & dev->agp->page_mask;
+#endif
+ page = virt_to_page(__va(paddr));
get_page(page);
DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx\n",
@@ -503,15 +514,21 @@
switch (map->type) {
case _DRM_AGP:
-#if defined(__alpha__)
- /*
- * On Alpha we can't talk to bus dma address from the
- * CPU, so for memory of type DRM_AGP, we'll deal with
- * sorting out the real physical pages and mappings
- * in nopage()
- */
- vma->vm_ops = &DRM(vm_ops);
- break;
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 1) {
+ /*
+ * On some systems we can't talk to bus dma address from
+ * the CPU, so for memory of type DRM_AGP, we'll deal
+ * with sorting out the real physical pages and mappings
+ * in nopage()
+ */
+ vma->vm_ops = &DRM(vm_ops);
+#if defined(__ia64__)
+ if (map->type != _DRM_AGP)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+#endif
+ goto mapswitch_out;
+ }
#endif
/* fall through to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
@@ -565,6 +582,9 @@
default:
return -EINVAL; /* This should never happen. */
}
+#if __REALLY_HAVE_AGP
+mapswitch_out:
+#endif
vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
#if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */
diff -urN linux-davidm/drivers/char/drm/r128_cce.c linux-2.4.10-lia/drivers/char/drm/r128_cce.c
--- linux-davidm/drivers/char/drm/r128_cce.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/r128_cce.c Mon Sep 24 15:25:55 2001
@@ -384,11 +384,20 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
entry->busaddr[page_ofs]);
DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
+#else
+ R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
+ page_to_bus(entry->pagelist[page_ofs]));
+
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ page_to_bus(entry->pagelist[page_ofs]),
+ entry->handle + tmp_ofs );
+#endif
}
/* Set watermark control */
@@ -658,9 +667,9 @@
drm_r128_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cce_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cce_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
@@ -668,6 +677,21 @@
DRM_ERROR( "failed to cleanup PCI GART!\n" );
}
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off = (unsigned long) dev_priv->ring.head;
+
+ atomic_dec(&virt_to_page(alt_rh_off)->count);
+ clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+ wake_up(&virt_to_page(alt_rh_off)->wait);
+ free_page(alt_rh_off);
+ }
+#endif
+
DRM(free)( dev->dev_private, sizeof(drm_r128_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN linux-davidm/drivers/char/drm/radeon_cp.c linux-2.4.10-lia/drivers/char/drm/radeon_cp.c
--- linux-davidm/drivers/char/drm/radeon_cp.c Mon Sep 24 22:31:43 2001
+++ linux-2.4.10-lia/drivers/char/drm/radeon_cp.c Mon Sep 24 15:25:55 2001
@@ -640,11 +640,19 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
entry->busaddr[page_ofs]);
DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
+#else
+ RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
+ entry->busaddr[page_ofs]);
+ DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
+ entry->busaddr[page_ofs],
+ entry->handle + tmp_ofs );
+#endif
}
/* Set ring buffer size */
@@ -1002,9 +1010,9 @@
drm_radeon_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cp_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cp_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
@@ -1012,6 +1020,21 @@
DRM_ERROR( "failed to cleanup PCI GART!\n" );
}
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off = (unsigned long) dev_priv->ring.head;
+
+ atomic_dec(&virt_to_page(alt_rh_off)->count);
+ clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+ wake_up(&virt_to_page(alt_rh_off)->wait);
+ free_page(alt_rh_off);
+ }
+#endif
+
DRM(free)( dev->dev_private, sizeof(drm_radeon_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN linux-davidm/fs/exec.c linux-2.4.10-lia/fs/exec.c
--- linux-davidm/fs/exec.c Mon Sep 24 22:31:44 2001
+++ linux-2.4.10-lia/fs/exec.c Mon Sep 24 22:04:08 2001
@@ -176,7 +176,7 @@
}
/*
- * 'copy_strings()' copies argument/envelope strings from user
+ * 'copy_strings()' copies argument/environment strings from user
* memory to free pages in kernel mem. These are in a format ready
* to be put directly into the top of new user memory.
*/
diff -urN linux-davidm/fs/inode.c linux-2.4.10-lia/fs/inode.c
--- linux-davidm/fs/inode.c Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/fs/inode.c Mon Sep 24 21:29:26 2001
@@ -16,7 +16,6 @@
#include <linux/swap.h>
#include <linux/swapctl.h>
#include <linux/prefetch.h>
-#include <linux/locks.h>
/*
* New inode.c implementation.
@@ -836,7 +835,7 @@
{
struct inode * inode;
- prefetch_spin_lock(&inode_lock);
+ spin_lock_prefetch(&inode_lock);
inode = alloc_inode();
if (inode) {
diff -urN linux-davidm/fs/namei.c linux-2.4.10-lia/fs/namei.c
--- linux-davidm/fs/namei.c Mon Sep 24 15:08:16 2001
+++ linux-2.4.10-lia/fs/namei.c Mon Sep 24 22:04:25 2001
@@ -629,10 +629,14 @@
static int __emul_lookup_dentry(const char *name, struct nameidata *nd)
{
if (path_walk(name, nd))
- return 0;
+ return 0; /* something went wrong... */
- if (!nd->dentry->d_inode) {
+ if (!nd->dentry->d_inode || S_ISDIR(nd->dentry->d_inode->i_mode)) {
struct nameidata nd_root;
+ /*
+ * NAME was not found in alternate root or it's a directory. Try to find
+ * it in the normal root:
+ */
nd_root.last_type = LAST_ROOT;
nd_root.flags = nd->flags;
read_lock(&current->fs->lock);
@@ -642,12 +646,14 @@
if (path_walk(name, &nd_root))
return 1;
if (nd_root.dentry->d_inode) {
+ /* file exists in normal root filesystem: */
path_release(nd);
nd->dentry = nd_root.dentry;
nd->mnt = nd_root.mnt;
nd->last = nd_root.last;
return 1;
}
+ /* file doesn't exist */
path_release(&nd_root);
}
return 1;
diff -urN linux-davidm/include/asm-ia64/ia32.h linux-2.4.10-lia/include/asm-ia64/ia32.h
--- linux-davidm/include/asm-ia64/ia32.h Tue Jul 31 10:30:09 2001
+++ linux-2.4.10-lia/include/asm-ia64/ia32.h Mon Sep 24 22:56:58 2001
@@ -33,7 +33,9 @@
typedef __kernel_fsid_t __kernel_fsid_t32;
#define IA32_PAGE_SHIFT 12 /* 4KB pages */
-#define IA32_PAGE_SIZE (1ULL << IA32_PAGE_SHIFT)
+#define IA32_PAGE_SIZE (1UL << IA32_PAGE_SHIFT)
+#define IA32_PAGE_MASK (~(IA32_PAGE_SIZE - 1))
+#define IA32_PAGE_ALIGN(addr) (((addr) + IA32_PAGE_SIZE - 1) & IA32_PAGE_MASK)
#define IA32_CLOCKS_PER_SEC 100 /* Cast in stone for IA32 Linux */
#define IA32_TICK(tick) ((unsigned long long)(tick) * IA32_CLOCKS_PER_SEC / CLOCKS_PER_SEC)
@@ -46,6 +48,9 @@
__kernel_pid_t32 l_pid;
};
+#define F_GETLK64 12
+#define F_SETLK64 13
+#define F_SETLKW64 14
/* sigcontext.h */
/*
@@ -103,13 +108,19 @@
#define _IA32_NSIG_BPW 32
#define _IA32_NSIG_WORDS (_IA32_NSIG / _IA32_NSIG_BPW)
+#define IA32_SET_SA_HANDLER(ka,handler,restorer) \
+ ((ka)->sa.sa_handler = (__sighandler_t) \
+ (((unsigned long)(restorer) << 32) \
+ | ((handler) & 0xffffffff)))
+#define IA32_SA_HANDLER(ka) ((unsigned long) (ka)->sa.sa_handler & 0xffffffff)
+#define IA32_SA_RESTORER(ka) ((unsigned long) (ka)->sa.sa_handler >> 32)
+
typedef struct {
unsigned int sig[_IA32_NSIG_WORDS];
} sigset32_t;
struct sigaction32 {
- unsigned int sa_handler; /* Really a pointer, but need to deal
- with 32 bits */
+ unsigned int sa_handler; /* Really a pointer, but need to deal with 32 bits */
unsigned int sa_flags;
unsigned int sa_restorer; /* Another 32 bit pointer */
sigset32_t sa_mask; /* A 32 bit mask */
@@ -162,6 +173,31 @@
unsigned int __unused5;
};
+struct stat64 {
+ unsigned short st_dev;
+ unsigned char __pad0[10];
+ unsigned int __st_ino;
+ unsigned int st_mode;
+ unsigned int st_nlink;
+ unsigned int st_uid;
+ unsigned int st_gid;
+ unsigned short st_rdev;
+ unsigned char __pad3[10];
+ unsigned int st_size_lo;
+ unsigned int st_size_hi;
+ unsigned int st_blksize;
+ unsigned int st_blocks; /* Number 512-byte blocks allocated. */
+ unsigned int __pad4; /* future possible st_blocks high bits */
+ unsigned int st_atime;
+ unsigned int __pad5;
+ unsigned int st_mtime;
+ unsigned int __pad6;
+ unsigned int st_ctime;
+ unsigned int __pad7; /* will be high 32 bits of ctime someday */
+ unsigned int st_ino_lo;
+ unsigned int st_ino_hi;
+};
+
struct statfs32 {
int f_type;
int f_bsize;
@@ -252,7 +288,7 @@
#define ELF_ARCH EM_386
#define IA32_PAGE_OFFSET 0xc0000000
-#define IA32_STACK_TOP ((IA32_PAGE_OFFSET/3) * 2)
+#define IA32_STACK_TOP IA32_PAGE_OFFSET
/*
* The system segments (GDT, TSS, LDT) have to be mapped below 4GB so the IA-32 engine can
@@ -377,7 +413,7 @@
*/
#define IA32_FSR_DEFAULT 0x55550000 /* set all tag bits */
-#define IA32_FCR_DEFAULT 0x17800000037fULL /* extended precision, all masks */
+#define IA32_FCR_DEFAULT 0x17800000037fUL /* extended precision, all masks */
#define IA32_PTRACE_GETREGS 12
#define IA32_PTRACE_SETREGS 13
@@ -421,6 +457,7 @@
extern void ia32_init_addr_space (struct pt_regs *regs);
extern int ia32_setup_arg_pages (struct linux_binprm *bprm);
extern int ia32_exception (struct pt_regs *regs, unsigned long isr);
+extern unsigned long ia32_do_mmap (struct file *, unsigned long, unsigned long, int, int, loff_t);
#endif /* !CONFIG_IA32_SUPPORT */
diff -urN linux-davidm/include/asm-ia64/iosapic.h linux-2.4.10-lia/include/asm-ia64/iosapic.h
--- linux-davidm/include/asm-ia64/iosapic.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/iosapic.h Mon Sep 24 22:05:11 2001
@@ -53,9 +53,15 @@
extern void __init iosapic_init (unsigned long address, unsigned int base_irq,
int pcat_compat);
+extern int iosapic_register_irq (u32 global_vector, unsigned long polarity,
+ unsigned long edge_triggered, u32 base_irq,
+ char *iosapic_address);
extern void iosapic_register_legacy_irq (unsigned long irq, unsigned long pin,
unsigned long polarity, unsigned long trigger);
-extern int iosapic_register_platform_irq (u32 int_type, u32 global_vector, u32 iosapic_vector, u16 eid, u16 id, unsigned long polarity, unsigned long edge_triggered, u32 base_irq, char *iosapic_address);
+extern int iosapic_register_platform_irq (u32 int_type, u32 global_vector, u32 iosapic_vector,
+ u16 eid, u16 id, unsigned long polarity,
+ unsigned long edge_triggered, u32 base_irq,
+ char *iosapic_address);
extern unsigned int iosapic_version (char *addr);
extern void iosapic_pci_fixup (int);
diff -urN linux-davidm/include/asm-ia64/ipc.h linux-2.4.10-lia/include/asm-ia64/ipc.h
--- linux-davidm/include/asm-ia64/ipc.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.10-lia/include/asm-ia64/ipc.h Wed Dec 31 16:00:00 1969
@@ -1,31 +0,0 @@
-#ifndef __i386_IPC_H__
-#define __i386_IPC_H__
-
-/*
- * These are used to wrap system calls on x86.
- *
- * See arch/i386/kernel/sys_i386.c for ugly details..
- */
-struct ipc_kludge {
- struct msgbuf *msgp;
- long msgtyp;
-};
-
-#define SEMOP 1
-#define SEMGET 2
-#define SEMCTL 3
-#define MSGSND 11
-#define MSGRCV 12
-#define MSGGET 13
-#define MSGCTL 14
-#define SHMAT 21
-#define SHMDT 22
-#define SHMGET 23
-#define SHMCTL 24
-
-/* Used by the DIPC package, try and avoid reusing it */
-#define DIPC 25
-
-#define IPCCALL(version,op) ((version)<<16 | (op))
-
-#endif
diff -urN linux-davidm/include/asm-ia64/keyboard.h linux-2.4.10-lia/include/asm-ia64/keyboard.h
--- linux-davidm/include/asm-ia64/keyboard.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/keyboard.h Mon Sep 24 23:00:16 2001
@@ -23,7 +23,6 @@
char raw_mode);
extern char pckbd_unexpected_up(unsigned char keycode);
extern void pckbd_leds(unsigned char leds);
-extern int pckbd_rate(struct kbd_repeat *rep);
extern void pckbd_init_hw(void);
extern unsigned char pckbd_sysrq_xlate[128];
@@ -33,7 +32,6 @@
#define kbd_translate pckbd_translate
#define kbd_unexpected_up pckbd_unexpected_up
#define kbd_leds pckbd_leds
-#define kbd_rate pckbd_rate
#define kbd_init_hw pckbd_init_hw
#define kbd_sysrq_xlate pckbd_sysrq_xlate
diff -urN linux-davidm/include/asm-ia64/machvec.h linux-2.4.10-lia/include/asm-ia64/machvec.h
--- linux-davidm/include/asm-ia64/machvec.h Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/include/asm-ia64/machvec.h Mon Sep 24 22:20:43 2001
@@ -1,11 +1,11 @@
/*
* Machine vector for IA-64.
- *
+ *
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) Srinivasa Thirumalachar <sprasad@engr.sgi.com>
* Copyright (C) Vijay Chander <vijay@engr.sgi.com>
* Copyright (C) 1999-2001 Hewlett-Packard Co.
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#ifndef _ASM_IA64_MACHVEC_H
#define _ASM_IA64_MACHVEC_H
@@ -28,6 +28,7 @@
typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
typedef void ia64_mv_log_print_t (void);
typedef void ia64_mv_send_ipi_t (int, int, int, int);
+typedef void ia64_mv_global_tlb_purge_t (unsigned long, unsigned long, unsigned long);
typedef struct irq_desc *ia64_mv_irq_desc (unsigned int);
typedef u8 ia64_mv_irq_to_vector (u8);
typedef unsigned int ia64_mv_local_vector_to_irq (u8 vector);
@@ -67,6 +68,8 @@
# include <asm/machvec_dig.h>
# elif defined (CONFIG_IA64_SGI_SN1)
# include <asm/machvec_sn1.h>
+# elif defined (CONFIG_IA64_SGI_SN2)
+# include <asm/machvec_sn2.h>
# elif defined (CONFIG_IA64_GENERIC)
# ifdef MACHVEC_PLATFORM_HEADER
@@ -82,6 +85,7 @@
# define platform_log_print ia64_mv.log_print
# define platform_pci_fixup ia64_mv.pci_fixup
# define platform_send_ipi ia64_mv.send_ipi
+# define platform_global_tlb_purge ia64_mv.global_tlb_purge
# define platform_pci_dma_init ia64_mv.dma_init
# define platform_pci_alloc_consistent ia64_mv.alloc_consistent
# define platform_pci_free_consistent ia64_mv.free_consistent
@@ -147,6 +151,7 @@
platform_cmci_handler, \
platform_log_print, \
platform_send_ipi, \
+ platform_global_tlb_purge, \
platform_pci_dma_init, \
platform_pci_alloc_consistent, \
platform_pci_free_consistent, \
@@ -216,6 +221,9 @@
#endif
#ifndef platform_send_ipi
# define platform_send_ipi ia64_send_ipi /* default to architected version */
+#endif
+#ifndef platform_global_tlb_purge
+# define platform_global_tlb_purge ia64_global_tlb_purge /* default to architected version */
#endif
#ifndef platform_pci_dma_init
# define platform_pci_dma_init swiotlb_init
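The hunks above add a `global_tlb_purge` slot to the IA-64 machine vector: single-platform kernels bind `platform_global_tlb_purge` to one function at compile time, while `CONFIG_IA64_GENERIC` kernels dispatch through the `ia64_mv` table at run time, falling back to the architected `ia64_global_tlb_purge` when a platform supplies nothing. A minimal userspace sketch of that pattern (all names here are illustrative, not the kernel's):

```c
#include <assert.h>

/* Illustrative machine vector: a table of per-platform function
 * pointers with an "architected" default, mirroring the new
 * platform_global_tlb_purge hook. */
typedef int (*tlb_purge_fn)(unsigned long start, unsigned long end);

static int generic_tlb_purge(unsigned long start, unsigned long end)
{
	(void) start; (void) end;
	return 0;	/* architected default path */
}

static int sn_tlb_purge(unsigned long start, unsigned long end)
{
	(void) start; (void) end;
	return 1;	/* platform override, e.g. an SGI SN box */
}

struct machvec {
	tlb_purge_fn global_tlb_purge;
};

/* A generic kernel fills this in at boot from the platform's vector;
 * here it is just initialized statically to the default. */
static struct machvec mv = { generic_tlb_purge };

static int platform_global_tlb_purge(unsigned long start, unsigned long end)
{
	return mv.global_tlb_purge(start, end);
}
```

The `#ifndef platform_*` fallbacks in the header play the role of the static initializer here: any slot a platform header does not override is filled with the architected version.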
diff -urN linux-davidm/include/asm-ia64/machvec_sn1.h linux-2.4.10-lia/include/asm-ia64/machvec_sn1.h
--- linux-davidm/include/asm-ia64/machvec_sn1.h Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/include/asm-ia64/machvec_sn1.h Mon Sep 24 22:06:06 2001
@@ -5,6 +5,7 @@
extern ia64_mv_irq_init_t sn1_irq_init;
extern ia64_mv_map_nr_t sn1_map_nr;
extern ia64_mv_send_ipi_t sn1_send_IPI;
+extern ia64_mv_global_tlb_purge_t sn1_global_tlb_purge;
extern ia64_mv_pci_fixup_t sn1_pci_fixup;
extern ia64_mv_inb_t sn1_inb;
extern ia64_mv_inw_t sn1_inw;
@@ -34,6 +35,7 @@
#define platform_irq_init sn1_irq_init
#define platform_map_nr sn1_map_nr
#define platform_send_ipi sn1_send_IPI
+#define platform_global_tlb_purge sn1_global_tlb_purge
#define platform_pci_fixup sn1_pci_fixup
#define platform_inb sn1_inb
#define platform_inw sn1_inw
diff -urN linux-davidm/include/asm-ia64/mca.h linux-2.4.10-lia/include/asm-ia64/mca.h
--- linux-davidm/include/asm-ia64/mca.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/mca.h Mon Sep 24 22:58:05 2001
@@ -61,8 +61,6 @@
IA64_MCA_RENDEZ_CHECKIN_DONE = 0x1
};
-#define IA64_MAXCPUS 64 /* Need to do something about this */
-
/* Information maintained by the MC infrastructure */
typedef struct ia64_mc_info_s {
u64 imi_mca_handler;
@@ -71,7 +69,7 @@
size_t imi_monarch_init_handler_size;
u64 imi_slave_init_handler;
size_t imi_slave_init_handler_size;
- u8 imi_rendez_checkin[IA64_MAXCPUS];
+ u8 imi_rendez_checkin[NR_CPUS];
} ia64_mc_info_t;
diff -urN linux-davidm/include/asm-ia64/mmu_context.h linux-2.4.10-lia/include/asm-ia64/mmu_context.h
--- linux-davidm/include/asm-ia64/mmu_context.h Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/include/asm-ia64/mmu_context.h Mon Sep 24 23:02:56 2001
@@ -60,7 +60,6 @@
static inline void
get_mmu_context (struct mm_struct *mm)
{
- /* check if our ASN is of an older generation and thus invalid: */
	if (mm->context == 0)
get_new_mmu_context(mm);
}
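With the stray comment dropped, `get_mmu_context()` reduces to lazy allocation: a context of 0 means the mm has never been assigned one, so one is allocated on first use (note the test must be `==`, not `=`). A self-contained sketch of that allocate-on-first-use pattern (struct and names invented for illustration):

```c
#include <assert.h>

/* Illustrative lazy context allocation in the spirit of
 * get_mmu_context(): context 0 is reserved to mean "unassigned". */
struct mm_like {
	unsigned long context;
};

static unsigned long next_ctx = 1;	/* 0 is never handed out */

static void get_mmu_context_like(struct mm_like *mm)
{
	if (mm->context == 0)		/* first use: allocate a context */
		mm->context = next_ctx++;
	/* otherwise keep the context already assigned */
}
```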
diff -urN linux-davidm/include/asm-ia64/module.h linux-2.4.10-lia/include/asm-ia64/module.h
--- linux-davidm/include/asm-ia64/module.h Mon Sep 24 15:08:24 2001
+++ linux-2.4.10-lia/include/asm-ia64/module.h Mon Sep 24 22:57:00 2001
@@ -14,7 +14,6 @@
#define module_map(x) vmalloc(x)
#define module_unmap(x) ia64_module_unmap(x)
#define module_arch_init(x) ia64_module_init(x)
-#define arch_init_modules(x) do { } while (0)
/*
* This must match in size and layout the data created by
@@ -28,12 +27,23 @@
const char *gp;
};
+static inline void
+arch_init_modules (struct module *kmod)
+{
+ static struct archdata archdata;
+ register char *kernel_gp asm ("gp");
+
+ archdata.gp = kernel_gp;
+ kmod->archdata_start = (const char *) &archdata;
+ kmod->archdata_end = (const char *) (&archdata + 1);
+}
+
/*
* functions to add/remove a modules unwind info when
* it is loaded or unloaded.
*/
static inline int
-ia64_module_init(struct module *mod)
+ia64_module_init (struct module *mod)
{
struct archdata *archdata;
@@ -45,28 +55,23 @@
* Make sure the unwind pointers are sane.
*/
- if (archdata->unw_table)
- {
+ if (archdata->unw_table) {
printk(KERN_ERR "module_arch_init: archdata->unw_table must be zero.\n");
return 1;
}
- if (!mod_bound(archdata->gp, 0, mod))
- {
+ if (!mod_bound(archdata->gp, 0, mod)) {
printk(KERN_ERR "module_arch_init: archdata->gp out of bounds.\n");
return 1;
}
- if (!mod_bound(archdata->unw_start, 0, mod))
- {
+ if (!mod_bound(archdata->unw_start, 0, mod)) {
printk(KERN_ERR "module_arch_init: archdata->unw_start out of bounds.\n");
return 1;
}
- if (!mod_bound(archdata->unw_end, 0, mod))
- {
+ if (!mod_bound(archdata->unw_end, 0, mod)) {
printk(KERN_ERR "module_arch_init: archdata->unw_end out of bounds.\n");
return 1;
}
- if (!mod_bound(archdata->segment_base, 0, mod))
- {
+ if (!mod_bound(archdata->segment_base, 0, mod)) {
printk(KERN_ERR "module_arch_init: archdata->unw_table out of bounds.\n");
return 1;
}
@@ -82,7 +87,7 @@
}
static inline void
-ia64_module_unmap(void * addr)
+ia64_module_unmap (void * addr)
{
struct module *mod = (struct module *) addr;
struct archdata *archdata;
@@ -90,8 +95,7 @@
/*
* Before freeing the module memory remove the unwind table entry
*/
- if (mod_member_present(mod, archdata_start) && mod->archdata_start)
- {
+ if (mod_member_present(mod, archdata_start) && mod->archdata_start) {
archdata = (struct archdata *)(mod->archdata_start);
if (archdata->unw_table != NULL)
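`ia64_module_init()` above validates every `archdata` pointer with `mod_bound()` before trusting it, so a malformed module cannot point the unwinder outside its own image. The underlying check is just a half-open range test; a hedged sketch of the same idea (struct and names invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative bounds check in the spirit of mod_bound(): accept a
 * pointer only if it falls inside the module's own memory image. */
struct fake_module {
	char *base;	/* start of the module image */
	size_t size;	/* image size in bytes */
};

static int ptr_in_module(const void *p, const struct fake_module *mod)
{
	const char *c = (const char *) p;
	/* half-open range: [base, base + size) */
	return c >= mod->base && c < mod->base + mod->size;
}
```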
diff -urN linux-davidm/include/asm-ia64/msgbuf.h linux-2.4.10-lia/include/asm-ia64/msgbuf.h
--- linux-davidm/include/asm-ia64/msgbuf.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.10-lia/include/asm-ia64/msgbuf.h Mon Sep 24 22:06:49 2001
@@ -1,7 +1,7 @@
#ifndef _ASM_IA64_MSGBUF_H
#define _ASM_IA64_MSGBUF_H
-/*
+/*
* The msqid64_ds structure for IA-64 architecture.
* Note extra padding because this structure is passed back and forth
* between kernel and user space.
diff -urN linux-davidm/include/asm-ia64/namei.h linux-2.4.10-lia/include/asm-ia64/namei.h
--- linux-davidm/include/asm-ia64/namei.h Sun Apr 2 15:49:07 2000
+++ linux-2.4.10-lia/include/asm-ia64/namei.h Mon Sep 24 23:01:39 2001
@@ -2,15 +2,24 @@
#define _ASM_IA64_NAMEI_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
-/*
- * This dummy routine maybe changed to something useful
- * for /usr/gnemul/ emulation stuff.
- * Look at asm-sparc/namei.h for details.
- */
-#define __emul_prefix() NULL
+#include <asm/ptrace.h>
+#include <asm/system.h>
+
+#define EMUL_PREFIX_LINUX_IA32 "emul/ia32-linux/"
+
+static inline char *
+__emul_prefix (void)
+{
+ switch (current->personality) {
+ case PER_LINUX32:
+ return EMUL_PREFIX_LINUX_IA32;
+ default:
+ return NULL;
+ }
+}
#endif /* _ASM_IA64_NAMEI_H */
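The new `__emul_prefix()` redirects path lookups for IA-32 personalities under `emul/ia32-linux/`, replacing the old dummy that always returned NULL. The same dispatch can be sketched in plain C (the `PER_*` values below are illustrative placeholders, not the kernel's definitions):

```c
#include <assert.h>
#include <string.h>

/* Illustrative personality constants -- placeholders, not the
 * kernel's actual PER_* encoding. */
#define PER_LINUX	0
#define PER_LINUX32	8

#define EMUL_PREFIX_LINUX_IA32 "emul/ia32-linux/"

/* Mirror of __emul_prefix(): ia32 tasks get their path lookups
 * prefixed so emulation trees can shadow native files. */
static const char *emul_prefix(int personality)
{
	switch (personality) {
	case PER_LINUX32:
		return EMUL_PREFIX_LINUX_IA32;
	default:
		return NULL;	/* native tasks: no redirection */
	}
}
```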
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.10-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Tue Jul 31 10:30:09 2001
+++ linux-2.4.10-lia/include/asm-ia64/offsets.h Mon Sep 24 22:07:13 2001
@@ -8,23 +8,23 @@
*/
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 3904 /* 0xf40 */
+#define IA64_TASK_SIZE 3920 /* 0xf50 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
#define IA64_CPU_SIZE 16384 /* 0x4000 */
-#define SIGFRAME_SIZE 2832 /* 0xb10 */
+#define SIGFRAME_SIZE 2816 /* 0xb00 */
#define UNW_FRAME_INFO_SIZE 448 /* 0x1c0 */
#define IA64_TASK_PTRACE_OFFSET 48 /* 0x30 */
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 1456 /* 0x5b0 */
-#define IA64_TASK_THREAD_KSP_OFFSET 1456 /* 0x5b0 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 1568 /* 0x620 */
-#define IA64_TASK_PFM_NOTIFY_OFFSET 2088 /* 0x828 */
-#define IA64_TASK_PID_OFFSET 196 /* 0xc4 */
+#define IA64_TASK_THREAD_OFFSET 1472 /* 0x5c0 */
+#define IA64_TASK_THREAD_KSP_OFFSET 1472 /* 0x5c0 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 1584 /* 0x630 */
+#define IA64_TASK_PFM_MUST_BLOCK_OFFSET 2104 /* 0x838 */
+#define IA64_TASK_PID_OFFSET 220 /* 0xdc */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
#define IA64_PT_REGS_CR_IIP_OFFSET 8 /* 0x8 */
@@ -126,12 +126,13 @@
#define IA64_SIGCONTEXT_FR6_OFFSET 560 /* 0x230 */
#define IA64_SIGCONTEXT_PR_OFFSET 128 /* 0x80 */
#define IA64_SIGCONTEXT_R12_OFFSET 296 /* 0x128 */
+#define IA64_SIGCONTEXT_RBS_BASE_OFFSET 2512 /* 0x9d0 */
+#define IA64_SIGCONTEXT_LOADRS_OFFSET 2520 /* 0x9d8 */
#define IA64_SIGFRAME_ARG0_OFFSET 0 /* 0x0 */
#define IA64_SIGFRAME_ARG1_OFFSET 8 /* 0x8 */
#define IA64_SIGFRAME_ARG2_OFFSET 16 /* 0x10 */
-#define IA64_SIGFRAME_RBS_BASE_OFFSET 24 /* 0x18 */
-#define IA64_SIGFRAME_HANDLER_OFFSET 32 /* 0x20 */
-#define IA64_SIGFRAME_SIGCONTEXT_OFFSET 176 /* 0xb0 */
+#define IA64_SIGFRAME_HANDLER_OFFSET 24 /* 0x18 */
+#define IA64_SIGFRAME_SIGCONTEXT_OFFSET 160 /* 0xa0 */
#define IA64_CLONE_VFORK 16384 /* 0x4000 */
#define IA64_CLONE_VM 256 /* 0x100 */
#define IA64_CPU_IRQ_COUNT_OFFSET 0 /* 0x0 */
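The `offsets.h` deltas above track `task_struct` layout changes; constants like these are derived from C struct layouts rather than maintained by hand, so the assembly entry paths stay in sync with C. The core of any such generator is `offsetof()`; a minimal sketch with a hypothetical structure (not the real `task_struct`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a kernel structure that assembly code
 * needs byte offsets into. */
struct task_like {
	long ptrace;		/* first member: offset 0 */
	long sigpending;
	long pid;
};

/* Constants like IA64_TASK_PID_OFFSET come from expressions of this
 * shape, evaluated at build time and emitted into a header. */
#define TASK_PID_OFFSET	offsetof(struct task_like, pid)
```

When a structure grows or members move, regenerating the header updates every offset at once, which is exactly why the values above shifted between kernel versions.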
diff -urN linux-davidm/include/asm-ia64/pal.h linux-2.4.10-lia/include/asm-ia64/pal.h
--- linux-davidm/include/asm-ia64/pal.h Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/include/asm-ia64/pal.h Mon Sep 24 22:07:25 2001
@@ -7,9 +7,9 @@
* This is based on Intel IA-64 Architecture Software Developer's Manual rev 1.0
* chapter 11 IA-64 Processor Abstraction Layer
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Srinivasa Prasad Thirumalachar <sprasad@sprasad.engr.sgi.com>
@@ -17,7 +17,7 @@
* 99/10/01 davidm Make sure we pass zero for reserved parameters.
* 00/03/07 davidm Updated pal_cache_flush() to be in sync with PAL v2.6.
* 00/03/23 cfleck Modified processor min-state save area to match updated PAL & SAL info
- * 00/05/24 eranian Updated to latest PAL spec, fix structures bugs, added
+ * 00/05/24 eranian Updated to latest PAL spec, fix structures bugs, added
* 00/05/25 eranian Support for stack calls, and static physical calls
* 00/06/18 eranian Support for stacked physical calls
*/
@@ -91,9 +91,9 @@
#define PAL_STATUS_UNIMPLEMENTED -1 /* Unimplemented procedure */
#define PAL_STATUS_EINVAL -2 /* Invalid argument */
#define PAL_STATUS_ERROR -3 /* Error */
-#define PAL_STATUS_CACHE_INIT_FAIL -4 /* Could not initialize the
+#define PAL_STATUS_CACHE_INIT_FAIL -4 /* Could not initialize the
* specified level and type of
- * cache without sideeffects
+ * cache without sideeffects
* and "restrict" was 1
*/
@@ -189,8 +189,8 @@
#define PAL_CACHE_ATTR_WT 0 /* Write through cache */
#define PAL_CACHE_ATTR_WB 1 /* Write back cache */
-#define PAL_CACHE_ATTR_WT_OR_WB 2 /* Either write thru or write
- * back depending on TLB
+#define PAL_CACHE_ATTR_WT_OR_WB 2 /* Either write thru or write
+ * back depending on TLB
* memory attributes
*/
@@ -211,13 +211,13 @@
tagprot_lsb : 6, /* Least -do- */
tagprot_msb : 6, /* Most Sig. tag address
- * bit that this
+ * bit that this
* protection covers.
*/
prot_bits : 6, /* # of protection bits */
method : 4, /* Protection method */
- t_d : 2; /* Indicates which part
- * of the cache this
+ t_d : 2; /* Indicates which part
+ * of the cache this
* protection encoding
* applies.
*/
@@ -239,7 +239,7 @@
*/
#define PAL_CACHE_PROT_PART_DATA_TAG 3 /* Data+tag protection (data is
* more significant )
- */
+ */
#define PAL_CACHE_PROT_PART_MAX 6
@@ -247,7 +247,7 @@
pal_status_t pcpi_status;
pal_cache_protection_element_t pcp_info[PAL_CACHE_PROT_PART_MAX];
} pal_cache_protection_info_t;
-
+
/* Processor cache protection method encodings */
#define PAL_CACHE_PROT_METHOD_NONE 0 /* No protection */
@@ -262,41 +262,41 @@
struct {
u64 cache_type : 8, /* 7-0 cache type */
level : 8, /* 15-8 level of the
- * cache in the
+ * cache in the
* hierarchy.
*/
way : 8, /* 23-16 way in the set
*/
part : 8, /* 31-24 part of the
- * cache
+ * cache
*/
reserved : 32; /* 63-32 is reserved*/
} pclid_info_read;
struct {
u64 cache_type : 8, /* 7-0 cache type */
level : 8, /* 15-8 level of the
- * cache in the
+ * cache in the
* hierarchy.
*/
way : 8, /* 23-16 way in the set
*/
part : 8, /* 31-24 part of the
- * cache
+ * cache
*/
- mesi : 8, /* 39-32 cache line
+ mesi : 8, /* 39-32 cache line
* state
*/
start : 8, /* 47-40 lsb of data to
* invert
*/
length : 8, /* 55-48 #bits to
- * invert
+ * invert
*/
trigger : 8; /* 63-56 Trigger error
- * by doing a load
- * after the write
- */
-
+ * by doing a load
+ * after the write
+ */
+
} pclid_info_write;
} pal_cache_line_id_u_t;
@@ -319,11 +319,11 @@
#define PAL_CACHE_LINE_ID_PART_TAG 1 /* Tag */
#define PAL_CACHE_LINE_ID_PART_DATA_PROT 2 /* Data protection */
#define PAL_CACHE_LINE_ID_PART_TAG_PROT 3 /* Tag protection */
-#define PAL_CACHE_LINE_ID_PART_DATA_TAG_PROT 4 /* Data+tag
+#define PAL_CACHE_LINE_ID_PART_DATA_TAG_PROT 4 /* Data+tag
* protection
*/
typedef struct pal_cache_line_info_s {
- pal_status_t pcli_status; /* Return status of the read cache line
+ pal_status_t pcli_status; /* Return status of the read cache line
* info call.
*/
u64 pcli_data; /* 64-bit data, tag, protection bits .. */
@@ -351,15 +351,15 @@
#define PAL_MC_INFO_REQ_ADDR 4 /* Requestor address */
#define PAL_MC_INFO_RESP_ADDR 5 /* Responder address */
#define PAL_MC_INFO_TARGET_ADDR 6 /* Target address */
-#define PAL_MC_INFO_IMPL_DEP 7 /* Implementation
- * dependent
+#define PAL_MC_INFO_IMPL_DEP 7 /* Implementation
+ * dependent
*/
typedef struct pal_process_state_info_s {
u64 reserved1 : 2,
rz : 1, /* PAL_CHECK processor
- * rendezvous
+ * rendezvous
* successful.
*/
@@ -370,13 +370,13 @@
* errors occurred
*/
- mn : 1, /* Min. state save
- * area has been
+ mn : 1, /* Min. state save
+ * area has been
* registered with PAL
*/
sy : 1, /* Storage integrity
- * synched
+ * synched
*/
@@ -389,8 +389,8 @@
hd : 1, /* Non-essential hw
* lost (no loss of
- * functionality)
- * causing the
+ * functionality)
+ * causing the
* processor to run in
* degraded mode.
*/
@@ -398,9 +398,9 @@
tl : 1, /* 1 => MC occurred
* after an instr was
* executed but before
- * the trap that
+ * the trap that
* resulted from instr
- * execution was
+ * execution was
* generated.
* (Trap Lost )
*/
@@ -410,7 +410,7 @@
*/
dy : 1, /* Processor dynamic
- * state valid
+ * state valid
*/
@@ -441,10 +441,10 @@
* are valid
*/
gr : 1, /* General registers
- * are valid
+ * are valid
* (excl. banked regs)
*/
- dsize : 16, /* size of dynamic
+ dsize : 16, /* size of dynamic
* state returned
* by the processor
*/
@@ -459,8 +459,8 @@
typedef struct pal_cache_check_info_s {
u64 reserved1 : 16,
- way : 5, /* Way in which the
- * error occurred
+ way : 5, /* Way in which the
+ * error occurred
*/
reserved2 : 1,
mc : 1, /* Machine check corrected */
@@ -469,8 +469,8 @@
*/
wv : 1, /* Way field valid */
- op : 3, /* Type of cache
- * operation that
+ op : 3, /* Type of cache
+ * operation that
* caused the machine
* check.
*/
@@ -493,7 +493,7 @@
typedef struct pal_tlb_check_info_s {
u64 tr_slot : 8, /* Slot# of TR where
- * error occurred
+ * error occurred
*/
reserved2 : 8,
dtr : 1, /* Fail in data TR */
@@ -509,7 +509,7 @@
u64 size : 5, /* Xaction size*/
ib : 1, /* Internal bus error */
eb : 1, /* External bus error */
- cc : 1, /* Error occurred
+ cc : 1, /* Error occurred
* during cache-cache
* transfer.
*/
@@ -518,7 +518,7 @@
tv : 1, /* Targ addr valid */
rp : 1, /* Resp addr valid */
rq : 1, /* Req addr valid */
- bsi : 8, /* Bus error status
+ bsi : 8, /* Bus error status
* info
*/
mc : 1, /* Machine check corrected */
@@ -601,8 +601,8 @@
#define pmci_bus_external_error pme_bus.eb
#define pmci_bus_mc pme_bus.mc
-/*
- * NOTE: this min_state_save area struct only includes the 1KB
+/*
+ * NOTE: this min_state_save area struct only includes the 1KB
* architectural state save area. The other 3 KB is scratch space
* for PAL.
*/
@@ -703,12 +703,12 @@
u64 pbf_disable_bus_addr_err_signal : 1;
u64 pbf_disable_bus_data_err_check : 1;
} pal_bus_features_s;
-} pal_bus_features_u_t;
+} pal_bus_features_u_t;
extern void pal_bus_features_print (u64);
/* Provide information about configurable processor bus features */
-static inline s64
+static inline s64
ia64_pal_bus_get_features (pal_bus_features_u_t *features_avail,
pal_bus_features_u_t *features_status,
pal_bus_features_u_t *features_control)
@@ -721,13 +721,13 @@
features_status->pal_bus_features_val = iprv.v1;
if (features_control)
features_control->pal_bus_features_val = iprv.v2;
- return iprv.status;
+ return iprv.status;
}
/* Enables/disables specific processor bus features */
-static inline s64
-ia64_pal_bus_set_features (pal_bus_features_u_t feature_select)
-{
+static inline s64
+ia64_pal_bus_set_features (pal_bus_features_u_t feature_select)
+{
struct ia64_pal_retval iprv;
PAL_CALL_PHYS(iprv, PAL_BUS_SET_FEATURES, feature_select.pal_bus_features_val, 0, 0);
return iprv.status;
@@ -739,7 +739,7 @@
{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_CACHE_INFO, cache_level, cache_type, 0);
+ PAL_CALL(iprv, PAL_CACHE_INFO, cache_level, cache_type, 0);
	if (iprv.status == 0) {
conf->pcci_status = iprv.status;
@@ -747,7 +747,7 @@
conf->pcci_info_2.pcci2_data = iprv.v1;
conf->pcci_reserved = iprv.v2;
}
- return iprv.status;
+ return iprv.status;
}
@@ -757,7 +757,7 @@
{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_CACHE_PROT_INFO, cache_level, cache_type, 0);
+ PAL_CALL(iprv, PAL_CACHE_PROT_INFO, cache_level, cache_type, 0);
	if (iprv.status == 0) {
prot->pcpi_status = iprv.status;
@@ -768,106 +768,106 @@
prot->pcp_info[4].pcpi_data = iprv.v2 & 0xffffffff;
prot->pcp_info[5].pcpi_data = iprv.v2 >> 32;
}
- return iprv.status;
+ return iprv.status;
}
-
+
/*
* Flush the processor instruction or data caches. *PROGRESS must be
 * initialized to zero before calling this for the first time.
*/
-static inline s64
-ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress)
-{
+static inline s64
+ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress)
+{
struct ia64_pal_retval iprv;
- PAL_CALL_IC_OFF(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
+ PAL_CALL_IC_OFF(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
*progress = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
/* Initialize the processor controlled caches */
-static inline s64
-ia64_pal_cache_init (u64 level, u64 cache_type, u64 restrict)
-{
+static inline s64
+ia64_pal_cache_init (u64 level, u64 cache_type, u64 restrict)
+{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_CACHE_INIT, level, cache_type, restrict);
- return iprv.status;
+ PAL_CALL(iprv, PAL_CACHE_INIT, level, cache_type, restrict);
+ return iprv.status;
}
-/* Initialize the tags and data of a data or unified cache line of
- * processor controlled cache to known values without the availability
+/* Initialize the tags and data of a data or unified cache line of
+ * processor controlled cache to known values without the availability
* of backing memory.
*/
-static inline s64
-ia64_pal_cache_line_init (u64 physical_addr, u64 data_value)
-{
+static inline s64
+ia64_pal_cache_line_init (u64 physical_addr, u64 data_value)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_CACHE_LINE_INIT, physical_addr, data_value, 0);
- return iprv.status;
+ return iprv.status;
}
/* Read the data and tag of a processor controlled cache line for diags */
-static inline s64
-ia64_pal_cache_read (pal_cache_line_id_u_t line_id, u64 physical_addr)
-{
+static inline s64
+ia64_pal_cache_read (pal_cache_line_id_u_t line_id, u64 physical_addr)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_CACHE_READ, line_id.pclid_data, physical_addr, 0);
- return iprv.status;
+ return iprv.status;
}
/* Return summary information about the hierarchy of caches controlled by the processor */
-static inline s64
-ia64_pal_cache_summary (u64 *cache_levels, u64 *unique_caches)
-{
+static inline s64
+ia64_pal_cache_summary (u64 *cache_levels, u64 *unique_caches)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_CACHE_SUMMARY, 0, 0, 0);
if (cache_levels)
*cache_levels = iprv.v0;
if (unique_caches)
*unique_caches = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
/* Write the data and tag of a processor-controlled cache line for diags */
-static inline s64
-ia64_pal_cache_write (pal_cache_line_id_u_t line_id, u64 physical_addr, u64 data)
-{
- struct ia64_pal_retval iprv;
+static inline s64
+ia64_pal_cache_write (pal_cache_line_id_u_t line_id, u64 physical_addr, u64 data)
+{
+ struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_CACHE_WRITE, line_id.pclid_data, physical_addr, data);
- return iprv.status;
+ return iprv.status;
}
/* Return the parameters needed to copy relocatable PAL procedures from ROM to memory */
-static inline s64
+static inline s64
ia64_pal_copy_info (u64 copy_type, u64 num_procs, u64 num_iopics,
- u64 *buffer_size, u64 *buffer_align)
-{
+ u64 *buffer_size, u64 *buffer_align)
+{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_COPY_INFO, copy_type, num_procs, num_iopics);
+ PAL_CALL(iprv, PAL_COPY_INFO, copy_type, num_procs, num_iopics);
if (buffer_size)
*buffer_size = iprv.v0;
if (buffer_align)
*buffer_align = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
/* Copy relocatable PAL procedures from ROM to memory */
-static inline s64
-ia64_pal_copy_pal (u64 target_addr, u64 alloc_size, u64 processor, u64 *pal_proc_offset)
-{
+static inline s64
+ia64_pal_copy_pal (u64 target_addr, u64 alloc_size, u64 processor, u64 *pal_proc_offset)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_COPY_PAL, target_addr, alloc_size, processor);
if (pal_proc_offset)
*pal_proc_offset = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
/* Return the number of instruction and data debug register pairs */
-static inline s64
-ia64_pal_debug_info (u64 *inst_regs, u64 *data_regs)
-{
+static inline s64
+ia64_pal_debug_info (u64 *inst_regs, u64 *data_regs)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_DEBUG_INFO, 0, 0, 0);
if (inst_regs)
@@ -875,50 +875,50 @@
if (data_regs)
*data_regs = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
#ifdef TBD
/* Switch from IA64-system environment to IA-32 system environment */
-static inline s64
-ia64_pal_enter_ia32_env (ia32_env1, ia32_env2, ia32_env3)
-{
+static inline s64
+ia64_pal_enter_ia32_env (ia32_env1, ia32_env2, ia32_env3)
+{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_ENTER_IA_32_ENV, ia32_env1, ia32_env2, ia32_env3);
- return iprv.status;
+ PAL_CALL(iprv, PAL_ENTER_IA_32_ENV, ia32_env1, ia32_env2, ia32_env3);
+ return iprv.status;
}
#endif
/* Get unique geographical address of this processor on its bus */
-static inline s64
-ia64_pal_fixed_addr (u64 *global_unique_addr)
-{
+static inline s64
+ia64_pal_fixed_addr (u64 *global_unique_addr)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_FIXED_ADDR, 0, 0, 0);
if (global_unique_addr)
*global_unique_addr = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
/* Get base frequency of the platform if generated by the processor */
-static inline s64
-ia64_pal_freq_base (u64 *platform_base_freq)
-{
+static inline s64
+ia64_pal_freq_base (u64 *platform_base_freq)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_FREQ_BASE, 0, 0, 0);
if (platform_base_freq)
*platform_base_freq = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
/*
* Get the ratios for processor frequency, bus frequency and interval timer to
- * to base frequency of the platform
+ * to base frequency of the platform
*/
-static inline s64
+static inline s64
ia64_pal_freq_ratios (struct pal_freq_ratio *proc_ratio, struct pal_freq_ratio *bus_ratio,
- struct pal_freq_ratio *itc_ratio)
-{
+ struct pal_freq_ratio *itc_ratio)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_FREQ_RATIOS, 0, 0, 0);
if (proc_ratio)
@@ -927,20 +927,21 @@
*(u64 *)bus_ratio = iprv.v1;
if (itc_ratio)
*(u64 *)itc_ratio = iprv.v2;
- return iprv.status;
+ return iprv.status;
}
-/* Make the processor enter HALT or one of the implementation dependent low
+/* Make the processor enter HALT or one of the implementation dependent low
* power states where prefetching and execution are suspended and cache and
* TLB coherency is not maintained.
*/
-static inline s64
-ia64_pal_halt (u64 halt_state)
-{
+static inline s64
+ia64_pal_halt (u64 halt_state)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_HALT, halt_state, 0, 0);
- return iprv.status;
+ return iprv.status;
}
+
typedef union pal_power_mgmt_info_u {
u64 ppmi_data;
struct {
@@ -954,87 +955,87 @@
} pal_power_mgmt_info_u_t;
/* Return information about processor's optional power management capabilities. */
-static inline s64
-ia64_pal_halt_info (pal_power_mgmt_info_u_t *power_buf)
-{
+static inline s64
+ia64_pal_halt_info (pal_power_mgmt_info_u_t *power_buf)
+{
struct ia64_pal_retval iprv;
PAL_CALL_STK(iprv, PAL_HALT_INFO, (unsigned long) power_buf, 0, 0);
- return iprv.status;
+ return iprv.status;
}
/* Cause the processor to enter LIGHT HALT state, where prefetching and execution are
* suspended, but cache and TLB coherency is maintained.
*/
-static inline s64
-ia64_pal_halt_light (void)
-{
+static inline s64
+ia64_pal_halt_light (void)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_HALT_LIGHT, 0, 0, 0);
- return iprv.status;
+ return iprv.status;
}
/* Clear all the processor error logging registers and reset the indicator that allows
* the error logging registers to be written. This procedure also checks the pending
* machine check bit and pending INIT bit and reports their states.
*/
-static inline s64
-ia64_pal_mc_clear_log (u64 *pending_vector)
-{
+static inline s64
+ia64_pal_mc_clear_log (u64 *pending_vector)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_CLEAR_LOG, 0, 0, 0);
if (pending_vector)
*pending_vector = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
-/* Ensure that all outstanding transactions in a processor are completed or that any
+/* Ensure that all outstanding transactions in a processor are completed or that any
 * MCA due to these outstanding transactions is taken.
*/
-static inline s64
-ia64_pal_mc_drain (void)
-{
+static inline s64
+ia64_pal_mc_drain (void)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_DRAIN, 0, 0, 0);
- return iprv.status;
+ return iprv.status;
}
/* Return the machine check dynamic processor state */
-static inline s64
-ia64_pal_mc_dynamic_state (u64 offset, u64 *size, u64 *pds)
-{
+static inline s64
+ia64_pal_mc_dynamic_state (u64 offset, u64 *size, u64 *pds)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_DYNAMIC_STATE, offset, 0, 0);
if (size)
*size = iprv.v0;
if (pds)
*pds = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
/* Return processor machine check information */
-static inline s64
-ia64_pal_mc_error_info (u64 info_index, u64 type_index, u64 *size, u64 *error_info)
-{
+static inline s64
+ia64_pal_mc_error_info (u64 info_index, u64 type_index, u64 *size, u64 *error_info)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_ERROR_INFO, info_index, type_index, 0);
if (size)
*size = iprv.v0;
if (error_info)
- *error_info = iprv.v1;
- return iprv.status;
+ *error_info = iprv.v1;
+ return iprv.status;
}
/* Inform PALE_CHECK whether a machine check is expected so that PALE_CHECK will not
* attempt to correct any expected machine checks.
*/
-static inline s64
-ia64_pal_mc_expected (u64 expected, u64 *previous)
-{
+static inline s64
+ia64_pal_mc_expected (u64 expected, u64 *previous)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_EXPECTED, expected, 0, 0);
if (previous)
*previous = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
/* Register a platform dependent location with PAL to which it can save
@@ -1042,39 +1043,39 @@
* event.
*/
static inline s64
-ia64_pal_mc_register_mem (u64 physical_addr)
-{
+ia64_pal_mc_register_mem (u64 physical_addr)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_REGISTER_MEM, physical_addr, 0, 0);
- return iprv.status;
+ return iprv.status;
}
/* Restore minimal architectural processor state, set CMC interrupt if necessary
* and resume execution
*/
-static inline s64
-ia64_pal_mc_resume (u64 set_cmci, u64 save_ptr)
-{
+static inline s64
+ia64_pal_mc_resume (u64 set_cmci, u64 save_ptr)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MC_RESUME, set_cmci, save_ptr, 0);
- return iprv.status;
+ return iprv.status;
}
/* Return the memory attributes implemented by the processor */
-static inline s64
-ia64_pal_mem_attrib (u64 *mem_attrib)
-{
+static inline s64
+ia64_pal_mem_attrib (u64 *mem_attrib)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_MEM_ATTRIB, 0, 0, 0);
if (mem_attrib)
*mem_attrib = iprv.v0 & 0xff;
- return iprv.status;
+ return iprv.status;
}
/* Return the amount of memory needed for second phase of processor
* self-test and the required alignment of memory.
*/
-static inline s64
+static inline s64
ia64_pal_mem_for_test (u64 *bytes_needed, u64 *alignment)
{
struct ia64_pal_retval iprv;
@@ -1083,60 +1084,60 @@
*bytes_needed = iprv.v0;
if (alignment)
*alignment = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
typedef union pal_perf_mon_info_u {
u64 ppmi_data;
struct {
u64 generic : 8,
- width : 8,
- cycles : 8,
+ width : 8,
+ cycles : 8,
retired : 8,
reserved : 32;
} pal_perf_mon_info_s;
} pal_perf_mon_info_u_t;
-
+
/* Return the performance monitor information about what can be counted
* and how to configure the monitors to count the desired events.
*/
-static inline s64
-ia64_pal_perf_mon_info (u64 *pm_buffer, pal_perf_mon_info_u_t *pm_info)
-{
+static inline s64
+ia64_pal_perf_mon_info (u64 *pm_buffer, pal_perf_mon_info_u_t *pm_info)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_PERF_MON_INFO, (unsigned long) pm_buffer, 0, 0);
if (pm_info)
pm_info->ppmi_data = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
/* Specifies the physical address of the processor interrupt block
* and I/O port space.
*/
-static inline s64
-ia64_pal_platform_addr (u64 type, u64 physical_addr)
-{
+static inline s64
+ia64_pal_platform_addr (u64 type, u64 physical_addr)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_PLATFORM_ADDR, type, physical_addr, 0);
- return iprv.status;
+ return iprv.status;
}
/* Set the SAL PMI entrypoint in memory */
-static inline s64
-ia64_pal_pmi_entrypoint (u64 sal_pmi_entry_addr)
-{
+static inline s64
+ia64_pal_pmi_entrypoint (u64 sal_pmi_entry_addr)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_PMI_ENTRYPOINT, sal_pmi_entry_addr, 0, 0);
- return iprv.status;
+ return iprv.status;
}
struct pal_features_s;
/* Provide information about configurable processor features */
-static inline s64
-ia64_pal_proc_get_features (u64 *features_avail,
- u64 *features_status,
+static inline s64
+ia64_pal_proc_get_features (u64 *features_avail,
+ u64 *features_status,
u64 *features_control)
-{
+{
struct ia64_pal_retval iprv;
PAL_CALL_PHYS(iprv, PAL_PROC_GET_FEATURES, 0, 0, 0);
	if (iprv.status == 0) {
@@ -1144,16 +1145,16 @@
*features_status = iprv.v1;
*features_control = iprv.v2;
}
- return iprv.status;
+ return iprv.status;
}
/* Enable/disable processor dependent features */
-static inline s64
-ia64_pal_proc_set_features (u64 feature_select)
-{
+static inline s64
+ia64_pal_proc_set_features (u64 feature_select)
+{
struct ia64_pal_retval iprv;
PAL_CALL_PHYS(iprv, PAL_PROC_SET_FEATURES, feature_select, 0, 0);
- return iprv.status;
+ return iprv.status;
}
/*
@@ -1162,7 +1163,7 @@
*/
typedef struct ia64_ptce_info_s {
u64 base;
- u32 count[2];
+ u32 count[2];
u32 stride[2];
} ia64_ptce_info_t;
@@ -1189,9 +1190,9 @@
}
/* Return info about implemented application and control registers. */
-static inline s64
-ia64_pal_register_info (u64 info_request, u64 *reg_info_1, u64 *reg_info_2)
-{
+static inline s64
+ia64_pal_register_info (u64 info_request, u64 *reg_info_1, u64 *reg_info_2)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_REGISTER_INFO, info_request, 0, 0);
if (reg_info_1)
@@ -1199,7 +1200,7 @@
if (reg_info_2)
*reg_info_2 = iprv.v1;
return iprv.status;
-}
+}
typedef union pal_hints_u {
u64 ph_data;
@@ -1210,62 +1211,62 @@
} pal_hints_s;
} pal_hints_u_t;
-/* Return information about the register stack and RSE for this processor
+/* Return information about the register stack and RSE for this processor
* implementation.
*/
-static inline s64
+static inline s64
ia64_pal_rse_info (u64 *num_phys_stacked, pal_hints_u_t *hints)
-{
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_RSE_INFO, 0, 0, 0);
if (num_phys_stacked)
*num_phys_stacked = iprv.v0;
if (hints)
hints->ph_data = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
-/* Cause the processor to enter SHUTDOWN state, where prefetching and execution are
+/* Cause the processor to enter SHUTDOWN state, where prefetching and execution are
* suspended, but cause cache and TLB coherency to be maintained.
* This is usually called in IA-32 mode.
*/
-static inline s64
-ia64_pal_shutdown (void)
-{
+static inline s64
+ia64_pal_shutdown (void)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_SHUTDOWN, 0, 0, 0);
- return iprv.status;
+ return iprv.status;
}
/* Perform the second phase of processor self-test. */
-static inline s64
+static inline s64
ia64_pal_test_proc (u64 test_addr, u64 test_size, u64 attributes, u64 *self_test_state)
{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_TEST_PROC, test_addr, test_size, attributes);
if (self_test_state)
*self_test_state = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
typedef union pal_version_u {
u64 pal_version_val;
struct {
- u64 pv_pal_b_rev : 8;
+ u64 pv_pal_b_rev : 8;
u64 pv_pal_b_model : 8;
u64 pv_reserved1 : 8;
u64 pv_pal_vendor : 8;
u64 pv_pal_a_rev : 8;
u64 pv_pal_a_model : 8;
- u64 pv_reserved2 : 16;
+ u64 pv_reserved2 : 16;
} pal_version_s;
} pal_version_u_t;
/* Return PAL version information */
-static inline s64
-ia64_pal_version (pal_version_u_t *pal_min_version, pal_version_u_t *pal_cur_version)
-{
+static inline s64
+ia64_pal_version (pal_version_u_t *pal_min_version, pal_version_u_t *pal_cur_version)
+{
struct ia64_pal_retval iprv;
PAL_CALL_PHYS(iprv, PAL_VERSION, 0, 0, 0);
if (pal_min_version)
@@ -1274,7 +1275,7 @@
if (pal_cur_version)
pal_cur_version->pal_version_val = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
typedef union pal_tc_info_u {
@@ -1288,8 +1289,8 @@
reduce_tr : 1,
reserved : 29;
} pal_tc_info_s;
-} pal_tc_info_u_t;
-
+} pal_tc_info_u_t;
+
#define tc_reduce_tr pal_tc_info_s.reduce_tr
#define tc_unified pal_tc_info_s.unified
#define tc_pf pal_tc_info_s.pf
@@ -1298,10 +1299,10 @@
#define tc_num_sets pal_tc_info_s.num_sets
-/* Return information about the virtual memory characteristics of the processor
+/* Return information about the virtual memory characteristics of the processor
* implementation.
*/
-static inline s64
+static inline s64
ia64_pal_vm_info (u64 tc_level, u64 tc_type, pal_tc_info_u_t *tc_info, u64 *tc_pages)
{
struct ia64_pal_retval iprv;
@@ -1309,14 +1310,14 @@
if (tc_info)
tc_info->pti_val = iprv.v0;
if (tc_pages)
- *tc_pages = iprv.v1;
- return iprv.status;
+ *tc_pages = iprv.v1;
+ return iprv.status;
}
-/* Get page size information about the virtual memory characteristics of the processor
+/* Get page size information about the virtual memory characteristics of the processor
* implementation.
*/
-static inline s64
+static inline s64
ia64_pal_vm_page_size (u64 *tr_pages, u64 *vw_pages)
{
struct ia64_pal_retval iprv;
@@ -1324,8 +1325,8 @@
if (tr_pages)
*tr_pages = iprv.v0;
if (vw_pages)
- *vw_pages = iprv.v1;
- return iprv.status;
+ *vw_pages = iprv.v1;
+ return iprv.status;
}
typedef union pal_vm_info_1_u {
@@ -1348,23 +1349,23 @@
struct {
u64 impl_va_msb : 8,
rid_size : 8,
- reserved : 48;
+ reserved : 48;
} pal_vm_info_2_s;
} pal_vm_info_2_u_t;
-
-/* Get summary information about the virtual memory characteristics of the processor
+
+/* Get summary information about the virtual memory characteristics of the processor
* implementation.
*/
-static inline s64
-ia64_pal_vm_summary (pal_vm_info_1_u_t *vm_info_1, pal_vm_info_2_u_t *vm_info_2)
-{
+static inline s64
+ia64_pal_vm_summary (pal_vm_info_1_u_t *vm_info_1, pal_vm_info_2_u_t *vm_info_2)
+{
struct ia64_pal_retval iprv;
PAL_CALL(iprv, PAL_VM_SUMMARY, 0, 0, 0);
if (vm_info_1)
vm_info_1->pvi1_val = iprv.v0;
if (vm_info_2)
vm_info_2->pvi2_val = iprv.v1;
- return iprv.status;
+ return iprv.status;
}
typedef union pal_itr_valid_u {
@@ -1379,14 +1380,14 @@
} pal_tr_valid_u_t;
/* Read a translation register */
-static inline s64
+static inline s64
ia64_pal_tr_read (u64 reg_num, u64 tr_type, u64 *tr_buffer, pal_tr_valid_u_t *tr_valid)
{
struct ia64_pal_retval iprv;
PAL_CALL_PHYS_STK(iprv, PAL_VM_TR_READ, reg_num, tr_type,(u64)__pa(tr_buffer));
if (tr_valid)
tr_valid->piv_val = iprv.v0;
- return iprv.status;
+ return iprv.status;
}
static inline s64
diff -urN linux-davidm/include/asm-ia64/perfmon.h linux-2.4.10-lia/include/asm-ia64/perfmon.h
--- linux-davidm/include/asm-ia64/perfmon.h Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/include/asm-ia64/perfmon.h Mon Sep 24 22:07:39 2001
@@ -9,7 +9,7 @@
#include <linux/types.h>
/*
- * Structure used to define a context
+ * Request structure used to define a context
*/
typedef struct {
unsigned long smpl_entries; /* how many entries in sampling buffer */
@@ -23,7 +23,7 @@
} pfreq_context_t;
/*
- * Structure used to configure a PMC or PMD
+ * Request structure used to write/read a PMC or PMD
*/
typedef struct {
unsigned long reg_num; /* which register */
@@ -41,11 +41,16 @@
pfreq_reg_t pfr_reg; /* request to configure a PMD/PMC */
} perfmon_req_t;
+#ifdef __KERNEL__
+
extern void pfm_save_regs (struct task_struct *);
extern void pfm_load_regs (struct task_struct *);
-extern int pfm_inherit (struct task_struct *);
+extern int pfm_inherit (struct task_struct *, struct pt_regs *);
extern void pfm_context_exit (struct task_struct *);
extern void pfm_flush_regs (struct task_struct *);
+extern void pfm_cleanup_notifiers (struct task_struct *);
+
+#endif /* __KERNEL__ */
#endif /* _ASM_IA64_PERFMON_H */
diff -urN linux-davidm/include/asm-ia64/pgalloc.h linux-2.4.10-lia/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Tue Jul 31 10:30:09 2001
+++ linux-2.4.10-lia/include/asm-ia64/pgalloc.h Mon Sep 24 23:02:56 2001
@@ -9,7 +9,7 @@
* in <asm/page.h> (currently 8192).
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 2000, Goutham Rao <goutham.rao@intel.com>
*/
@@ -164,11 +164,6 @@
#else
# define flush_tlb_all() __flush_tlb_all()
#endif
-
-/*
- * Serialize usage of ptc.g:
- */
-extern spinlock_t ptcg_lock;
/*
* Flush a specified user mapping
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.10-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/processor.h Mon Sep 24 22:26:48 2001
@@ -190,6 +190,7 @@
#include <asm/page.h>
#include <asm/rse.h>
#include <asm/unwind.h>
+#include <asm/atomic.h>
/* like above but expressed as bitfields for more efficient access: */
struct ia64_psr {
@@ -332,6 +333,9 @@
__u64 map_base; /* base address for get_unmapped_area() */
__u64 task_size; /* limit for task size */
struct siginfo *siginfo; /* current siginfo struct for ptrace() */
+#if JITTER_BUG
+ __u64 mem;
+#endif
#ifdef CONFIG_IA32_SUPPORT
__u64 eflag; /* IA32 EFLAGS reg */
@@ -353,9 +357,10 @@
#ifdef CONFIG_PERFMON
__u64 pmc[IA64_NUM_PMC_REGS];
__u64 pmd[IA64_NUM_PMD_REGS];
- unsigned long pfm_pend_notify; /* non-zero if we need to notify and block */
+ unsigned long pfm_must_block; /* non-zero if we need to block on overflow */
void *pfm_context; /* pointer to detailed PMU context */
-# define INIT_THREAD_PM {0, }, {0, }, 0, 0,
+ atomic_t pfm_notifiers_check; /* indicate if release_thread must check tasklist */
+# define INIT_THREAD_PM {0, }, {0, }, 0, 0, {0},
#else
# define INIT_THREAD_PM
#endif
@@ -815,9 +820,22 @@
#define THREAD_SIZE IA64_STK_OFFSET
/* NOTE: The task struct and the stacks are allocated together. */
-#define alloc_task_struct() \
+#if JITTER_BUG
+# define alloc_task_struct() \
+({ \
+ unsigned long color, mem = __get_free_pages(GFP_KERNEL, IA64_TASK_STRUCT_LOG_NUM_PAGES); \
+ struct task_struct *p; \
+ color = ia64_fetch_and_add(1, &task_color) % NUM_TASK_COLORS; \
+ p = (struct task_struct *) (mem + color*MAX_JITTER_SIZE/(NUM_TASK_COLORS - 1)); \
+ p->thread.mem = mem; \
+ p; \
+})
+# define free_task_struct(p) free_pages((p)->thread.mem, IA64_TASK_STRUCT_LOG_NUM_PAGES)
+#else
+# define alloc_task_struct() \
((struct task_struct *) __get_free_pages(GFP_KERNEL, IA64_TASK_STRUCT_LOG_NUM_PAGES))
-#define free_task_struct(p) free_pages((unsigned long)(p), IA64_TASK_STRUCT_LOG_NUM_PAGES)
+# define free_task_struct(p) free_pages((unsigned long)(p), IA64_TASK_STRUCT_LOG_NUM_PAGES)
+#endif
#define get_task_struct(tsk) atomic_inc(&virt_to_page(tsk)->count)
#define init_task (init_task_union.task)
@@ -970,45 +988,33 @@
return result;
}
-#define ARCH_HAS_PREFETCH
-#define ARCH_HAS_PREFETCHW
-#define ARCH_HAS_SPINLOCK_PREFETCH
-
-#define PREFETCH_STRIDE 256
-
-extern inline void
-prefetch (const void *x)
-{
- __asm__ __volatile__ ("lfetch [%0]" : : "r"(x));
-}
-
-extern inline void
-prefetchw (const void *x)
+static inline __u64
+ia64_tpa (__u64 addr)
{
- __asm__ __volatile__ ("lfetch.excl [%0]" : : "r"(x));
+ __u64 result;
+ asm ("tpa %0=%1" : "=r"(result) : "r"(addr));
+ return result;
}
-#define prefetch_spin_lock(x) prefetchw(x)
-
-
#define ARCH_HAS_PREFETCH
#define ARCH_HAS_PREFETCHW
#define ARCH_HAS_SPINLOCK_PREFETCH
#define PREFETCH_STRIDE 256
-extern inline void prefetch(const void *x)
+extern inline void
+prefetch (const void *x)
{
__asm__ __volatile__ ("lfetch [%0]" : : "r"(x));
}
-
-extern inline void prefetchw(const void *x)
+
+extern inline void
+prefetchw (const void *x)
{
__asm__ __volatile__ ("lfetch.excl [%0]" : : "r"(x));
}
-#define spin_lock_prefetch(x) prefetchw(x)
+#define spin_lock_prefetch(x) prefetchw(x)
-
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_IA64_PROCESSOR_H */
diff -urN linux-davidm/include/asm-ia64/sal.h linux-2.4.10-lia/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/sal.h Mon Sep 24 22:57:49 2001
@@ -179,10 +179,8 @@
} ia64_sal_ptc_domain_info_t;
typedef struct ia64_sal_ptc_domain_proc_entry {
- u64 reserved : 16;
- u64 eid : 8; /* eid of processor */
u64 id : 8; /* id of processor */
- u64 ignored : 32;
+ u64 eid : 8; /* eid of processor */
} ia64_sal_ptc_domain_proc_entry_t;
diff -urN linux-davidm/include/asm-ia64/semaphore.h linux-2.4.10-lia/include/asm-ia64/semaphore.h
--- linux-davidm/include/asm-ia64/semaphore.h Tue Apr 17 17:19:31 2001
+++ linux-2.4.10-lia/include/asm-ia64/semaphore.h Mon Sep 24 23:02:56 2001
@@ -63,8 +63,6 @@
extern int __down_trylock (struct semaphore * sem);
extern void __up (struct semaphore * sem);
-extern spinlock_t semaphore_wake_lock;
-
/*
* Atomically decrement the semaphore's count. If it goes negative,
* block the calling thread in the TASK_UNINTERRUPTIBLE state.
diff -urN linux-davidm/include/asm-ia64/sembuf.h linux-2.4.10-lia/include/asm-ia64/sembuf.h
--- linux-davidm/include/asm-ia64/sembuf.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.10-lia/include/asm-ia64/sembuf.h Mon Sep 24 22:10:28 2001
@@ -1,7 +1,7 @@
#ifndef _ASM_IA64_SEMBUF_H
#define _ASM_IA64_SEMBUF_H
-/*
+/*
* The semid64_ds structure for IA-64 architecture.
* Note extra padding because this structure is passed back and forth
* between kernel and user space.
diff -urN linux-davidm/include/asm-ia64/shmbuf.h linux-2.4.10-lia/include/asm-ia64/shmbuf.h
--- linux-davidm/include/asm-ia64/shmbuf.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.10-lia/include/asm-ia64/shmbuf.h Mon Sep 24 22:10:36 2001
@@ -1,7 +1,7 @@
#ifndef _ASM_IA64_SHMBUF_H
#define _ASM_IA64_SHMBUF_H
-/*
+/*
* The shmid64_ds structure for IA-64 architecture.
* Note extra padding because this structure is passed back and forth
* between kernel and user space.
diff -urN linux-davidm/include/asm-ia64/sigcontext.h linux-2.4.10-lia/include/asm-ia64/sigcontext.h
--- linux-davidm/include/asm-ia64/sigcontext.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/sigcontext.h Mon Sep 24 22:10:45 2001
@@ -18,6 +18,19 @@
# ifndef __ASSEMBLY__
+/*
+ * Note on handling of register backing store: sc_ar_bsp contains the address that would
+ * be found in ar.bsp after executing a "cover" instruction in the context in which the
+ * signal was raised. If signal delivery required switching to an alternate signal stack
+ * (sc_rbs_base is not NULL), the "dirty" partition (as it would exist after executing the
+ * imaginary "cover" instruction) is backed by the *alternate* signal stack, not the
+ * original one. In this case, sc_rbs_base contains the base address of the new register
+ * backing store. The number of registers in the dirty partition can be calculated as:
+ *
+ * ndirty = ia64_rse_num_regs(sc_rbs_base, sc_rbs_base + (sc_loadrs >> 16))
+ *
+ */
+
struct sigcontext {
unsigned long sc_flags; /* see manifest constants above */
unsigned long sc_nat; /* bit i = 1 iff scratch reg gr[i] is a NaT */
@@ -40,8 +53,10 @@
unsigned long sc_gr[32]; /* general registers (static partition) */
struct ia64_fpreg sc_fr[128]; /* floating-point registers */
- unsigned long sc_rsvd[16]; /* reserved for future use */
+ unsigned long sc_rbs_base; /* NULL or new base of sighandler's rbs */
+ unsigned long sc_loadrs; /* see description above */
+ unsigned long sc_rsvd[14]; /* reserved for future use */
/*
* The mask must come last so we can increase _NSIG_WORDS
* without breaking binary compatibility.
diff -urN linux-davidm/include/asm-ia64/signal.h linux-2.4.10-lia/include/asm-ia64/signal.h
--- linux-davidm/include/asm-ia64/signal.h Tue Jul 31 10:30:09 2001
+++ linux-2.4.10-lia/include/asm-ia64/signal.h Mon Sep 24 22:10:57 2001
@@ -2,8 +2,8 @@
#define _ASM_IA64_SIGNAL_H
/*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Unfortunately, this file is being included by bits/signal.h in
* glibc-2.x. Hence the #ifdef __KERNEL__ ugliness.
@@ -80,14 +80,24 @@
#define SA_RESTORER 0x04000000
-/*
+/*
* sigaltstack controls
*/
#define SS_ONSTACK 1
#define SS_DISABLE 2
-#define MINSIGSTKSZ 2048
-#define SIGSTKSZ 8192
+/*
+ * The minimum stack size needs to be fairly large because we want to
+ * be sure that an app compiled for today's CPUs will continue to run
+ * on all future CPU models. The CPU model matters because the signal
+ * frame needs to have space for the complete machine state, including
+ * all physical stacked registers. The number of physical stacked
+ * registers is CPU model dependent, but given that the width of
+ * ar.rsc.loadrs is 14 bits, we can assume that they'll never take up
+ * more than 16KB of space.
+ */
+#define MINSIGSTKSZ 131072 /* min. stack size for sigaltstack() */
+#define SIGSTKSZ 262144 /* default stack size for sigaltstack() */
#ifdef __KERNEL__
diff -urN linux-davidm/include/asm-ia64/smp.h linux-2.4.10-lia/include/asm-ia64/smp.h
--- linux-davidm/include/asm-ia64/smp.h Tue Jul 31 10:30:09 2001
+++ linux-2.4.10-lia/include/asm-ia64/smp.h Mon Sep 24 22:26:48 2001
@@ -4,7 +4,7 @@
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#ifndef _ASM_IA64_SMP_H
#define _ASM_IA64_SMP_H
@@ -18,6 +18,7 @@
#include <linux/kernel.h>
#include <asm/io.h>
+#include <asm/param.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
@@ -107,7 +108,12 @@
return lid.f.id << 8 | lid.f.eid;
}
-#define NO_PROC_ID (-1)
+#define NO_PROC_ID 0xffffffff /* no processor magic marker */
+
+/*
+ * Extra overhead to move a task from one cpu to another (due to TLB and cache misses).
+ * Expressed in "negative nice value" units (larger number means higher priority/penalty).
+ */
#define PROC_CHANGE_PENALTY 20
extern void __init init_smp_config (void);
diff -urN linux-davidm/include/asm-ia64/smplock.h linux-2.4.10-lia/include/asm-ia64/smplock.h
--- linux-davidm/include/asm-ia64/smplock.h Thu Apr 5 12:51:47 2001
+++ linux-2.4.10-lia/include/asm-ia64/smplock.h Mon Sep 24 23:02:56 2001
@@ -17,7 +17,7 @@
/*
* Release global kernel lock and global interrupt lock
*/
-static __inline__ void
+static __inline__ void
release_kernel_lock(struct task_struct *task, int cpu)
{
if (task->lock_depth >= 0)
@@ -29,7 +29,7 @@
/*
* Re-acquire the kernel lock
*/
-static __inline__ void
+static __inline__ void
reacquire_kernel_lock(struct task_struct *task)
{
if (task->lock_depth >= 0)
@@ -43,14 +43,14 @@
* so we only need to worry about other
* CPU's.
*/
-static __inline__ void
+static __inline__ void
lock_kernel(void)
{
if (!++current->lock_depth)
spin_lock(&kernel_flag);
}
-static __inline__ void
+static __inline__ void
unlock_kernel(void)
{
if (--current->lock_depth < 0)
diff -urN linux-davidm/include/asm-ia64/spinlock.h linux-2.4.10-lia/include/asm-ia64/spinlock.h
--- linux-davidm/include/asm-ia64/spinlock.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/asm-ia64/spinlock.h Mon Sep 24 22:56:20 2001
@@ -84,7 +84,7 @@
"mov r29 = 1\n" \
";;\n" \
"1:\n" \
- "ld4 r2 = [%0]\n" \
+ "ld4.bias r2 = [%0]\n" \
";;\n" \
"cmp4.eq p0,p7 = r0,r2\n" \
"(p7) br.cond.spnt.few 1b \n" \
diff -urN linux-davidm/include/linux/highmem.h linux-2.4.10-lia/include/linux/highmem.h
--- linux-davidm/include/linux/highmem.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/linux/highmem.h Mon Sep 24 23:02:56 2001
@@ -46,7 +46,7 @@
static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *addr = kmap_atomic(page, KM_USER0);
- clear_user_page(addr, vaddr);
+ clear_user_page(addr, vaddr, page);
kunmap_atomic(addr, KM_USER0);
}
@@ -88,7 +88,7 @@
vfrom = kmap_atomic(from, KM_USER0);
vto = kmap_atomic(to, KM_USER1);
- copy_user_page(vto, vfrom, vaddr);
+ copy_user_page(vto, vfrom, vaddr, to);
kunmap_atomic(vfrom, KM_USER0);
kunmap_atomic(vto, KM_USER1);
}
diff -urN linux-davidm/include/linux/list.h linux-2.4.10-lia/include/linux/list.h
--- linux-davidm/include/linux/list.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/linux/list.h Mon Sep 24 22:26:48 2001
@@ -5,8 +5,6 @@
#include <linux/prefetch.h>
-#include <linux/prefetch.h>
-
/*
* Simple doubly linked list implementation.
*
diff -urN linux-davidm/include/linux/smp.h linux-2.4.10-lia/include/linux/smp.h
--- linux-davidm/include/linux/smp.h Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/include/linux/smp.h Mon Sep 24 22:26:48 2001
@@ -35,11 +35,6 @@
extern void smp_boot_cpus(void);
/*
- * Processor call in. Must hold processors until ..
- */
-extern void smp_callin(void);
-
-/*
* Multiprocessors may now schedule
*/
extern void smp_commence(void);
@@ -56,10 +51,6 @@
extern volatile int smp_threads_ready;
extern int smp_num_cpus;
-
-extern volatile unsigned long smp_msg_data;
-extern volatile int smp_src_cpu;
-extern volatile int smp_msg_id;
#define MSG_ALL_BUT_SELF 0x8000 /* Assume <32768 CPU's */
#define MSG_ALL 0x8001
diff -urN linux-davidm/kernel/module.c linux-2.4.10-lia/kernel/module.c
--- linux-davidm/kernel/module.c Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/kernel/module.c Mon Sep 24 21:29:34 2001
@@ -245,7 +245,6 @@
void __init init_modules(void)
{
kernel_module.nsyms = __stop___ksymtab - __start___ksymtab;
-
arch_init_modules(&kernel_module);
}
diff -urN linux-davidm/kernel/printk.c linux-2.4.10-lia/kernel/printk.c
--- linux-davidm/kernel/printk.c Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/kernel/printk.c Mon Sep 24 21:29:44 2001
@@ -16,8 +16,6 @@
* 01Mar01 Andrew Morton <andrewm@uow.edu.au>
*/
-#include <linux/config.h>
-
#include <linux/mm.h>
#include <linux/tty.h>
#include <linux/tty_driver.h>
@@ -312,6 +310,12 @@
__call_console_drivers(start, end);
}
}
+#ifdef CONFIG_IA64_EARLY_PRINTK
+ if (!console_drivers) {
+ static void early_printk (const char *str, size_t len);
+ early_printk(&LOG_BUF(start), end - start);
+ }
+#endif
}
/*
@@ -662,12 +666,13 @@
static int current_ypos = VGALINES, current_xpos = 0;
void
-early_printk (const char *str)
+early_printk (const char *str, size_t len)
{
char c;
int i, k, j;
- while ((c = *str++) != '\0') {
+ while (len-- > 0) {
+ c = *str++;
if (current_ypos >= VGALINES) {
/* scroll 1 line up */
for (k = 1, j = 0; k < VGALINES; k++, j++) {
diff -urN linux-davidm/kernel/sched.c linux-2.4.10-lia/kernel/sched.c
--- linux-davidm/kernel/sched.c Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/kernel/sched.c Mon Sep 24 21:29:47 2001
@@ -540,9 +540,6 @@
spin_lock_prefetch(&runqueue_lock);
-
- prefetch_spin_lock(&runqueue_lock);
-
if (!current->active_mm) BUG();
need_resched_back:
prev = current;
diff -urN linux-davidm/kernel/signal.c linux-2.4.10-lia/kernel/signal.c
--- linux-davidm/kernel/signal.c Mon Sep 24 15:08:37 2001
+++ linux-2.4.10-lia/kernel/signal.c Mon Sep 24 22:13:29 2001
@@ -1104,8 +1104,19 @@
ss_sp = NULL;
} else {
error = -ENOMEM;
+#ifdef __ia64__
+ /*
+ * XXX fix me: due to an oversight, MINSIGSTKSZ used to be defined
+ * as 2KB, which is far too small. This was after Linux kernel
+ * 2.4.9 but since there are a fair number of ia64 apps out there,
+ * we continue to allow "too" small sigaltstacks for a while.
+ */
+ if (ss_size < 2048)
+ goto out;
+#else
if (ss_size < MINSIGSTKSZ)
goto out;
+#endif
}
current->sas_ss_sp = (unsigned long) ss_sp;
diff -urN linux-davidm/lib/Makefile linux-2.4.10-lia/lib/Makefile
--- linux-davidm/lib/Makefile Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/lib/Makefile Mon Sep 24 21:29:52 2001
@@ -10,7 +10,7 @@
export-objs := cmdline.o dec_and_lock.o rwsem-spinlock.o rwsem.o
-obj-y := errno.o ctype.o string.o vsprintf.o brlock.o cmdline.o bust_spinlocks.o rbtree.o
+obj-y := errno.o ctype.o string.o vsprintf.o brlock.o cmdline.o bust_spinlocks.o rbtree.o crc32.o
obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
diff -urN linux-davidm/lib/dec_and_lock.c linux-2.4.10-lia/lib/dec_and_lock.c
--- linux-davidm/lib/dec_and_lock.c Mon Sep 24 15:08:39 2001
+++ linux-2.4.10-lia/lib/dec_and_lock.c Mon Sep 24 22:13:39 2001
@@ -26,7 +26,6 @@
* store-conditional approach, for example.
*/
-#ifndef atomic_dec_and_lock
int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
spin_lock(lock);
@@ -37,4 +36,3 @@
}
EXPORT_SYMBOL(atomic_dec_and_lock);
-#endif
diff -urN linux-davidm/mm/memory.c linux-2.4.10-lia/mm/memory.c
--- linux-davidm/mm/memory.c Mon Sep 24 22:31:45 2001
+++ linux-2.4.10-lia/mm/memory.c Mon Sep 24 21:30:00 2001
@@ -127,7 +127,7 @@
pmd = pmd_offset(dir, 0);
pgd_clear(dir);
for (j = 0; j < PTRS_PER_PMD ; j++) {
- prefetchw(pmd+j+(PREFETCH_STRIDE/16));
+ prefetchw(pmd + j + PREFETCH_STRIDE/sizeof(*pmd));
free_one_pmd(pmd+j);
}
pmd_free(pmd);
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (68 preceding siblings ...)
2001-09-25 7:13 ` [Linux-ia64] kernel update (relative to 2.4.10) David Mosberger
@ 2001-09-25 7:17 ` David Mosberger
2001-09-25 12:17 ` Andreas Schwab
` (145 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-09-25 7:17 UTC (permalink / raw)
To: linux-ia64
[Argh, the linux-ia64 mailer didn't like the original mail because
the patch was too big. Here follows the same without patch...]
Here is a long-awaited kernel update which brings us in sync with
2.4.10. As usual, it's available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.10-ia64-010924.diff*
The major changes this time include:
- Major ia32 subsystem update by yours truly:
- enable use of emulator prefix: for a non-directory file with
path PATH, the ia32 subsystem now uses /emul/ia32-linux/PATH
if the file exists
- fix a nasty bug which caused ia32 binaries to subtly corrupt
a page-table page (this was the reason "realplay" often crashed
on start up, but the bug could affect any process, not just
ia32 processes)
- make the address space layout of ia32 tasks match that of
a real linux/x86 task; in particular, the stack now starts
at 0xc0000000 (and grows towards lower addresses)
- fix emulated mmap() and mprotect() to work when page size is 4K
and work better when the page size is larger
- fix emulated ptrace() so x86 strace() works
- fill in missing system calls (pread, pwrite, sendfile, SuS-compliant
getrlimit, mmap2, {,f}truncate64, {,l,f}stat64, 32-bit uid/gid
versions of lchown, getuid etc., pivot_root, mincore, madvise,
getdents64, fcntl64) and drop some unsupported ones
(module related syscalls, vm86, bdflush)
- support signal delivery with SA_RESTORER sigreturn
- don't align shared mmap()s to 1MB---this filled up a 32-bit address
space too quickly
with these fixes in place, I'm able to successfully run the ia32
versions of realplay, OpenOffice, mozilla, netscape, acrobat, strace,
Intel compilers, etc. But as usual, your mileage may vary.
- update perfmon to v0.3 (Stephane Eranian)
- SN updates from SGI (just a starter, more to follow...)
- be more consistent about using i-cache prefetches (use .few for branches
within a routine, .many otherwise)
- fix alternate stack signal delivery (used to break if the old stack
didn't have enough memory to hold the dirty stacked registers); ironically,
this fix probably also speeds up this case (it's no longer necessary
to do a flushrs...)
- McKinley related I/O SAPIC updates from Alex Williamson
- fix irq bug which caused console keyboard to report timeout when
pressing the caps lock key (Richard Hirst)
- fix ptrace race condition (thanks to Shoichi Sakon for providing
a test case that reproduced this problem)
- turn on dcr.lc by default
- many sync-ups for 2.4.10 (thanks to KOCHI Takayoshi for sending a timely
ACPI update)
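[Editor's note: the ptrace race fix listed above follows the pattern flagged earlier in this thread: a task looked up under tasklist_lock must be pinned with a reference before the lock is dropped, or it can die underneath the caller. A minimal userspace model of the safe pattern; all names here (toy_task, find_task, lookup_pinned) are illustrative stand-ins, not the real kernel API.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for task lookup and refcounting; not the kernel API. */
struct toy_task { int pid; int refcount; };

static struct toy_task tasks[] = { { 1, 1 }, { 2, 1 } };

static struct toy_task *find_task(int pid)
{
	for (size_t i = 0; i < sizeof(tasks)/sizeof(tasks[0]); i++)
		if (tasks[i].pid == pid)
			return &tasks[i];
	return NULL;
}

/* Safe pattern: take the reference while the "tasklist lock" is still
 * held, so the task cannot be freed between lookup and later use. */
static struct toy_task *lookup_pinned(int pid)
{
	/* read_lock(&tasklist_lock); */
	struct toy_task *t = find_task(pid);
	if (t)
		t->refcount++;	/* pin before unlocking */
	/* read_unlock(&tasklist_lock); */
	return t;
}
```

The caller drops the reference when done; the window between unlock and first dereference is then harmless.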
I should warn that this kernel has received only moderate testing so
far. The remote access server I'm normally using to access a test
ia64 machine crashed just shortly before making the patch, so I wasn't
able to do my usual round of testing. Having said that, the kernel
seems to work fine on a dual Itanium Big Sur, so there is hope. As
you know, Linus and co decided to basically rewrite the VM system. As
far as I can tell, this hasn't had any negative impact on ia64, but
I'd recommend doing a lot of testing before declaring this kernel as
ready for prime time.
Oh, and as usual, the patch below is FYI only. See the full patch
for definitive answers.
Enjoy,
--david
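[Editor's note: the emulator-prefix lookup described in the update above can be sketched in userspace terms as follows. The /emul/ia32-linux prefix comes from the mail; emul_remap() and its edge-case behavior are assumptions for illustration, not the in-kernel routine.]

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Sketch of the prefix rule: prefer the prefixed copy of a path when it
 * exists and is not a directory, otherwise fall back to the original. */
#define EMUL_PREFIX "/emul/ia32-linux"

static const char *emul_remap(const char *path, char *buf, size_t buflen)
{
	struct stat st;

	if (snprintf(buf, buflen, "%s%s", EMUL_PREFIX, path) >= (int) buflen)
		return path;			/* too long: fall back */
	if (stat(buf, &st) == 0 && !S_ISDIR(st.st_mode))
		return buf;			/* prefixed file wins */
	return path;				/* fall back to original */
}
```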
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (69 preceding siblings ...)
2001-09-25 7:17 ` David Mosberger
@ 2001-09-25 12:17 ` Andreas Schwab
2001-09-25 15:14 ` Andreas Schwab
` (144 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-09-25 12:17 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> Here is a long-awaited kernel update which brings us in sync with
|> 2.4.10. As usual, it's available at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
|>
|> linux-2.4.10-ia64-010924.diff*
The kernel does not compile with CONFIG_SMP=n and CONFIG_MODVERSIONS=n:
dec_and_lock.c:29: parse error before `1'
dec_and_lock.c:29: parse error before `atomic'
dec_and_lock.c:29: warning: type defaults to `int' in declaration of `_v'
dec_and_lock.c:29: parse error before `atomic'
dec_and_lock.c:29: parse error before `switch'
dec_and_lock.c:32: `atomic' undeclared here (not in a function)
dec_and_lock.c:32: warning: type defaults to `int' in declaration of `_v'
dec_and_lock.c:32: redefinition of `_v'
dec_and_lock.c:29: `_v' previously defined here
dec_and_lock.c:32: `atomic' undeclared here (not in a function)
dec_and_lock.c:32: parse error before `switch'
dec_and_lock.c:38: `atomic_dec_and_lock' undeclared here (not in a function)
dec_and_lock.c:38: initializer element is not constant
dec_and_lock.c:38: (near initialization for `__ksymtab_atomic_dec_and_lock.value')
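[Editor's note: these parse errors are the signature of a macro/function name collision. On UP configurations atomic_dec_and_lock can be provided as a macro, so compiling an out-of-line function of the same name expands the macro into the function header. The `#ifndef` guard that this patch removed from lib/dec_and_lock.c prevented exactly that; a toy reproduction of the guard pattern, with stand-in names.]

```c
#include <assert.h>

/* Toy model of the lib/dec_and_lock.c guard: when a configuration
 * already provides the name as a macro, the out-of-line definition
 * must be compiled out, or the macro expands into its own header. */
#define HAVE_FAST_VERSION 1

#if HAVE_FAST_VERSION
#define dec_and_check(v) (--(v) == 0)	/* macro version (like UP) */
#endif

#ifndef dec_and_check
/* Only compiled when no macro version exists. */
static int dec_and_check(int v)
{
	return --v == 0;
}
#endif
```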
Andreas.
--
Andreas Schwab "And now for something
Andreas.Schwab@suse.de completely different."
SuSE Labs, SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (70 preceding siblings ...)
2001-09-25 12:17 ` Andreas Schwab
@ 2001-09-25 15:14 ` Andreas Schwab
2001-09-25 15:45 ` Andreas Schwab
` (143 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-09-25 15:14 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> [Argh, the linux-ia64 mailer didn't like the original mail because
|> the patch was too big. Here follows the same without patch...]
|>
|> Here is a long-awaited kernel update which brings us in sync with
|> 2.4.10. As usual, it's available at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
|>
|> linux-2.4.10-ia64-010924.diff*
CONFIG_DISABLE_VHPT=y seems to be broken (does not boot). How about
removing it, since we don't support anything below B3 anyway?
Andreas.
--
Andreas Schwab "And now for something
Andreas.Schwab@suse.de completely different."
SuSE Labs, SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (71 preceding siblings ...)
2001-09-25 15:14 ` Andreas Schwab
@ 2001-09-25 15:45 ` Andreas Schwab
2001-09-26 22:49 ` David Mosberger
` (142 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-09-25 15:45 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> Here is a long-awaited kernel update which brings us in sync with
|> 2.4.10. As usual, it's available at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
|>
|> linux-2.4.10-ia64-010924.diff*
This patch is needed to get the DRM code to compile:
--- linux/drivers/char/drm/drmP.h 2001/09/25 12:19:14 1.1
+++ linux/drivers/char/drm/drmP.h 2001/09/25 12:19:24
@@ -366,10 +366,10 @@
if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
/* Mapping helper macros */
-#define DRM_IOREMAP(map) \
+#define DRM_IOREMAP(map, dev) \
(map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
-#define DRM_IOREMAPFREE(map) \
+#define DRM_IOREMAPFREE(map, dev) \
do { \
if ( (map)->handle && (map)->size ) \
DRM(ioremapfree)( (map)->handle, (map)->size, (dev) ); \
Andreas.
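[Editor's note: the DRM fix above is a macro-hygiene change: the old DRM_IOREMAP silently referenced a variable literally named `dev` from the caller's scope, while the new form takes it as an explicit parameter. A toy illustration of the same hazard; the names below are stand-ins, not DRM's.]

```c
#include <assert.h>

/* Unhygienic macro: depends on a caller variable literally named
 * `factor`; renaming that variable breaks every use site. */
#define OLD_SCALE(x)	((x) * factor)

/* Hygienic form: the hidden dependency becomes an explicit parameter,
 * as in the DRM_IOREMAP(map, dev) change in the patch above. */
#define NEW_SCALE(x, f)	((x) * (f))

static int caller_with_other_name(int x, int my_factor)
{
	/* OLD_SCALE(x) would not compile here: no `factor` in scope. */
	return NEW_SCALE(x, my_factor);
}
```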
--
Andreas Schwab "And now for something
Andreas.Schwab@suse.de completely different."
SuSE Labs, SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (72 preceding siblings ...)
2001-09-25 15:45 ` Andreas Schwab
@ 2001-09-26 22:49 ` David Mosberger
2001-09-26 22:51 ` David Mosberger
` (141 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-09-26 22:49 UTC (permalink / raw)
To: linux-ia64
>>>>> On 25 Sep 2001 17:14:36 +0200, Andreas Schwab <schwab@suse.de> said:
Andreas> CONFIG_DISABLE_VHPT=y seems to be broken (does not boot).
Andreas> How about removing it, since we don't support anything
Andreas> below B3 anyway?
It's a debugging & performance evaluation feature, so I have no plans
to remove it. It is listed under the "Kernel debugging" section, so I
think we can leave it there, even if it's broken at the moment.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (73 preceding siblings ...)
2001-09-26 22:49 ` David Mosberger
@ 2001-09-26 22:51 ` David Mosberger
2001-09-27 4:57 ` Keith Owens
` (140 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-09-26 22:51 UTC (permalink / raw)
To: linux-ia64
>>>>> On 25 Sep 2001 17:45:38 +0200, Andreas Schwab <schwab@suse.de> said:
Andreas> David Mosberger <davidm@hpl.hp.com> writes:
Andreas> |> Here is a long-awaited kernel update which brings us in sync with
Andreas> |> 2.4.10. As usual, it's available at
Andreas> |> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
Andreas> |>
Andreas> |> linux-2.4.10-ia64-010924.diff*
Andreas> This patch is needed to get the DRM code to compile:
Thanks.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (74 preceding siblings ...)
2001-09-26 22:51 ` David Mosberger
@ 2001-09-27 4:57 ` Keith Owens
2001-09-27 17:48 ` David Mosberger
` (139 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-09-27 4:57 UTC (permalink / raw)
To: linux-ia64
Some of us are still using gcc 2.96 which does not support --param
max-inline-insns.
Index: 10.15/arch/ia64/Makefile
--- 10.15/arch/ia64/Makefile Thu, 27 Sep 2001 14:15:19 +1000 kaos (linux-2.4/s/c/42_Makefile 1.1.3.1.3.3 644)
+++ 10.15(w)/arch/ia64/Makefile Thu, 27 Sep 2001 14:42:23 +1000 kaos (linux-2.4/s/c/42_Makefile 1.1.3.1.2.4 644)
@@ -17,9 +17,10 @@ LINKFLAGS = -static -T arch/$(ARCH)/vmli
AFLAGS_KERNEL := -mconstant-gp
EXTRA =
-CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
-	 -falign-functions=32 --param max-inline-insns=400
-# -ffunction-sections
+CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32
+
+CFLAGS += $(shell if $(CC) --param max-inline-insns=400 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "--param max-inline-insns=400"; fi)
+
CFLAGS_KERNEL := -mconstant-gp
GCC_VERSION=$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
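The hunk above replaces an unconditional flag with a probe: compile an empty translation unit with the flag and add it only if the compiler accepts it. The same trick in plain shell (flag names here are just examples):

```shell
# Emit the flag if the compiler accepts it, nothing otherwise.
cc_supports() {
    if ${CC:-cc} "$1" -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then
        printf '%s\n' "$1"
    fi
}

GOOD=$(cc_supports -O2)                      # accepted by any compiler
BAD=$(cc_supports --totally-bogus-flag-xyz)  # rejected, expands to nothing
echo "good='$GOOD' bad='$BAD'"
```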
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (75 preceding siblings ...)
2001-09-27 4:57 ` Keith Owens
@ 2001-09-27 17:48 ` David Mosberger
2001-10-02 5:20 ` Keith Owens
` (138 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-09-27 17:48 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 27 Sep 2001 14:57:34 +1000, Keith Owens <kaos@ocs.com.au> said:
Keith> Some of us are still using gcc 2.96 which does not support
Keith> --param max-inline-insns.
Oh, I didn't realize this option was added for gcc 3.x. It certainly
wasn't my intention to break compilation with gcc 2.96. Anyhow, I
fixed this now in my tree as well (note: option -frename-registers is
already gcc 3.x specific, so there is no need to introduce yet another
test for gcc 3).
Thanks,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (76 preceding siblings ...)
2001-09-27 17:48 ` David Mosberger
@ 2001-10-02 5:20 ` Keith Owens
2001-10-02 5:50 ` Keith Owens
` (137 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-10-02 5:20 UTC (permalink / raw)
To: linux-ia64
2.4.10-ia64-010924 incorrectly patches include/asm-alpha/processor.h
and include/asm-i386/processor.h, partially duplicating code that is
already in Linus's tree. Completely delete those sections of the
2.4.10-ia64-010924 patch.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (77 preceding siblings ...)
2001-10-02 5:20 ` Keith Owens
@ 2001-10-02 5:50 ` Keith Owens
2001-10-11 2:47 ` [Linux-ia64] kernel update (relative to 2.4.11) David Mosberger
` (136 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-10-02 5:50 UTC (permalink / raw)
To: linux-ia64
2.4.10-ia64-010924 patches drivers/acpi/Makefile to add driver.o to a
list of objects. That is no longer required; Linus's tree already has
the required code a couple of lines below.
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.11)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (78 preceding siblings ...)
2001-10-02 5:50 ` Keith Owens
@ 2001-10-11 2:47 ` David Mosberger
2001-10-11 4:39 ` Keith Owens
` (135 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-10-11 2:47 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch for 2.4.11 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.11-ia64-011010.diff*
There are no major changes. The patch mostly adds a couple of bug
fixes, the gettid() system call introduced by 2.4.11, and the FPSWA
control from Jesse Barnes.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.11)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (79 preceding siblings ...)
2001-10-11 2:47 ` [Linux-ia64] kernel update (relative to 2.4.11) David Mosberger
@ 2001-10-11 4:39 ` Keith Owens
2001-10-25 4:27 ` [Linux-ia64] kernel update (relative to 2.4.13) David Mosberger
` (134 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-10-11 4:39 UTC (permalink / raw)
To: linux-ia64
On Wed, 10 Oct 2001 19:47:41 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>An updated ia64 patch for 2.4.11 is now available at
>ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
>
> linux-2.4.11-ia64-011010.diff*
kdb v1.9-2.4.11-ia64-011010 is available in
ftp://oss.sgi.com/projects/kdb/download/ia64, no major changes from
2.4.10-ia64-010924.
bfd.h and ansidecl.h are now included as part of the kdb patch instead
of assuming that they exist in /usr/include; the kdb Makefiles have
been changed accordingly.
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (80 preceding siblings ...)
2001-10-11 4:39 ` Keith Owens
@ 2001-10-25 4:27 ` David Mosberger
2001-10-25 4:30 ` David Mosberger
` (133 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-10-25 4:27 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch for 2.4.13 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.13-ia64-011024.diff*
change log:
- support the readahead() syscall added by 2.4.13 (both for ia64 and ia32)
- console log level fix by Jesper Juhl
- half-hearted attempt at supporting reading of the "default LDT entry" in the
ia32 modify_ldt() syscall; someone who understands what this is supposed to do
should take a look at this...
- palinfo update by Stephane Eranian
- die() fix by Keith Owens
- unaligned handler fix for rotating fp regs by Tony Luck
- ACPI fix to get AGP bus scanned again by Chris Ahna
- implemented an ia64 version of wbinvd() for ACPI; this hasn't been tested
and may not work; it shouldn't be an issue at the moment, as this is needed only
for ACPI functionality that is not supported on Itanium; still, someone who
knows ACPI better may want to take a look at this
- update PCI DMA interface to support page-based mapping/unmapping and
the optional DAC interface
This kernel has been tested with gcc-3.0 on Big Sur, Lion, and the HP
simulator. Both UP and MP kernels seem to compile fine. As usual, your
mileage may vary.
Enjoy,
--david
diff -urN linux-2.4.13/Documentation/Configure.help linux-2.4.13-lia/Documentation/Configure.help
--- linux-2.4.13/Documentation/Configure.help Wed Oct 24 10:17:38 2001
+++ linux-2.4.13-lia/Documentation/Configure.help Wed Oct 24 10:21:05 2001
@@ -2632,6 +2632,16 @@
the GLX component for XFree86 3.3.6, which can be downloaded from
http://utah-glx.sourceforge.net/ .
+Intel 460GX support
+CONFIG_AGP_I460
+ This option gives you AGP support for the Intel 460GX chipset. This
+ chipset, the first to support Intel Itanium processors, is new and
+ this option is correspondingly a little experimental.
+
+ If you don't have a 460GX based machine (such as BigSur) with an AGP
+ slot then this option isn't going to do you much good. If you're
+ dying to do Direct Rendering on IA-64, this is what you're looking for.
+
Intel I810/I810 DC100/I810e support
CONFIG_AGP_I810
This option gives you AGP support for the Xserver on the Intel 810,
@@ -12846,6 +12856,18 @@
Say Y here if you would like to be able to read the hard disk
partition table format used by SGI machines.
+Intel EFI GUID partition support
+CONFIG_EFI_PARTITION
+ Say Y here if you would like to use hard disks under Linux which
+ were partitioned using EFI GPT. Presently only useful on the
+ IA-64 platform.
+
+/dev/guid support (EXPERIMENTAL)
+CONFIG_DEVFS_GUID
+ Say Y here if you would like to access disks and partitions by
+ their Globally Unique Identifiers (GUIDs) which will appear as
+ symbolic links in /dev/guid.
+
Ultrix partition support
CONFIG_ULTRIX_PARTITION
Say Y here if you would like to be able to read the hard disk
@@ -18964,11 +18986,22 @@
so the "DIG-compliant" option is usually the right choice.
HP-simulator For the HP simulator (http://software.hp.com/ia64linux/).
- SN1-simulator For the SGI SN1 simulator.
+ SGI-SN1 For SGI SN1 Platforms.
+ SGI-SN2 For SGI SN2 Platforms.
DIG-compliant For DIG ("Developer's Interface Guide") compliant system.
If you don't know what to do, choose "generic".
+CONFIG_IA64_SGI_SN_SIM
+ Build a kernel that runs on both the SGI simulator AND on hardware.
+ There is a very slight performance penalty on hardware for including this
+ option.
+
+CONFIG_IA64_SGI_SN_DEBUG
+ This enables additional debug code that helps isolate
+ platform/kernel bugs. There is a small but measurable performance
+ degradation when this option is enabled.
+
Kernel page size
CONFIG_IA64_PAGE_SIZE_4KB
@@ -18986,56 +19019,13 @@
If you don't know what to do, choose 8KB.
-Enable Itanium A-step specific code
-CONFIG_ITANIUM_ASTEP_SPECIFIC
- Select this option to build a kernel for an Itanium prototype system
- with an A-step CPU. You have an A-step CPU if the "revision" field in
- /proc/cpuinfo is 0.
-
Enable Itanium B-step specific code
CONFIG_ITANIUM_BSTEP_SPECIFIC
Select this option to build a kernel for an Itanium prototype system
- with a B-step CPU. You have a B-step CPU if the "revision" field in
- /proc/cpuinfo has a value in the range from 1 to 4.
-
-Enable Itanium B0-step specific code
-CONFIG_ITANIUM_B0_SPECIFIC
- Select this option to bild a kernel for an Itanium prototype system
- with a B0-step CPU. You have a B0-step CPU if the "revision" field in
- /proc/cpuinfo is 1.
-
-Force interrupt redirection
-CONFIG_IA64_HAVE_IRQREDIR
- Select this option if you know that your system has the ability to
- redirect interrupts to different CPUs. Select N here if you're
- unsure.
-
-Enable use of global TLB purge instruction (ptc.g)
-CONFIG_ITANIUM_PTCG
- Say Y here if you want the kernel to use the IA-64 "ptc.g"
- instruction to flush the TLB on all CPUs. Select N here if
- you're unsure.
-
-Enable SoftSDV hacks
-CONFIG_IA64_SOFTSDV_HACKS
- Say Y here to enable hacks to make the kernel work on the Intel
- SoftSDV simulator. Select N here if you're unsure.
-
-Enable AzusA hacks
-CONFIG_IA64_AZUSA_HACKS
- Say Y here to enable hacks to make the kernel work on the NEC
- AzusA platform. Select N here if you're unsure.
-
-Force socket buffers below 4GB?
-CONFIG_SKB_BELOW_4GB
- Most of today's network interface cards (NICs) support DMA to
- the low 32 bits of the address space only. On machines with
- more then 4GB of memory, this can cause the system to slow
- down if there is no I/O TLB hardware. Turning this option on
- avoids the slow-down by forcing socket buffers to be allocated
- from memory below 4GB. The downside is that your system could
- run out of memory below 4GB before all memory has been used up.
- If you're unsure how to answer this question, answer Y.
+ with a B-step CPU. Only B3 step CPUs are supported. You have a B3-step
+ CPU if the "revision" field in /proc/cpuinfo is equal to 4. If the
+ "revision" field shows a number bigger than 4, you do not have to turn
+ on this option.
Enable IA-64 Machine Check Abort
CONFIG_IA64_MCA
@@ -19055,6 +19045,15 @@
Layer) information in /proc/pal. This contains useful information
about the processors in your systems, such as cache and TLB sizes
and the PAL firmware version in use.
+
+ To use this option, you have to check that the "/proc file system
+ support" (CONFIG_PROC_FS) is enabled, too.
+
+/proc/efi/vars support
+CONFIG_EFI_VARS
+ If you say Y here, you are able to get EFI (Extensible Firmware
+ Interface) variable information in /proc/efi/vars. You may read,
+ write, create, and destroy EFI variables through this interface.
To use this option, you have to check that the "/proc file system
support" (CONFIG_PROC_FS) is enabled, too.
diff -urN linux-2.4.13/Documentation/kernel-parameters.txt linux-2.4.13-lia/Documentation/kernel-parameters.txt
--- linux-2.4.13/Documentation/kernel-parameters.txt Wed Jun 20 11:21:33 2001
+++ linux-2.4.13-lia/Documentation/kernel-parameters.txt Wed Oct 10 17:33:26 2001
@@ -17,6 +17,7 @@
CD Appropriate CD support is enabled.
DEVFS devfs support is enabled.
DRM Direct Rendering Management support is enabled.
+ EFI EFI Partitioning (GPT) is enabled
EIDE EIDE/ATAPI support is enabled.
FB The frame buffer device is enabled.
HW Appropriate hardware is enabled.
@@ -211,6 +212,9 @@
gc_3= [HW,JOY]
gdth= [HW,SCSI]
+
+ gpt [EFI] Forces disk with valid GPT signature but
+ invalid Protective MBR to be treated as GPT.
gscd= [HW,CD]
diff -urN linux-2.4.13/Makefile linux-2.4.13-lia/Makefile
--- linux-2.4.13/Makefile Wed Oct 24 10:17:41 2001
+++ linux-2.4.13-lia/Makefile Wed Oct 24 10:21:05 2001
@@ -88,7 +88,7 @@
CPPFLAGS := -D__KERNEL__ -I$(HPATH)
-CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -O2 \
+CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 \
-fomit-frame-pointer -fno-strict-aliasing -fno-common
AFLAGS := -D__ASSEMBLY__ $(CPPFLAGS)
@@ -137,7 +137,8 @@
drivers/net/net.o \
drivers/media/media.o
DRIVERS-$(CONFIG_AGP) += drivers/char/agp/agp.o
-DRIVERS-$(CONFIG_DRM) += drivers/char/drm/drm.o
+DRIVERS-$(CONFIG_DRM_NEW) += drivers/char/drm/drm.o
+DRIVERS-$(CONFIG_DRM_OLD) += drivers/char/drm-4.0/drm.o
DRIVERS-$(CONFIG_NUBUS) += drivers/nubus/nubus.a
DRIVERS-$(CONFIG_ISDN) += drivers/isdn/isdn.a
DRIVERS-$(CONFIG_NET_FC) += drivers/net/fc/fc.o
@@ -241,14 +242,14 @@
include arch/$(ARCH)/Makefile
-export CPPFLAGS CFLAGS AFLAGS
+export CPPFLAGS CFLAGS CFLAGS_KERNEL AFLAGS AFLAGS_KERNEL
export NETWORKS DRIVERS LIBS HEAD LDFLAGS LINKFLAGS MAKEBOOT ASFLAGS
.S.s:
- $(CPP) $(AFLAGS) -traditional -o $*.s $<
+ $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -o $*.s $<
.S.o:
- $(CC) $(AFLAGS) -traditional -c -o $*.o $<
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -c -o $*.o $<
Version: dummy
@rm -f include/linux/compile.h
diff -urN linux-2.4.13/arch/i386/lib/usercopy.c linux-2.4.13-lia/arch/i386/lib/usercopy.c
--- linux-2.4.13/arch/i386/lib/usercopy.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/i386/lib/usercopy.c Thu Oct 4 00:21:39 2001
@@ -14,6 +14,7 @@
unsigned long
__generic_copy_to_user(void *to, const void *from, unsigned long n)
{
+ prefetch(from);
if (access_ok(VERIFY_WRITE, to, n))
{
if(n<512)
@@ -27,6 +28,7 @@
unsigned long
__generic_copy_from_user(void *to, const void *from, unsigned long n)
{
+ prefetchw(to);
if (access_ok(VERIFY_READ, from, n))
{
if(n<512)
diff -urN linux-2.4.13/arch/i386/mm/fault.c linux-2.4.13-lia/arch/i386/mm/fault.c
--- linux-2.4.13/arch/i386/mm/fault.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/i386/mm/fault.c Wed Oct 24 18:11:25 2001
@@ -27,8 +27,6 @@
extern void die(const char *,struct pt_regs *,long);
-extern int console_loglevel;
-
/*
* Ugly, ugly, but the goto's result in better assembly..
*/
diff -urN linux-2.4.13/arch/ia64/Makefile linux-2.4.13-lia/arch/ia64/Makefile
--- linux-2.4.13/arch/ia64/Makefile Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/Makefile Thu Oct 4 00:21:52 2001
@@ -17,13 +17,15 @@
AFLAGS_KERNEL := -mconstant-gp
EXTRA =
-CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32
+CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
+	 -falign-functions=32
+# -ffunction-sections
CFLAGS_KERNEL := -mconstant-gp
GCC_VERSION=$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
ifneq ($(GCC_VERSION),2)
- CFLAGS += -frename-registers
+ CFLAGS += -frename-registers --param max-inline-insns=400
endif
ifeq ($(CONFIG_ITANIUM_BSTEP_SPECIFIC),y)
@@ -32,7 +34,7 @@
ifdef CONFIG_IA64_GENERIC
CORE_FILES := arch/$(ARCH)/hp/hp.a \
- arch/$(ARCH)/sn/sn.a \
+ arch/$(ARCH)/sn/sn.o \
arch/$(ARCH)/dig/dig.a \
arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
@@ -52,15 +54,14 @@
$(CORE_FILES)
endif
-ifdef CONFIG_IA64_SGI_SN1
+ifdef CONFIG_IA64_SGI_SN
CFLAGS += -DBRINGUP
- SUBDIRS := arch/$(ARCH)/sn/sn1 \
- arch/$(ARCH)/sn \
+ SUBDIRS := arch/$(ARCH)/sn/kernel \
arch/$(ARCH)/sn/io \
arch/$(ARCH)/sn/fprom \
$(SUBDIRS)
- CORE_FILES := arch/$(ARCH)/sn/sn.a \
- arch/$(ARCH)/sn/io/sgiio.o\
+ CORE_FILES := arch/$(ARCH)/sn/kernel/sn.o \
+ arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
endif
@@ -105,7 +106,7 @@
compressed: vmlinux
$(OBJCOPY) --strip-all vmlinux vmlinux-tmp
- gzip -9 vmlinux-tmp
+ gzip vmlinux-tmp
mv vmlinux-tmp.gz vmlinux.gz
rawboot:
diff -urN linux-2.4.13/arch/ia64/config.in linux-2.4.13-lia/arch/ia64/config.in
--- linux-2.4.13/arch/ia64/config.in Wed Oct 24 10:17:42 2001
+++ linux-2.4.13-lia/arch/ia64/config.in Wed Oct 24 10:21:06 2001
@@ -28,6 +28,7 @@
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
define_bool CONFIG_ACPI y
+ define_bool CONFIG_ACPI_EFI y
define_bool CONFIG_ACPI_INTERPRETER y
define_bool CONFIG_ACPI_KERNEL_CONFIG y
fi
@@ -40,7 +41,8 @@
"generic CONFIG_IA64_GENERIC \
DIG-compliant CONFIG_IA64_DIG \
HP-simulator CONFIG_IA64_HP_SIM \
- SGI-SN1 CONFIG_IA64_SGI_SN1" generic
+ SGI-SN1 CONFIG_IA64_SGI_SN1 \
+ SGI-SN2 CONFIG_IA64_SGI_SN2" generic
choice 'Kernel page size' \
"4KB CONFIG_IA64_PAGE_SIZE_4KB \
@@ -51,25 +53,6 @@
if [ "$CONFIG_ITANIUM" = "y" ]; then
define_bool CONFIG_IA64_BRL_EMU y
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
- fi
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIFIC
- fi
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIFIC
- fi
- bool ' Enable Itanium C-step specific code' CONFIG_ITANIUM_CSTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
- fi
- if [ "$CONFIG_ITANIUM_B0_SPECIFIC" = "y" \
- -o "$CONFIG_ITANIUM_B1_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B2_SPECIFIC" = "y" ]; then
- define_bool CONFIG_ITANIUM_PTCG n
- else
- define_bool CONFIG_ITANIUM_PTCG y
- fi
if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data to 128 bytes
else
@@ -78,7 +61,6 @@
fi
if [ "$CONFIG_MCKINLEY" = "y" ]; then
- define_bool CONFIG_ITANIUM_PTCG y
define_int CONFIG_IA64_L1_CACHE_SHIFT 7
bool ' Enable McKinley A-step specific code' CONFIG_MCKINLEY_ASTEP_SPECIFIC
if [ "$CONFIG_MCKINLEY_ASTEP_SPECIFIC" = "y" ]; then
@@ -87,28 +69,32 @@
fi
if [ "$CONFIG_IA64_DIG" = "y" ]; then
- bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
define_bool CONFIG_PM y
fi
-if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
- bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM
- define_bool CONFIG_DEVFS_DEBUG y
+if [ "$CONFIG_IA64_SGI_SN1" = "y" ] || [ "$CONFIG_IA64_SGI_SN2" = "y" ]; then
+ define_bool CONFIG_IA64_SGI_SN y
+ bool ' Enable extra debugging code' CONFIG_IA64_SGI_SN_DEBUG n
+ bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN_SIM
+ bool ' Enable autotest (llsc). Option to run cache test instead of booting' \
+ CONFIG_IA64_SGI_AUTOTEST n
define_bool CONFIG_DEVFS_FS y
- define_bool CONFIG_IA64_BRL_EMU y
+ if [ "$CONFIG_DEVFS_FS" = "y" ]; then
+ bool ' Enable DEVFS Debug Code' CONFIG_DEVFS_DEBUG n
+ fi
+ bool ' Enable protocol mode for the L1 console' CONFIG_SERIAL_SGI_L1_PROTOCOL y
+ define_bool CONFIG_DISCONTIGMEM y
define_bool CONFIG_IA64_MCA y
- define_bool CONFIG_ITANIUM y
- define_bool CONFIG_SGI_IOC3_ETH y
+ define_bool CONFIG_NUMA y
define_bool CONFIG_PERCPU_IRQ y
- define_int CONFIG_CACHE_LINE_SHIFT 7
- bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM
- bool ' Enable NUMA support' CONFIG_NUMA
+ tristate ' PCIBA support' CONFIG_PCIBA
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
bool 'SMP support' CONFIG_SMP
+tristate 'Support running of Linux/x86 binaries' CONFIG_IA32_SUPPORT
bool 'Performance monitor support' CONFIG_PERFMON
tristate '/proc/pal support' CONFIG_IA64_PALINFO
tristate '/proc/efi/vars support' CONFIG_EFI_VARS
@@ -270,19 +256,19 @@
mainmenu_option next_comment
comment 'Kernel hacking'
-#bool 'Debug kmalloc/kfree' CONFIG_DEBUG_MALLOC
-if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- tristate 'Kernel support for IA-32 emulation' CONFIG_IA32_SUPPORT
- tristate 'Kernel FP software completion' CONFIG_MATHEMU
-else
- define_bool CONFIG_MATHEMU y
+bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
+if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
+ bool ' Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
+ bool ' Disable VHPT' CONFIG_DISABLE_VHPT
+ bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
+
+# early printk is currently broken for SMP: the secondary processors get stuck...
+# bool ' Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK
+
+ bool ' Debug memory allocations' CONFIG_DEBUG_SLAB
+ bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK
+ bool ' Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
+ bool ' Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
fi
-
-bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
-bool 'Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK
-bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
-bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
-bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
-bool 'Disable VHPT' CONFIG_DISABLE_VHPT
endmenu
diff -urN linux-2.4.13/arch/ia64/defconfig linux-2.4.13-lia/arch/ia64/defconfig
--- linux-2.4.13/arch/ia64/defconfig Thu Jun 22 07:09:44 2000
+++ linux-2.4.13-lia/arch/ia64/defconfig Thu Oct 4 00:21:39 2001
@@ -3,53 +3,131 @@
#
#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODVERSIONS=y
+# CONFIG_KMOD is not set
+
+#
# General setup
#
CONFIG_IA64=y
# CONFIG_ISA is not set
+# CONFIG_EISA is not set
+# CONFIG_MCA is not set
# CONFIG_SBUS is not set
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_ACPI=y
+CONFIG_ACPI_EFI=y
+CONFIG_ACPI_INTERPRETER=y
+CONFIG_ACPI_KERNEL_CONFIG=y
+CONFIG_ITANIUM=y
+# CONFIG_MCKINLEY is not set
# CONFIG_IA64_GENERIC is not set
-CONFIG_IA64_HP_SIM=y
-# CONFIG_IA64_SGI_SN1_SIM is not set
-# CONFIG_IA64_DIG is not set
+CONFIG_IA64_DIG=y
+# CONFIG_IA64_HP_SIM is not set
+# CONFIG_IA64_SGI_SN1 is not set
+# CONFIG_IA64_SGI_SN2 is not set
# CONFIG_IA64_PAGE_SIZE_4KB is not set
# CONFIG_IA64_PAGE_SIZE_8KB is not set
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
+CONFIG_IA64_BRL_EMU=y
+CONFIG_ITANIUM_BSTEP_SPECIFIC=y
+CONFIG_IA64_L1_CACHE_SHIFT=6
+CONFIG_IA64_MCA=y
+CONFIG_PM=y
CONFIG_KCORE_ELF=y
-# CONFIG_SMP is not set
-# CONFIG_PERFMON is not set
-# CONFIG_NET is not set
-# CONFIG_SYSVIPC is not set
+CONFIG_SMP=y
+CONFIG_IA32_SUPPORT=y
+CONFIG_PERFMON=y
+CONFIG_IA64_PALINFO=y
+CONFIG_EFI_VARS=y
+CONFIG_NET=y
+CONFIG_SYSVIPC=y
# CONFIG_BSD_PROCESS_ACCT is not set
-# CONFIG_SYSCTL is not set
-# CONFIG_BINFMT_ELF is not set
+CONFIG_SYSCTL=y
+CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
+# CONFIG_ACPI_DEBUG is not set
+# CONFIG_ACPI_BUSMGR is not set
+# CONFIG_ACPI_SYS is not set
+# CONFIG_ACPI_CPU is not set
+# CONFIG_ACPI_BUTTON is not set
+# CONFIG_ACPI_AC is not set
+# CONFIG_ACPI_EC is not set
+# CONFIG_ACPI_CMBATT is not set
+# CONFIG_ACPI_THERMAL is not set
CONFIG_PCI=y
CONFIG_PCI_NAMES=y
# CONFIG_HOTPLUG is not set
# CONFIG_PCMCIA is not set
#
-# Code maturity level options
+# Parallel port support
#
-CONFIG_EXPERIMENTAL=y
+# CONFIG_PARPORT is not set
#
-# Loadable module support
+# Networking options
#
-# CONFIG_MODULES is not set
+CONFIG_PACKET=y
+CONFIG_PACKET_MMAP=y
+# CONFIG_NETLINK is not set
+# CONFIG_NETFILTER is not set
+CONFIG_FILTER=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+# CONFIG_IP_PNP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_INET_ECN is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_IPV6 is not set
+# CONFIG_KHTTPD is not set
+# CONFIG_ATM is not set
+
+#
+#
+#
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_DECNET is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_LLC is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
#
-# Parallel port support
+# QoS and/or fair queueing
#
-# CONFIG_PARPORT is not set
+# CONFIG_NET_SCHED is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
#
# Plug and Play configuration
#
# CONFIG_PNP is not set
# CONFIG_ISAPNP is not set
+# CONFIG_PNPBIOS is not set
#
# Block devices
@@ -58,14 +136,12 @@
# CONFIG_BLK_DEV_XD is not set
# CONFIG_PARIDE is not set
# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
-
-#
-# Additional Block Devices
-#
-# CONFIG_BLK_DEV_LOOP is not set
-# CONFIG_BLK_DEV_MD is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
+# CONFIG_BLK_DEV_INITRD is not set
#
# I2O device support
@@ -73,10 +149,23 @@
# CONFIG_I2O is not set
# CONFIG_I2O_PCI is not set
# CONFIG_I2O_BLOCK is not set
+# CONFIG_I2O_LAN is not set
# CONFIG_I2O_SCSI is not set
# CONFIG_I2O_PROC is not set
#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+# CONFIG_BLK_DEV_MD is not set
+# CONFIG_MD_LINEAR is not set
+# CONFIG_MD_RAID0 is not set
+# CONFIG_MD_RAID1 is not set
+# CONFIG_MD_RAID5 is not set
+# CONFIG_MD_MULTIPATH is not set
+# CONFIG_BLK_DEV_LVM is not set
+
+#
# ATA/IDE/MFM/RLL support
#
CONFIG_IDE=y
@@ -92,12 +181,21 @@
# CONFIG_BLK_DEV_HD_IDE is not set
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_IDEDISK=y
-# CONFIG_IDEDISK_MULTI_MODE is not set
+CONFIG_IDEDISK_MULTI_MODE=y
+# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set
+# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set
+# CONFIG_BLK_DEV_IDEDISK_IBM is not set
+# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set
+# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set
+# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set
+# CONFIG_BLK_DEV_IDEDISK_WD is not set
+# CONFIG_BLK_DEV_COMMERIAL is not set
+# CONFIG_BLK_DEV_TIVO is not set
# CONFIG_BLK_DEV_IDECS is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
-# CONFIG_BLK_DEV_IDEFLOPPY is not set
-# CONFIG_BLK_DEV_IDESCSI is not set
+CONFIG_BLK_DEV_IDEFLOPPY=y
+CONFIG_BLK_DEV_IDESCSI=y
#
# IDE chipset support/bugfixes
@@ -109,45 +207,209 @@
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_IDEPCI_SHARE_IRQ=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
+CONFIG_BLK_DEV_ADMA=y
# CONFIG_BLK_DEV_OFFBOARD is not set
-CONFIG_IDEDMA_PCI_AUTO=y
+# CONFIG_IDEDMA_PCI_AUTO is not set
CONFIG_BLK_DEV_IDEDMA=y
-CONFIG_IDEDMA_PCI_EXPERIMENTAL=y
# CONFIG_IDEDMA_PCI_WIP is not set
# CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set
-# CONFIG_BLK_DEV_AEC6210 is not set
-# CONFIG_AEC6210_TUNING is not set
+# CONFIG_BLK_DEV_AEC62XX is not set
+# CONFIG_AEC62XX_TUNING is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_WDC_ALI15X3 is not set
-# CONFIG_BLK_DEV_AMD7409 is not set
-# CONFIG_AMD7409_OVERRIDE is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+# CONFIG_AMD74XX_OVERRIDE is not set
# CONFIG_BLK_DEV_CMD64X is not set
-# CONFIG_CMD64X_RAID is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5530 is not set
# CONFIG_BLK_DEV_HPT34X is not set
# CONFIG_HPT34X_AUTODMA is not set
# CONFIG_BLK_DEV_HPT366 is not set
-# CONFIG_HPT366_FIP is not set
-# CONFIG_HPT366_MODE3 is not set
CONFIG_BLK_DEV_PIIX=y
-CONFIG_PIIX_TUNING=y
+# CONFIG_PIIX_TUNING is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX is not set
# CONFIG_PDC202XX_BURST is not set
-# CONFIG_PDC202XX_MASTER is not set
+# CONFIG_PDC202XX_FORCE is not set
+# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIS5513 is not set
+# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_IDE_CHIPSETS is not set
-CONFIG_IDEDMA_AUTO=y
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_IDEDMA_IVB is not set
+# CONFIG_DMA_NONPCI is not set
CONFIG_BLK_DEV_IDE_MODES=y
+# CONFIG_BLK_DEV_ATARAID is not set
+# CONFIG_BLK_DEV_ATARAID_PDC is not set
+# CONFIG_BLK_DEV_ATARAID_HPT is not set
#
# SCSI support
#
-# CONFIG_SCSI is not set
+CONFIG_SCSI=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_SD_EXTRA_DEVS=40
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+# CONFIG_CHR_DEV_SG is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+CONFIG_SCSI_DEBUG_QUEUES=y
+# CONFIG_SCSI_MULTI_LUN is not set
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_7000FASST is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AHA152X is not set
+# CONFIG_SCSI_AHA1542 is not set
+# CONFIG_SCSI_AHA1740 is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_DPT_I2O is not set
+# CONFIG_SCSI_ADVANSYS is not set
+# CONFIG_SCSI_IN2000 is not set
+# CONFIG_SCSI_AM53C974 is not set
+# CONFIG_SCSI_MEGARAID is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_SCSI_CPQFCTS is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_DTC3280 is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_EATA_DMA is not set
+# CONFIG_SCSI_EATA_PIO is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_GENERIC_NCR5380 is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_NCR53C406A is not set
+# CONFIG_SCSI_NCR_D700 is not set
+# CONFIG_SCSI_NCR53C7xx is not set
+# CONFIG_SCSI_NCR53C8XX is not set
+# CONFIG_SCSI_SYM53C8XX is not set
+# CONFIG_SCSI_PAS16 is not set
+# CONFIG_SCSI_PCI2000 is not set
+# CONFIG_SCSI_PCI2220I is not set
+# CONFIG_SCSI_PSI240I is not set
+# CONFIG_SCSI_QLOGIC_FAS is not set
+# CONFIG_SCSI_QLOGIC_ISP is not set
+# CONFIG_SCSI_QLOGIC_FC is not set
+CONFIG_SCSI_QLOGIC_1280=y
+# CONFIG_SCSI_QLOGIC_QLA2100 is not set
+# CONFIG_SCSI_SIM710 is not set
+# CONFIG_SCSI_SYM53C416 is not set
+# CONFIG_SCSI_DC390T is not set
+# CONFIG_SCSI_T128 is not set
+# CONFIG_SCSI_U14_34F is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Network device support
+#
+CONFIG_NETDEVICES=y
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+CONFIG_DUMMY=y
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_SUNLANCE is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNBMAC is not set
+# CONFIG_SUNQE is not set
+# CONFIG_SUNLANCE is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+# CONFIG_LANCE is not set
+# CONFIG_NET_VENDOR_SMC is not set
+# CONFIG_NET_VENDOR_RACAL is not set
+# CONFIG_HP100 is not set
+# CONFIG_NET_ISA is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_APRICOT is not set
+# CONFIG_CS89x0 is not set
+# CONFIG_TULIP is not set
+# CONFIG_DE4X5 is not set
+# CONFIG_DGRS is not set
+# CONFIG_DM9102 is not set
+CONFIG_EEPRO100=y
+# CONFIG_LNE390 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_NE3210 is not set
+# CONFIG_ES3210 is not set
+# CONFIG_8139TOO is not set
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+# CONFIG_8139TOO_8129 is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+# CONFIG_WINBOND_840 is not set
+# CONFIG_LAN_SAA9730 is not set
+# CONFIG_NET_POCKET is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_MYRI_SBUS is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PLIP is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+# CONFIG_NET_FC is not set
+# CONFIG_RCPCI is not set
+# CONFIG_SHAPER is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
#
# Amateur Radio support
@@ -165,13 +427,27 @@
# CONFIG_CD_NO_IDESCSI is not set
#
+# Input core support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_KEYBDEV=y
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_EVDEV=y
+
+#
# Character devices
#
-# CONFIG_VT is not set
-# CONFIG_SERIAL is not set
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_SERIAL=y
+CONFIG_SERIAL_CONSOLE=y
# CONFIG_SERIAL_EXTENDED is not set
# CONFIG_SERIAL_NONSTANDARD is not set
-# CONFIG_UNIX98_PTYS is not set
+CONFIG_UNIX98_PTYS=y
+CONFIG_UNIX98_PTY_COUNT=256
#
# I2C support
@@ -182,97 +458,382 @@
# Mice
#
# CONFIG_BUSMOUSE is not set
-# CONFIG_MOUSE is not set
+CONFIG_MOUSE=y
+CONFIG_PSMOUSE=y
+# CONFIG_82C710_MOUSE is not set
+# CONFIG_PC110_PAD is not set
+
+#
+# Joysticks
+#
+# CONFIG_INPUT_GAMEPORT is not set
+# CONFIG_INPUT_NS558 is not set
+# CONFIG_INPUT_LIGHTNING is not set
+# CONFIG_INPUT_PCIGAME is not set
+# CONFIG_INPUT_CS461X is not set
+# CONFIG_INPUT_EMU10K1 is not set
+CONFIG_INPUT_SERIO=y
+CONFIG_INPUT_SERPORT=y
#
# Joysticks
#
-# CONFIG_JOYSTICK is not set
+# CONFIG_INPUT_ANALOG is not set
+# CONFIG_INPUT_A3D is not set
+# CONFIG_INPUT_ADI is not set
+# CONFIG_INPUT_COBRA is not set
+# CONFIG_INPUT_GF2K is not set
+# CONFIG_INPUT_GRIP is not set
+# CONFIG_INPUT_INTERACT is not set
+# CONFIG_INPUT_TMDC is not set
+# CONFIG_INPUT_SIDEWINDER is not set
+# CONFIG_INPUT_IFORCE_USB is not set
+# CONFIG_INPUT_IFORCE_232 is not set
+# CONFIG_INPUT_WARRIOR is not set
+# CONFIG_INPUT_MAGELLAN is not set
+# CONFIG_INPUT_SPACEORB is not set
+# CONFIG_INPUT_SPACEBALL is not set
+# CONFIG_INPUT_STINGER is not set
+# CONFIG_INPUT_DB9 is not set
+# CONFIG_INPUT_GAMECON is not set
+# CONFIG_INPUT_TURBOGRAFX is not set
# CONFIG_QIC02_TAPE is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
+# CONFIG_INTEL_RNG is not set
# CONFIG_NVRAM is not set
# CONFIG_RTC is not set
CONFIG_EFI_RTC=y
-
-#
-# Video For Linux
-#
-# CONFIG_VIDEO_DEV is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
+# CONFIG_SONYPI is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_FTAPE is not set
-# CONFIG_DRM is not set
-# CONFIG_DRM_TDFX is not set
-# CONFIG_AGP is not set
+CONFIG_AGP=y
+# CONFIG_AGP_INTEL is not set
+CONFIG_AGP_I460=y
+# CONFIG_AGP_I810 is not set
+# CONFIG_AGP_VIA is not set
+# CONFIG_AGP_AMD is not set
+# CONFIG_AGP_SIS is not set
+# CONFIG_AGP_ALI is not set
+# CONFIG_AGP_SWORKS is not set
+CONFIG_DRM=y
+# CONFIG_DRM_NEW is not set
+CONFIG_DRM_OLD=y
+CONFIG_DRM40_TDFX=y
+# CONFIG_DRM40_GAMMA is not set
+# CONFIG_DRM40_R128 is not set
+# CONFIG_DRM40_RADEON is not set
+# CONFIG_DRM40_I810 is not set
+# CONFIG_DRM40_MGA is not set
#
-# USB support
+# Multimedia devices
+#
+CONFIG_VIDEO_DEV=y
+
+#
+# Video For Linux
#
-# CONFIG_USB is not set
+CONFIG_VIDEO_PROC_FS=y
+# CONFIG_I2C_PARPORT is not set
+
+#
+# Video Adapters
+#
+# CONFIG_VIDEO_PMS is not set
+# CONFIG_VIDEO_CPIA is not set
+# CONFIG_VIDEO_SAA5249 is not set
+# CONFIG_TUNER_3036 is not set
+# CONFIG_VIDEO_STRADIS is not set
+# CONFIG_VIDEO_ZORAN is not set
+# CONFIG_VIDEO_ZR36120 is not set
+# CONFIG_VIDEO_MEYE is not set
+
+#
+# Radio Adapters
+#
+# CONFIG_RADIO_CADET is not set
+# CONFIG_RADIO_RTRACK is not set
+# CONFIG_RADIO_RTRACK2 is not set
+# CONFIG_RADIO_AZTECH is not set
+# CONFIG_RADIO_GEMTEK is not set
+# CONFIG_RADIO_GEMTEK_PCI is not set
+# CONFIG_RADIO_MAXIRADIO is not set
+# CONFIG_RADIO_MAESTRO is not set
+# CONFIG_RADIO_MIROPCM20 is not set
+# CONFIG_RADIO_MIROPCM20_RDS is not set
+# CONFIG_RADIO_SF16FMI is not set
+# CONFIG_RADIO_TERRATEC is not set
+# CONFIG_RADIO_TRUST is not set
+# CONFIG_RADIO_TYPHOON is not set
+# CONFIG_RADIO_ZOLTRIX is not set
#
# File systems
#
# CONFIG_QUOTA is not set
-# CONFIG_AUTOFS_FS is not set
+CONFIG_AUTOFS_FS=y
# CONFIG_AUTOFS4_FS is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_REISERFS_CHECK is not set
# CONFIG_ADFS_FS is not set
+# CONFIG_ADFS_FS_RW is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_BFS_FS is not set
-# CONFIG_FAT_FS is not set
-# CONFIG_MSDOS_FS is not set
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
# CONFIG_UMSDOS_FS is not set
-# CONFIG_VFAT_FS is not set
+CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
# CONFIG_CRAMFS is not set
-# CONFIG_ISO9660_FS is not set
+# CONFIG_TMPFS is not set
+# CONFIG_RAMFS is not set
+CONFIG_ISO9660_FS=y
# CONFIG_JOLIET is not set
# CONFIG_MINIX_FS is not set
+# CONFIG_VXFS_FS is not set
# CONFIG_NTFS_FS is not set
+# CONFIG_NTFS_RW is not set
# CONFIG_HPFS_FS is not set
-# CONFIG_PROC_FS is not set
+CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVFS_MOUNT is not set
# CONFIG_DEVFS_DEBUG is not set
-# CONFIG_DEVPTS_FS is not set
+CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX4FS_RW is not set
# CONFIG_ROMFS_FS is not set
-# CONFIG_EXT2_FS is not set
+CONFIG_EXT2_FS=y
# CONFIG_SYSV_FS is not set
# CONFIG_UDF_FS is not set
+# CONFIG_UDF_RW is not set
# CONFIG_UFS_FS is not set
+# CONFIG_UFS_FS_WRITE is not set
+
+#
+# Network File Systems
+#
+# CONFIG_CODA_FS is not set
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_ROOT_NFS is not set
+CONFIG_NFSD=y
+CONFIG_NFSD_V3=y
+CONFIG_SUNRPC=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_SMB_FS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_NCPFS_PACKET_SIGNING is not set
+# CONFIG_NCPFS_IOCTL_LOCKING is not set
+# CONFIG_NCPFS_STRONG is not set
+# CONFIG_NCPFS_NFS_NS is not set
+# CONFIG_NCPFS_OS2_NS is not set
+# CONFIG_NCPFS_SMALLDOS is not set
+# CONFIG_NCPFS_NLS is not set
+# CONFIG_NCPFS_EXTRAS is not set
#
# Partition Types
#
-# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
-# CONFIG_NLS is not set
-# CONFIG_NLS is not set
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+CONFIG_EFI_PARTITION=y
+# CONFIG_DEVFS_GUID is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_SMB_NLS is not set
+CONFIG_NLS=y
+
+#
+# Native Language Support
+#
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Console drivers
+#
+CONFIG_VGA_CONSOLE=y
+
+#
+# Frame-buffer support
+#
+# CONFIG_FB is not set
#
# Sound
#
-# CONFIG_SOUND is not set
+CONFIG_SOUND=y
+# CONFIG_SOUND_BT878 is not set
+# CONFIG_SOUND_CMPCI is not set
+# CONFIG_SOUND_EMU10K1 is not set
+# CONFIG_MIDI_EMU10K1 is not set
+# CONFIG_SOUND_FUSION is not set
+CONFIG_SOUND_CS4281=y
+# CONFIG_SOUND_ES1370 is not set
+# CONFIG_SOUND_ES1371 is not set
+# CONFIG_SOUND_ESSSOLO1 is not set
+# CONFIG_SOUND_MAESTRO is not set
+# CONFIG_SOUND_MAESTRO3 is not set
+# CONFIG_SOUND_ICH is not set
+# CONFIG_SOUND_RME96XX is not set
+# CONFIG_SOUND_SONICVIBES is not set
+# CONFIG_SOUND_TRIDENT is not set
+# CONFIG_SOUND_MSNDCLAS is not set
+# CONFIG_SOUND_MSNDPIN is not set
+# CONFIG_SOUND_VIA82CXXX is not set
+# CONFIG_MIDI_VIA82CXXX is not set
+# CONFIG_SOUND_OSS is not set
+# CONFIG_SOUND_TVMIXER is not set
+
+#
+# USB support
+#
+CONFIG_USB=y
+# CONFIG_USB_DEBUG is not set
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_BANDWIDTH is not set
+
+#
+# USB Controllers
+#
+CONFIG_USB_UHCI_ALT=y
+CONFIG_USB_OHCI=y
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_AUDIO is not set
+# CONFIG_USB_BLUETOOTH is not set
+# CONFIG_USB_STORAGE is not set
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+
+#
+# USB Human Interface Devices (HID)
+#
+# CONFIG_USB_HID is not set
+CONFIG_USB_KBD=y
+CONFIG_USB_MOUSE=y
+# CONFIG_USB_WACOM is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_DC2XX is not set
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_SCANNER is not set
+# CONFIG_USB_MICROTEK is not set
+
+#
+# USB Multimedia devices
+#
+CONFIG_USB_IBMCAM=y
+# CONFIG_USB_OV511 is not set
+# CONFIG_USB_PWC is not set
+# CONFIG_USB_SE401 is not set
+# CONFIG_USB_DSBR is not set
+# CONFIG_USB_DABUSB is not set
+
+#
+# USB Network adaptors
+#
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_CDCETHER is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_USBNET is not set
+
+#
+# USB port drivers
+#
+# CONFIG_USB_USS720 is not set
+
+#
+# USB Serial Converter support
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB misc drivers
+#
+# CONFIG_USB_RIO500 is not set
+
+#
+# Bluetooth support
+#
+# CONFIG_BLUEZ is not set
#
# Kernel hacking
#
-# CONFIG_IA32_SUPPORT is not set
-# CONFIG_MATHEMU is not set
-# CONFIG_MAGIC_SYSRQ is not set
-# CONFIG_IA64_EARLY_PRINTK is not set
+CONFIG_DEBUG_KERNEL=y
+CONFIG_IA64_PRINT_HAZARDS=y
+# CONFIG_DISABLE_VHPT is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
-# CONFIG_IA64_PRINT_HAZARDS is not set
-# CONFIG_KDB is not set
diff -urN linux-2.4.13/arch/ia64/ia32/binfmt_elf32.c linux-2.4.13-lia/arch/ia64/ia32/binfmt_elf32.c
--- linux-2.4.13/arch/ia64/ia32/binfmt_elf32.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/binfmt_elf32.c Thu Oct 4 00:21:52 2001
@@ -3,10 +3,11 @@
*
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 06/16/00 A. Mallick initialize csd/ssd/tssd/cflg for ia32_load_state
* 04/13/01 D. Mosberger dropped saving tssd in ar.k1---it's not needed
+ * 09/14/01 D. Mosberger fixed memory management for gdt/tss page
*/
#include <linux/config.h>
@@ -41,65 +42,59 @@
extern void ia64_elf32_init (struct pt_regs *regs);
extern void put_dirty_page (struct task_struct * tsk, struct page *page, unsigned long address);
+static void elf32_set_personality (void);
+
#define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
#define setup_arg_pages(bprm) ia32_setup_arg_pages(bprm)
-#define elf_map elf_map32
+#define elf_map elf32_map
+#define SET_PERSONALITY(ex, ibcs2) elf32_set_personality()
/* Ugly but avoids duplication */
#include "../../../fs/binfmt_elf.c"
-/* Global descriptor table */
-unsigned long *ia32_gdt_table, *ia32_tss;
+extern struct page *ia32_shared_page[];
+extern unsigned long *ia32_gdt;
struct page *
-put_shared_page (struct task_struct * tsk, struct page *page, unsigned long address)
+ia32_install_shared_page (struct vm_area_struct *vma, unsigned long address, int no_share)
{
- pgd_t * pgd;
- pmd_t * pmd;
- pte_t * pte;
-
- if (page_count(page) != 1)
- printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
+ struct page *pg = ia32_shared_page[(address - vma->vm_start)/PAGE_SIZE];
- pgd = pgd_offset(tsk->mm, address);
-
- spin_lock(&tsk->mm->page_table_lock);
- {
- pmd = pmd_alloc(tsk->mm, pgd, address);
- if (!pmd)
- goto out;
- pte = pte_alloc(tsk->mm, pmd, address);
- if (!pte)
- goto out;
- if (!pte_none(*pte))
- goto out;
- flush_page_to_ram(page);
- set_pte(pte, pte_mkwrite(mk_pte(page, PAGE_SHARED)));
- }
- spin_unlock(&tsk->mm->page_table_lock);
- /* no need for flush_tlb */
- return page;
-
- out:
- spin_unlock(&tsk->mm->page_table_lock);
- __free_page(page);
- return 0;
+ get_page(pg);
+ return pg;
}
+static struct vm_operations_struct ia32_shared_page_vm_ops = {
+ nopage: ia32_install_shared_page
+};
+
void
ia64_elf32_init (struct pt_regs *regs)
{
struct vm_area_struct *vma;
- int nr;
/*
* Map GDT and TSS below 4GB, where the processor can find them. We need to map
* it with privilege level 3 because the IVE uses non-privileged accesses to these
* tables. IA-32 segmentation is used to protect against IA-32 accesses to them.
*/
- put_shared_page(current, virt_to_page(ia32_gdt_table), IA32_GDT_OFFSET);
- if (PAGE_SHIFT <= IA32_PAGE_SHIFT)
- put_shared_page(current, virt_to_page(ia32_tss), IA32_TSS_OFFSET);
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (vma) {
+ vma->vm_mm = current->mm;
+ vma->vm_start = IA32_GDT_OFFSET;
+ vma->vm_end = vma->vm_start + max(PAGE_SIZE, 2*IA32_PAGE_SIZE);
+ vma->vm_page_prot = PAGE_SHARED;
+ vma->vm_flags = VM_READ|VM_MAYREAD;
+ vma->vm_ops = &ia32_shared_page_vm_ops;
+ vma->vm_pgoff = 0;
+ vma->vm_file = NULL;
+ vma->vm_private_data = NULL;
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
+ }
/*
* Install LDT as anonymous memory. This gives us all-zero segment descriptors
@@ -116,34 +111,13 @@
vma->vm_pgoff = 0;
vma->vm_file = NULL;
vma->vm_private_data = NULL;
- insert_vm_struct(current->mm, vma);
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
}
- nr = smp_processor_id();
-
- current->thread.map_base = IA32_PAGE_OFFSET/3;
- current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
- set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
-
- /* Setup the segment selectors */
- regs->r16 = (__USER_DS << 16) | __USER_DS; /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
-
- /* Setup the segment descriptors */
- regs->r24 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* ESD */
- regs->r27 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* DSD */
- regs->r28 = 0; /* FSD (null) */
- regs->r29 = 0; /* GSD (null) */
- regs->r30 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_LDT(nr)]); /* LDTD */
-
- /*
- * Setup GDTD. Note: GDTD is the descrambled version of the pseudo-descriptor
- * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32
- * architecture manual.
- */
- regs->r31 = IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0,
- 0, 0, 0, 0, 0, 0));
-
ia64_psr(regs)->ac = 0; /* turn off alignment checking */
regs->loadrs = 0;
/*
@@ -164,10 +138,19 @@
current->thread.fcr = IA32_FCR_DEFAULT;
current->thread.fir = 0;
current->thread.fdr = 0;
- current->thread.csd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_CS >> 3]);
- current->thread.ssd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]);
- current->thread.tssd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_TSS(nr)]);
+ /*
+ * Setup GDTD. Note: GDTD is the descrambled version of the pseudo-descriptor
+ * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32
+ * architecture manual.
+ */
+ regs->r31 = IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0,
+ 0, 0, 0, 0, 0, 0));
+ /* Setup the segment selectors */
+ regs->r16 = (__USER_DS << 16) | __USER_DS; /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
+
+ ia32_load_segment_descriptors(current);
ia32_load_state(current);
}
@@ -189,6 +172,7 @@
if (!mpnt)
return -ENOMEM;
+ down_write(&current->mm->mmap_sem);
{
mpnt->vm_mm = current->mm;
mpnt->vm_start = PAGE_MASK & (unsigned long) bprm->p;
@@ -204,54 +188,32 @@
}
for (i = 0 ; i < MAX_ARG_PAGES ; i++) {
- if (bprm->page[i]) {
- put_dirty_page(current,bprm->page[i],stack_base);
+ struct page *page = bprm->page[i];
+ if (page) {
+ bprm->page[i] = NULL;
+ put_dirty_page(current, page, stack_base);
}
stack_base += PAGE_SIZE;
}
+ up_write(&current->mm->mmap_sem);
return 0;
}
-static unsigned long
-ia32_mm_addr (unsigned long addr)
+static void
+elf32_set_personality (void)
{
- struct vm_area_struct *vma;
-
- if ((vma = find_vma(current->mm, addr)) == NULL)
- return ELF_PAGESTART(addr);
- if (vma->vm_start > addr)
- return ELF_PAGESTART(addr);
- return ELF_PAGEALIGN(addr);
+ set_personality(PER_LINUX32);
+ current->thread.map_base = IA32_PAGE_OFFSET/3;
+ current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
+ set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
}
-/*
- * Normally we would do an `mmap' to map in the process's text section.
- * This doesn't work with IA32 processes as the ELF file might specify
- * a non page size aligned address. Instead we will just allocate
- * memory and read the data in from the file. Slightly less efficient
- * but it works.
- */
-extern long ia32_do_mmap (struct file *filep, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset);
-
static unsigned long
-elf_map32 (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
+elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
{
- unsigned long retval;
+ unsigned long pgoff = (eppnt->p_vaddr) & ~IA32_PAGE_MASK;
- if (eppnt->p_memsz >= (1UL<<32) || addr > (1UL<<32) - eppnt->p_memsz)
- return -EINVAL;
-
- /*
- * Make sure the elf interpreter doesn't get loaded at location 0
- * so that NULL pointers correctly cause segfaults.
- */
- if (addr == 0)
- addr += PAGE_SIZE;
- set_brk(ia32_mm_addr(addr), addr + eppnt->p_memsz);
- memset((char *) addr + eppnt->p_filesz, 0, eppnt->p_memsz - eppnt->p_filesz);
- kernel_read(filep, eppnt->p_offset, (char *) addr, eppnt->p_filesz);
- retval = (unsigned long) addr;
- return retval;
+ return ia32_do_mmap(filep, (addr & IA32_PAGE_MASK), eppnt->p_filesz + pgoff, prot, type,
+ eppnt->p_offset - pgoff);
}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_entry.S linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S
--- linux-2.4.13/arch/ia64/ia32/ia32_entry.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S Wed Oct 24 18:11:48 2001
@@ -2,7 +2,7 @@
#include <asm/offsets.h>
#include <asm/signal.h>
-#include "../kernel/entry.h"
+#include "../kernel/minstate.h"
/*
* execve() is special because in case of success, we need to
@@ -14,13 +14,13 @@
alloc loc1=ar.pfs,3,2,4,0
mov loc0=rp
.body
- mov out0=in0 // filename
+ zxt4 out0=in0 // filename
;; // stop bit between alloc and call
- mov out1=in1 // argv
- mov out2=in2 // envp
+ zxt4 out1=in1 // argv
+ zxt4 out2=in2 // envp
add out3=16,sp // regs
br.call.sptk.few rp=sys32_execve
-1: cmp4.ge p6,p0=r8,r0
+1: cmp.ge p6,p0=r8,r0
mov ar.pfs=loc1 // restore ar.pfs
;;
(p6) mov ar.pfs=r0 // clear ar.pfs in case of success
@@ -29,31 +29,80 @@
br.ret.sptk.few rp
END(ia32_execve)
- //
- // Get possibly unaligned sigmask argument into an aligned
- // kernel buffer
-GLOBAL_ENTRY(ia32_rt_sigsuspend)
- // We'll cheat and not do an alloc here since we are ultimately
- // going to do a simple branch to the IA64 sys_rt_sigsuspend.
- // r32 is still the first argument which is the signal mask.
- // We copy this 4-byte aligned value to an 8-byte aligned buffer
- // in the task structure and then jump to the IA64 code.
+ENTRY(ia32_clone)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
+ alloc r16=ar.pfs,2,2,4,0
+ DO_SAVE_SWITCH_STACK
+ mov loc0=rp
+ mov loc1=r16 // save ar.pfs across do_fork
+ .body
+ zxt4 out1=in1 // newsp
+ mov out3=0 // stacksize
+ adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
+ zxt4 out0=in0 // out0 = clone_flags
+ br.call.sptk.many rp=do_fork
+.ret0: .restore sp
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
+ mov ar.pfs=loc1
+ mov rp=loc0
+ br.ret.sptk.many rp
+END(ia32_clone)
- EX(.Lfail, ld4 r2=[r32],4) // load low part of sigmask
- ;;
- EX(.Lfail, ld4 r3=[r32]) // load high part of sigmask
- adds r32=IA64_TASK_THREAD_SIGMASK_OFFSET,r13
- ;;
- st8 [r32]=r2
- adds r10=IA64_TASK_THREAD_SIGMASK_OFFSET+4,r13
+ENTRY(sys32_rt_sigsuspend)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs
+ mov loc0=rp
+ mov out0=in0 // mask
+ mov out1=in1 // sigsetsize
+ mov out2=sp // out2 = &sigscratch
+ .fframe 16
+ adds sp=-16,sp // allocate dummy "sigscratch"
;;
+ .body
+ br.call.sptk.many rp=ia32_rt_sigsuspend
+1: .restore sp
+ adds sp=16,sp
+ mov rp=loc0
+ mov ar.pfs=loc1
+ br.ret.sptk.many rp
+END(sys32_rt_sigsuspend)
- st4 [r10]=r3
- br.cond.sptk.many sys_rt_sigsuspend
-
-.Lfail: br.ret.sptk.many rp // failed to read sigmask
-END(ia32_rt_sigsuspend)
+ENTRY(sys32_sigsuspend)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs
+ mov loc0=rp
+ mov out0=in2 // mask (first two args are ignored)
+ ;;
+ mov out1=sp // out1 = &sigscratch
+ .fframe 16
+ adds sp=-16,sp // allocate dummy "sigscratch"
+ .body
+ br.call.sptk.many rp=ia32_sigsuspend
+1: .restore sp
+ adds sp=16,sp
+ mov rp=loc0
+ mov ar.pfs=loc1
+ br.ret.sptk.many rp
+END(sys32_sigsuspend)
+GLOBAL_ENTRY(ia32_ret_from_clone)
+ PT_REGS_UNWIND_INFO(0)
+ /*
+ * We need to call schedule_tail() to complete the scheduling process.
+ * Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
+ * address of the previously executing task.
+ */
+ br.call.sptk.many rp=ia64_invoke_schedule_tail
+.ret1: adds r2=IA64_TASK_PTRACE_OFFSET,r13
+ ;;
+ ld8 r2=[r2]
+ ;;
+ mov r8=0
+ tbit.nz p6,p0=r2,PT_TRACESYS_BIT
+(p6) br.cond.spnt .ia32_strace_check_retval
+ ;; // prevent RAW on r8
+END(ia32_ret_from_clone)
+ // fall through
GLOBAL_ENTRY(ia32_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
@@ -72,20 +121,25 @@
// manipulate ar.pfs.
//
// Input:
- // r15 = syscall number
- // b6 = syscall entry point
+ // r8 = syscall number
+ // b6 = syscall entry point
//
GLOBAL_ENTRY(ia32_trace_syscall)
PT_REGS_UNWIND_INFO(0)
+ mov r3=-38
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp
+ ;;
+ st8 [r2]=r3 // initialize return code to -ENOSYS
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
-.ret0: br.call.sptk.few rp=b6 // do the syscall
-.ret1: cmp.lt p6,p0=r8,r0 // syscall failed?
+.ret2: br.call.sptk.few rp=b6 // do the syscall
+.ia32_strace_check_retval:
+ cmp.lt p6,p0=r8,r0 // syscall failed?
adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
;;
st8.spill [r2]=r8 // store return value in slot for r8
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.ret2: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
- br.cond.sptk.many ia64_leave_kernel // rp MUST be != ia64_leave_kernel!
+.ret4: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
+ br.cond.sptk.many ia64_leave_kernel
END(ia32_trace_syscall)
GLOBAL_ENTRY(sys32_vfork)
@@ -110,7 +164,7 @@
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
br.call.sptk.few rp=do_fork
-.ret3: mov ar.pfs=loc1
+.ret5: mov ar.pfs=loc1
.restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov rp=loc0
@@ -137,21 +191,21 @@
data8 sys32_time
data8 sys_mknod
data8 sys_chmod /* 15 */
- data8 sys_lchown
+ data8 sys_lchown /* 16-bit version */
data8 sys32_ni_syscall /* old break syscall holder */
data8 sys32_ni_syscall
data8 sys32_lseek
data8 sys_getpid /* 20 */
data8 sys_mount
data8 sys_oldumount
- data8 sys_setuid
- data8 sys_getuid
+ data8 sys_setuid /* 16-bit version */
+ data8 sys_getuid /* 16-bit version */
data8 sys32_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
data8 sys32_ptrace
data8 sys32_alarm
data8 sys32_ni_syscall
- data8 sys_pause
- data8 ia32_utime /* 30 */
+ data8 sys32_pause
+ data8 sys32_utime /* 30 */
data8 sys32_ni_syscall /* old stty syscall holder */
data8 sys32_ni_syscall /* old gtty syscall holder */
data8 sys_access
@@ -167,15 +221,15 @@
data8 sys32_times
data8 sys32_ni_syscall /* old prof syscall holder */
data8 sys_brk /* 45 */
- data8 sys_setgid
- data8 sys_getgid
+ data8 sys_setgid /* 16-bit version */
+ data8 sys_getgid /* 16-bit version */
data8 sys32_signal
- data8 sys_geteuid
- data8 sys_getegid /* 50 */
+ data8 sys_geteuid /* 16-bit version */
+ data8 sys_getegid /* 16-bit version */ /* 50 */
data8 sys_acct
data8 sys_umount /* recycled never used phys( */
data8 sys32_ni_syscall /* old lock syscall holder */
- data8 ia32_ioctl
+ data8 sys32_ioctl
data8 sys32_fcntl /* 55 */
data8 sys32_ni_syscall /* old mpx syscall holder */
data8 sys_setpgid
@@ -191,19 +245,19 @@
data8 sys32_sigaction
data8 sys32_ni_syscall
data8 sys32_ni_syscall
- data8 sys_setreuid /* 70 */
- data8 sys_setregid
- data8 sys32_ni_syscall
- data8 sys_sigpending
+ data8 sys_setreuid /* 16-bit version */ /* 70 */
+ data8 sys_setregid /* 16-bit version */
+ data8 sys32_sigsuspend
+ data8 sys32_sigpending
data8 sys_sethostname
data8 sys32_setrlimit /* 75 */
- data8 sys32_getrlimit
+ data8 sys32_old_getrlimit
data8 sys32_getrusage
data8 sys32_gettimeofday
data8 sys32_settimeofday
- data8 sys_getgroups /* 80 */
- data8 sys_setgroups
- data8 old_select
+ data8 sys32_getgroups16 /* 80 */
+ data8 sys32_setgroups16
+ data8 sys32_old_select
data8 sys_symlink
data8 sys32_ni_syscall
data8 sys_readlink /* 85 */
@@ -212,17 +266,17 @@
data8 sys_reboot
data8 sys32_readdir
data8 sys32_mmap /* 90 */
- data8 sys_munmap
+ data8 sys32_munmap
data8 sys_truncate
data8 sys_ftruncate
data8 sys_fchmod
- data8 sys_fchown /* 95 */
+ data8 sys_fchown /* 16-bit version */ /* 95 */
data8 sys_getpriority
data8 sys_setpriority
data8 sys32_ni_syscall /* old profil syscall holder */
data8 sys32_statfs
data8 sys32_fstatfs /* 100 */
- data8 sys_ioperm
+ data8 sys32_ioperm
data8 sys32_socketcall
data8 sys_syslog
data8 sys32_setitimer
@@ -231,36 +285,36 @@
data8 sys32_newlstat
data8 sys32_newfstat
data8 sys32_ni_syscall
- data8 sys_iopl /* 110 */
+ data8 sys32_iopl /* 110 */
data8 sys_vhangup
data8 sys32_ni_syscall /* used to be sys_idle */
data8 sys32_ni_syscall
data8 sys32_wait4
data8 sys_swapoff /* 115 */
- data8 sys_sysinfo
+ data8 sys32_sysinfo
data8 sys32_ipc
data8 sys_fsync
data8 sys32_sigreturn
- data8 sys_clone /* 120 */
+ data8 ia32_clone /* 120 */
data8 sys_setdomainname
data8 sys32_newuname
data8 sys32_modify_ldt
- data8 sys_adjtimex
+ data8 sys32_ni_syscall /* adjtimex */
data8 sys32_mprotect /* 125 */
- data8 sys_sigprocmask
- data8 sys_create_module
- data8 sys_init_module
- data8 sys_delete_module
- data8 sys_get_kernel_syms /* 130 */
- data8 sys_quotactl
+ data8 sys32_sigprocmask
+ data8 sys32_ni_syscall /* create_module */
+ data8 sys32_ni_syscall /* init_module */
+ data8 sys32_ni_syscall /* delete_module */
+ data8 sys32_ni_syscall /* get_kernel_syms */ /* 130 */
+ data8 sys32_quotactl
data8 sys_getpgid
data8 sys_fchdir
- data8 sys_bdflush
- data8 sys_sysfs /* 135 */
- data8 sys_personality
+ data8 sys32_ni_syscall /* sys_bdflush */
+ data8 sys_sysfs /* 135 */
+ data8 sys32_personality
data8 sys32_ni_syscall /* for afs_syscall */
- data8 sys_setfsuid
- data8 sys_setfsgid
+ data8 sys_setfsuid /* 16-bit version */
+ data8 sys_setfsgid /* 16-bit version */
data8 sys_llseek /* 140 */
data8 sys32_getdents
data8 sys32_select
@@ -282,66 +336,73 @@
data8 sys_sched_yield
data8 sys_sched_get_priority_max
data8 sys_sched_get_priority_min /* 160 */
- data8 sys_sched_rr_get_interval
+ data8 sys32_sched_rr_get_interval
data8 sys32_nanosleep
data8 sys_mremap
- data8 sys_setresuid
- data8 sys32_getresuid /* 165 */
- data8 sys_vm86
- data8 sys_query_module
+ data8 sys_setresuid /* 16-bit version */
+ data8 sys32_getresuid16 /* 16-bit version */ /* 165 */
+ data8 sys32_ni_syscall /* vm86 */
+ data8 sys32_ni_syscall /* sys_query_module */
data8 sys_poll
- data8 sys_nfsservctl
+ data8 sys32_ni_syscall /* nfsservctl */
data8 sys_setresgid /* 170 */
- data8 sys32_getresgid
+ data8 sys32_getresgid16
data8 sys_prctl
data8 sys32_rt_sigreturn
data8 sys32_rt_sigaction
data8 sys32_rt_sigprocmask /* 175 */
data8 sys_rt_sigpending
- data8 sys_rt_sigtimedwait
- data8 sys_rt_sigqueueinfo
- data8 ia32_rt_sigsuspend
- data8 sys_pread /* 180 */
- data8 sys_pwrite
- data8 sys_chown
+ data8 sys32_rt_sigtimedwait
+ data8 sys32_rt_sigqueueinfo
+ data8 sys32_rt_sigsuspend
+ data8 sys32_pread /* 180 */
+ data8 sys32_pwrite
+ data8 sys_chown /* 16-bit version */
data8 sys_getcwd
data8 sys_capget
data8 sys_capset /* 185 */
data8 sys32_sigaltstack
- data8 sys_sendfile
+ data8 sys32_sendfile
data8 sys32_ni_syscall /* streams1 */
data8 sys32_ni_syscall /* streams2 */
data8 sys32_vfork /* 190 */
+ data8 sys32_getrlimit
+ data8 sys32_mmap2
+ data8 sys32_truncate64
+ data8 sys32_ftruncate64
+ data8 sys32_stat64 /* 195 */
+ data8 sys32_lstat64
+ data8 sys32_fstat64
+ data8 sys_lchown
+ data8 sys_getuid
+ data8 sys_getgid /* 200 */
+ data8 sys_geteuid
+ data8 sys_getegid
+ data8 sys_setreuid
+ data8 sys_setregid
+ data8 sys_getgroups /* 205 */
+ data8 sys_setgroups
+ data8 sys_fchown
+ data8 sys_setresuid
+ data8 sys_getresuid
+ data8 sys_setresgid /* 210 */
+ data8 sys_getresgid
+ data8 sys_chown
+ data8 sys_setuid
+ data8 sys_setgid
+ data8 sys_setfsuid /* 215 */
+ data8 sys_setfsgid
+ data8 sys_pivot_root
+ data8 sys_mincore
+ data8 sys_madvise
+ data8 sys_getdents64 /* 220 */
+ data8 sys32_fcntl64
+ data8 sys_ni_syscall /* reserved for TUX */
+ data8 sys_ni_syscall /* reserved for Security */
+ data8 sys_gettid
+ data8 sys_readahead /* 225 */
data8 sys_ni_syscall
data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 195 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 200 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 205 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 210 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 215 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 220 */
data8 sys_ni_syscall
data8 sys_ni_syscall
/*
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_ioctl.c linux-2.4.13-lia/arch/ia64/ia32/ia32_ioctl.c
--- linux-2.4.13/arch/ia64/ia32/ia32_ioctl.c Thu Jan 4 12:50:17 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ioctl.c Thu Oct 4 00:21:52 2001
@@ -3,6 +3,8 @@
*
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
+ * Copyright (C) 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/types.h>
@@ -22,8 +24,12 @@
#include <linux/if_ppp.h>
#include <linux/ixjuser.h>
#include <linux/i2o-dev.h>
+
+#include <asm/ia32.h>
+
#include <../drivers/char/drm/drm.h>
+
#define IOCTL_NR(a) ((a) & ~(_IOC_SIZEMASK << _IOC_SIZESHIFT))
#define DO_IOCTL(fd, cmd, arg) ({ \
@@ -36,179 +42,200 @@
_ret; \
})
-#define P(i) ((void *)(long)(i))
-
+#define P(i) ((void *)(unsigned long)(i))
asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg);
-asmlinkage long ia32_ioctl(unsigned int fd, unsigned int cmd, unsigned int arg)
+static long
+put_dirent32 (struct dirent *d, struct linux32_dirent *d32)
+{
+ size_t namelen = strlen(d->d_name);
+
+ return (put_user(d->d_ino, &d32->d_ino)
+ || put_user(d->d_off, &d32->d_off)
+ || put_user(d->d_reclen, &d32->d_reclen)
+ || copy_to_user(d32->d_name, d->d_name, namelen + 1));
+}
+
+asmlinkage long
+sys32_ioctl (unsigned int fd, unsigned int cmd, unsigned int arg)
{
long ret;
switch (IOCTL_NR(cmd)) {
-
- case IOCTL_NR(DRM_IOCTL_VERSION):
- {
- drm_version_t ver;
- struct {
- int version_major;
- int version_minor;
- int version_patchlevel;
- unsigned int name_len;
- unsigned int name; /* pointer */
- unsigned int date_len;
- unsigned int date; /* pointer */
- unsigned int desc_len;
- unsigned int desc; /* pointer */
- } ver32;
-
- if (copy_from_user(&ver32, P(arg), sizeof(ver32)))
- return -EFAULT;
- ver.name_len = ver32.name_len;
- ver.name = P(ver32.name);
- ver.date_len = ver32.date_len;
- ver.date = P(ver32.date);
- ver.desc_len = ver32.desc_len;
- ver.desc = P(ver32.desc);
- ret = DO_IOCTL(fd, cmd, &ver);
- if (ret >= 0) {
- ver32.version_major = ver.version_major;
- ver32.version_minor = ver.version_minor;
- ver32.version_patchlevel = ver.version_patchlevel;
- ver32.name_len = ver.name_len;
- ver32.date_len = ver.date_len;
- ver32.desc_len = ver.desc_len;
- if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
- return -EFAULT;
- }
- return(ret);
- }
-
- case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
- {
- drm_unique_t un;
- struct {
- unsigned int unique_len;
- unsigned int unique;
- } un32;
-
- if (copy_from_user(&un32, P(arg), sizeof(un32)))
- return -EFAULT;
- un.unique_len = un32.unique_len;
- un.unique = P(un32.unique);
- ret = DO_IOCTL(fd, cmd, &un);
- if (ret >= 0) {
- un32.unique_len = un.unique_len;
- if (copy_to_user(P(arg), &un32, sizeof(un32)))
- return -EFAULT;
- }
- return(ret);
- }
- case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
- case IOCTL_NR(DRM_IOCTL_ADD_MAP):
- case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
- case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
- case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
- case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
- case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
- case IOCTL_NR(DRM_IOCTL_ADD_CTX):
- case IOCTL_NR(DRM_IOCTL_RM_CTX):
- case IOCTL_NR(DRM_IOCTL_MOD_CTX):
- case IOCTL_NR(DRM_IOCTL_GET_CTX):
- case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
- case IOCTL_NR(DRM_IOCTL_NEW_CTX):
- case IOCTL_NR(DRM_IOCTL_RES_CTX):
-
- case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
- case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
- case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
- case IOCTL_NR(DRM_IOCTL_AGP_INFO):
- case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
- case IOCTL_NR(DRM_IOCTL_AGP_FREE):
- case IOCTL_NR(DRM_IOCTL_AGP_BIND):
- case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
-
- /* Mga specific ioctls */
-
- case IOCTL_NR(DRM_IOCTL_MGA_INIT):
-
- /* I810 specific ioctls */
-
- case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
- case IOCTL_NR(DRM_IOCTL_I810_COPY):
-
- /* Rage 128 specific ioctls */
-
- case IOCTL_NR(DRM_IOCTL_R128_PACKET):
-
- case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH):
- case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
- case IOCTL_NR(MTIOCGET):
- case IOCTL_NR(MTIOCPOS):
- case IOCTL_NR(MTIOCGETCONFIG):
- case IOCTL_NR(MTIOCSETCONFIG):
- case IOCTL_NR(PPPIOCSCOMPRESS):
- case IOCTL_NR(PPPIOCGIDLE):
- case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
- case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
- case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
- case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
- case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
- case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
- case IOCTL_NR(CAPI_MANUFACTURER_CMD):
- case IOCTL_NR(VIDIOCGTUNER):
- case IOCTL_NR(VIDIOCSTUNER):
- case IOCTL_NR(VIDIOCGWIN):
- case IOCTL_NR(VIDIOCSWIN):
- case IOCTL_NR(VIDIOCGFBUF):
- case IOCTL_NR(VIDIOCSFBUF):
- case IOCTL_NR(MGSL_IOCSPARAMS):
- case IOCTL_NR(MGSL_IOCGPARAMS):
- case IOCTL_NR(ATM_GETNAMES):
- case IOCTL_NR(ATM_GETLINKRATE):
- case IOCTL_NR(ATM_GETTYPE):
- case IOCTL_NR(ATM_GETESI):
- case IOCTL_NR(ATM_GETADDR):
- case IOCTL_NR(ATM_RSTADDR):
- case IOCTL_NR(ATM_ADDADDR):
- case IOCTL_NR(ATM_DELADDR):
- case IOCTL_NR(ATM_GETCIRANGE):
- case IOCTL_NR(ATM_SETCIRANGE):
- case IOCTL_NR(ATM_SETESI):
- case IOCTL_NR(ATM_SETESIF):
- case IOCTL_NR(ATM_GETSTAT):
- case IOCTL_NR(ATM_GETSTATZ):
- case IOCTL_NR(ATM_GETLOOP):
- case IOCTL_NR(ATM_SETLOOP):
- case IOCTL_NR(ATM_QUERYLOOP):
- case IOCTL_NR(ENI_SETMULT):
- case IOCTL_NR(NS_GETPSTAT):
- /* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
- case IOCTL_NR(ZATM_GETPOOLZ):
- case IOCTL_NR(ZATM_GETPOOL):
- case IOCTL_NR(ZATM_SETPOOL):
- case IOCTL_NR(ZATM_GETTHIST):
- case IOCTL_NR(IDT77105_GETSTAT):
- case IOCTL_NR(IDT77105_GETSTATZ):
- case IOCTL_NR(IXJCTL_TONE_CADENCE):
- case IOCTL_NR(IXJCTL_FRAMES_READ):
- case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
- case IOCTL_NR(IXJCTL_READ_WAIT):
- case IOCTL_NR(IXJCTL_WRITE_WAIT):
- case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
- case IOCTL_NR(I2OHRTGET):
- case IOCTL_NR(I2OLCTGET):
- case IOCTL_NR(I2OPARMSET):
- case IOCTL_NR(I2OPARMGET):
- case IOCTL_NR(I2OSWDL):
- case IOCTL_NR(I2OSWUL):
- case IOCTL_NR(I2OSWDEL):
- case IOCTL_NR(I2OHTML):
+ case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
+ case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH): {
+ struct linux32_dirent *d32 = P(arg);
+ struct dirent d[2];
+
+ ret = DO_IOCTL(fd, _IOR('r', _IOC_NR(cmd),
+ struct dirent [2]),
+ (unsigned long) d);
+ if (ret < 0)
+ return ret;
+
+ if (put_dirent32(d, d32) || put_dirent32(d + 1, d32 + 1))
+ return -EFAULT;
+
+ return ret;
+ }
+
+ case IOCTL_NR(DRM_IOCTL_VERSION):
+ {
+ drm_version_t ver;
+ struct {
+ int version_major;
+ int version_minor;
+ int version_patchlevel;
+ unsigned int name_len;
+ unsigned int name; /* pointer */
+ unsigned int date_len;
+ unsigned int date; /* pointer */
+ unsigned int desc_len;
+ unsigned int desc; /* pointer */
+ } ver32;
+
+ if (copy_from_user(&ver32, P(arg), sizeof(ver32)))
+ return -EFAULT;
+ ver.name_len = ver32.name_len;
+ ver.name = P(ver32.name);
+ ver.date_len = ver32.date_len;
+ ver.date = P(ver32.date);
+ ver.desc_len = ver32.desc_len;
+ ver.desc = P(ver32.desc);
+ ret = DO_IOCTL(fd, DRM_IOCTL_VERSION, &ver);
+ if (ret >= 0) {
+ ver32.version_major = ver.version_major;
+ ver32.version_minor = ver.version_minor;
+ ver32.version_patchlevel = ver.version_patchlevel;
+ ver32.name_len = ver.name_len;
+ ver32.date_len = ver.date_len;
+ ver32.desc_len = ver.desc_len;
+ if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
+ return -EFAULT;
+ }
+ return ret;
+ }
+
+ case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
+ {
+ drm_unique_t un;
+ struct {
+ unsigned int unique_len;
+ unsigned int unique;
+ } un32;
+
+ if (copy_from_user(&un32, P(arg), sizeof(un32)))
+ return -EFAULT;
+ un.unique_len = un32.unique_len;
+ un.unique = P(un32.unique);
+ ret = DO_IOCTL(fd, DRM_IOCTL_GET_UNIQUE, &un);
+ if (ret >= 0) {
+ un32.unique_len = un.unique_len;
+ if (copy_to_user(P(arg), &un32, sizeof(un32)))
+ return -EFAULT;
+ }
+ return ret;
+ }
+ case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
+ case IOCTL_NR(DRM_IOCTL_ADD_MAP):
+ case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
+ case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
+ case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
+ case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
+ case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
+ case IOCTL_NR(DRM_IOCTL_ADD_CTX):
+ case IOCTL_NR(DRM_IOCTL_RM_CTX):
+ case IOCTL_NR(DRM_IOCTL_MOD_CTX):
+ case IOCTL_NR(DRM_IOCTL_GET_CTX):
+ case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
+ case IOCTL_NR(DRM_IOCTL_NEW_CTX):
+ case IOCTL_NR(DRM_IOCTL_RES_CTX):
+
+ case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
+ case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
+ case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
+ case IOCTL_NR(DRM_IOCTL_AGP_INFO):
+ case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
+ case IOCTL_NR(DRM_IOCTL_AGP_FREE):
+ case IOCTL_NR(DRM_IOCTL_AGP_BIND):
+ case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
+
+ /* Mga specific ioctls */
+
+ case IOCTL_NR(DRM_IOCTL_MGA_INIT):
+
+ /* I810 specific ioctls */
+
+ case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
+ case IOCTL_NR(DRM_IOCTL_I810_COPY):
+
+ case IOCTL_NR(MTIOCGET):
+ case IOCTL_NR(MTIOCPOS):
+ case IOCTL_NR(MTIOCGETCONFIG):
+ case IOCTL_NR(MTIOCSETCONFIG):
+ case IOCTL_NR(PPPIOCSCOMPRESS):
+ case IOCTL_NR(PPPIOCGIDLE):
+ case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
+ case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
+ case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
+ case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
+ case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
+ case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
+ case IOCTL_NR(CAPI_MANUFACTURER_CMD):
+ case IOCTL_NR(VIDIOCGTUNER):
+ case IOCTL_NR(VIDIOCSTUNER):
+ case IOCTL_NR(VIDIOCGWIN):
+ case IOCTL_NR(VIDIOCSWIN):
+ case IOCTL_NR(VIDIOCGFBUF):
+ case IOCTL_NR(VIDIOCSFBUF):
+ case IOCTL_NR(MGSL_IOCSPARAMS):
+ case IOCTL_NR(MGSL_IOCGPARAMS):
+ case IOCTL_NR(ATM_GETNAMES):
+ case IOCTL_NR(ATM_GETLINKRATE):
+ case IOCTL_NR(ATM_GETTYPE):
+ case IOCTL_NR(ATM_GETESI):
+ case IOCTL_NR(ATM_GETADDR):
+ case IOCTL_NR(ATM_RSTADDR):
+ case IOCTL_NR(ATM_ADDADDR):
+ case IOCTL_NR(ATM_DELADDR):
+ case IOCTL_NR(ATM_GETCIRANGE):
+ case IOCTL_NR(ATM_SETCIRANGE):
+ case IOCTL_NR(ATM_SETESI):
+ case IOCTL_NR(ATM_SETESIF):
+ case IOCTL_NR(ATM_GETSTAT):
+ case IOCTL_NR(ATM_GETSTATZ):
+ case IOCTL_NR(ATM_GETLOOP):
+ case IOCTL_NR(ATM_SETLOOP):
+ case IOCTL_NR(ATM_QUERYLOOP):
+ case IOCTL_NR(ENI_SETMULT):
+ case IOCTL_NR(NS_GETPSTAT):
+ /* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
+ case IOCTL_NR(ZATM_GETPOOLZ):
+ case IOCTL_NR(ZATM_GETPOOL):
+ case IOCTL_NR(ZATM_SETPOOL):
+ case IOCTL_NR(ZATM_GETTHIST):
+ case IOCTL_NR(IDT77105_GETSTAT):
+ case IOCTL_NR(IDT77105_GETSTATZ):
+ case IOCTL_NR(IXJCTL_TONE_CADENCE):
+ case IOCTL_NR(IXJCTL_FRAMES_READ):
+ case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
+ case IOCTL_NR(IXJCTL_READ_WAIT):
+ case IOCTL_NR(IXJCTL_WRITE_WAIT):
+ case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
+ case IOCTL_NR(I2OHRTGET):
+ case IOCTL_NR(I2OLCTGET):
+ case IOCTL_NR(I2OPARMSET):
+ case IOCTL_NR(I2OPARMGET):
+ case IOCTL_NR(I2OSWDL):
+ case IOCTL_NR(I2OSWUL):
+ case IOCTL_NR(I2OSWDEL):
+ case IOCTL_NR(I2OHTML):
break;
- default:
- return(sys_ioctl(fd, cmd, (unsigned long)arg));
+ default:
+ return sys_ioctl(fd, cmd, (unsigned long)arg);
}
printk("%x:unimplemented IA32 ioctl system call\n", cmd);
- return(-EINVAL);
+ return -EINVAL;
}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_ldt.c linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c
--- linux-2.4.13/arch/ia64/ia32/ia32_ldt.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c Wed Oct 24 18:12:38 2001
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Adapted from arch/i386/kernel/ldt.c
*/
@@ -16,6 +16,8 @@
#include <asm/uaccess.h>
#include <asm/ia32.h>
+#define P(p) ((void *) (unsigned long) (p))
+
/*
* read_ldt() is not really atomic - this is not a problem since synchronization of reads
* and writes done to the LDT has to be assured by user-space anyway. Writes are atomic,
@@ -58,10 +60,30 @@
}
static int
+read_default_ldt (void * ptr, unsigned long bytecount)
+{
+ unsigned long size;
+ int err;
+
+ /* XXX fix me: should return equivalent of default_ldt[0] */
+ err = 0;
+ size = 8;
+ if (size > bytecount)
+ size = bytecount;
+
+ err = size;
+ if (clear_user(ptr, size))
+ err = -EFAULT;
+
+ return err;
+}
+
+static int
write_ldt (void * ptr, unsigned long bytecount, int oldmode)
{
struct ia32_modify_ldt_ldt_s ldt_info;
__u64 entry;
+ int ret;
if (bytecount != sizeof(ldt_info))
return -EINVAL;
@@ -97,23 +119,28 @@
* memory, but we still need to guard against out-of-memory, hence we must use
* put_user().
*/
- return __put_user(entry, (__u64 *) IA32_LDT_OFFSET + ldt_info.entry_number);
+ ret = __put_user(entry, (__u64 *) IA32_LDT_OFFSET + ldt_info.entry_number);
+ ia32_load_segment_descriptors(current);
+ return ret;
}
asmlinkage int
-sys32_modify_ldt (int func, void *ptr, unsigned int bytecount)
+sys32_modify_ldt (int func, unsigned int ptr, unsigned int bytecount)
{
int ret = -ENOSYS;
switch (func) {
case 0:
- ret = read_ldt(ptr, bytecount);
+ ret = read_ldt(P(ptr), bytecount);
break;
case 1:
- ret = write_ldt(ptr, bytecount, 1);
+ ret = write_ldt(P(ptr), bytecount, 1);
+ break;
+ case 2:
+ ret = read_default_ldt(P(ptr), bytecount);
break;
case 0x11:
- ret = write_ldt(ptr, bytecount, 0);
+ ret = write_ldt(P(ptr), bytecount, 0);
break;
}
return ret;
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_signal.c linux-2.4.13-lia/arch/ia64/ia32/ia32_signal.c
--- linux-2.4.13/arch/ia64/ia32/ia32_signal.c Mon Oct 9 17:54:53 2000
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_signal.c Wed Oct 10 17:38:49 2001
@@ -1,8 +1,8 @@
/*
* IA32 Architecture-specific signal handling support.
*
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
@@ -13,6 +13,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/personality.h>
#include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/signal.h>
@@ -28,9 +29,15 @@
#include <asm/segment.h>
#include <asm/ia32.h>
+#include "../kernel/sigframe.h"
+
+#define A(__x) ((unsigned long)(__x))
+
#define DEBUG_SIG 0
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+#define __IA32_NR_sigreturn 119
+#define __IA32_NR_rt_sigreturn 173
struct sigframe_ia32
{
@@ -54,12 +61,51 @@
char retcode[8];
};
-static int
+int
+copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from)
+{
+ unsigned long tmp;
+ int err;
+
+ if (!access_ok(VERIFY_READ, from, sizeof(siginfo_t32)))
+ return -EFAULT;
+
+ err = __get_user(to->si_signo, &from->si_signo);
+ err |= __get_user(to->si_errno, &from->si_errno);
+ err |= __get_user(to->si_code, &from->si_code);
+
+ if (from->si_code < 0)
+ err |= __copy_from_user(&to->_sifields._pad, &from->_sifields._pad, SI_PAD_SIZE);
+ else {
+ switch (from->si_code >> 16) {
+ case __SI_CHLD >> 16:
+ err |= __get_user(to->si_utime, &from->si_utime);
+ err |= __get_user(to->si_stime, &from->si_stime);
+ err |= __get_user(to->si_status, &from->si_status);
+ default:
+ err |= __get_user(to->si_pid, &from->si_pid);
+ err |= __get_user(to->si_uid, &from->si_uid);
+ break;
+ case __SI_FAULT >> 16:
+ err |= __get_user(tmp, &from->si_addr);
+ to->si_addr = (void *) tmp;
+ break;
+ case __SI_POLL >> 16:
+ err |= __get_user(to->si_band, &from->si_band);
+ err |= __get_user(to->si_fd, &from->si_fd);
+ break;
+ /* case __SI_RT: This is not generated by the kernel as of now. */
+ }
+ }
+ return err;
+}
+
+int
copy_siginfo_to_user32 (siginfo_t32 *to, siginfo_t *from)
{
int err;
- if (!access_ok (VERIFY_WRITE, to, sizeof(siginfo_t32)))
+ if (!access_ok(VERIFY_WRITE, to, sizeof(siginfo_t32)))
return -EFAULT;
/* If you change siginfo_t structure, please be sure
@@ -97,110 +143,329 @@
return err;
}
+static inline void
+sigact_set_handler (struct k_sigaction *sa, unsigned int handler, unsigned int restorer)
+{
+ if (handler + 1 <= 2)
+ /* SIG_DFL, SIG_IGN, or SIG_ERR: must sign-extend to 64-bits */
+ sa->sa.sa_handler = (__sighandler_t) A((int) handler);
+ else
+ sa->sa.sa_handler = (__sighandler_t) (((unsigned long) restorer << 32) | handler);
+}
+asmlinkage long
+ia32_rt_sigsuspend (sigset32_t *uset, unsigned int sigsetsize, struct sigscratch *scr)
+{
+ extern long ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall);
+ sigset_t oldset, set;
-static int
-setup_sigcontext_ia32(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
- struct pt_regs *regs, unsigned long mask)
+ scr->scratch_unat = 0; /* avoid leaking kernel bits to user level */
+ memset(&set, 0, sizeof(set));
+
+ if (sigsetsize > sizeof(sigset_t))
+ return -EINVAL;
+
+ if (copy_from_user(&set.sig, &uset->sig, sigsetsize))
+ return -EFAULT;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+ spin_lock_irq(&current->sigmask_lock);
+ {
+ oldset = current->blocked;
+ current->blocked = set;
+ recalc_sigpending(current);
+ }
+ spin_unlock_irq(&current->sigmask_lock);
+
+ /*
+ * The return below usually returns to the signal handler. We need to pre-set the
+ * correct error code here to ensure that the right values get saved in sigcontext
+ * by ia64_do_signal.
+ */
+ scr->pt.r8 = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ if (ia64_do_signal(&oldset, scr, 1))
+ return -EINTR;
+ }
+}
+
+asmlinkage long
+ia32_sigsuspend (unsigned int mask, struct sigscratch *scr)
+{
+ return ia32_rt_sigsuspend((sigset32_t *)&mask, sizeof(mask), scr);
+}
+
+asmlinkage long
+sys32_signal (int sig, unsigned int handler)
+{
+ struct k_sigaction new_sa, old_sa;
+ int ret;
+
+ sigact_set_handler(&new_sa, handler, 0);
+ new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
+
+ ret = do_sigaction(sig, &new_sa, &old_sa);
+
+ return ret ? ret : IA32_SA_HANDLER(&old_sa);
+}
+
+asmlinkage long
+sys32_rt_sigaction (int sig, struct sigaction32 *act,
+ struct sigaction32 *oact, unsigned int sigsetsize)
+{
+ struct k_sigaction new_ka, old_ka;
+ unsigned int handler, restorer;
+ int ret;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset32_t))
+ return -EINVAL;
+
+ if (act) {
+ ret = get_user(handler, &act->sa_handler);
+ ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= get_user(restorer, &act->sa_restorer);
+ ret |= copy_from_user(&new_ka.sa.sa_mask, &act->sa_mask, sizeof(sigset32_t));
+ if (ret)
+ return -EFAULT;
+
+ sigact_set_handler(&new_ka, handler, restorer);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+ ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+ ret |= copy_to_user(&oact->sa_mask, &old_ka.sa.sa_mask, sizeof(sigset32_t));
+ }
+ return ret;
+}
+
+
+extern asmlinkage long sys_rt_sigprocmask (int how, sigset_t *set, sigset_t *oset,
+ size_t sigsetsize);
+
+asmlinkage long
+sys32_rt_sigprocmask (int how, sigset32_t *set, sigset32_t *oset, unsigned int sigsetsize)
+{
+ mm_segment_t old_fs = get_fs();
+ sigset_t s;
+ long ret;
+
+ if (sigsetsize > sizeof(s))
+ return -EINVAL;
+
+ if (set) {
+ memset(&s, 0, sizeof(s));
+ if (copy_from_user(&s.sig, set, sigsetsize))
+ return -EFAULT;
+ }
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL, sizeof(s));
+ set_fs(old_fs);
+ if (ret)
+ return ret;
+ if (oset) {
+ if (copy_to_user(oset, &s.sig, sigsetsize))
+ return -EFAULT;
+ }
+ return 0;
+}
+
+asmlinkage long
+sys32_sigprocmask (int how, unsigned int *set, unsigned int *oset)
{
- int err = 0;
- unsigned long flag;
+ return sys32_rt_sigprocmask(how, (sigset32_t *) set, (sigset32_t *) oset, sizeof(*set));
+}
- err |= __put_user((regs->r16 >> 32) & 0xffff , (unsigned int *)&sc->fs);
- err |= __put_user((regs->r16 >> 48) & 0xffff , (unsigned int *)&sc->gs);
+asmlinkage long
+sys32_rt_sigtimedwait (sigset32_t *uthese, siginfo_t32 *uinfo, struct timespec32 *uts,
+ unsigned int sigsetsize)
+{
+ extern asmlinkage long sys_rt_sigtimedwait (const sigset_t *, siginfo_t *,
+ const struct timespec *, size_t);
+ extern int copy_siginfo_to_user32 (siginfo_t32 *, siginfo_t *);
+ mm_segment_t old_fs = get_fs();
+ struct timespec t;
+ siginfo_t info;
+ sigset_t s;
+ int ret;
- err |= __put_user((regs->r16 >> 56) & 0xffff, (unsigned int *)&sc->es);
- err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
- err |= __put_user(regs->r15, &sc->edi);
- err |= __put_user(regs->r14, &sc->esi);
- err |= __put_user(regs->r13, &sc->ebp);
- err |= __put_user(regs->r12, &sc->esp);
- err |= __put_user(regs->r11, &sc->ebx);
- err |= __put_user(regs->r10, &sc->edx);
- err |= __put_user(regs->r9, &sc->ecx);
- err |= __put_user(regs->r8, &sc->eax);
+ if (copy_from_user(&s.sig, uthese, sizeof(sigset32_t)))
+ return -EFAULT;
+ if (uts) {
+ ret = get_user(t.tv_sec, &uts->tv_sec);
+ ret |= get_user(t.tv_nsec, &uts->tv_nsec);
+ if (ret)
+ return -EFAULT;
+ }
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
+ set_fs(old_fs);
+ if (ret >= 0 && uinfo) {
+ if (copy_siginfo_to_user32(uinfo, &info))
+ return -EFAULT;
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_rt_sigqueueinfo (int pid, int sig, siginfo_t32 *uinfo)
+{
+ extern asmlinkage long sys_rt_sigqueueinfo (int, int, siginfo_t *);
+ extern int copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from);
+ mm_segment_t old_fs = get_fs();
+ siginfo_t info;
+ int ret;
+
+ if (copy_siginfo_from_user32(&info, uinfo))
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigqueueinfo(pid, sig, &info);
+ set_fs(old_fs);
+ return ret;
+}
+
+asmlinkage long
+sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ unsigned int handler, restorer;
+ int ret;
+
+ if (act) {
+ old_sigset32_t mask;
+
+ ret = get_user(handler, &act->sa_handler);
+ ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= get_user(restorer, &act->sa_restorer);
+ ret |= get_user(mask, &act->sa_mask);
+ if (ret)
+ return ret;
+
+ sigact_set_handler(&new_ka, handler, restorer);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+ ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+ ret |= put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+static int
+setup_sigcontext_ia32 (struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
+ struct pt_regs *regs, unsigned long mask)
+{
+ int err = 0;
+ unsigned long flag;
+
+ err |= __put_user((regs->r16 >> 32) & 0xffff, (unsigned int *)&sc->fs);
+ err |= __put_user((regs->r16 >> 48) & 0xffff, (unsigned int *)&sc->gs);
+ err |= __put_user((regs->r16 >> 16) & 0xffff, (unsigned int *)&sc->es);
+ err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
+ err |= __put_user(regs->r15, &sc->edi);
+ err |= __put_user(regs->r14, &sc->esi);
+ err |= __put_user(regs->r13, &sc->ebp);
+ err |= __put_user(regs->r12, &sc->esp);
+ err |= __put_user(regs->r11, &sc->ebx);
+ err |= __put_user(regs->r10, &sc->edx);
+ err |= __put_user(regs->r9, &sc->ecx);
+ err |= __put_user(regs->r8, &sc->eax);
#if 0
- err |= __put_user(current->tss.trap_no, &sc->trapno);
- err |= __put_user(current->tss.error_code, &sc->err);
+ err |= __put_user(current->tss.trap_no, &sc->trapno);
+ err |= __put_user(current->tss.error_code, &sc->err);
#endif
- err |= __put_user(regs->cr_iip, &sc->eip);
- err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
- /*
- * `eflags' is in an ar register for this context
- */
- asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
- err |= __put_user((unsigned int)flag, &sc->eflags);
-
- err |= __put_user(regs->r12, &sc->esp_at_signal);
- err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
+ err |= __put_user(regs->cr_iip, &sc->eip);
+ err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
+ /*
+ * `eflags' is in an ar register for this context
+ */
+ asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
+ err |= __put_user((unsigned int)flag, &sc->eflags);
+ err |= __put_user(regs->r12, &sc->esp_at_signal);
+ err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
#if 0
- tmp = save_i387(fpstate);
- if (tmp < 0)
- err = 1;
- else
- err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
+ tmp = save_i387(fpstate);
+ if (tmp < 0)
+ err = 1;
+ else
+ err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
- /* non-iBCS2 extensions.. */
+ /* non-iBCS2 extensions.. */
#endif
- err |= __put_user(mask, &sc->oldmask);
+ err |= __put_user(mask, &sc->oldmask);
#if 0
- err |= __put_user(current->tss.cr2, &sc->cr2);
+ err |= __put_user(current->tss.cr2, &sc->cr2);
#endif
-
- return err;
+ return err;
}
static int
-restore_sigcontext_ia32(struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
+restore_sigcontext_ia32 (struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
{
- unsigned int err = 0;
+ unsigned int err = 0;
+
+#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
-#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
+#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
+#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
+#define copyseg_cs(tmp) (regs->r17 |= tmp)
+#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
+#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
+#define copyseg_ds(tmp) (regs->r16 |= tmp)
+
+#define COPY_SEG(seg) \
+ { \
+ unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp); \
+ }
+#define COPY_SEG_STRICT(seg) \
+ { \
+ unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp|3); \
+ }
-#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
-#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
-#define copyseg_cs(tmp) (regs->r17 |= tmp)
-#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
-#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
-#define copyseg_ds(tmp) (regs->r16 |= tmp)
-
-#define COPY_SEG(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
- copyseg_##seg(tmp); }
-
-#define COPY_SEG_STRICT(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
- copyseg_##seg(tmp|3); }
-
- /* To make COPY_SEGs easier, we zero r16, r17 */
- regs->r16 = 0;
- regs->r17 = 0;
-
- COPY_SEG(gs);
- COPY_SEG(fs);
- COPY_SEG(es);
- COPY_SEG(ds);
- COPY(r15, edi);
- COPY(r14, esi);
- COPY(r13, ebp);
- COPY(r12, esp);
- COPY(r11, ebx);
- COPY(r10, edx);
- COPY(r9, ecx);
- COPY(cr_iip, eip);
- COPY_SEG_STRICT(cs);
- COPY_SEG_STRICT(ss);
- {
+ /* To make COPY_SEGs easier, we zero r16, r17 */
+ regs->r16 = 0;
+ regs->r17 = 0;
+
+ COPY_SEG(gs);
+ COPY_SEG(fs);
+ COPY_SEG(es);
+ COPY_SEG(ds);
+ COPY(r15, edi);
+ COPY(r14, esi);
+ COPY(r13, ebp);
+ COPY(r12, esp);
+ COPY(r11, ebx);
+ COPY(r10, edx);
+ COPY(r9, ecx);
+ COPY(cr_iip, eip);
+ COPY_SEG_STRICT(cs);
+ COPY_SEG_STRICT(ss);
+ ia32_load_segment_descriptors(current);
+ {
unsigned int tmpflags;
unsigned long flag;
/*
- * IA32 `eflags' is not part of `pt_regs', it's
- * in an ar register which is part of the thread
- * context. Fortunately, we are executing in the
+ * IA32 `eflags' is not part of `pt_regs', it's in an ar register which
+ * is part of the thread context. Fortunately, we are executing in the
* IA32 process's context.
*/
err |= __get_user(tmpflags, &sc->eflags);
@@ -210,186 +475,191 @@
asm volatile ("mov ar.eflag=%0 ;;" :: "r"(flag));
regs->r1 = -1; /* disable syscall checks, r1 is orig_eax */
- }
+ }
#if 0
- {
- struct _fpstate * buf;
- err |= __get_user(buf, &sc->fpstate);
- if (buf) {
- if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
- goto badframe;
- err |= restore_i387(buf);
- }
- }
+ {
+ struct _fpstate * buf;
+ err |= __get_user(buf, &sc->fpstate);
+ if (buf) {
+ if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
+ goto badframe;
+ err |= restore_i387(buf);
+ }
+ }
#endif
- err |= __get_user(*peax, &sc->eax);
- return err;
+ err |= __get_user(*peax, &sc->eax);
+ return err;
-#if 0
-badframe:
- return 1;
+#if 0
+ badframe:
+ return 1;
#endif
-
}
/*
* Determine which stack to use..
*/
static inline void *
-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+get_sigframe (struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
{
- unsigned long esp;
- unsigned int xss;
+ unsigned long esp;
- /* Default to using normal stack */
- esp = regs->r12;
- xss = regs->r16 >> 16;
-
- /* This is the X/Open sanctioned signal stack switching. */
- if (ka->sa.sa_flags & SA_ONSTACK) {
- if (! on_sig_stack(esp))
- esp = current->sas_ss_sp + current->sas_ss_size;
- }
- /* Legacy stack switching not supported */
-
- return (void *)((esp - frame_size) & -8ul);
+ /* Default to using normal stack (truncate off sign-extension of bit 31: */
+ esp = (unsigned int) regs->r12;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+ if (!on_sig_stack(esp))
+ esp = current->sas_ss_sp + current->sas_ss_size;
+ }
+ /* Legacy stack switching not supported */
+
+ return (void *)((esp - frame_size) & -8ul);
}
static int
-setup_frame_ia32(int sig, struct k_sigaction *ka, sigset_t *set,
- struct pt_regs * regs)
-{
- struct sigframe_ia32 *frame;
- int err = 0;
-
- frame = get_sigframe(ka, regs, sizeof(*frame));
-
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- err |= __put_user((current->exec_domain
- && current->exec_domain->signal_invmap
- && sig < 32
- ? (int)(current->exec_domain->signal_invmap[sig])
- : sig),
- &frame->sig);
-
- err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
-
- if (_IA32_NSIG_WORDS > 1) {
- err |= __copy_to_user(frame->extramask, &set->sig[1],
- sizeof(frame->extramask));
- }
-
- /* Set up to return from userspace. If provided, use a stub
- already in userspace. */
- err |= __put_user((long)frame->retcode, &frame->pretcode);
- /* This is popl %eax ; movl $,%eax ; int $0x80 */
- err |= __put_user(0xb858, (short *)(frame->retcode+0));
-#define __IA32_NR_sigreturn 119
- err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
- err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
- err |= __put_user(0x80cd, (short *)(frame->retcode+6));
-
- if (err)
- goto give_sigsegv;
-
- /* Set up registers for signal handler */
- regs->r12 = (unsigned long) frame;
- regs->cr_iip = (unsigned long) ka->sa.sa_handler;
-
- set_fs(USER_DS);
- regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS;
+setup_frame_ia32 (int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs)
+{
+ struct sigframe_ia32 *frame;
+ int err = 0;
+
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? (int)(current->exec_domain->signal_invmap[sig])
+ : sig),
+ &frame->sig);
+
+ err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
+
+ if (_IA32_NSIG_WORDS > 1)
+ err |= __copy_to_user(frame->extramask, (char *) &set->sig + 4,
+ sizeof(frame->extramask));
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ unsigned int restorer = IA32_SA_RESTORER(ka);
+ err |= __put_user(restorer, &frame->pretcode);
+ } else {
+ err |= __put_user((long)frame->retcode, &frame->pretcode);
+ /* This is popl %eax ; movl $,%eax ; int $0x80 */
+ err |= __put_user(0xb858, (short *)(frame->retcode+0));
+ err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
+ err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+6));
+ }
+
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = IA32_SA_HANDLER(ka);
+
+ set_fs(USER_DS);
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
#if 0
- regs->eflags &= ~TF_MASK;
+ regs->eflags &= ~TF_MASK;
#endif
#if 0
- printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
+ printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
current->comm, current->pid, sig, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
- return 1;
+ return 1;
-give_sigsegv:
- if (sig == SIGSEGV)
- ka->sa.sa_handler = SIG_DFL;
- force_sig(SIGSEGV, current);
- return 0;
+ give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+ return 0;
}
static int
-setup_rt_frame_ia32(int sig, struct k_sigaction *ka, siginfo_t *info,
- sigset_t *set, struct pt_regs * regs)
+setup_rt_frame_ia32 (int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs * regs)
{
- struct rt_sigframe_ia32 *frame;
- int err = 0;
+ struct rt_sigframe_ia32 *frame;
+ int err = 0;
- frame = get_sigframe(ka, regs, sizeof(*frame));
+ frame = get_sigframe(ka, regs, sizeof(*frame));
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- err |= __put_user((current->exec_domain
- && current->exec_domain->signal_invmap
- && sig < 32
- ? current->exec_domain->signal_invmap[sig]
- : sig),
- &frame->sig);
- err |= __put_user((long)&frame->info, &frame->pinfo);
- err |= __put_user((long)&frame->uc, &frame->puc);
- err |= copy_siginfo_to_user32(&frame->info, info);
-
- /* Create the ucontext. */
- err |= __put_user(0, &frame->uc.uc_flags);
- err |= __put_user(0, &frame->uc.uc_link);
- err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
- err |= __put_user(sas_ss_flags(regs->r12),
- &frame->uc.uc_stack.ss_flags);
- err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
- err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate,
- regs, set->sig[0]);
- err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
-
- err |= __put_user((long)frame->retcode, &frame->pretcode);
- /* This is movl $,%eax ; int $0x80 */
- err |= __put_user(0xb8, (char *)(frame->retcode+0));
-#define __IA32_NR_rt_sigreturn 173
- err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
- err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? current->exec_domain->signal_invmap[sig]
+ : sig),
+ &frame->sig);
+ err |= __put_user((long)&frame->info, &frame->pinfo);
+ err |= __put_user((long)&frame->uc, &frame->puc);
+ err |= copy_siginfo_to_user32(&frame->info, info);
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(regs->r12), &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate, regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ unsigned int restorer = IA32_SA_RESTORER(ka);
+ err |= __put_user(restorer, &frame->pretcode);
+ } else {
+ err |= __put_user((long)frame->retcode, &frame->pretcode);
+ /* This is movl $,%eax ; int $0x80 */
+ err |= __put_user(0xb8, (char *)(frame->retcode+0));
+ err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+ }
- if (err)
- goto give_sigsegv;
+ if (err)
+ goto give_sigsegv;
- /* Set up registers for signal handler */
- regs->r12 = (unsigned long) frame;
- regs->cr_iip = (unsigned long) ka->sa.sa_handler;
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = IA32_SA_HANDLER(ka);
- set_fs(USER_DS);
+ set_fs(USER_DS);
- regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS;
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
#if 0
- regs->eflags &= ~TF_MASK;
+ regs->eflags &= ~TF_MASK;
#endif
#if 0
- printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
+ printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
current->comm, current->pid, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
- return 1;
+ return 1;
give_sigsegv:
- if (sig == SIGSEGV)
- ka->sa.sa_handler = SIG_DFL;
- force_sig(SIGSEGV, current);
- return 0;
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+ return 0;
}
int
@@ -398,95 +668,78 @@
{
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
- return(setup_rt_frame_ia32(sig, ka, info, set, regs));
+ return setup_rt_frame_ia32(sig, ka, info, set, regs);
else
- return(setup_frame_ia32(sig, ka, set, regs));
+ return setup_frame_ia32(sig, ka, set, regs);
}
-asmlinkage int
-sys32_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
- struct pt_regs *regs = (struct pt_regs *) &stack;
- struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(regs->r12- 8);
- sigset_t set;
- int eax;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
-
- if (__get_user(set.sig[0], &frame->sc.oldmask)
- || (_IA32_NSIG_WORDS > 1
- && __copy_from_user((((char *) &set.sig) + 4),
- &frame->extramask,
- sizeof(frame->extramask))))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
- spin_lock_irq(&current->sigmask_lock);
- current->blocked = (sigset_t) set;
- recalc_sigpending(current);
- spin_unlock_irq(&current->sigmask_lock);
-
- if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
- goto badframe;
- return eax;
-
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
-
-asmlinkage int
-sys32_rt_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
- struct pt_regs *regs = (struct pt_regs *) &stack;
- struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(regs->r12 - 4);
- sigset_t set;
- stack_t st;
- int eax;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
- if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
- spin_lock_irq(&current->sigmask_lock);
- current->blocked = set;
- recalc_sigpending(current);
- spin_unlock_irq(&current->sigmask_lock);
-
- if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
- goto badframe;
-
- if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
- goto badframe;
- /* It is more difficult to avoid calling this function than to
- call it and ignore errors. */
- do_sigaltstack(&st, NULL, regs->r12);
-
- return eax;
-
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
+asmlinkage long
+sys32_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long esp = (unsigned int) regs->r12;
+ struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(esp - 8);
+ sigset_t set;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_IA32_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = (sigset_t) set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
+ goto badframe;
+ return eax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+asmlinkage long
+sys32_rt_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long esp = (unsigned int) regs->r12;
+ struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(esp - 4);
+ sigset_t set;
+ stack_t st;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
+ goto badframe;
+
+ if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+ goto badframe;
+ /* It is more difficult to avoid calling this function than to
+ call it and ignore errors. */
+ do_sigaltstack(&st, NULL, esp);
+
+ return eax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_support.c linux-2.4.13-lia/arch/ia64/ia32/ia32_support.c
--- linux-2.4.13/arch/ia64/ia32/ia32_support.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_support.c Wed Oct 10 17:39:02 2001
@@ -4,15 +4,18 @@
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2000 Asit K. Mallick <asit.k.mallick@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 06/16/00 A. Mallick added csd/ssd/tssd for ia32 thread context
* 02/19/01 D. Mosberger dropped tssd; it's not needed
+ * 09/14/01 D. Mosberger fixed memory management for gdt/tss page
+ * 09/29/01 D. Mosberger added ia32_load_segment_descriptors()
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>
+#include <linux/personality.h>
#include <linux/sched.h>
#include <asm/page.h>
@@ -21,10 +24,46 @@
#include <asm/processor.h>
#include <asm/ia32.h>
-extern unsigned long *ia32_gdt_table, *ia32_tss;
-
extern void die_if_kernel (char *str, struct pt_regs *regs, long err);
+struct exec_domain ia32_exec_domain;
+struct page *ia32_shared_page[(2*IA32_PAGE_SIZE + PAGE_SIZE - 1)/PAGE_SIZE];
+unsigned long *ia32_gdt;
+
+static unsigned long
+load_desc (u16 selector)
+{
+ unsigned long *table, limit, index;
+
+ if (!selector)
+ return 0;
+ if (selector & IA32_SEGSEL_TI) {
+ table = (unsigned long *) IA32_LDT_OFFSET;
+ limit = IA32_LDT_ENTRIES;
+ } else {
+ table = ia32_gdt;
+ limit = IA32_PAGE_SIZE / sizeof(ia32_gdt[0]);
+ }
+ index = selector >> IA32_SEGSEL_INDEX_SHIFT;
+ if (index >= limit)
+ return 0;
+ return IA32_SEG_UNSCRAMBLE(table[index]);
+}
+
+void
+ia32_load_segment_descriptors (struct task_struct *task)
+{
+ struct pt_regs *regs = ia64_task_regs(task);
+
+ /* Setup the segment descriptors */
+ regs->r24 = load_desc(regs->r16 >> 16); /* ESD */
+ regs->r27 = load_desc(regs->r16 >> 0); /* DSD */
+ regs->r28 = load_desc(regs->r16 >> 32); /* FSD */
+ regs->r29 = load_desc(regs->r16 >> 48); /* GSD */
+ task->thread.csd = load_desc(regs->r17 >> 0); /* CSD */
+ task->thread.ssd = load_desc(regs->r17 >> 16); /* SSD */
+}
+
void
ia32_save_state (struct task_struct *t)
{
@@ -46,14 +85,17 @@
t->thread.csd = csd;
t->thread.ssd = ssd;
ia64_set_kr(IA64_KR_IO_BASE, t->thread.old_iob);
+ ia64_set_kr(IA64_KR_TSSD, t->thread.old_k1);
}
void
ia32_load_state (struct task_struct *t)
{
- unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd;
+ unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd, tssd;
struct pt_regs *regs = ia64_task_regs(t);
- int nr;
+ int nr = smp_processor_id(); /* LDT and TSS depend on CPU number: */
+
+ nr = smp_processor_id();
eflag = t->thread.eflag;
fsr = t->thread.fsr;
@@ -62,6 +104,7 @@
fdr = t->thread.fdr;
csd = t->thread.csd;
ssd = t->thread.ssd;
+ tssd = load_desc(_TSS(nr)); /* TSSD */
asm volatile ("mov ar.eflag=%0;"
"mov ar.fsr=%1;"
@@ -72,11 +115,12 @@
"mov ar.ssd=%6;"
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr), "r"(csd), "r"(ssd));
current->thread.old_iob = ia64_get_kr(IA64_KR_IO_BASE);
+ current->thread.old_k1 = ia64_get_kr(IA64_KR_TSSD);
ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE);
+ ia64_set_kr(IA64_KR_TSSD, tssd);
- /* load TSS and LDT while preserving SS and CS: */
- nr = smp_processor_id();
regs->r17 = (_TSS(nr) << 48) | (_LDT(nr) << 32) | (__u32) regs->r17;
+ regs->r30 = load_desc(_LDT(nr)); /* LDTD */
}
/*
@@ -85,36 +129,34 @@
void
ia32_gdt_init (void)
{
- unsigned long gdt_and_tss_page, ldt_size;
+ unsigned long *tss;
+ unsigned long ldt_size;
int nr;
- /* allocate two IA-32 pages of memory: */
- gdt_and_tss_page = __get_free_pages(GFP_KERNEL,
- (IA32_PAGE_SHIFT < PAGE_SHIFT)
- ? 0 : (IA32_PAGE_SHIFT + 1) - PAGE_SHIFT);
- ia32_gdt_table = (unsigned long *) gdt_and_tss_page;
- ia32_tss = (unsigned long *) (gdt_and_tss_page + IA32_PAGE_SIZE);
-
- /* Zero the gdt and tss */
- memset((void *) gdt_and_tss_page, 0, 2*IA32_PAGE_SIZE);
+ ia32_shared_page[0] = alloc_page(GFP_KERNEL);
+ ia32_gdt = page_address(ia32_shared_page[0]);
+ tss = ia32_gdt + IA32_PAGE_SIZE/sizeof(ia32_gdt[0]);
+
+ if (IA32_PAGE_SIZE == PAGE_SIZE) {
+ ia32_shared_page[1] = alloc_page(GFP_KERNEL);
+ tss = page_address(ia32_shared_page[1]);
+ }
/* CS descriptor in IA-32 (scrambled) format */
- ia32_gdt_table[__USER_CS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET - 1) >> IA32_PAGE_SHIFT,
- 0xb, 1, 3, 1, 1, 1, 1);
+ ia32_gdt[__USER_CS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) >> IA32_PAGE_SHIFT,
+ 0xb, 1, 3, 1, 1, 1, 1);
/* DS descriptor in IA-32 (scrambled) format */
- ia32_gdt_table[__USER_DS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET - 1) >> IA32_PAGE_SHIFT,
- 0x3, 1, 3, 1, 1, 1, 1);
+ ia32_gdt[__USER_DS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) >> IA32_PAGE_SHIFT,
+ 0x3, 1, 3, 1, 1, 1, 1);
/* We never change the TSS and LDT descriptors, so we can share them across all CPUs. */
ldt_size = PAGE_ALIGN(IA32_LDT_ENTRIES*IA32_LDT_ENTRY_SIZE);
for (nr = 0; nr < NR_CPUS; ++nr) {
- ia32_gdt_table[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235,
- 0xb, 0, 3, 1, 1, 1, 0);
- ia32_gdt_table[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1,
- 0x2, 0, 3, 1, 1, 1, 0);
+ ia32_gdt[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235,
+ 0xb, 0, 3, 1, 1, 1, 0);
+ ia32_gdt[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1,
+ 0x2, 0, 3, 1, 1, 1, 0);
}
}
@@ -133,3 +175,18 @@
siginfo.si_code = TRAP_BRKPT;
force_sig_info(SIGTRAP, &siginfo, current);
}
+
+static int __init
+ia32_init (void)
+{
+ ia32_exec_domain.name = "Linux/x86";
+ ia32_exec_domain.handler = NULL;
+ ia32_exec_domain.pers_low = PER_LINUX32;
+ ia32_exec_domain.pers_high = PER_LINUX32;
+ ia32_exec_domain.signal_map = default_exec_domain.signal_map;
+ ia32_exec_domain.signal_invmap = default_exec_domain.signal_invmap;
+ register_exec_domain(&ia32_exec_domain);
+ return 0;
+}
+
+__initcall(ia32_init);
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_traps.c linux-2.4.13-lia/arch/ia64/ia32/ia32_traps.c
--- linux-2.4.13/arch/ia64/ia32/ia32_traps.c Thu Jan 4 12:50:17 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_traps.c Thu Oct 4 00:21:52 2001
@@ -1,7 +1,12 @@
/*
- * IA32 exceptions handler
+ * IA-32 exception handlers
*
+ * Copyright (C) 2000 Asit K. Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ *
* 06/16/00 A. Mallick added siginfo for most cases (close to IA32)
+ * 09/29/00 D. Mosberger added ia32_intercept()
*/
#include <linux/kernel.h>
@@ -9,6 +14,26 @@
#include <asm/ia32.h>
#include <asm/ptrace.h>
+
+int
+ia32_intercept (struct pt_regs *regs, unsigned long isr)
+{
+ switch ((isr >> 16) & 0xff) {
+ case 0: /* Instruction intercept fault */
+ case 3: /* Locked Data reference fault */
+ case 1: /* Gate intercept trap */
+ return -1;
+
+ case 2: /* System flag trap */
+ if (((isr >> 14) & 0x3) >= 2) {
+ /* MOV SS, POP SS instructions */
+ ia64_psr(regs)->id = 1;
+ return 0;
+ } else
+ return -1;
+ }
+ return -1;
+}
int
ia32_exception (struct pt_regs *regs, unsigned long isr)
diff -urN linux-2.4.13/arch/ia64/ia32/sys_ia32.c linux-2.4.13-lia/arch/ia64/ia32/sys_ia32.c
--- linux-2.4.13/arch/ia64/ia32/sys_ia32.c Mon Aug 20 10:18:26 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/sys_ia32.c Wed Oct 10 17:39:17 2001
@@ -1,14 +1,13 @@
/*
- * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on
- * sys_sparc32
+ * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Derived from sys_sparc32.c.
*
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
- * Copyright (C) 2000 Hewlett-Packard Co.
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* These routines maintain argument size conversion between 32bit and 64bit
* environment.
@@ -53,31 +52,56 @@
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
-#include <asm/ipc.h>
#include <net/scm.h>
#include <net/sock.h>
#include <asm/ia32.h>
+#define DEBUG 0
+
+#if DEBUG
+# define DBG(fmt...) printk(KERN_DEBUG fmt)
+#else
+# define DBG(fmt...)
+#endif
+
#define A(__x) ((unsigned long)(__x))
#define AA(__x) ((unsigned long)(__x))
#define ROUND_UP(x,a) ((__typeof__(x))(((unsigned long)(x) + ((a) - 1)) & ~((a) - 1)))
#define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de)))
+#define OFFSET4K(a) ((a) & 0xfff)
+#define PAGE_START(addr) ((addr) & PAGE_MASK)
+#define PAGE_OFF(addr) ((addr) & ~PAGE_MASK)
+
extern asmlinkage long sys_execve (char *, char **, char **, struct pt_regs *);
extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long);
+extern asmlinkage long sys_munmap (unsigned long, size_t);
+extern unsigned long arch_get_unmapped_area (struct file *, unsigned long, unsigned long,
+ unsigned long, unsigned long);
+
+/* forward declaration: */
+asmlinkage long sys32_mprotect (unsigned int, unsigned int, int);
+
+/*
+ * Anything that modifies or inspects ia32 user virtual memory must hold this semaphore
+ * while doing so.
+ */
+/* XXX make per-mm: */
+static DECLARE_MUTEX(ia32_mmap_sem);
static int
nargs (unsigned int arg, char **ap)
{
- int n, err, addr;
+ unsigned int addr;
+ int n, err;
if (!arg)
return 0;
n = 0;
do {
- err = get_user(addr, (int *)A(arg));
+ err = get_user(addr, (unsigned int *)A(arg));
if (err)
return err;
if (ap)
@@ -94,7 +118,7 @@
int stack)
{
struct pt_regs *regs = (struct pt_regs *)&stack;
- unsigned long old_map_base, old_task_size;
+ unsigned long old_map_base, old_task_size, tssd;
char **av, **ae;
int na, ne, len;
long r;
@@ -123,15 +147,20 @@
old_map_base = current->thread.map_base;
old_task_size = current->thread.task_size;
+ tssd = ia64_get_kr(IA64_KR_TSSD);
- /* we may be exec'ing a 64-bit process: reset map base & task-size: */
+ /* we may be exec'ing a 64-bit process: reset map base, task-size, and io-base: */
current->thread.map_base = DEFAULT_MAP_BASE;
current->thread.task_size = DEFAULT_TASK_SIZE;
+ ia64_set_kr(IA64_KR_IO_BASE, current->thread.old_iob);
+ ia64_set_kr(IA64_KR_TSSD, current->thread.old_k1);
set_fs(KERNEL_DS);
r = sys_execve(filename, av, ae, regs);
if (r < 0) {
- /* oops, execve failed, switch back to old map base & task-size: */
+ /* oops, execve failed, switch back to old values... */
+ ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE);
+ ia64_set_kr(IA64_KR_TSSD, tssd);
current->thread.map_base = old_map_base;
current->thread.task_size = old_task_size;
set_fs(USER_DS); /* establish new task-size as the address-limit */
@@ -142,30 +171,33 @@
}
static inline int
-putstat(struct stat32 *ubuf, struct stat *kbuf)
+putstat (struct stat32 *ubuf, struct stat *kbuf)
{
int err;
- err = put_user (kbuf->st_dev, &ubuf->st_dev);
- err |= __put_user (kbuf->st_ino, &ubuf->st_ino);
- err |= __put_user (kbuf->st_mode, &ubuf->st_mode);
- err |= __put_user (kbuf->st_nlink, &ubuf->st_nlink);
- err |= __put_user (kbuf->st_uid, &ubuf->st_uid);
- err |= __put_user (kbuf->st_gid, &ubuf->st_gid);
- err |= __put_user (kbuf->st_rdev, &ubuf->st_rdev);
- err |= __put_user (kbuf->st_size, &ubuf->st_size);
- err |= __put_user (kbuf->st_atime, &ubuf->st_atime);
- err |= __put_user (kbuf->st_mtime, &ubuf->st_mtime);
- err |= __put_user (kbuf->st_ctime, &ubuf->st_ctime);
- err |= __put_user (kbuf->st_blksize, &ubuf->st_blksize);
- err |= __put_user (kbuf->st_blocks, &ubuf->st_blocks);
+ if (clear_user(ubuf, sizeof(*ubuf)))
+ return 1;
+
+ err = __put_user(kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user(kbuf->st_ino, &ubuf->st_ino);
+ err |= __put_user(kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user(kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user(kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user(kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user(kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user(kbuf->st_size, &ubuf->st_size);
+ err |= __put_user(kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user(kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user(kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user(kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user(kbuf->st_blocks, &ubuf->st_blocks);
return err;
}
-extern asmlinkage long sys_newstat(char * filename, struct stat * statbuf);
+extern asmlinkage long sys_newstat (char * filename, struct stat * statbuf);
asmlinkage long
-sys32_newstat(char * filename, struct stat32 *statbuf)
+sys32_newstat (char *filename, struct stat32 *statbuf)
{
int ret;
struct stat s;
@@ -173,8 +205,8 @@
set_fs(KERNEL_DS);
ret = sys_newstat(filename, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
@@ -182,16 +214,16 @@
extern asmlinkage long sys_newlstat(char * filename, struct stat * statbuf);
asmlinkage long
-sys32_newlstat(char * filename, struct stat32 *statbuf)
+sys32_newlstat (char *filename, struct stat32 *statbuf)
{
- int ret;
- struct stat s;
mm_segment_t old_fs = get_fs();
+ struct stat s;
+ int ret;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newlstat(filename, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
@@ -199,112 +231,249 @@
extern asmlinkage long sys_newfstat(unsigned int fd, struct stat * statbuf);
asmlinkage long
-sys32_newfstat(unsigned int fd, struct stat32 *statbuf)
+sys32_newfstat (unsigned int fd, struct stat32 *statbuf)
{
- int ret;
- struct stat s;
mm_segment_t old_fs = get_fs();
+ struct stat s;
+ int ret;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newfstat(fd, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
-#define OFFSET4K(a) ((a) & 0xfff)
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
-unsigned long
-do_mmap_fake(struct file *file, unsigned long addr, unsigned long len,
- unsigned long prot, unsigned long flags, loff_t off)
+
+static int
+get_page_prot (unsigned long addr)
+{
+ struct vm_area_struct *vma = find_vma(current->mm, addr);
+ int prot = 0;
+
+ if (!vma || vma->vm_start > addr)
+ return 0;
+
+ if (vma->vm_flags & VM_READ)
+ prot |= PROT_READ;
+ if (vma->vm_flags & VM_WRITE)
+ prot |= PROT_WRITE;
+ if (vma->vm_flags & VM_EXEC)
+ prot |= PROT_EXEC;
+ return prot;
+}
+
+/*
+ * Map a subpage by creating an anonymous page that contains the union of the old page and
+ * the subpage.
+ */
+static unsigned long
+mmap_subpage (struct file *file, unsigned long start, unsigned long end, int prot, int flags,
+ loff_t off)
{
+ void *page = (void *) get_zeroed_page(GFP_KERNEL);
struct inode *inode;
- void *front, *back;
- unsigned long baddr;
- int r;
- char c;
+ unsigned long ret;
+ int old_prot = get_page_prot(start);
- if (OFFSET4K(addr) || OFFSET4K(off))
- return -EINVAL;
- prot |= PROT_WRITE;
- front = NULL;
- back = NULL;
- if ((baddr = (addr & PAGE_MASK)) != addr && get_user(c, (char *)baddr) == 0) {
- front = kmalloc(addr - baddr, GFP_KERNEL);
- if (!front)
- return -ENOMEM;
- __copy_user(front, (void *)baddr, addr - baddr);
+ DBG("mmap_subpage(file=%p,start=0x%lx,end=0x%lx,prot=%x,flags=%x,off=0x%llx)\n",
+ file, start, end, prot, flags, off);
+
+ if (!page)
+ return -ENOMEM;
+
+ if (old_prot)
+ copy_from_user(page, (void *) PAGE_START(start), PAGE_SIZE);
+
+ down_write(&current->mm->mmap_sem);
+ {
+ ret = do_mmap(0, PAGE_START(start), PAGE_SIZE, prot | PROT_WRITE,
+ flags | MAP_FIXED | MAP_ANONYMOUS, 0);
}
- if (addr && ((addr + len) & ~PAGE_MASK) && get_user(c, (char *)(addr + len)) == 0) {
- back = kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL);
- if (!back) {
- if (front)
- kfree(front);
- return -ENOMEM;
+ up_write(&current->mm->mmap_sem);
+
+ if (IS_ERR((void *) ret))
+ goto out;
+
+ if (old_prot) {
+ /* copy back the old page contents. */
+ if (PAGE_OFF(start))
+ copy_to_user((void *) PAGE_START(start), page, PAGE_OFF(start));
+ if (PAGE_OFF(end))
+ copy_to_user((void *) end, page + PAGE_OFF(end),
+ PAGE_SIZE - PAGE_OFF(end));
+ }
+ if (!(flags & MAP_ANONYMOUS)) {
+ /* read the file contents */
+ inode = file->f_dentry->d_inode;
+ if (!inode->i_fop || !file->f_op->read
+ || ((*file->f_op->read)(file, (char *) start, end - start, &off) < 0))
+ {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+ if (!(prot & PROT_WRITE))
+ ret = sys_mprotect(PAGE_START(start), PAGE_SIZE, prot | old_prot);
+ out:
+ free_page((unsigned long) page);
+ return ret;
+}
+
+static unsigned long
+emulate_mmap (struct file *file, unsigned long start, unsigned long len, int prot, int flags,
+ loff_t off)
+{
+ unsigned long tmp, end, pend, pstart, ret, is_congruent, fudge = 0;
+ struct inode *inode;
+ loff_t poff;
+
+ end = start + len;
+ pstart = PAGE_START(start);
+ pend = PAGE_ALIGN(end);
+
+ if (flags & MAP_FIXED) {
+ if (start > pstart) {
+ if (flags & MAP_SHARED)
+ printk(KERN_INFO
+ "%s(%d): emulate_mmap() can't share head (addr=0x%lx)\n",
+ current->comm, current->pid, start);
+ ret = mmap_subpage(file, start, min(PAGE_ALIGN(start), end), prot, flags,
+ off);
+ if (IS_ERR((void *) ret))
+ return ret;
+ pstart += PAGE_SIZE;
+ if (pstart >= pend)
+ return start; /* done */
+ }
+ if (end < pend) {
+ if (flags & MAP_SHARED)
+ printk(KERN_INFO
+ "%s(%d): emulate_mmap() can't share tail (end=0x%lx)\n",
+ current->comm, current->pid, end);
+ ret = mmap_subpage(file, max(start, PAGE_START(end)), end, prot, flags,
+ (off + len) - PAGE_OFF(end));
+ if (IS_ERR((void *) ret))
+ return ret;
+ pend -= PAGE_SIZE;
+ if (pstart >= pend)
+ return start; /* done */
+ }
+ } else {
+ /*
+ * If a start address was specified, use it if the entire rounded out area
+ * is available.
+ */
+ if (start && !pstart)
+ fudge = 1; /* handle case of mapping to range (0,PAGE_SIZE) */
+ tmp = arch_get_unmapped_area(file, pstart - fudge, pend - pstart, 0, flags);
+ if (tmp != pstart) {
+ pstart = tmp;
+ start = pstart + PAGE_OFF(off); /* make start congruent with off */
+ end = start + len;
+ pend = PAGE_ALIGN(end);
}
- __copy_user(back, (char *)addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
}
+
+ poff = off + (pstart - start); /* note: (pstart - start) may be negative */
+ is_congruent = (flags & MAP_ANONYMOUS) || (PAGE_OFF(poff) == 0);
+
+ if ((flags & MAP_SHARED) && !is_congruent)
+ printk(KERN_INFO "%s(%d): emulate_mmap() can't share contents of incongruent mmap "
+ "(addr=0x%lx,off=0x%llx)\n", current->comm, current->pid, start, off);
+
+ DBG("mmap_body: mapping [0x%lx-0x%lx) %s with poff 0x%llx\n", pstart, pend,
+ is_congruent ? "congruent" : "not congruent", poff);
+
down_write(&current->mm->mmap_sem);
- r = do_mmap(0, baddr, len + (addr - baddr), prot, flags | MAP_ANONYMOUS, 0);
+ {
+ if (!(flags & MAP_ANONYMOUS) && is_congruent)
+ ret = do_mmap(file, pstart, pend - pstart, prot, flags | MAP_FIXED, poff);
+ else
+ ret = do_mmap(0, pstart, pend - pstart,
+ prot | ((flags & MAP_ANONYMOUS) ? 0 : PROT_WRITE),
+ flags | MAP_FIXED | MAP_ANONYMOUS, 0);
+ }
up_write(&current->mm->mmap_sem);
- if (r < 0)
- return(r);
- if (addr == 0)
- addr = r;
- if (back) {
- __copy_user((char *)addr + len, back, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
- kfree(back);
- }
- if (front) {
- __copy_user((void *)baddr, front, addr - baddr);
- kfree(front);
- }
- if (flags & MAP_ANONYMOUS) {
- clear_user((char *)addr, len);
- return(addr);
+
+ if (IS_ERR((void *) ret))
+ return ret;
+
+ if (!is_congruent) {
+ /* read the file contents */
+ inode = file->f_dentry->d_inode;
+ if (!inode->i_fop || !file->f_op->read
+ || ((*file->f_op->read)(file, (char *) pstart, pend - pstart, &poff) < 0))
+ {
+ sys_munmap(pstart, pend - pstart);
+ return -EINVAL;
+ }
+ if (!(prot & PROT_WRITE) && sys_mprotect(pstart, pend - pstart, prot) < 0)
+ return -EINVAL;
}
- if (!file)
- return -EINVAL;
- inode = file->f_dentry->d_inode;
- if (!inode->i_fop)
- return -EINVAL;
- if (!file->f_op->read)
- return -EINVAL;
- r = file->f_op->read(file, (char *)addr, len, &off);
- return (r < 0) ? -EINVAL : addr;
+ return start;
}
-long
-ia32_do_mmap (struct file *file, unsigned int addr, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset)
+#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */
+
+static inline unsigned int
+get_prot32 (unsigned int prot)
{
- long error = -EFAULT;
- unsigned int poff;
+ if (prot & PROT_WRITE)
+ /* on x86, PROT_WRITE implies PROT_READ which implies PROT_EXEC */
+ prot |= PROT_READ | PROT_WRITE | PROT_EXEC;
+ else if (prot & (PROT_READ | PROT_EXEC))
+ /* on x86, there is no distinction between PROT_READ and PROT_EXEC */
+ prot |= (PROT_READ | PROT_EXEC);
- flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
- prot |= PROT_EXEC;
+ return prot;
+}
- if ((flags & MAP_FIXED) && ((addr & ~PAGE_MASK) || (offset & ~PAGE_MASK)))
- error = do_mmap_fake(file, addr, len, prot, flags, (loff_t)offset);
- else {
- poff = offset & PAGE_MASK;
- len += offset - poff;
+unsigned long
+ia32_do_mmap (struct file *file, unsigned long addr, unsigned long len, int prot, int flags,
+ loff_t offset)
+{
+ DBG("ia32_do_mmap(file=%p,addr=0x%lx,len=0x%lx,prot=%x,flags=%x,offset=0x%llx)\n",
+ file, addr, len, prot, flags, offset);
+
+ if (file && (!file->f_op || !file->f_op->mmap))
+ return -ENODEV;
+
+ len = IA32_PAGE_ALIGN(len);
+ if (len == 0)
+ return addr;
+
+ if (len > IA32_PAGE_OFFSET || addr > IA32_PAGE_OFFSET - len)
+ return -EINVAL;
+
+ if (OFFSET4K(offset))
+ return -EINVAL;
- down_write(&current->mm->mmap_sem);
- error = do_mmap_pgoff(file, addr, len, prot, flags, poff >> PAGE_SHIFT);
- up_write(&current->mm->mmap_sem);
+ prot = get_prot32(prot);
- if (!IS_ERR((void *) error))
- error += offset - poff;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ down(&ia32_mmap_sem);
+ {
+ addr = emulate_mmap(file, addr, len, prot, flags, offset);
}
- return error;
+ up(&ia32_mmap_sem);
+#else
+ down_write(&current->mm->mmap_sem);
+ {
+ addr = do_mmap(file, addr, len, prot, flags, offset);
+ }
+ up_write(&current->mm->mmap_sem);
+#endif
+ DBG("ia32_do_mmap: returning 0x%lx\n", addr);
+ return addr;
}
/*
- * Linux/i386 didn't use to be able to handle more than
- * 4 system call parameters, so these system calls used a memory
- * block for parameter passing..
+ * Linux/i386 didn't use to be able to handle more than 4 system call parameters, so these
+ * system calls used a memory block for parameter passing..
*/
struct mmap_arg_struct {
@@ -317,180 +486,166 @@
};
asmlinkage long
-sys32_mmap(struct mmap_arg_struct *arg)
+sys32_mmap (struct mmap_arg_struct *arg)
{
struct mmap_arg_struct a;
struct file *file = NULL;
- long retval;
+ unsigned long addr;
+ int flags;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
- if (PAGE_ALIGN(a.len) == 0)
- return a.addr;
+ if (OFFSET4K(a.offset))
+ return -EINVAL;
+
+ flags = a.flags;
- if (!(a.flags & MAP_ANONYMOUS)) {
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
file = fget(a.fd);
if (!file)
return -EBADF;
}
-#ifdef CONFIG_IA64_PAGE_SIZE_4KB
- if ((a.offset & ~PAGE_MASK) != 0)
- return -EINVAL;
- down_write(&current->mm->mmap_sem);
- retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, a.offset >> PAGE_SHIFT);
- up_write(&current->mm->mmap_sem);
-#else
- retval = ia32_do_mmap(file, a.addr, a.len, a.prot, a.flags, a.fd, a.offset);
-#endif
+ addr = ia32_do_mmap(file, a.addr, a.len, a.prot, flags, a.offset);
+
if (file)
fput(file);
- return retval;
+ return addr;
}
asmlinkage long
-sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
+sys32_mmap2 (unsigned int addr, unsigned int len, unsigned int prot, unsigned int flags,
+ unsigned int fd, unsigned int pgoff)
{
+ struct file *file = NULL;
+ unsigned long retval;
-#ifdef CONFIG_IA64_PAGE_SIZE_4KB
- return(sys_mprotect(start, len, prot));
-#else // CONFIG_IA64_PAGE_SIZE_4KB
- if (prot == 0)
- return(0);
- len += start & ~PAGE_MASK;
- if ((start & ~PAGE_MASK) && (prot & PROT_WRITE))
- prot |= PROT_EXEC;
- return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
-#endif // CONFIG_IA64_PAGE_SIZE_4KB
-}
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ return -EBADF;
+ }
-asmlinkage long
-sys32_pipe(int *fd)
-{
- int retval;
- int fds[2];
+ retval = ia32_do_mmap(file, addr, len, prot, flags,
+ (unsigned long) pgoff << IA32_PAGE_SHIFT);
- retval = do_pipe(fds);
- if (retval)
- goto out;
- if (copy_to_user(fd, fds, sizeof(fds)))
- retval = -EFAULT;
- out:
+ if (file)
+ fput(file);
return retval;
}
asmlinkage long
-sys32_signal (int sig, unsigned int handler)
+sys32_munmap (unsigned int start, unsigned int len)
{
- struct k_sigaction new_sa, old_sa;
- int ret;
+ unsigned int end = start + len;
+ long ret;
+
+#if PAGE_SHIFT <= IA32_PAGE_SHIFT
+ ret = sys_munmap(start, end - start);
+#else
+ if (start > end)
+ return -EINVAL;
+
+ start = PAGE_ALIGN(start);
+ end = PAGE_START(end);
+
+ if (start >= end)
+ return 0;
+
+ down(&ia32_mmap_sem);
+ {
+ ret = sys_munmap(start, end - start);
+ }
+ up(&ia32_mmap_sem);
+#endif
+ return ret;
+}
- new_sa.sa.sa_handler = (__sighandler_t) A(handler);
- new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+
+/*
+ * When mprotect()ing a partial page, we set the permission to the union of the old
+ * settings and the new settings. In other words, it's only possible to make access to a
+ * partial page less restrictive.
+ */
+static long
+mprotect_subpage (unsigned long address, int new_prot)
+{
+ int old_prot;
- ret = do_sigaction(sig, &new_sa, &old_sa);
+ if (new_prot == PROT_NONE)
+ return 0; /* optimize case where nothing changes... */
- return ret ? ret : (unsigned long)old_sa.sa.sa_handler;
+ old_prot = get_page_prot(address);
+ return sys_mprotect(address, PAGE_SIZE, new_prot | old_prot);
}
+#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */
+
asmlinkage long
-sys32_rt_sigaction(int sig, struct sigaction32 *act,
- struct sigaction32 *oact, unsigned int sigsetsize)
+sys32_mprotect (unsigned int start, unsigned int len, int prot)
{
- struct k_sigaction new_ka, old_ka;
- int ret;
- sigset32_t set32;
+ unsigned long end = start + len;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ long retval = 0;
+#endif
+
+ prot = get_prot32(prot);
- /* XXX: Don't preclude handling different sized sigset_t's. */
- if (sigsetsize != sizeof(sigset32_t))
+#if PAGE_SHIFT <= IA32_PAGE_SHIFT
+ return sys_mprotect(start, end - start, prot);
+#else
+ if (OFFSET4K(start))
return -EINVAL;
- if (act) {
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __copy_from_user(&set32, &act->sa_mask,
- sizeof(sigset32_t));
- switch (_NSIG_WORDS) {
- case 4: new_ka.sa.sa_mask.sig[3] = set32.sig[6]
- | (((long)set32.sig[7]) << 32);
- case 3: new_ka.sa.sa_mask.sig[2] = set32.sig[4]
- | (((long)set32.sig[5]) << 32);
- case 2: new_ka.sa.sa_mask.sig[1] = set32.sig[2]
- | (((long)set32.sig[3]) << 32);
- case 1: new_ka.sa.sa_mask.sig[0] = set32.sig[0]
- | (((long)set32.sig[1]) << 32);
- }
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ end = IA32_PAGE_ALIGN(end);
+ if (end < start)
+ return -EINVAL;
- if (ret)
- return -EFAULT;
- }
+ down(&ia32_mmap_sem);
+ {
+ if (PAGE_OFF(start)) {
+ /* start address is 4KB aligned but not page aligned. */
+ retval = mprotect_subpage(PAGE_START(start), prot);
+ if (retval < 0)
+ goto out;
- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+ start = PAGE_ALIGN(start);
+ if (start >= end)
+ goto out; /* retval is already zero... */
+ }
- if (!ret && oact) {
- switch (_NSIG_WORDS) {
- case 4:
- set32.sig[7] = (old_ka.sa.sa_mask.sig[3] >> 32);
- set32.sig[6] = old_ka.sa.sa_mask.sig[3];
- case 3:
- set32.sig[5] = (old_ka.sa.sa_mask.sig[2] >> 32);
- set32.sig[4] = old_ka.sa.sa_mask.sig[2];
- case 2:
- set32.sig[3] = (old_ka.sa.sa_mask.sig[1] >> 32);
- set32.sig[2] = old_ka.sa.sa_mask.sig[1];
- case 1:
- set32.sig[1] = (old_ka.sa.sa_mask.sig[0] >> 32);
- set32.sig[0] = old_ka.sa.sa_mask.sig[0];
+ if (PAGE_OFF(end)) {
+ /* end address is 4KB aligned but not page aligned. */
+ retval = mprotect_subpage(PAGE_START(end), prot);
+ if (retval < 0)
+ return retval;
+ end = PAGE_START(end);
}
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __copy_to_user(&oact->sa_mask, &set32,
- sizeof(sigset32_t));
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ retval = sys_mprotect(start, end - start, prot);
}
-
- return ret;
+ out:
+ up(&ia32_mmap_sem);
+ return retval;
+#endif
}
-
-extern asmlinkage long sys_rt_sigprocmask(int how, sigset_t *set, sigset_t *oset,
- size_t sigsetsize);
-
asmlinkage long
-sys32_rt_sigprocmask(int how, sigset32_t *set, sigset32_t *oset,
- unsigned int sigsetsize)
+sys32_pipe (int *fd)
{
- sigset_t s;
- sigset32_t s32;
- int ret;
- mm_segment_t old_fs = get_fs();
+ int retval;
+ int fds[2];
- if (set) {
- if (copy_from_user (&s32, set, sizeof(sigset32_t)))
- return -EFAULT;
- switch (_NSIG_WORDS) {
- case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
- case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
- case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
- case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
- }
- }
- set_fs (KERNEL_DS);
- ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL,
- sigsetsize);
- set_fs (old_fs);
- if (ret) return ret;
- if (oset) {
- switch (_NSIG_WORDS) {
- case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
- case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
- case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
- case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
- }
- if (copy_to_user (oset, &s32, sizeof(sigset32_t)))
- return -EFAULT;
- }
- return 0;
+ retval = do_pipe(fds);
+ if (retval)
+ goto out;
+ if (copy_to_user(fd, fds, sizeof(fds)))
+ retval = -EFAULT;
+ out:
+ return retval;
}
static inline int
@@ -498,31 +653,34 @@
{
int err;
- err = put_user (kbuf->f_type, &ubuf->f_type);
- err |= __put_user (kbuf->f_bsize, &ubuf->f_bsize);
- err |= __put_user (kbuf->f_blocks, &ubuf->f_blocks);
- err |= __put_user (kbuf->f_bfree, &ubuf->f_bfree);
- err |= __put_user (kbuf->f_bavail, &ubuf->f_bavail);
- err |= __put_user (kbuf->f_files, &ubuf->f_files);
- err |= __put_user (kbuf->f_ffree, &ubuf->f_ffree);
- err |= __put_user (kbuf->f_namelen, &ubuf->f_namelen);
- err |= __put_user (kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
- err |= __put_user (kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
+ if (!access_ok(VERIFY_WRITE, ubuf, sizeof(*ubuf)))
+ return -EFAULT;
+
+ err = __put_user(kbuf->f_type, &ubuf->f_type);
+ err |= __put_user(kbuf->f_bsize, &ubuf->f_bsize);
+ err |= __put_user(kbuf->f_blocks, &ubuf->f_blocks);
+ err |= __put_user(kbuf->f_bfree, &ubuf->f_bfree);
+ err |= __put_user(kbuf->f_bavail, &ubuf->f_bavail);
+ err |= __put_user(kbuf->f_files, &ubuf->f_files);
+ err |= __put_user(kbuf->f_ffree, &ubuf->f_ffree);
+ err |= __put_user(kbuf->f_namelen, &ubuf->f_namelen);
+ err |= __put_user(kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
+ err |= __put_user(kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
return err;
}
extern asmlinkage long sys_statfs(const char * path, struct statfs * buf);
asmlinkage long
-sys32_statfs(const char * path, struct statfs32 *buf)
+sys32_statfs (const char *path, struct statfs32 *buf)
{
int ret;
struct statfs s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
- ret = sys_statfs((const char *)path, &s);
- set_fs (old_fs);
+ set_fs(KERNEL_DS);
+ ret = sys_statfs(path, &s);
+ set_fs(old_fs);
if (put_statfs(buf, &s))
return -EFAULT;
return ret;
@@ -531,15 +689,15 @@
extern asmlinkage long sys_fstatfs(unsigned int fd, struct statfs * buf);
asmlinkage long
-sys32_fstatfs(unsigned int fd, struct statfs32 *buf)
+sys32_fstatfs (unsigned int fd, struct statfs32 *buf)
{
int ret;
struct statfs s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_fstatfs(fd, &s);
- set_fs (old_fs);
+ set_fs(old_fs);
if (put_statfs(buf, &s))
return -EFAULT;
return ret;
@@ -557,23 +715,21 @@
};
static inline long
-get_tv32(struct timeval *o, struct timeval32 *i)
+get_tv32 (struct timeval *o, struct timeval32 *i)
{
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
- (__get_user(o->tv_sec, &i->tv_sec) |
- __get_user(o->tv_usec, &i->tv_usec)));
+ (__get_user(o->tv_sec, &i->tv_sec) | __get_user(o->tv_usec, &i->tv_usec)));
}
static inline long
-put_tv32(struct timeval32 *o, struct timeval *i)
+put_tv32 (struct timeval32 *o, struct timeval *i)
{
return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
- (__put_user(i->tv_sec, &o->tv_sec) |
- __put_user(i->tv_usec, &o->tv_usec)));
+ (__put_user(i->tv_sec, &o->tv_sec) | __put_user(i->tv_usec, &o->tv_usec)));
}
static inline long
-get_it32(struct itimerval *o, struct itimerval32 *i)
+get_it32 (struct itimerval *o, struct itimerval32 *i)
{
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
(__get_user(o->it_interval.tv_sec, &i->it_interval.tv_sec) |
@@ -583,7 +739,7 @@
}
static inline long
-put_it32(struct itimerval32 *o, struct itimerval *i)
+put_it32 (struct itimerval32 *o, struct itimerval *i)
{
return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
(__put_user(i->it_interval.tv_sec, &o->it_interval.tv_sec) |
@@ -592,10 +748,10 @@
__put_user(i->it_value.tv_usec, &o->it_value.tv_usec)));
}
-extern int do_getitimer(int which, struct itimerval *value);
+extern int do_getitimer (int which, struct itimerval *value);
asmlinkage long
-sys32_getitimer(int which, struct itimerval32 *it)
+sys32_getitimer (int which, struct itimerval32 *it)
{
struct itimerval kit;
int error;
@@ -607,10 +763,10 @@
return error;
}
-extern int do_setitimer(int which, struct itimerval *, struct itimerval *);
+extern int do_setitimer (int which, struct itimerval *, struct itimerval *);
asmlinkage long
-sys32_setitimer(int which, struct itimerval32 *in, struct itimerval32 *out)
+sys32_setitimer (int which, struct itimerval32 *in, struct itimerval32 *out)
{
struct itimerval kin, kout;
int error;
@@ -630,8 +786,9 @@
return 0;
}
+
asmlinkage unsigned long
-sys32_alarm(unsigned int seconds)
+sys32_alarm (unsigned int seconds)
{
struct itimerval it_new, it_old;
unsigned int oldalarm;
@@ -660,7 +817,7 @@
extern asmlinkage long sys_gettimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
-ia32_utime(char * filename, struct utimbuf_32 *times32)
+sys32_utime (char *filename, struct utimbuf_32 *times32)
{
mm_segment_t old_fs = get_fs();
struct timeval tv[2], *tvp;
@@ -673,20 +830,20 @@
if (get_user(tv[1].tv_sec, &times32->mtime))
return -EFAULT;
tv[1].tv_usec = 0;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
tvp = tv;
} else
tvp = NULL;
ret = sys_utimes(filename, tvp);
- set_fs (old_fs);
+ set_fs(old_fs);
return ret;
}
extern struct timezone sys_tz;
-extern int do_sys_settimeofday(struct timeval *tv, struct timezone *tz);
+extern int do_sys_settimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
-sys32_gettimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_gettimeofday (struct timeval32 *tv, struct timezone *tz)
{
if (tv) {
struct timeval ktv;
@@ -702,7 +859,7 @@
}
asmlinkage long
-sys32_settimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_settimeofday (struct timeval32 *tv, struct timezone *tz)
{
struct timeval ktv;
struct timezone ktz;
@@ -719,20 +876,6 @@
return do_sys_settimeofday(tv ? &ktv : NULL, tz ? &ktz : NULL);
}
-struct linux32_dirent {
- u32 d_ino;
- u32 d_off;
- u16 d_reclen;
- char d_name[1];
-};
-
-struct old_linux32_dirent {
- u32 d_ino;
- u32 d_offset;
- u16 d_namlen;
- char d_name[1];
-};
-
struct getdents32_callback {
struct linux32_dirent * current_dir;
struct linux32_dirent * previous;
@@ -775,7 +918,7 @@
}
asmlinkage long
-sys32_getdents (unsigned int fd, void * dirent, unsigned int count)
+sys32_getdents (unsigned int fd, struct linux32_dirent *dirent, unsigned int count)
{
struct file * file;
struct linux32_dirent * lastdirent;
@@ -787,7 +930,7 @@
if (!file)
goto out;
- buf.current_dir = (struct linux32_dirent *) dirent;
+ buf.current_dir = dirent;
buf.previous = NULL;
buf.count = count;
buf.error = 0;
@@ -831,7 +974,7 @@
}
asmlinkage long
-sys32_readdir (unsigned int fd, void * dirent, unsigned int count)
+sys32_readdir (unsigned int fd, void *dirent, unsigned int count)
{
int error;
struct file * file;
@@ -866,7 +1009,7 @@
#define ROUND_UP_TIME(x,y) (((x)+(y)-1)/(y))
asmlinkage long
-sys32_select(int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
+sys32_select (int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
{
fd_set_bits fds;
char *bits;
@@ -878,8 +1021,7 @@
time_t sec, usec;
ret = -EFAULT;
- if (get_user(sec, &tvp32->tv_sec)
- || get_user(usec, &tvp32->tv_usec))
+ if (get_user(sec, &tvp32->tv_sec) || get_user(usec, &tvp32->tv_usec))
goto out_nofds;
ret = -EINVAL;
@@ -933,9 +1075,7 @@
usec = timeout % HZ;
usec *= (1000000/HZ);
}
- if (put_user(sec, (int *)&tvp32->tv_sec)
- || put_user(usec, (int *)&tvp32->tv_usec))
- {
+ if (put_user(sec, &tvp32->tv_sec) || put_user(usec, &tvp32->tv_usec)) {
ret = -EFAULT;
goto out;
}
@@ -969,50 +1109,43 @@
};
asmlinkage long
-old_select(struct sel_arg_struct *arg)
+sys32_old_select (struct sel_arg_struct *arg)
{
struct sel_arg_struct a;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
- return sys32_select(a.n, (fd_set *)A(a.inp), (fd_set *)A(a.outp), (fd_set *)A(a.exp),
- (struct timeval32 *)A(a.tvp));
+ return sys32_select(a.n, (fd_set *) A(a.inp), (fd_set *) A(a.outp), (fd_set *) A(a.exp),
+ (struct timeval32 *) A(a.tvp));
}
-struct timespec32 {
- int tv_sec;
- int tv_nsec;
-};
-
-extern asmlinkage long sys_nanosleep(struct timespec *rqtp, struct timespec *rmtp);
+extern asmlinkage long sys_nanosleep (struct timespec *rqtp, struct timespec *rmtp);
asmlinkage long
-sys32_nanosleep(struct timespec32 *rqtp, struct timespec32 *rmtp)
+sys32_nanosleep (struct timespec32 *rqtp, struct timespec32 *rmtp)
{
struct timespec t;
int ret;
- mm_segment_t old_fs = get_fs ();
+ mm_segment_t old_fs = get_fs();
- if (get_user (t.tv_sec, &rqtp->tv_sec) ||
- __get_user (t.tv_nsec, &rqtp->tv_nsec))
+ if (get_user (t.tv_sec, &rqtp->tv_sec) || get_user (t.tv_nsec, &rqtp->tv_nsec))
return -EFAULT;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_nanosleep(&t, rmtp ? &t : NULL);
- set_fs (old_fs);
+ set_fs(old_fs);
if (rmtp && ret == -EINTR) {
- if (__put_user (t.tv_sec, &rmtp->tv_sec) ||
- __put_user (t.tv_nsec, &rmtp->tv_nsec))
+ if (put_user(t.tv_sec, &rmtp->tv_sec) || put_user(t.tv_nsec, &rmtp->tv_nsec))
return -EFAULT;
}
return ret;
}
struct iovec32 { unsigned int iov_base; int iov_len; };
-asmlinkage ssize_t sys_readv(unsigned long,const struct iovec *,unsigned long);
-asmlinkage ssize_t sys_writev(unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_readv (unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_writev (unsigned long,const struct iovec *,unsigned long);
static struct iovec *
-get_iovec32(struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
+get_iovec32 (struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
{
int i;
u32 buf, len;
@@ -1022,24 +1155,23 @@
if (!count)
return 0;
- if(verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
- return(struct iovec *)0;
+ if (verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
+ return NULL;
if (count > UIO_MAXIOV)
- return(struct iovec *)0;
+ return NULL;
if (count > UIO_FASTIOV) {
iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL);
if (!iov)
- return((struct iovec *)0);
+ return NULL;
} else
iov = iov_buf;
ivp = iov;
for (i = 0; i < count; i++) {
- if (__get_user(len, &iov32->iov_len) ||
- __get_user(buf, &iov32->iov_base)) {
+ if (__get_user(len, &iov32->iov_len) || __get_user(buf, &iov32->iov_base)) {
if (iov != iov_buf)
kfree(iov);
- return((struct iovec *)0);
+ return NULL;
}
if (verify_area(type, (void *)A(buf), len)) {
if (iov != iov_buf)
@@ -1047,22 +1179,23 @@
return((struct iovec *)0);
}
ivp->iov_base = (void *)A(buf);
- ivp->iov_len = (__kernel_size_t)len;
+ ivp->iov_len = (__kernel_size_t) len;
iov32++;
ivp++;
}
- return(iov);
+ return iov;
}
asmlinkage long
-sys32_readv(int fd, struct iovec32 *vector, u32 count)
+sys32_readv (int fd, struct iovec32 *vector, u32 count)
{
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
- int ret;
+ long ret;
mm_segment_t old_fs = get_fs();
- if ((iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE)) == (struct iovec *)0)
+ iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE);
+ if (!iov)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_readv(fd, iov, count);
@@ -1073,14 +1206,15 @@
}
asmlinkage long
-sys32_writev(int fd, struct iovec32 *vector, u32 count)
+sys32_writev (int fd, struct iovec32 *vector, u32 count)
{
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
- int ret;
+ long ret;
mm_segment_t old_fs = get_fs();
- if ((iov = get_iovec32(vector, iovstack, count, VERIFY_READ)) == (struct iovec *)0)
+ iov = get_iovec32(vector, iovstack, count, VERIFY_READ);
+ if (!iov)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_writev(fd, iov, count);
@@ -1098,45 +1232,66 @@
int rlim_max;
};
-extern asmlinkage long sys_getrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_getrlimit (unsigned int resource, struct rlimit *rlim);
asmlinkage long
-sys32_getrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_old_getrlimit (unsigned int resource, struct rlimit32 *rlim)
{
+ mm_segment_t old_fs = get_fs();
+ struct rlimit r;
+ int ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_getrlimit(resource, &r);
+ set_fs(old_fs);
+ if (!ret) {
+ ret = put_user(RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
+ ret |= put_user(RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_getrlimit (unsigned int resource, struct rlimit32 *rlim)
+{
+ mm_segment_t old_fs = get_fs();
struct rlimit r;
int ret;
- mm_segment_t old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_getrlimit(resource, &r);
- set_fs (old_fs);
+ set_fs(old_fs);
if (!ret) {
- ret = put_user (RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
- ret |= __put_user (RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ if (r.rlim_cur >= 0xffffffff)
+ r.rlim_cur = 0xffffffff;
+ if (r.rlim_max >= 0xffffffff)
+ r.rlim_max = 0xffffffff;
+ ret = put_user(r.rlim_cur, &rlim->rlim_cur);
+ ret |= put_user(r.rlim_max, &rlim->rlim_max);
}
return ret;
}
-extern asmlinkage long sys_setrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_setrlimit (unsigned int resource, struct rlimit *rlim);
asmlinkage long
-sys32_setrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_setrlimit (unsigned int resource, struct rlimit32 *rlim)
{
struct rlimit r;
int ret;
- mm_segment_t old_fs = get_fs ();
+ mm_segment_t old_fs = get_fs();
- if (resource >= RLIM_NLIMITS) return -EINVAL;
- if (get_user (r.rlim_cur, &rlim->rlim_cur) ||
- __get_user (r.rlim_max, &rlim->rlim_max))
+ if (resource >= RLIM_NLIMITS)
+ return -EINVAL;
+ if (get_user(r.rlim_cur, &rlim->rlim_cur) || get_user(r.rlim_max, &rlim->rlim_max))
return -EFAULT;
if (r.rlim_cur == RLIM_INFINITY32)
r.rlim_cur = RLIM_INFINITY;
if (r.rlim_max == RLIM_INFINITY32)
r.rlim_max = RLIM_INFINITY;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_setrlimit(resource, &r);
- set_fs (old_fs);
+ set_fs(old_fs);
return ret;
}
@@ -1154,25 +1309,141 @@
unsigned msg_flags;
};
-static inline int
-shape_msg(struct msghdr *mp, struct msghdr32 *mp32)
-{
- int ret;
- unsigned int i;
+struct cmsghdr32 {
+ __kernel_size_t32 cmsg_len;
+ int cmsg_level;
+ int cmsg_type;
+};
- if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
- return(-EFAULT);
- ret = __get_user(i, &mp32->msg_name);
- mp->msg_name = (void *)A(i);
- ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
- ret |= __get_user(i, &mp32->msg_iov);
+/* Bleech... */
+#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
+#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
+#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
+#define CMSG32_DATA(cmsg) \
+ ((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
+#define CMSG32_SPACE(len) \
+ (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
+#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
+#define __CMSG32_FIRSTHDR(ctl,len) \
+ ((len) >= sizeof(struct cmsghdr32) ? (struct cmsghdr32 *)(ctl) : (struct cmsghdr32 *)NULL)
+#define CMSG32_FIRSTHDR(msg) __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
+
+static inline struct cmsghdr32 *
+__cmsg32_nxthdr (void *ctl, __kernel_size_t size, struct cmsghdr32 *cmsg, int cmsg_len)
+{
+ struct cmsghdr32 * ptr;
+
+ ptr = (struct cmsghdr32 *)(((unsigned char *) cmsg) + CMSG32_ALIGN(cmsg_len));
+ if ((unsigned long)((char*)(ptr+1) - (char *) ctl) > size)
+ return NULL;
+ return ptr;
+}
+
+static inline struct cmsghdr32 *
+cmsg32_nxthdr (struct msghdr *msg, struct cmsghdr32 *cmsg, int cmsg_len)
+{
+ return __cmsg32_nxthdr(msg->msg_control, msg->msg_controllen, cmsg, cmsg_len);
+}
+
+static inline int
+get_msghdr32 (struct msghdr *mp, struct msghdr32 *mp32)
+{
+ int ret;
+ unsigned int i;
+
+ if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
+ return -EFAULT;
+ ret = __get_user(i, &mp32->msg_name);
+ mp->msg_name = (void *)A(i);
+ ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
+ ret |= __get_user(i, &mp32->msg_iov);
mp->msg_iov = (struct iovec *)A(i);
ret |= __get_user(mp->msg_iovlen, &mp32->msg_iovlen);
ret |= __get_user(i, &mp32->msg_control);
mp->msg_control = (void *)A(i);
ret |= __get_user(mp->msg_controllen, &mp32->msg_controllen);
ret |= __get_user(mp->msg_flags, &mp32->msg_flags);
- return(ret ? -EFAULT : 0);
+ return ret ? -EFAULT : 0;
+}
+
+/*
+ * There is a lot of hair here because the alignment rules (and thus placement) of cmsg
+ * headers and length are different for 32-bit apps. -DaveM
+ */
+static int
+get_cmsghdr32 (struct msghdr *kmsg, unsigned char *stackbuf, struct sock *sk, size_t *bufsize)
+{
+ struct cmsghdr *kcmsg, *kcmsg_base;
+ __kernel_size_t kcmlen, tmp;
+ __kernel_size_t32 ucmlen;
+ struct cmsghdr32 *ucmsg;
+ long err;
+
+ kcmlen = 0;
+ kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
+ ucmsg = CMSG32_FIRSTHDR(kmsg);
+ while (ucmsg != NULL) {
+ if (get_user(ucmlen, &ucmsg->cmsg_len))
+ return -EFAULT;
+
+ /* Catch bogons. */
+ if (CMSG32_ALIGN(ucmlen) < CMSG32_ALIGN(sizeof(struct cmsghdr32)))
+ return -EINVAL;
+ if ((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control) + ucmlen)
+ > kmsg->msg_controllen)
+ return -EINVAL;
+
+ tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+ CMSG_ALIGN(sizeof(struct cmsghdr)));
+ kcmlen += tmp;
+ ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+ }
+ if (kcmlen == 0)
+ return -EINVAL;
+
+ /*
+ * The kcmlen holds the 64-bit version of the control length. It may not be
+ * modified as we do not stick it into the kmsg until we have successfully copied
+ * over all of the data from the user.
+ */
+ if (kcmlen > *bufsize) {
+ *bufsize = kcmlen;
+ kcmsg_base = kcmsg = sock_kmalloc(sk, kcmlen, GFP_KERNEL);
+ }
+ if (kcmsg == NULL)
+ return -ENOBUFS;
+
+ /* Now copy them over neatly. */
+ memset(kcmsg, 0, kcmlen);
+ ucmsg = CMSG32_FIRSTHDR(kmsg);
+ while (ucmsg != NULL) {
+ err = get_user(ucmlen, &ucmsg->cmsg_len);
+ tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+ CMSG_ALIGN(sizeof(struct cmsghdr)));
+ kcmsg->cmsg_len = tmp;
+ err |= get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
+ err |= get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
+
+ /* Copy over the data. */
+ err |= copy_from_user(CMSG_DATA(kcmsg), CMSG32_DATA(ucmsg),
+ (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))));
+ if (err)
+ goto out_free_efault;
+
+ /* Advance. */
+ kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
+ ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+ }
+
+ /* Ok, looks like we made it. Hook it up and return success. */
+ kmsg->msg_control = kcmsg_base;
+ kmsg->msg_controllen = kcmlen;
+ return 0;
+
+out_free_efault:
+ if (kcmsg_base != (struct cmsghdr *)stackbuf)
+ sock_kfree_s(sk, kcmsg_base, kcmlen);
+ return -EFAULT;
}
/*
@@ -1187,20 +1458,17 @@
*/
static inline int
-verify_iovec32(struct msghdr *m, struct iovec *iov, char *address, int mode)
+verify_iovec32 (struct msghdr *m, struct iovec *iov, char *address, int mode)
{
int size, err, ct;
struct iovec32 *iov32;
- if(m->msg_namelen)
- {
- if(mode==VERIFY_READ)
- {
- err=move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
- if(err<0)
+ if (m->msg_namelen) {
+ if (mode == VERIFY_READ) {
+ err = move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
+ if (err < 0)
goto out;
}
-
m->msg_name = address;
} else
m->msg_name = NULL;
@@ -1209,7 +1477,7 @@
size = m->msg_iovlen * sizeof(struct iovec32);
if (copy_from_user(iov, m->msg_iov, size))
goto out;
- m->msg_iov=iov;
+ m->msg_iov = iov;
err = 0;
iov32 = (struct iovec32 *)iov;
@@ -1222,8 +1490,188 @@
return err;
}
-extern __inline__ void
-sockfd_put(struct socket *sock)
+static void
+put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
+{
+ struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+ struct cmsghdr32 cmhdr;
+ int cmlen = CMSG32_LEN(len);
+
+ if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
+ kmsg->msg_flags |= MSG_CTRUNC;
+ return;
+ }
+
+ if(kmsg->msg_controllen < cmlen) {
+ kmsg->msg_flags |= MSG_CTRUNC;
+ cmlen = kmsg->msg_controllen;
+ }
+ cmhdr.cmsg_level = level;
+ cmhdr.cmsg_type = type;
+ cmhdr.cmsg_len = cmlen;
+
+ if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
+ return;
+ if(copy_to_user(CMSG32_DATA(cm), data,
+ cmlen - sizeof(struct cmsghdr32)))
+ return;
+ cmlen = CMSG32_SPACE(len);
+ kmsg->msg_control += cmlen;
+ kmsg->msg_controllen -= cmlen;
+}
+
+static void
+scm_detach_fds32 (struct msghdr *kmsg, struct scm_cookie *scm)
+{
+ struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+ int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
+ / sizeof(int);
+ int fdnum = scm->fp->count;
+ struct file **fp = scm->fp->fp;
+ int *cmfptr;
+ int err = 0, i;
+
+ if (fdnum < fdmax)
+ fdmax = fdnum;
+
+ for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
+ i < fdmax;
+ i++, cmfptr++) {
+ int new_fd;
+ err = get_unused_fd();
+ if (err < 0)
+ break;
+ new_fd = err;
+ err = put_user(new_fd, cmfptr);
+ if (err) {
+ put_unused_fd(new_fd);
+ break;
+ }
+ /* Bump the usage count and install the file. */
+ get_file(fp[i]);
+ current->files->fd[new_fd] = fp[i];
+ }
+
+ if (i > 0) {
+ int cmlen = CMSG32_LEN(i * sizeof(int));
+ if (!err)
+ err = put_user(SOL_SOCKET, &cm->cmsg_level);
+ if (!err)
+ err = put_user(SCM_RIGHTS, &cm->cmsg_type);
+ if (!err)
+ err = put_user(cmlen, &cm->cmsg_len);
+ if (!err) {
+ cmlen = CMSG32_SPACE(i * sizeof(int));
+ kmsg->msg_control += cmlen;
+ kmsg->msg_controllen -= cmlen;
+ }
+ }
+ if (i < fdnum)
+ kmsg->msg_flags |= MSG_CTRUNC;
+
+ /*
+ * All of the files that fit in the message have had their
+ * usage counts incremented, so we just free the list.
+ */
+ __scm_destroy(scm);
+}
+
+/*
+ * In these cases we (currently) can just copy to data over verbatim because all CMSGs
+ * created by the kernel have well defined types which have the same layout in both the
+ * 32-bit and 64-bit API. One must add some special cased conversions here if we start
+ * sending control messages with incompatible types.
+ *
+ * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
+ * we do our work. The remaining cases are:
+ *
+ * SOL_IP IP_PKTINFO struct in_pktinfo 32-bit clean
+ * IP_TTL int 32-bit clean
+ * IP_TOS __u8 32-bit clean
+ * IP_RECVOPTS variable length 32-bit clean
+ * IP_RETOPTS variable length 32-bit clean
+ * (these last two are clean because the types are defined
+ * by the IPv4 protocol)
+ * IP_RECVERR struct sock_extended_err +
+ * struct sockaddr_in 32-bit clean
+ * SOL_IPV6 IPV6_RECVERR struct sock_extended_err +
+ * struct sockaddr_in6 32-bit clean
+ * IPV6_PKTINFO struct in6_pktinfo 32-bit clean
+ * IPV6_HOPLIMIT int 32-bit clean
+ * IPV6_FLOWINFO u32 32-bit clean
+ * IPV6_HOPOPTS ipv6 hop exthdr 32-bit clean
+ * IPV6_DSTOPTS ipv6 dst exthdr(s) 32-bit clean
+ * IPV6_RTHDR ipv6 routing exthdr 32-bit clean
+ * IPV6_AUTHHDR ipv6 auth exthdr 32-bit clean
+ */
+static void
+cmsg32_recvmsg_fixup (struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
+{
+ unsigned char *workbuf, *wp;
+ unsigned long bufsz, space_avail;
+ struct cmsghdr *ucmsg;
+ long err;
+
+ bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
+ space_avail = kmsg->msg_controllen + bufsz;
+ wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
+ if (workbuf == NULL)
+ goto fail;
+
+ /* To make this more sane we assume the kernel sends back properly
+ * formatted control messages. Because of how the kernel will truncate
+ * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
+ */
+ ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
+ while (((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
+ struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
+ int clen64, clen32;
+
+ /*
+ * UCMSG is the 64-bit format CMSG entry in user-space. KCMSG32 is within
+ * the kernel space temporary buffer we use to convert into a 32-bit style
+ * CMSG.
+ */
+ err = get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
+ err |= get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
+ err |= get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
+ if (err)
+ goto fail2;
+
+ clen64 = kcmsg32->cmsg_len;
+ copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
+ clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
+ clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
+ CMSG32_ALIGN(sizeof(struct cmsghdr32)));
+ kcmsg32->cmsg_len = clen32;
+
+ ucmsg = (struct cmsghdr *) (((char *)ucmsg) + CMSG_ALIGN(clen64));
+ wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
+ }
+
+ /* Copy back fixed up data, and adjust pointers. */
+ bufsz = (wp - workbuf);
+ if (copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz))
+ goto fail2;
+
+ kmsg->msg_control = (struct cmsghdr *) (((char *)orig_cmsg_uptr) + bufsz);
+ kmsg->msg_controllen = space_avail - bufsz;
+ kfree(workbuf);
+ return;
+
+ fail2:
+ kfree(workbuf);
+ fail:
+ /*
+ * If we leave the 64-bit format CMSG chunks in there, the application could get
+ * confused and crash. So to ensure greater recovery, we report no CMSGs.
+ */
+ kmsg->msg_controllen += bufsz;
+ kmsg->msg_control = (void *) orig_cmsg_uptr;
+}
+
+static inline void
+sockfd_put (struct socket *sock)
{
fput(sock->file);
}
@@ -1234,13 +1682,14 @@
24 for IPv6,
about 80 for AX.25 */
-extern struct socket *sockfd_lookup(int fd, int *err);
+extern struct socket *sockfd_lookup (int fd, int *err);
/*
* BSD sendmsg interface
*/
-int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
+int
+sys32_sendmsg (int fd, struct msghdr32 *msg, unsigned flags)
{
struct socket *sock;
char address[MAX_SOCK_ADDR];
@@ -1248,10 +1697,11 @@
unsigned char ctl[sizeof(struct cmsghdr) + 20]; /* 20 is size of ipv6_pktinfo */
unsigned char *ctl_buf = ctl;
struct msghdr msg_sys;
- int err, ctl_len, iov_size, total_len;
+ int err, iov_size, total_len;
+ size_t ctl_len;
err = -EFAULT;
- if (shape_msg(&msg_sys, msg))
+ if (get_msghdr32(&msg_sys, msg))
goto out;
sock = sockfd_lookup(fd, &err);
@@ -1282,20 +1732,12 @@
if (msg_sys.msg_controllen > INT_MAX)
goto out_freeiov;
- ctl_len = msg_sys.msg_controllen;
- if (ctl_len)
- {
- if (ctl_len > sizeof(ctl))
- {
- err = -ENOBUFS;
- ctl_buf = sock_kmalloc(sock->sk, ctl_len, GFP_KERNEL);
- if (ctl_buf == NULL)
- goto out_freeiov;
- }
- err = -EFAULT;
- if (copy_from_user(ctl_buf, msg_sys.msg_control, ctl_len))
- goto out_freectl;
- msg_sys.msg_control = ctl_buf;
+ if (msg_sys.msg_controllen) {
+ ctl_len = sizeof(ctl);
+ err = get_cmsghdr32(&msg_sys, ctl_buf, sock->sk, &ctl_len);
+ if (err)
+ goto out_freeiov;
+ ctl_buf = msg_sys.msg_control;
}
msg_sys.msg_flags = flags;
@@ -1303,7 +1745,6 @@
msg_sys.msg_flags |= MSG_DONTWAIT;
err = sock_sendmsg(sock, &msg_sys, total_len);
-out_freectl:
if (ctl_buf != ctl)
sock_kfree_s(sock->sk, ctl_buf, ctl_len);
out_freeiov:
@@ -1328,6 +1769,7 @@
struct msghdr msg_sys;
unsigned long cmsg_ptr;
int err, iov_size, total_len, len;
+ struct scm_cookie scm;
/* kernel mode address */
char addr[MAX_SOCK_ADDR];
@@ -1336,8 +1778,8 @@
struct sockaddr *uaddr;
int *uaddr_len;
- err=-EFAULT;
- if (shape_msg(&msg_sys, msg))
+ err = -EFAULT;
+ if (get_msghdr32(&msg_sys, msg))
goto out;
sock = sockfd_lookup(fd, &err);
@@ -1374,13 +1816,42 @@
if (sock->file->f_flags & O_NONBLOCK)
flags |= MSG_DONTWAIT;
- err = sock_recvmsg(sock, &msg_sys, total_len, flags);
- if (err < 0)
- goto out_freeiov;
- len = err;
- if (uaddr != NULL) {
- err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
+ memset(&scm, 0, sizeof(scm));
+
+ lock_kernel();
+ {
+ err = sock->ops->recvmsg(sock, &msg_sys, total_len, flags, &scm);
+ if (err < 0)
+ goto out_unlock_freeiov;
+
+ len = err;
+ if (!msg_sys.msg_control) {
+ if (sock->passcred || scm.fp)
+ msg_sys.msg_flags |= MSG_CTRUNC;
+ if (scm.fp)
+ __scm_destroy(&scm);
+ } else {
+ /*
+ * If recvmsg processing itself placed some control messages into
+ * user space, it is using 64-bit CMSG processing, so we need to
+ * fix it up before we tack on more stuff.
+ */
+ if ((unsigned long) msg_sys.msg_control != cmsg_ptr)
+ cmsg32_recvmsg_fixup(&msg_sys, cmsg_ptr);
+
+ /* Wheee... */
+ if (sock->passcred)
+ put_cmsg32(&msg_sys, SOL_SOCKET, SCM_CREDENTIALS,
+ sizeof(scm.creds), &scm.creds);
+ if (scm.fp != NULL)
+ scm_detach_fds32(&msg_sys, &scm);
+ }
+ }
+ unlock_kernel();
+
+ if (uaddr != NULL) {
+ err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
if (err < 0)
goto out_freeiov;
}
@@ -1393,20 +1864,23 @@
goto out_freeiov;
err = len;
-out_freeiov:
+ out_freeiov:
if (iov != iovstack)
sock_kfree_s(sock->sk, iov, iov_size);
-out_put:
+ out_put:
sockfd_put(sock);
-out:
+ out:
return err;
+
+ out_unlock_freeiov:
+	unlock_kernel();
+	goto out_freeiov;
}
/* Argument list sizes for sys_socketcall */
#define AL(x) ((x) * sizeof(u32))
-static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
- AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
- AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
+static const unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
+ AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
+ AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
#undef AL
extern asmlinkage long sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
@@ -1435,7 +1909,8 @@
extern asmlinkage long sys_shutdown(int fd, int how);
extern asmlinkage long sys_listen(int fd, int backlog);
-asmlinkage long sys32_socketcall(int call, u32 *args)
+asmlinkage long
+sys32_socketcall (int call, u32 *args)
{
int ret;
u32 a[6];
@@ -1463,16 +1938,13 @@
ret = sys_listen(a0, a1);
break;
case SYS_ACCEPT:
- ret = sys_accept(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_accept(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_GETSOCKNAME:
- ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_getsockname(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_GETPEERNAME:
- ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_getpeername(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_SOCKETPAIR:
ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
@@ -1500,12 +1972,10 @@
ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
break;
case SYS_SENDMSG:
- ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
- a[2]);
+ ret = sys32_sendmsg(a0, (struct msghdr32 *) A(a1), a[2]);
break;
case SYS_RECVMSG:
- ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
- a[2]);
+ ret = sys32_recvmsg(a0, (struct msghdr32 *) A(a1), a[2]);
break;
default:
ret = -EINVAL;
@@ -1522,15 +1992,28 @@
struct msgbuf32 { s32 mtype; char mtext[1]; };
-struct ipc_perm32
-{
- key_t key;
- __kernel_uid_t32 uid;
- __kernel_gid_t32 gid;
- __kernel_uid_t32 cuid;
- __kernel_gid_t32 cgid;
+struct ipc_perm32 {
+ key_t key;
+ __kernel_uid_t32 uid;
+ __kernel_gid_t32 gid;
+ __kernel_uid_t32 cuid;
+ __kernel_gid_t32 cgid;
+ __kernel_mode_t32 mode;
+ unsigned short seq;
+};
+
+struct ipc64_perm32 {
+ key_t key;
+ __kernel_uid32_t32 uid;
+ __kernel_gid32_t32 gid;
+ __kernel_uid32_t32 cuid;
+ __kernel_gid32_t32 cgid;
__kernel_mode_t32 mode;
- unsigned short seq;
+ unsigned short __pad1;
+ unsigned short seq;
+ unsigned short __pad2;
+ unsigned int unused1;
+ unsigned int unused2;
};
struct semid_ds32 {
@@ -1544,8 +2027,18 @@
unsigned short sem_nsems; /* no. of semaphores in array */
};
-struct msqid_ds32
-{
+struct semid64_ds32 {
+ struct ipc64_perm32 sem_perm;
+ __kernel_time_t32 sem_otime;
+ unsigned int __unused1;
+ __kernel_time_t32 sem_ctime;
+ unsigned int __unused2;
+ unsigned int sem_nsems;
+ unsigned int __unused3;
+ unsigned int __unused4;
+};
+
+struct msqid_ds32 {
struct ipc_perm32 msg_perm;
u32 msg_first;
u32 msg_last;
@@ -1561,110 +2054,206 @@
__kernel_ipc_pid_t32 msg_lrpid;
};
+struct msqid64_ds32 {
+ struct ipc64_perm32 msg_perm;
+ __kernel_time_t32 msg_stime;
+ unsigned int __unused1;
+ __kernel_time_t32 msg_rtime;
+ unsigned int __unused2;
+ __kernel_time_t32 msg_ctime;
+ unsigned int __unused3;
+ unsigned int msg_cbytes;
+ unsigned int msg_qnum;
+ unsigned int msg_qbytes;
+ __kernel_pid_t32 msg_lspid;
+ __kernel_pid_t32 msg_lrpid;
+ unsigned int __unused4;
+ unsigned int __unused5;
+};
+
struct shmid_ds32 {
- struct ipc_perm32 shm_perm;
- int shm_segsz;
- __kernel_time_t32 shm_atime;
- __kernel_time_t32 shm_dtime;
- __kernel_time_t32 shm_ctime;
- __kernel_ipc_pid_t32 shm_cpid;
- __kernel_ipc_pid_t32 shm_lpid;
- unsigned short shm_nattch;
+ struct ipc_perm32 shm_perm;
+ int shm_segsz;
+ __kernel_time_t32 shm_atime;
+ __kernel_time_t32 shm_dtime;
+ __kernel_time_t32 shm_ctime;
+ __kernel_ipc_pid_t32 shm_cpid;
+ __kernel_ipc_pid_t32 shm_lpid;
+ unsigned short shm_nattch;
+};
+
+struct shmid64_ds32 {
+ struct ipc64_perm32 shm_perm;
+ __kernel_size_t32 shm_segsz;
+ __kernel_time_t32 shm_atime;
+ unsigned int __unused1;
+ __kernel_time_t32 shm_dtime;
+ unsigned int __unused2;
+ __kernel_time_t32 shm_ctime;
+ unsigned int __unused3;
+ __kernel_pid_t32 shm_cpid;
+ __kernel_pid_t32 shm_lpid;
+ unsigned int shm_nattch;
+ unsigned int __unused4;
+ unsigned int __unused5;
+};
+
+struct shminfo64_32 {
+ unsigned int shmmax;
+ unsigned int shmmin;
+ unsigned int shmmni;
+ unsigned int shmseg;
+ unsigned int shmall;
+ unsigned int __unused1;
+ unsigned int __unused2;
+ unsigned int __unused3;
+ unsigned int __unused4;
};
+struct shm_info32 {
+ int used_ids;
+ u32 shm_tot, shm_rss, shm_swp;
+ u32 swap_attempts, swap_successes;
+};
+
+struct ipc_kludge {
+ struct msgbuf *msgp;
+ long msgtyp;
+};
+
+#define SEMOP 1
+#define SEMGET 2
+#define SEMCTL 3
+#define MSGSND 11
+#define MSGRCV 12
+#define MSGGET 13
+#define MSGCTL 14
+#define SHMAT 21
+#define SHMDT 22
+#define SHMGET 23
+#define SHMCTL 24
+
#define IPCOP_MASK(__x) (1UL << (__x))
static int
-do_sys32_semctl(int first, int second, int third, void *uptr)
+ipc_parse_version32 (int *cmd)
+{
+ if (*cmd & IPC_64) {
+ *cmd ^= IPC_64;
+ return IPC_64;
+ } else {
+ return IPC_OLD;
+ }
+}
+
+static int
+semctl32 (int first, int second, int third, void *uptr)
{
union semun fourth;
u32 pad;
int err = 0, err2;
struct semid64_ds s;
- struct semid_ds32 *usp;
mm_segment_t old_fs;
+ int version = ipc_parse_version32(&third);
if (!uptr)
return -EINVAL;
if (get_user(pad, (u32 *)uptr))
return -EFAULT;
-	if(third == SETVAL)
+	if (third == SETVAL)
fourth.val = (int)pad;
else
fourth.__pad = (void *)A(pad);
switch (third) {
-
- case IPC_INFO:
- case IPC_RMID:
- case IPC_SET:
- case SEM_INFO:
- case GETVAL:
- case GETPID:
- case GETNCNT:
- case GETZCNT:
- case GETALL:
- case SETVAL:
- case SETALL:
- err = sys_semctl (first, second, third, fourth);
+ case IPC_INFO:
+ case IPC_RMID:
+ case IPC_SET:
+ case SEM_INFO:
+ case GETVAL:
+ case GETPID:
+ case GETNCNT:
+ case GETZCNT:
+ case GETALL:
+ case SETVAL:
+ case SETALL:
+ err = sys_semctl(first, second, third, fourth);
break;
- case IPC_STAT:
- case SEM_STAT:
- usp = (struct semid_ds32 *)A(pad);
+ case IPC_STAT:
+ case SEM_STAT:
fourth.__pad = &s;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_semctl (first, second, third, fourth);
- set_fs (old_fs);
- err2 = put_user(s.sem_perm.key, &usp->sem_perm.key);
- err2 |= __put_user(s.sem_perm.uid, &usp->sem_perm.uid);
- err2 |= __put_user(s.sem_perm.gid, &usp->sem_perm.gid);
- err2 |= __put_user(s.sem_perm.cuid,
- &usp->sem_perm.cuid);
- err2 |= __put_user (s.sem_perm.cgid,
- &usp->sem_perm.cgid);
- err2 |= __put_user (s.sem_perm.mode,
- &usp->sem_perm.mode);
- err2 |= __put_user (s.sem_perm.seq, &usp->sem_perm.seq);
- err2 |= __put_user (s.sem_otime, &usp->sem_otime);
- err2 |= __put_user (s.sem_ctime, &usp->sem_ctime);
- err2 |= __put_user (s.sem_nsems, &usp->sem_nsems);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_semctl(first, second, third, fourth);
+ set_fs(old_fs);
+
+ if (version == IPC_64) {
+ struct semid64_ds32 *usp64 = (struct semid64_ds32 *) A(pad);
+
+ if (!access_ok(VERIFY_WRITE, usp64, sizeof(*usp64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s.sem_perm.key, &usp64->sem_perm.key);
+ err2 |= __put_user(s.sem_perm.uid, &usp64->sem_perm.uid);
+ err2 |= __put_user(s.sem_perm.gid, &usp64->sem_perm.gid);
+ err2 |= __put_user(s.sem_perm.cuid, &usp64->sem_perm.cuid);
+ err2 |= __put_user(s.sem_perm.cgid, &usp64->sem_perm.cgid);
+ err2 |= __put_user(s.sem_perm.mode, &usp64->sem_perm.mode);
+ err2 |= __put_user(s.sem_perm.seq, &usp64->sem_perm.seq);
+ err2 |= __put_user(s.sem_otime, &usp64->sem_otime);
+ err2 |= __put_user(s.sem_ctime, &usp64->sem_ctime);
+ err2 |= __put_user(s.sem_nsems, &usp64->sem_nsems);
+ } else {
+ struct semid_ds32 *usp32 = (struct semid_ds32 *) A(pad);
+
+ if (!access_ok(VERIFY_WRITE, usp32, sizeof(*usp32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s.sem_perm.key, &usp32->sem_perm.key);
+ err2 |= __put_user(s.sem_perm.uid, &usp32->sem_perm.uid);
+ err2 |= __put_user(s.sem_perm.gid, &usp32->sem_perm.gid);
+ err2 |= __put_user(s.sem_perm.cuid, &usp32->sem_perm.cuid);
+ err2 |= __put_user(s.sem_perm.cgid, &usp32->sem_perm.cgid);
+ err2 |= __put_user(s.sem_perm.mode, &usp32->sem_perm.mode);
+ err2 |= __put_user(s.sem_perm.seq, &usp32->sem_perm.seq);
+ err2 |= __put_user(s.sem_otime, &usp32->sem_otime);
+ err2 |= __put_user(s.sem_ctime, &usp32->sem_ctime);
+ err2 |= __put_user(s.sem_nsems, &usp32->sem_nsems);
+ }
if (err2)
- err = -EFAULT;
+ err = -EFAULT;
break;
-
}
-
return err;
}
static int
do_sys32_msgsnd (int first, int second, int third, void *uptr)
{
- struct msgbuf *p = kmalloc (second + sizeof (struct msgbuf)
- + 4, GFP_USER);
+ struct msgbuf *p = kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER);
struct msgbuf32 *up = (struct msgbuf32 *)uptr;
mm_segment_t old_fs;
int err;
if (!p)
return -ENOMEM;
- err = get_user (p->mtype, &up->mtype);
- err |= __copy_from_user (p->mtext, &up->mtext, second);
+ err = get_user(p->mtype, &up->mtype);
+ err |= copy_from_user(p->mtext, &up->mtext, second);
if (err)
goto out;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgsnd (first, p, second, third);
- set_fs (old_fs);
-out:
- kfree (p);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgsnd(first, p, second, third);
+ set_fs(old_fs);
+ out:
+ kfree(p);
return err;
}
static int
-do_sys32_msgrcv (int first, int second, int msgtyp, int third,
- int version, void *uptr)
+do_sys32_msgrcv (int first, int second, int msgtyp, int third, int version, void *uptr)
{
struct msgbuf32 *up;
struct msgbuf *p;
@@ -1679,185 +2268,281 @@
if (!uptr)
goto out;
err = -EFAULT;
- if (copy_from_user (&ipck, uipck, sizeof (struct ipc_kludge)))
+ if (copy_from_user(&ipck, uipck, sizeof(struct ipc_kludge)))
goto out;
uptr = (void *)A(ipck.msgp);
msgtyp = ipck.msgtyp;
}
err = -ENOMEM;
- p = kmalloc (second + sizeof (struct msgbuf) + 4, GFP_USER);
+ p = kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER);
if (!p)
goto out;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgrcv (first, p, second + 4, msgtyp, third);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgrcv(first, p, second + 4, msgtyp, third);
+ set_fs(old_fs);
if (err < 0)
goto free_then_out;
up = (struct msgbuf32 *)uptr;
- if (put_user (p->mtype, &up->mtype) ||
- __copy_to_user (&up->mtext, p->mtext, err))
+ if (put_user(p->mtype, &up->mtype) || copy_to_user(&up->mtext, p->mtext, err))
err = -EFAULT;
free_then_out:
- kfree (p);
+ kfree(p);
out:
return err;
}
static int
-do_sys32_msgctl (int first, int second, void *uptr)
+msgctl32 (int first, int second, void *uptr)
{
int err = -EINVAL, err2;
struct msqid_ds m;
struct msqid64_ds m64;
- struct msqid_ds32 *up = (struct msqid_ds32 *)uptr;
+ struct msqid_ds32 *up32 = (struct msqid_ds32 *)uptr;
+ struct msqid64_ds32 *up64 = (struct msqid64_ds32 *)uptr;
mm_segment_t old_fs;
+ int version = ipc_parse_version32(&second);
switch (second) {
-
- case IPC_INFO:
- case IPC_RMID:
- case MSG_INFO:
- err = sys_msgctl (first, second, (struct msqid_ds *)uptr);
- break;
-
- case IPC_SET:
- err = get_user (m.msg_perm.uid, &up->msg_perm.uid);
- err |= __get_user (m.msg_perm.gid, &up->msg_perm.gid);
- err |= __get_user (m.msg_perm.mode, &up->msg_perm.mode);
- err |= __get_user (m.msg_qbytes, &up->msg_qbytes);
+ case IPC_INFO:
+ case IPC_RMID:
+ case MSG_INFO:
+ err = sys_msgctl(first, second, (struct msqid_ds *)uptr);
+ break;
+
+ case IPC_SET:
+ if (version == IPC_64) {
+ err = get_user(m.msg_perm.uid, &up64->msg_perm.uid);
+ err |= get_user(m.msg_perm.gid, &up64->msg_perm.gid);
+ err |= get_user(m.msg_perm.mode, &up64->msg_perm.mode);
+ err |= get_user(m.msg_qbytes, &up64->msg_qbytes);
+ } else {
+ err = get_user(m.msg_perm.uid, &up32->msg_perm.uid);
+ err |= get_user(m.msg_perm.gid, &up32->msg_perm.gid);
+ err |= get_user(m.msg_perm.mode, &up32->msg_perm.mode);
+ err |= get_user(m.msg_qbytes, &up32->msg_qbytes);
+ }
if (err)
break;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgctl (first, second, &m);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgctl(first, second, &m);
+ set_fs(old_fs);
break;
- case IPC_STAT:
- case MSG_STAT:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgctl (first, second, (void *) &m64);
- set_fs (old_fs);
- err2 = put_user (m64.msg_perm.key, &up->msg_perm.key);
- err2 |= __put_user(m64.msg_perm.uid, &up->msg_perm.uid);
- err2 |= __put_user(m64.msg_perm.gid, &up->msg_perm.gid);
- err2 |= __put_user(m64.msg_perm.cuid, &up->msg_perm.cuid);
- err2 |= __put_user(m64.msg_perm.cgid, &up->msg_perm.cgid);
- err2 |= __put_user(m64.msg_perm.mode, &up->msg_perm.mode);
- err2 |= __put_user(m64.msg_perm.seq, &up->msg_perm.seq);
- err2 |= __put_user(m64.msg_stime, &up->msg_stime);
- err2 |= __put_user(m64.msg_rtime, &up->msg_rtime);
- err2 |= __put_user(m64.msg_ctime, &up->msg_ctime);
- err2 |= __put_user(m64.msg_cbytes, &up->msg_cbytes);
- err2 |= __put_user(m64.msg_qnum, &up->msg_qnum);
- err2 |= __put_user(m64.msg_qbytes, &up->msg_qbytes);
- err2 |= __put_user(m64.msg_lspid, &up->msg_lspid);
- err2 |= __put_user(m64.msg_lrpid, &up->msg_lrpid);
- if (err2)
- err = -EFAULT;
- break;
+ case IPC_STAT:
+ case MSG_STAT:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgctl(first, second, (void *) &m64);
+ set_fs(old_fs);
+ if (version == IPC_64) {
+ if (!access_ok(VERIFY_WRITE, up64, sizeof(*up64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(m64.msg_perm.key, &up64->msg_perm.key);
+ err2 |= __put_user(m64.msg_perm.uid, &up64->msg_perm.uid);
+ err2 |= __put_user(m64.msg_perm.gid, &up64->msg_perm.gid);
+ err2 |= __put_user(m64.msg_perm.cuid, &up64->msg_perm.cuid);
+ err2 |= __put_user(m64.msg_perm.cgid, &up64->msg_perm.cgid);
+ err2 |= __put_user(m64.msg_perm.mode, &up64->msg_perm.mode);
+ err2 |= __put_user(m64.msg_perm.seq, &up64->msg_perm.seq);
+ err2 |= __put_user(m64.msg_stime, &up64->msg_stime);
+ err2 |= __put_user(m64.msg_rtime, &up64->msg_rtime);
+ err2 |= __put_user(m64.msg_ctime, &up64->msg_ctime);
+ err2 |= __put_user(m64.msg_cbytes, &up64->msg_cbytes);
+ err2 |= __put_user(m64.msg_qnum, &up64->msg_qnum);
+ err2 |= __put_user(m64.msg_qbytes, &up64->msg_qbytes);
+ err2 |= __put_user(m64.msg_lspid, &up64->msg_lspid);
+ err2 |= __put_user(m64.msg_lrpid, &up64->msg_lrpid);
+ if (err2)
+ err = -EFAULT;
+ } else {
+ if (!access_ok(VERIFY_WRITE, up32, sizeof(*up32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(m64.msg_perm.key, &up32->msg_perm.key);
+ err2 |= __put_user(m64.msg_perm.uid, &up32->msg_perm.uid);
+ err2 |= __put_user(m64.msg_perm.gid, &up32->msg_perm.gid);
+ err2 |= __put_user(m64.msg_perm.cuid, &up32->msg_perm.cuid);
+ err2 |= __put_user(m64.msg_perm.cgid, &up32->msg_perm.cgid);
+ err2 |= __put_user(m64.msg_perm.mode, &up32->msg_perm.mode);
+ err2 |= __put_user(m64.msg_perm.seq, &up32->msg_perm.seq);
+ err2 |= __put_user(m64.msg_stime, &up32->msg_stime);
+ err2 |= __put_user(m64.msg_rtime, &up32->msg_rtime);
+ err2 |= __put_user(m64.msg_ctime, &up32->msg_ctime);
+ err2 |= __put_user(m64.msg_cbytes, &up32->msg_cbytes);
+ err2 |= __put_user(m64.msg_qnum, &up32->msg_qnum);
+ err2 |= __put_user(m64.msg_qbytes, &up32->msg_qbytes);
+ err2 |= __put_user(m64.msg_lspid, &up32->msg_lspid);
+ err2 |= __put_user(m64.msg_lrpid, &up32->msg_lrpid);
+ if (err2)
+ err = -EFAULT;
+ }
+ break;
}
-
return err;
}
static int
-do_sys32_shmat (int first, int second, int third, int version, void *uptr)
+shmat32 (int first, int second, int third, int version, void *uptr)
{
unsigned long raddr;
u32 *uaddr = (u32 *)A((u32)third);
int err;
if (version == 1)
- return -EINVAL;
- err = sys_shmat (first, uptr, second, &raddr);
+ return -EINVAL; /* iBCS2 emulator entry point: unsupported */
+ err = sys_shmat(first, uptr, second, &raddr);
if (err)
return err;
return put_user(raddr, uaddr);
}
static int
-do_sys32_shmctl (int first, int second, void *uptr)
+shmctl32 (int first, int second, void *uptr)
{
int err = -EFAULT, err2;
struct shmid_ds s;
struct shmid64_ds s64;
- struct shmid_ds32 *up = (struct shmid_ds32 *)uptr;
+ struct shmid_ds32 *up32 = (struct shmid_ds32 *)uptr;
+ struct shmid64_ds32 *up64 = (struct shmid64_ds32 *)uptr;
mm_segment_t old_fs;
- struct shm_info32 {
- int used_ids;
- u32 shm_tot, shm_rss, shm_swp;
- u32 swap_attempts, swap_successes;
- } *uip = (struct shm_info32 *)uptr;
+ struct shm_info32 *uip = (struct shm_info32 *)uptr;
struct shm_info si;
+ int version = ipc_parse_version32(&second);
+ struct shminfo64 smi;
+ struct shminfo *usi32 = (struct shminfo *) uptr;
+ struct shminfo64_32 *usi64 = (struct shminfo64_32 *) uptr;
switch (second) {
+ case IPC_INFO:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (struct shmid_ds *)&smi);
+ set_fs(old_fs);
+
+ if (version == IPC_64) {
+ if (!access_ok(VERIFY_WRITE, usi64, sizeof(*usi64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(smi.shmmax, &usi64->shmmax);
+ err2 |= __put_user(smi.shmmin, &usi64->shmmin);
+ err2 |= __put_user(smi.shmmni, &usi64->shmmni);
+ err2 |= __put_user(smi.shmseg, &usi64->shmseg);
+ err2 |= __put_user(smi.shmall, &usi64->shmall);
+ } else {
+ if (!access_ok(VERIFY_WRITE, usi32, sizeof(*usi32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(smi.shmmax, &usi32->shmmax);
+ err2 |= __put_user(smi.shmmin, &usi32->shmmin);
+ err2 |= __put_user(smi.shmmni, &usi32->shmmni);
+ err2 |= __put_user(smi.shmseg, &usi32->shmseg);
+ err2 |= __put_user(smi.shmall, &usi32->shmall);
+ }
+ if (err2)
+ err = -EFAULT;
+ break;
- case IPC_INFO:
- case IPC_RMID:
- case SHM_LOCK:
- case SHM_UNLOCK:
- err = sys_shmctl (first, second, (struct shmid_ds *)uptr);
+ case IPC_RMID:
+ case SHM_LOCK:
+ case SHM_UNLOCK:
+ err = sys_shmctl(first, second, (struct shmid_ds *)uptr);
break;
- case IPC_SET:
- err = get_user (s.shm_perm.uid, &up->shm_perm.uid);
- err |= __get_user (s.shm_perm.gid, &up->shm_perm.gid);
- err |= __get_user (s.shm_perm.mode, &up->shm_perm.mode);
+
+ case IPC_SET:
+ if (version == IPC_64) {
+ err = get_user(s.shm_perm.uid, &up64->shm_perm.uid);
+ err |= get_user(s.shm_perm.gid, &up64->shm_perm.gid);
+ err |= get_user(s.shm_perm.mode, &up64->shm_perm.mode);
+ } else {
+ err = get_user(s.shm_perm.uid, &up32->shm_perm.uid);
+ err |= get_user(s.shm_perm.gid, &up32->shm_perm.gid);
+ err |= get_user(s.shm_perm.mode, &up32->shm_perm.mode);
+ }
if (err)
break;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, &s);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, &s);
+ set_fs(old_fs);
break;
- case IPC_STAT:
- case SHM_STAT:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, (void *) &s64);
- set_fs (old_fs);
+ case IPC_STAT:
+ case SHM_STAT:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (void *) &s64);
+ set_fs(old_fs);
if (err < 0)
break;
- err2 = put_user (s64.shm_perm.key, &up->shm_perm.key);
- err2 |= __put_user (s64.shm_perm.uid, &up->shm_perm.uid);
- err2 |= __put_user (s64.shm_perm.gid, &up->shm_perm.gid);
- err2 |= __put_user (s64.shm_perm.cuid,
- &up->shm_perm.cuid);
- err2 |= __put_user (s64.shm_perm.cgid,
- &up->shm_perm.cgid);
- err2 |= __put_user (s64.shm_perm.mode,
- &up->shm_perm.mode);
- err2 |= __put_user (s64.shm_perm.seq, &up->shm_perm.seq);
- err2 |= __put_user (s64.shm_atime, &up->shm_atime);
- err2 |= __put_user (s64.shm_dtime, &up->shm_dtime);
- err2 |= __put_user (s64.shm_ctime, &up->shm_ctime);
- err2 |= __put_user (s64.shm_segsz, &up->shm_segsz);
- err2 |= __put_user (s64.shm_nattch, &up->shm_nattch);
- err2 |= __put_user (s64.shm_cpid, &up->shm_cpid);
- err2 |= __put_user (s64.shm_lpid, &up->shm_lpid);
+ if (version == IPC_64) {
+ if (!access_ok(VERIFY_WRITE, up64, sizeof(*up64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s64.shm_perm.key, &up64->shm_perm.key);
+ err2 |= __put_user(s64.shm_perm.uid, &up64->shm_perm.uid);
+ err2 |= __put_user(s64.shm_perm.gid, &up64->shm_perm.gid);
+ err2 |= __put_user(s64.shm_perm.cuid, &up64->shm_perm.cuid);
+ err2 |= __put_user(s64.shm_perm.cgid, &up64->shm_perm.cgid);
+ err2 |= __put_user(s64.shm_perm.mode, &up64->shm_perm.mode);
+ err2 |= __put_user(s64.shm_perm.seq, &up64->shm_perm.seq);
+ err2 |= __put_user(s64.shm_atime, &up64->shm_atime);
+ err2 |= __put_user(s64.shm_dtime, &up64->shm_dtime);
+ err2 |= __put_user(s64.shm_ctime, &up64->shm_ctime);
+ err2 |= __put_user(s64.shm_segsz, &up64->shm_segsz);
+ err2 |= __put_user(s64.shm_nattch, &up64->shm_nattch);
+ err2 |= __put_user(s64.shm_cpid, &up64->shm_cpid);
+ err2 |= __put_user(s64.shm_lpid, &up64->shm_lpid);
+ } else {
+ if (!access_ok(VERIFY_WRITE, up32, sizeof(*up32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s64.shm_perm.key, &up32->shm_perm.key);
+ err2 |= __put_user(s64.shm_perm.uid, &up32->shm_perm.uid);
+ err2 |= __put_user(s64.shm_perm.gid, &up32->shm_perm.gid);
+ err2 |= __put_user(s64.shm_perm.cuid, &up32->shm_perm.cuid);
+ err2 |= __put_user(s64.shm_perm.cgid, &up32->shm_perm.cgid);
+ err2 |= __put_user(s64.shm_perm.mode, &up32->shm_perm.mode);
+ err2 |= __put_user(s64.shm_perm.seq, &up32->shm_perm.seq);
+ err2 |= __put_user(s64.shm_atime, &up32->shm_atime);
+ err2 |= __put_user(s64.shm_dtime, &up32->shm_dtime);
+ err2 |= __put_user(s64.shm_ctime, &up32->shm_ctime);
+ err2 |= __put_user(s64.shm_segsz, &up32->shm_segsz);
+ err2 |= __put_user(s64.shm_nattch, &up32->shm_nattch);
+ err2 |= __put_user(s64.shm_cpid, &up32->shm_cpid);
+ err2 |= __put_user(s64.shm_lpid, &up32->shm_lpid);
+ }
if (err2)
err = -EFAULT;
break;
- case SHM_INFO:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, (void *)&si);
- set_fs (old_fs);
+ case SHM_INFO:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (void *)&si);
+ set_fs(old_fs);
if (err < 0)
break;
- err2 = put_user (si.used_ids, &uip->used_ids);
- err2 |= __put_user (si.shm_tot, &uip->shm_tot);
- err2 |= __put_user (si.shm_rss, &uip->shm_rss);
- err2 |= __put_user (si.shm_swp, &uip->shm_swp);
- err2 |= __put_user (si.swap_attempts,
- &uip->swap_attempts);
- err2 |= __put_user (si.swap_successes,
- &uip->swap_successes);
+
+ if (!access_ok(VERIFY_WRITE, uip, sizeof(*uip))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(si.used_ids, &uip->used_ids);
+ err2 |= __put_user(si.shm_tot, &uip->shm_tot);
+ err2 |= __put_user(si.shm_rss, &uip->shm_rss);
+ err2 |= __put_user(si.shm_swp, &uip->shm_swp);
+ err2 |= __put_user(si.swap_attempts, &uip->swap_attempts);
+ err2 |= __put_user(si.swap_successes, &uip->swap_successes);
if (err2)
err = -EFAULT;
break;
@@ -1869,59 +2554,42 @@
asmlinkage long
sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth)
{
- int version, err;
+ int version;
version = call >> 16; /* hack for backward compatibility */
call &= 0xffff;
switch (call) {
-
- case SEMOP:
+ case SEMOP:
/* struct sembuf is the same on 32 and 64bit :)) */
- err = sys_semop (first, (struct sembuf *)AA(ptr),
- second);
- break;
- case SEMGET:
- err = sys_semget (first, second, third);
- break;
- case SEMCTL:
- err = do_sys32_semctl (first, second, third,
- (void *)AA(ptr));
- break;
-
- case MSGSND:
- err = do_sys32_msgsnd (first, second, third,
- (void *)AA(ptr));
- break;
- case MSGRCV:
- err = do_sys32_msgrcv (first, second, fifth, third,
- version, (void *)AA(ptr));
- break;
- case MSGGET:
- err = sys_msgget ((key_t) first, second);
- break;
- case MSGCTL:
- err = do_sys32_msgctl (first, second, (void *)AA(ptr));
- break;
+ return sys_semop(first, (struct sembuf *)AA(ptr), second);
+ case SEMGET:
+ return sys_semget(first, second, third);
+ case SEMCTL:
+ return semctl32(first, second, third, (void *)AA(ptr));
+
+ case MSGSND:
+ return do_sys32_msgsnd(first, second, third, (void *)AA(ptr));
+ case MSGRCV:
+ return do_sys32_msgrcv(first, second, fifth, third, version, (void *)AA(ptr));
+ case MSGGET:
+ return sys_msgget((key_t) first, second);
+ case MSGCTL:
+ return msgctl32(first, second, (void *)AA(ptr));
+
+ case SHMAT:
+ return shmat32(first, second, third, version, (void *)AA(ptr));
+ case SHMDT:
+ return sys_shmdt((char *)AA(ptr));
+ case SHMGET:
+ return sys_shmget(first, second, third);
+ case SHMCTL:
+ return shmctl32(first, second, (void *)AA(ptr));
- case SHMAT:
- err = do_sys32_shmat (first, second, third, version, (void *)AA(ptr));
- break;
- case SHMDT:
- err = sys_shmdt ((char *)AA(ptr));
- break;
- case SHMGET:
- err = sys_shmget (first, second, third);
- break;
- case SHMCTL:
- err = do_sys32_shmctl (first, second, (void *)AA(ptr));
- break;
- default:
- err = -EINVAL;
- break;
+ default:
+ return -EINVAL;
}
-
- return err;
}
/*
@@ -1929,7 +2597,8 @@
* sys_gettimeofday(). IA64 did this but i386 Linux did not
* so we have to implement this system call here.
*/
-asmlinkage long sys32_time(int * tloc)
+asmlinkage long
+sys32_time (int *tloc)
{
int i;
@@ -1937,7 +2606,7 @@
stuff it to user space. No side effects */
i = CURRENT_TIME;
if (tloc) {
- if (put_user(i,tloc))
+ if (put_user(i, tloc))
i = -EFAULT;
}
return i;
@@ -1967,7 +2636,10 @@
{
int err;
- err = put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec);
+ if (!access_ok(VERIFY_WRITE, ru, sizeof(*ru)))
+ return -EFAULT;
+
+ err = __put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec);
err |= __put_user (r->ru_utime.tv_usec, &ru->ru_utime.tv_usec);
err |= __put_user (r->ru_stime.tv_sec, &ru->ru_stime.tv_sec);
err |= __put_user (r->ru_stime.tv_usec, &ru->ru_stime.tv_usec);
@@ -1989,8 +2661,7 @@
}
asmlinkage long
-sys32_wait4(__kernel_pid_t32 pid, unsigned int *stat_addr, int options,
- struct rusage32 *ru)
+sys32_wait4 (int pid, unsigned int *stat_addr, int options, struct rusage32 *ru)
{
if (!ru)
return sys_wait4(pid, stat_addr, options, NULL);
@@ -2000,37 +2671,38 @@
unsigned int status;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_wait4(pid, stat_addr ? &status : NULL, options, &r);
- set_fs (old_fs);
- if (put_rusage (ru, &r)) return -EFAULT;
- if (stat_addr && put_user (status, stat_addr))
+ set_fs(old_fs);
+ if (put_rusage(ru, &r))
+ return -EFAULT;
+ if (stat_addr && put_user(status, stat_addr))
return -EFAULT;
return ret;
}
}
asmlinkage long
-sys32_waitpid(__kernel_pid_t32 pid, unsigned int *stat_addr, int options)
+sys32_waitpid (int pid, unsigned int *stat_addr, int options)
{
return sys32_wait4(pid, stat_addr, options, NULL);
}
-extern asmlinkage long
-sys_getrusage(int who, struct rusage *ru);
+extern asmlinkage long sys_getrusage (int who, struct rusage *ru);
asmlinkage long
-sys32_getrusage(int who, struct rusage32 *ru)
+sys32_getrusage (int who, struct rusage32 *ru)
{
struct rusage r;
int ret;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_getrusage(who, &r);
- set_fs (old_fs);
- if (put_rusage (ru, &r)) return -EFAULT;
+ set_fs(old_fs);
+ if (put_rusage (ru, &r))
+ return -EFAULT;
return ret;
}
@@ -2041,41 +2713,41 @@
__kernel_clock_t32 tms_cstime;
};
-extern asmlinkage long sys_times(struct tms * tbuf);
+extern asmlinkage long sys_times (struct tms * tbuf);
asmlinkage long
-sys32_times(struct tms32 *tbuf)
+sys32_times (struct tms32 *tbuf)
{
+ mm_segment_t old_fs = get_fs();
struct tms t;
long ret;
- mm_segment_t old_fs = get_fs ();
int err;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_times(tbuf ? &t : NULL);
- set_fs (old_fs);
+ set_fs(old_fs);
if (tbuf) {
err = put_user (IA32_TICK(t.tms_utime), &tbuf->tms_utime);
- err |= __put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
- err |= __put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
- err |= __put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
+ err |= put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
+ err |= put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
+ err |= put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
if (err)
ret = -EFAULT;
}
return IA32_TICK(ret);
}
-unsigned int
+static unsigned int
ia32_peek (struct pt_regs *regs, struct task_struct *child, unsigned long addr, unsigned int *val)
{
size_t copied;
unsigned int ret;
copied = access_process_vm(child, addr, val, sizeof(*val), 0);
- return(copied != sizeof(ret) ? -EIO : 0);
+ return (copied != sizeof(ret)) ? -EIO : 0;
}
-unsigned int
+static unsigned int
ia32_poke (struct pt_regs *regs, struct task_struct *child, unsigned long addr, unsigned int val)
{
@@ -2105,135 +2777,87 @@
#define PT_UESP 15
#define PT_SS 16
-unsigned int
-getreg(struct task_struct *child, int regno)
+static unsigned int
+getreg (struct task_struct *child, int regno)
{
struct pt_regs *child_regs;
child_regs = ia64_task_regs(child);
switch (regno / sizeof(int)) {
-
- case PT_EBX:
- return(child_regs->r11);
- case PT_ECX:
- return(child_regs->r9);
- case PT_EDX:
- return(child_regs->r10);
- case PT_ESI:
- return(child_regs->r14);
- case PT_EDI:
- return(child_regs->r15);
- case PT_EBP:
- return(child_regs->r13);
- case PT_EAX:
- case PT_ORIG_EAX:
- return(child_regs->r8);
- case PT_EIP:
- return(child_regs->cr_iip);
- case PT_UESP:
- return(child_regs->r12);
- case PT_EFL:
- return(child->thread.eflag);
- case PT_DS:
- case PT_ES:
- case PT_FS:
- case PT_GS:
- case PT_SS:
- return((unsigned int)__USER_DS);
- case PT_CS:
- return((unsigned int)__USER_CS);
- default:
- printk(KERN_ERR "getregs:unknown register %d\n", regno);
+ case PT_EBX: return child_regs->r11;
+ case PT_ECX: return child_regs->r9;
+ case PT_EDX: return child_regs->r10;
+ case PT_ESI: return child_regs->r14;
+ case PT_EDI: return child_regs->r15;
+ case PT_EBP: return child_regs->r13;
+ case PT_EAX: return child_regs->r8;
+ case PT_ORIG_EAX: return child_regs->r1; /* see dispatch_to_ia32_handler() */
+ case PT_EIP: return child_regs->cr_iip;
+ case PT_UESP: return child_regs->r12;
+ case PT_EFL: return child->thread.eflag;
+ case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS:
+ return __USER_DS;
+ case PT_CS: return __USER_CS;
+ default:
+ printk(KERN_ERR "ia32.getreg(): unknown register %d\n", regno);
break;
-
}
- return(0);
+ return 0;
}
-void
-putreg(struct task_struct *child, int regno, unsigned int value)
+static void
+putreg (struct task_struct *child, int regno, unsigned int value)
{
struct pt_regs *child_regs;
child_regs = ia64_task_regs(child);
switch (regno / sizeof(int)) {
-
- case PT_EBX:
- child_regs->r11 = value;
- break;
- case PT_ECX:
- child_regs->r9 = value;
- break;
- case PT_EDX:
- child_regs->r10 = value;
- break;
- case PT_ESI:
- child_regs->r14 = value;
- break;
- case PT_EDI:
- child_regs->r15 = value;
- break;
- case PT_EBP:
- child_regs->r13 = value;
- break;
- case PT_EAX:
- case PT_ORIG_EAX:
- child_regs->r8 = value;
- break;
- case PT_EIP:
- child_regs->cr_iip = value;
- break;
- case PT_UESP:
- child_regs->r12 = value;
- break;
- case PT_EFL:
- child->thread.eflag = value;
- break;
- case PT_DS:
- case PT_ES:
- case PT_FS:
- case PT_GS:
- case PT_SS:
+ case PT_EBX: child_regs->r11 = value; break;
+ case PT_ECX: child_regs->r9 = value; break;
+ case PT_EDX: child_regs->r10 = value; break;
+ case PT_ESI: child_regs->r14 = value; break;
+ case PT_EDI: child_regs->r15 = value; break;
+ case PT_EBP: child_regs->r13 = value; break;
+ case PT_EAX: child_regs->r8 = value; break;
+ case PT_ORIG_EAX: child_regs->r1 = value; break;
+ case PT_EIP: child_regs->cr_iip = value; break;
+ case PT_UESP: child_regs->r12 = value; break;
+ case PT_EFL: child->thread.eflag = value; break;
+ case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS:
if (value != __USER_DS)
- printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
+ printk(KERN_ERR
+ "ia32.putreg: attempt to set invalid segment register %d = %x\n",
regno, value);
break;
- case PT_CS:
+ case PT_CS:
if (value != __USER_CS)
- printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
+ printk(KERN_ERR
+ "ia32.putreg: attempt to to set invalid segment register %d = %x\n",
regno, value);
break;
- default:
- printk(KERN_ERR "getregs:unknown register %d\n", regno);
+ default:
+ printk(KERN_ERR "ia32.putreg: unknown register %d\n", regno);
break;
-
}
}
static inline void
-ia32f2ia64f(void *dst, void *src)
+ia32f2ia64f (void *dst, void *src)
{
-
- __asm__ ("ldfe f6=[%1] ;;\n\t"
- "stf.spill [%0]ö"
- :
- : "r"(dst), "r"(src));
+ asm volatile ("ldfe f6=[%1];; stf.spill [%0]ö" :: "r"(dst), "r"(src) : "memory");
return;
}
static inline void
-ia64f2ia32f(void *dst, void *src)
+ia64f2ia32f (void *dst, void *src)
{
-
- __asm__ ("ldf.fill f6=[%1] ;;\n\t"
- "stfe [%0]ö"
- :
- : "r"(dst), "r"(src));
+ asm volatile ("ldf.fill f6=[%1];; stfe [%0]ö" :: "r"(dst), "r"(src) : "memory");
return;
}
-void
-put_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp, int tos)
+static void
+put_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp,
+ int tos)
{
struct _fpreg_ia32 *f;
char buf[32];
@@ -2242,62 +2866,59 @@
if ((regno += tos) >= 8)
regno -= 8;
switch (regno) {
-
- case 0:
+ case 0:
ia64f2ia32f(f, &ptp->f8);
break;
- case 1:
+ case 1:
ia64f2ia32f(f, &ptp->f9);
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
+ case 2:
+ case 3:
+ case 4:
+ case 5:
+ case 6:
+ case 7:
ia64f2ia32f(f, &swp->f10 + (regno - 2));
break;
-
}
- __copy_to_user(reg, f, sizeof(*reg));
+ copy_to_user(reg, f, sizeof(*reg));
}
-void
-get_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp, int tos)
+static void
+get_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp,
+ int tos)
{
if ((regno += tos) >= 8)
regno -= 8;
switch (regno) {
-
- case 0:
- __copy_from_user(&ptp->f8, reg, sizeof(*reg));
+ case 0:
+ copy_from_user(&ptp->f8, reg, sizeof(*reg));
break;
- case 1:
- __copy_from_user(&ptp->f9, reg, sizeof(*reg));
+ case 1:
+ copy_from_user(&ptp->f9, reg, sizeof(*reg));
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
- __copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg));
+ case 2:
+ case 3:
+ case 4:
+ case 5:
+ case 6:
+ case 7:
+ copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg));
break;
-
}
return;
}
-int
-save_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save)
+static int
+save_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save)
{
struct switch_stack *swp;
struct pt_regs *ptp;
int i, tos;
if (!access_ok(VERIFY_WRITE, save, sizeof(*save)))
- return(-EIO);
+ return -EIO;
__put_user(tsk->thread.fcr, &save->cw);
__put_user(tsk->thread.fsr, &save->sw);
__put_user(tsk->thread.fsr >> 32, &save->tag);
@@ -2313,11 +2934,11 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
put_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(0);
+ return 0;
}
-int
-restore_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save)
+static int
+restore_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save)
{
struct switch_stack *swp;
struct pt_regs *ptp;
@@ -2340,10 +2961,11 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
get_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(ret ? -EFAULT : 0);
+ return ret ? -EFAULT : 0;
}
-asmlinkage long sys_ptrace(long, pid_t, unsigned long, unsigned long, long, long, long, long, long);
+extern asmlinkage long sys_ptrace (long, pid_t, unsigned long, unsigned long, long, long, long,
+ long, long);
/*
* Note that the IA32 version of `ptrace' calls the IA64 routine for
@@ -2358,13 +2980,12 @@
{
struct pt_regs *regs = (struct pt_regs *) &stack;
struct task_struct *child;
+ unsigned int value, tmp;
long i, ret;
- unsigned int value;
lock_kernel();
if (request == PTRACE_TRACEME) {
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
goto out;
}
@@ -2379,8 +3000,7 @@
goto out;
if (request == PTRACE_ATTACH) {
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
goto out;
}
ret = -ESRCH;
@@ -2398,21 +3018,32 @@
case PTRACE_PEEKDATA: /* read word at location addr */
ret = ia32_peek(regs, child, addr, &value);
if (ret == 0)
- ret = put_user(value, (unsigned int *)A(data));
+ ret = put_user(value, (unsigned int *) A(data));
else
ret = -EIO;
goto out;
case PTRACE_POKETEXT:
case PTRACE_POKEDATA: /* write the word at location addr */
- ret = ia32_poke(regs, child, addr, (unsigned int)data);
+ ret = ia32_poke(regs, child, addr, data);
goto out;
case PTRACE_PEEKUSR: /* read word at addr in USER area */
- ret = 0;
+ ret = -EIO;
+ if ((addr & 3) || addr > 17*sizeof(int))
+ break;
+
+ tmp = getreg(child, addr);
+ if (!put_user(tmp, (unsigned int *) A(data)))
+ ret = 0;
break;
case PTRACE_POKEUSR: /* write word at addr in USER area */
+ ret = -EIO;
+ if ((addr & 3) || addr > 17*sizeof(int))
+ break;
+
+ putreg(child, addr, data);
ret = 0;
break;
@@ -2421,28 +3052,25 @@
ret = -EIO;
break;
}
- for ( i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
- __put_user(getreg(child, i), (unsigned int *) A(data));
+ for (i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
+ put_user(getreg(child, i), (unsigned int *) A(data));
data += sizeof(int);
}
ret = 0;
break;
case IA32_PTRACE_SETREGS:
- {
- unsigned int tmp;
if (!access_ok(VERIFY_READ, (int *) A(data), 17*sizeof(int))) {
ret = -EIO;
break;
}
- for ( i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
- __get_user(tmp, (unsigned int *) A(data));
+ for (i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
+ get_user(tmp, (unsigned int *) A(data));
putreg(child, i, tmp);
data += sizeof(int);
}
ret = 0;
break;
- }
case IA32_PTRACE_GETFPREGS:
ret = save_ia32_fpstate(child, (struct _fpstate_ia32 *) A(data));
@@ -2457,10 +3085,8 @@
case PTRACE_KILL:
case PTRACE_SINGLESTEP: /* execute child for one instruction */
case PTRACE_DETACH: /* detach a process */
- unlock_kernel();
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
- return(ret);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
+ break;
default:
ret = -EIO;
@@ -2477,7 +3103,10 @@
{
int err;
- err = get_user(kfl->l_type, &ufl->l_type);
+ if (!access_ok(VERIFY_READ, ufl, sizeof(*ufl)))
+ return -EFAULT;
+
+ err = __get_user(kfl->l_type, &ufl->l_type);
err |= __get_user(kfl->l_whence, &ufl->l_whence);
err |= __get_user(kfl->l_start, &ufl->l_start);
err |= __get_user(kfl->l_len, &ufl->l_len);
@@ -2490,6 +3119,9 @@
{
int err;
+ if (!access_ok(VERIFY_WRITE, ufl, sizeof(*ufl)))
+ return -EFAULT;
+
err = __put_user(kfl->l_type, &ufl->l_type);
err |= __put_user(kfl->l_whence, &ufl->l_whence);
err |= __put_user(kfl->l_start, &ufl->l_start);
@@ -2498,71 +3130,43 @@
return err;
}
-extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd,
- unsigned long arg);
+extern asmlinkage long sys_fcntl (unsigned int fd, unsigned int cmd, unsigned long arg);
asmlinkage long
-sys32_fcntl(unsigned int fd, unsigned int cmd, int arg)
+sys32_fcntl (unsigned int fd, unsigned int cmd, unsigned int arg)
{
- struct flock f;
mm_segment_t old_fs;
+ struct flock f;
long ret;
switch (cmd) {
- case F_GETLK:
- case F_SETLK:
- case F_SETLKW:
- if(get_flock32(&f, (struct flock32 *)((long)arg)))
+ case F_GETLK:
+ case F_SETLK:
+ case F_SETLKW:
+ if (get_flock32(&f, (struct flock32 *) A(arg)))
return -EFAULT;
old_fs = get_fs();
set_fs(KERNEL_DS);
- ret = sys_fcntl(fd, cmd, (unsigned long)&f);
+ ret = sys_fcntl(fd, cmd, (unsigned long) &f);
set_fs(old_fs);
- if(cmd = F_GETLK && put_flock32(&f, (struct flock32 *)((long)arg)))
+ if (cmd == F_GETLK && put_flock32(&f, (struct flock32 *) A(arg)))
return -EFAULT;
return ret;
- default:
+
+ default:
/*
* `sys_fcntl' lies about arg, for the F_SETOWN
* sub-function arg can have a negative value.
*/
- return sys_fcntl(fd, cmd, (unsigned long)((long)arg));
- }
-}
-
-asmlinkage long
-sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
-{
- struct k_sigaction new_ka, old_ka;
- int ret;
-
- if (act) {
- old_sigset32_t mask;
-
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
- ret |= __get_user(mask, &act->sa_mask);
- if (ret)
- return ret;
- siginitset(&new_ka.sa.sa_mask, mask);
- }
-
- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
-
- if (!ret && oact) {
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
- ret |= __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ return sys_fcntl(fd, cmd, arg);
}
-
- return ret;
}
asmlinkage long sys_ni_syscall(void);
asmlinkage long
-sys32_ni_syscall(int dummy0, int dummy1, int dummy2, int dummy3,
- int dummy4, int dummy5, int dummy6, int dummy7, int stack)
+sys32_ni_syscall (int dummy0, int dummy1, int dummy2, int dummy3, int dummy4, int dummy5,
+ int dummy6, int dummy7, int stack)
{
struct pt_regs *regs = (struct pt_regs *)&stack;
@@ -2577,7 +3181,7 @@
#define IOLEN ((65536 / 4) * 4096)
asmlinkage long
-sys_iopl (int level)
+sys32_iopl (int level)
{
extern unsigned long ia64_iobase;
int fd;
@@ -2612,9 +3216,8 @@
up_write(&current->mm->mmap_sem);
if (addr >= 0) {
- ia64_set_kr(IA64_KR_IO_BASE, addr);
old = (old & ~0x3000) | (level << 12);
- __asm__ __volatile__("mov ar.eflag=%0 ;;" :: "r"(old));
+ asm volatile ("mov ar.eflag=%0;;" :: "r"(old));
}
fput(file);
@@ -2623,7 +3226,7 @@
}
asmlinkage long
-sys_ioperm (unsigned int from, unsigned int num, int on)
+sys32_ioperm (unsigned int from, unsigned int num, int on)
{
/*
@@ -2636,7 +3239,7 @@
* XXX proper ioperm() support should be emulated by
* manipulating the page protections...
*/
- return sys_iopl(3);
+ return sys32_iopl(3);
}
typedef struct {
@@ -2646,10 +3249,8 @@
} ia32_stack_t;
asmlinkage long
-sys32_sigaltstack (const ia32_stack_t *uss32, ia32_stack_t *uoss32,
-long arg2, long arg3, long arg4,
-long arg5, long arg6, long arg7,
-long stack)
+sys32_sigaltstack (ia32_stack_t *uss32, ia32_stack_t *uoss32,
+ long arg2, long arg3, long arg4, long arg5, long arg6, long arg7, long stack)
{
struct pt_regs *pt = (struct pt_regs *) &stack;
stack_t uss, uoss;
@@ -2658,8 +3259,8 @@
mm_segment_t old_fs = get_fs();
if (uss32)
- if (copy_from_user(&buf32, (void *)A(uss32), sizeof(ia32_stack_t)))
- return(-EFAULT);
+ if (copy_from_user(&buf32, uss32, sizeof(ia32_stack_t)))
+ return -EFAULT;
uss.ss_sp = (void *) (long) buf32.ss_sp;
uss.ss_flags = buf32.ss_flags;
uss.ss_size = buf32.ss_size;
@@ -2672,34 +3273,34 @@
buf32.ss_sp = (long) uoss.ss_sp;
buf32.ss_flags = uoss.ss_flags;
buf32.ss_size = uoss.ss_size;
- if (copy_to_user((void*)A(uoss32), &buf32, sizeof(ia32_stack_t)))
- return(-EFAULT);
+ if (copy_to_user(uoss32, &buf32, sizeof(ia32_stack_t)))
+ return -EFAULT;
}
- return(ret);
+ return ret;
}
asmlinkage int
-sys_pause (void)
+sys32_pause (void)
{
current->state = TASK_INTERRUPTIBLE;
schedule();
return -ERESTARTNOHAND;
}
-asmlinkage long sys_msync(unsigned long start, size_t len, int flags);
+asmlinkage long sys_msync (unsigned long start, size_t len, int flags);
asmlinkage int
-sys32_msync(unsigned int start, unsigned int len, int flags)
+sys32_msync (unsigned int start, unsigned int len, int flags)
{
unsigned int addr;
if (OFFSET4K(start))
return -EINVAL;
- addr = start & PAGE_MASK;
- return(sys_msync(addr, len + (start - addr), flags));
+ addr = PAGE_START(start);
+ return sys_msync(addr, len + (start - addr), flags);
}
-struct sysctl_ia32 {
+struct sysctl32 {
unsigned int name;
int nlen;
unsigned int oldval;
@@ -2712,16 +3313,16 @@
extern asmlinkage long sys_sysctl(struct __sysctl_args *args);
asmlinkage long
-sys32_sysctl(struct sysctl_ia32 *args32)
+sys32_sysctl (struct sysctl32 *args)
{
- struct sysctl_ia32 a32;
+ struct sysctl32 a32;
mm_segment_t old_fs = get_fs ();
void *oldvalp, *newvalp;
size_t oldlen;
int *namep;
long ret;
- if (copy_from_user(&a32, args32, sizeof (a32)))
+ if (copy_from_user(&a32, args, sizeof(a32)))
return -EFAULT;
/*
@@ -2754,7 +3355,7 @@
}
asmlinkage long
-sys32_newuname(struct new_utsname * name)
+sys32_newuname (struct new_utsname *name)
{
extern asmlinkage long sys_newuname(struct new_utsname * name);
int ret = sys_newuname(name);
@@ -2765,10 +3366,10 @@
return ret;
}
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
+extern asmlinkage long sys_getresuid (uid_t *ruid, uid_t *euid, uid_t *suid);
asmlinkage long
-sys32_getresuid (u16 *ruid, u16 *euid, u16 *suid)
+sys32_getresuid16 (u16 *ruid, u16 *euid, u16 *suid)
{
uid_t a, b, c;
int ret;
@@ -2786,7 +3387,7 @@
extern asmlinkage long sys_getresgid (gid_t *rgid, gid_t *egid, gid_t *sgid);
asmlinkage long
-sys32_getresgid(u16 *rgid, u16 *egid, u16 *sgid)
+sys32_getresgid16 (u16 *rgid, u16 *egid, u16 *sgid)
{
gid_t a, b, c;
int ret;
@@ -2796,15 +3397,13 @@
ret = sys_getresgid(&a, &b, &c);
set_fs(old_fs);
- if (!ret) {
- ret = put_user(a, rgid);
- ret |= put_user(b, egid);
- ret |= put_user(c, sgid);
- }
- return ret;
+ if (ret)
+ return ret;
+
+ return put_user(a, rgid) | put_user(b, egid) | put_user(c, sgid);
}
-int
+asmlinkage long
sys32_lseek (unsigned int fd, int offset, unsigned int whence)
{
extern off_t sys_lseek (unsigned int fd, off_t offset, unsigned int origin);
@@ -2813,36 +3412,272 @@
return sys_lseek(fd, offset, whence);
}
-#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+extern asmlinkage long sys_getgroups (int gidsetsize, gid_t *grouplist);
-/* In order to reduce some races, while at the same time doing additional
- * checking and hopefully speeding things up, we copy filenames to the
- * kernel data space before using them..
- *
- * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
- */
-static inline int
-do_getname32(const char *filename, char *page)
+asmlinkage long
+sys32_getgroups16 (int gidsetsize, short *grouplist)
{
- int retval;
+ mm_segment_t old_fs = get_fs();
+ gid_t gl[NGROUPS];
+ int ret, i;
- /* 32bit pointer will be always far below TASK_SIZE :)) */
- retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE)
- return 0;
- return -ENAMETOOLONG;
- } else if (!retval)
- retval = -ENOENT;
- return retval;
+ set_fs(KERNEL_DS);
+ ret = sys_getgroups(gidsetsize, gl);
+ set_fs(old_fs);
+
+ if (gidsetsize && ret > 0 && ret <= NGROUPS)
+ for (i = 0; i < ret; i++, grouplist++)
+ if (put_user(gl[i], grouplist))
+ return -EFAULT;
+ return ret;
}
-char *
-getname32(const char *filename)
+extern asmlinkage long sys_setgroups (int gidsetsize, gid_t *grouplist);
+
+asmlinkage long
+sys32_setgroups16 (int gidsetsize, short *grouplist)
{
- char *tmp, *result;
+ mm_segment_t old_fs = get_fs();
+ gid_t gl[NGROUPS];
+ int ret, i;
- result = ERR_PTR(-ENOMEM);
+ if ((unsigned) gidsetsize > NGROUPS)
+ return -EINVAL;
+ for (i = 0; i < gidsetsize; i++, grouplist++)
+ if (get_user(gl[i], grouplist))
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_setgroups(gidsetsize, gl);
+ set_fs(old_fs);
+ return ret;
+}
+
+/*
+ * Unfortunately, the x86 compiler aligns variables of type "long long" to a 4 byte boundary
+ * only, which means that the x86 version of "struct flock64" doesn't match the ia64 version
+ * of struct flock.
+ */
+
+static inline long
+ia32_put_flock (struct flock *l, unsigned long addr)
+{
+ return (put_user(l->l_type, (short *) addr)
+ | put_user(l->l_whence, (short *) (addr + 2))
+ | put_user(l->l_start, (long *) (addr + 4))
+ | put_user(l->l_len, (long *) (addr + 12))
+ | put_user(l->l_pid, (int *) (addr + 20)));
+}
+
+static inline long
+ia32_get_flock (struct flock *l, unsigned long addr)
+{
+ unsigned int start_lo, start_hi, len_lo, len_hi;
+ int err = (get_user(l->l_type, (short *) addr)
+ | get_user(l->l_whence, (short *) (addr + 2))
+ | get_user(start_lo, (int *) (addr + 4))
+ | get_user(start_hi, (int *) (addr + 8))
+ | get_user(len_lo, (int *) (addr + 12))
+ | get_user(len_hi, (int *) (addr + 16))
+ | get_user(l->l_pid, (int *) (addr + 20)));
+ l->l_start = ((unsigned long) start_hi << 32) | start_lo;
+ l->l_len = ((unsigned long) len_hi << 32) | len_lo;
+ return err;
+}
+
+asmlinkage long
+sys32_fcntl64 (unsigned int fd, unsigned int cmd, unsigned int arg)
+{
+ mm_segment_t old_fs;
+ struct flock f;
+ long ret;
+
+ switch (cmd) {
+ case F_GETLK64:
+ case F_SETLK64:
+ case F_SETLKW64:
+ if (ia32_get_flock(&f, arg))
+ return -EFAULT;
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ ret = sys_fcntl(fd, cmd, (unsigned long) &f);
+ set_fs(old_fs);
+ if (cmd == F_GETLK && ia32_put_flock(&f, arg))
+ return -EFAULT;
+ break;
+
+ default:
+ ret = sys32_fcntl(fd, cmd, arg);
+ break;
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_truncate64 (unsigned int path, unsigned int len_lo, unsigned int len_hi)
+{
+ extern asmlinkage long sys_truncate (const char *path, unsigned long length);
+
+ return sys_truncate((const char *) A(path), ((unsigned long) len_hi << 32) | len_lo);
+}
+
+asmlinkage long
+sys32_ftruncate64 (int fd, unsigned int len_lo, unsigned int len_hi)
+{
+ extern asmlinkage long sys_ftruncate (int fd, unsigned long length);
+
+ return sys_ftruncate(fd, ((unsigned long) len_hi << 32) | len_lo);
+}
+
+static int
+putstat64 (struct stat64 *ubuf, struct stat *kbuf)
+{
+ int err;
+
+ if (clear_user(ubuf, sizeof(*ubuf)))
+ return 1;
+
+ err = __put_user(kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user(kbuf->st_ino, &ubuf->__st_ino);
+ err |= __put_user(kbuf->st_ino, &ubuf->st_ino_lo);
+ err |= __put_user(kbuf->st_ino >> 32, &ubuf->st_ino_hi);
+ err |= __put_user(kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user(kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user(kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user(kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user(kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user(kbuf->st_size, &ubuf->st_size_lo);
+ err |= __put_user((kbuf->st_size >> 32), &ubuf->st_size_hi);
+ err |= __put_user(kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user(kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user(kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user(kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user(kbuf->st_blocks, &ubuf->st_blocks);
+ return err;
+}
+
+asmlinkage long
+sys32_stat64 (char *filename, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newstat(filename, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_lstat64 (char *filename, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newlstat(filename, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_fstat64 (unsigned int fd, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newfstat(fd, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_sigpending (unsigned int *set)
+{
+ return do_sigpending(set, sizeof(*set));
+}
+
+struct sysinfo32 {
+ s32 uptime;
+ u32 loads[3];
+ u32 totalram;
+ u32 freeram;
+ u32 sharedram;
+ u32 bufferram;
+ u32 totalswap;
+ u32 freeswap;
+ unsigned short procs;
+ char _f[22];
+};
+
+asmlinkage long
+sys32_sysinfo (struct sysinfo32 *info)
+{
+ extern asmlinkage long sys_sysinfo (struct sysinfo *);
+ mm_segment_t old_fs = get_fs();
+ struct sysinfo s;
+ long ret, err;
+
+ set_fs(KERNEL_DS);
+ ret = sys_sysinfo(&s);
+ set_fs(old_fs);
+
+ if (!access_ok(VERIFY_WRITE, info, sizeof(*info)))
+ return -EFAULT;
+
+ err = __put_user(s.uptime, &info->uptime);
+ err |= __put_user(s.loads[0], &info->loads[0]);
+ err |= __put_user(s.loads[1], &info->loads[1]);
+ err |= __put_user(s.loads[2], &info->loads[2]);
+ err |= __put_user(s.totalram, &info->totalram);
+ err |= __put_user(s.freeram, &info->freeram);
+ err |= __put_user(s.sharedram, &info->sharedram);
+ err |= __put_user(s.bufferram, &info->bufferram);
+ err |= __put_user(s.totalswap, &info->totalswap);
+ err |= __put_user(s.freeswap, &info->freeswap);
+ err |= __put_user(s.procs, &info->procs);
+ if (err)
+ return -EFAULT;
+ return ret;
+}
+
+/* In order to reduce some races, while at the same time doing additional
+ * checking and hopefully speeding things up, we copy filenames to the
+ * kernel data space before using them..
+ *
+ * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
+ */
+static inline int
+do_getname32 (const char *filename, char *page)
+{
+ int retval;
+
+ /* 32bit pointer will be always far below TASK_SIZE :)) */
+ retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
+ if (retval > 0) {
+ if (retval < PAGE_SIZE)
+ return 0;
+ return -ENAMETOOLONG;
+ } else if (!retval)
+ retval = -ENOENT;
+ return retval;
+}
+
+static char *
+getname32 (const char *filename)
+{
+ char *tmp, *result;
+
+ result = ERR_PTR(-ENOMEM);
tmp = (char *)__get_free_page(GFP_KERNEL);
if (tmp) {
int retval = do_getname32(filename, tmp);
@@ -2856,178 +3691,132 @@
return result;
}
-/* 32-bit timeval and related flotsam. */
-
-extern asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int on);
-
-asmlinkage long
-sys32_ioperm(u32 from, u32 num, int on)
-{
- return sys_ioperm((unsigned long)from, (unsigned long)num, on);
-}
-
struct dqblk32 {
- __u32 dqb_bhardlimit;
- __u32 dqb_bsoftlimit;
- __u32 dqb_curblocks;
- __u32 dqb_ihardlimit;
- __u32 dqb_isoftlimit;
- __u32 dqb_curinodes;
- __kernel_time_t32 dqb_btime;
- __kernel_time_t32 dqb_itime;
+ __u32 dqb_bhardlimit;
+ __u32 dqb_bsoftlimit;
+ __u32 dqb_curblocks;
+ __u32 dqb_ihardlimit;
+ __u32 dqb_isoftlimit;
+ __u32 dqb_curinodes;
+ __kernel_time_t32 dqb_btime;
+ __kernel_time_t32 dqb_itime;
};
-extern asmlinkage long sys_quotactl(int cmd, const char *special, int id,
- caddr_t addr);
-
asmlinkage long
-sys32_quotactl(int cmd, const char *special, int id, unsigned long addr)
+sys32_quotactl (int cmd, unsigned int special, int id, struct dqblk32 *addr)
{
+ extern asmlinkage long sys_quotactl (int, const char *, int, caddr_t);
int cmds = cmd >> SUBCMDSHIFT;
- int err;
- struct dqblk d;
mm_segment_t old_fs;
+ struct dqblk d;
char *spec;
+ long err;
switch (cmds) {
- case Q_GETQUOTA:
+ case Q_GETQUOTA:
break;
- case Q_SETQUOTA:
- case Q_SETUSE:
- case Q_SETQLIM:
- if (copy_from_user (&d, (struct dqblk32 *)addr,
- sizeof (struct dqblk32)))
+ case Q_SETQUOTA:
+ case Q_SETUSE:
+ case Q_SETQLIM:
+ if (copy_from_user (&d, addr, sizeof(struct dqblk32)))
return -EFAULT;
d.dqb_itime = ((struct dqblk32 *)&d)->dqb_itime;
d.dqb_btime = ((struct dqblk32 *)&d)->dqb_btime;
break;
- default:
- return sys_quotactl(cmd, special,
- id, (caddr_t)addr);
+ default:
+ return sys_quotactl(cmd, (void *) A(special), id, (caddr_t) addr);
}
- spec = getname32 (special);
+ spec = getname32((void *) A(special));
err = PTR_ERR(spec);
- if (IS_ERR(spec)) return err;
+ if (IS_ERR(spec))
+ return err;
old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
err = sys_quotactl(cmd, (const char *)spec, id, (caddr_t)&d);
- set_fs (old_fs);
- putname (spec);
+ set_fs(old_fs);
+ putname(spec);
if (cmds == Q_GETQUOTA) {
__kernel_time_t b = d.dqb_btime, i = d.dqb_itime;
((struct dqblk32 *)&d)->dqb_itime = i;
((struct dqblk32 *)&d)->dqb_btime = b;
- if (copy_to_user ((struct dqblk32 *)addr, &d,
- sizeof (struct dqblk32)))
+ if (copy_to_user(addr, &d, sizeof(struct dqblk32)))
return -EFAULT;
}
return err;
}
-extern asmlinkage long sys_utime(char * filename, struct utimbuf * times);
-
-struct utimbuf32 {
- __kernel_time_t32 actime, modtime;
-};
-
asmlinkage long
-sys32_utime(char * filename, struct utimbuf32 *times)
+sys32_sched_rr_get_interval (pid_t pid, struct timespec32 *interval)
{
- struct utimbuf t;
- mm_segment_t old_fs;
- int ret;
- char *filenam;
+ extern asmlinkage long sys_sched_rr_get_interval (pid_t, struct timespec *);
+ mm_segment_t old_fs = get_fs();
+ struct timespec t;
+ long ret;
- if (!times)
- return sys_utime(filename, NULL);
- if (get_user (t.actime, &times->actime) ||
- __get_user (t.modtime, &times->modtime))
- return -EFAULT;
- filenam = getname32 (filename);
- ret = PTR_ERR(filenam);
- if (!IS_ERR(filenam)) {
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- ret = sys_utime(filenam, &t);
- set_fs (old_fs);
- putname (filenam);
- }
+ set_fs(KERNEL_DS);
+ ret = sys_sched_rr_get_interval(pid, &t);
+ set_fs(old_fs);
+ if (put_user (t.tv_sec, &interval->tv_sec) || put_user (t.tv_nsec, &interval->tv_nsec))
+ return -EFAULT;
return ret;
}
-/*
- * Ooo, nasty. We need here to frob 32-bit unsigned longs to
- * 64-bit unsigned longs.
- */
-
-static inline int
-get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset)
+asmlinkage long
+sys32_pread (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u32 pos_hi)
{
- if (ufdset) {
- unsigned long odd;
-
- if (verify_area(VERIFY_WRITE, ufdset, n*sizeof(u32)))
- return -EFAULT;
+ extern asmlinkage long sys_pread (unsigned int, char *, size_t, loff_t);
+ return sys_pread(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo);
+}
- odd = n & 1UL;
- n &= ~1UL;
- while (n) {
- unsigned long h, l;
- __get_user(l, ufdset);
- __get_user(h, ufdset+1);
- ufdset += 2;
- *fdset++ = h << 32 | l;
- n -= 2;
- }
- if (odd)
- __get_user(*fdset, ufdset);
- } else {
- /* Tricky, must clear full unsigned long in the
- * kernel fdset at the end, this makes sure that
- * actually happens.
- */
- memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32));
- }
- return 0;
+asmlinkage long
+sys32_pwrite (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u32 pos_hi)
+{
+ extern asmlinkage long sys_pwrite (unsigned int, const char *, size_t, loff_t);
+ return sys_pwrite(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo);
}
-static inline void
-set_fd_set32(unsigned long n, u32 *ufdset, unsigned long *fdset)
+asmlinkage long
+sys32_sendfile (int out_fd, int in_fd, int *offset, unsigned int count)
{
- unsigned long odd;
+ extern asmlinkage long sys_sendfile (int, int, off_t *, size_t);
+ mm_segment_t old_fs = get_fs();
+ long ret;
+ off_t of;
- if (!ufdset)
- return;
+ if (offset && get_user(of, offset))
+ return -EFAULT;
- odd = n & 1UL;
- n &= ~1UL;
- while (n) {
- unsigned long h, l;
- l = *fdset++;
- h = l >> 32;
- __put_user(l, ufdset);
- __put_user(h, ufdset+1);
- ufdset += 2;
- n -= 2;
- }
- if (odd)
- __put_user(*fdset, ufdset);
-}
+ set_fs(KERNEL_DS);
+ ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
+ set_fs(old_fs);
+
+ if (!ret && offset && put_user(of, offset))
+ return -EFAULT;
-extern asmlinkage long sys_sysfs(int option, unsigned long arg1,
- unsigned long arg2);
+ return ret;
+}
asmlinkage long
-sys32_sysfs(int option, u32 arg1, u32 arg2)
+sys32_personality (unsigned int personality)
{
- return sys_sysfs(option, arg1, arg2);
+ extern asmlinkage long sys_personality (unsigned long);
+ long ret;
+
+ if (current->personality == PER_LINUX32 && personality == PER_LINUX)
+ personality = PER_LINUX32;
+ ret = sys_personality(personality);
+ if (ret == PER_LINUX32)
+ ret = PER_LINUX;
+ return ret;
}
+#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+
struct ncp_mount_data32 {
int version;
unsigned int ncp_fd;
__kernel_uid_t32 mounted_uid;
- __kernel_pid_t32 wdog_pid;
+ int wdog_pid;
unsigned char mounted_vol[NCP_VOLNAME_LEN + 1];
unsigned int time_out;
unsigned int retry_count;
@@ -3061,1485 +3850,169 @@
__kernel_uid_t32 uid;
__kernel_gid_t32 gid;
__kernel_mode_t32 file_mode;
- __kernel_mode_t32 dir_mode;
-};
-
-static void *
-do_smb_super_data_conv(void *raw_data)
-{
- struct smb_mount_data *s = (struct smb_mount_data *)raw_data;
- struct smb_mount_data32 *s32 = (struct smb_mount_data32 *)raw_data;
-
- s->version = s32->version;
- s->mounted_uid = s32->mounted_uid;
- s->uid = s32->uid;
- s->gid = s32->gid;
- s->file_mode = s32->file_mode;
- s->dir_mode = s32->dir_mode;
- return raw_data;
-}
-
-static int
-copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel)
-{
- int i;
- unsigned long page;
- struct vm_area_struct *vma;
-
- *kernel = 0;
- if(!user)
- return 0;
- vma = find_vma(current->mm, (unsigned long)user);
- if(!vma || (unsigned long)user < vma->vm_start)
- return -EFAULT;
- if(!(vma->vm_flags & VM_READ))
- return -EFAULT;
- i = vma->vm_end - (unsigned long) user;
- if(PAGE_SIZE <= (unsigned long) i)
- i = PAGE_SIZE - 1;
- if(!(page = __get_free_page(GFP_KERNEL)))
- return -ENOMEM;
- if(copy_from_user((void *) page, user, i)) {
- free_page(page);
- return -EFAULT;
- }
- *kernel = page;
- return 0;
-}
-
-extern asmlinkage long sys_mount(char * dev_name, char * dir_name, char * type,
- unsigned long new_flags, void *data);
-
-#define SMBFS_NAME "smbfs"
-#define NCPFS_NAME "ncpfs"
-
-asmlinkage long
-sys32_mount(char *dev_name, char *dir_name, char *type,
- unsigned long new_flags, u32 data)
-{
- unsigned long type_page;
- int err, is_smb, is_ncp;
-
- if(!capable(CAP_SYS_ADMIN))
- return -EPERM;
- is_smb = is_ncp = 0;
- err = copy_mount_stuff_to_kernel((const void *)type, &type_page);
- if(err)
- return err;
- if(type_page) {
- is_smb = !strcmp((char *)type_page, SMBFS_NAME);
- is_ncp = !strcmp((char *)type_page, NCPFS_NAME);
- }
- if(!is_smb && !is_ncp) {
- if(type_page)
- free_page(type_page);
- return sys_mount(dev_name, dir_name, type, new_flags,
- (void *)AA(data));
- } else {
- unsigned long dev_page, dir_page, data_page;
-
- err = copy_mount_stuff_to_kernel((const void *)dev_name,
- &dev_page);
- if(err)
- goto out;
- err = copy_mount_stuff_to_kernel((const void *)dir_name,
- &dir_page);
- if(err)
- goto dev_out;
- err = copy_mount_stuff_to_kernel((const void *)AA(data),
- &data_page);
- if(err)
- goto dir_out;
- if(is_ncp)
- do_ncp_super_data_conv((void *)data_page);
- else if(is_smb)
- do_smb_super_data_conv((void *)data_page);
- else
- panic("The problem is here...");
- err = do_mount((char *)dev_page, (char *)dir_page,
- (char *)type_page, new_flags,
- (void *)data_page);
- if(data_page)
- free_page(data_page);
- dir_out:
- if(dir_page)
- free_page(dir_page);
- dev_out:
- if(dev_page)
- free_page(dev_page);
- out:
- if(type_page)
- free_page(type_page);
- return err;
- }
-}
-
-struct sysinfo32 {
- s32 uptime;
- u32 loads[3];
- u32 totalram;
- u32 freeram;
- u32 sharedram;
- u32 bufferram;
- u32 totalswap;
- u32 freeswap;
- unsigned short procs;
- char _f[22];
-};
-
-extern asmlinkage long sys_sysinfo(struct sysinfo *info);
-
-asmlinkage long
-sys32_sysinfo(struct sysinfo32 *info)
-{
- struct sysinfo s;
- int ret, err;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_sysinfo(&s);
- set_fs (old_fs);
- err = put_user (s.uptime, &info->uptime);
- err |= __put_user (s.loads[0], &info->loads[0]);
- err |= __put_user (s.loads[1], &info->loads[1]);
- err |= __put_user (s.loads[2], &info->loads[2]);
- err |= __put_user (s.totalram, &info->totalram);
- err |= __put_user (s.freeram, &info->freeram);
- err |= __put_user (s.sharedram, &info->sharedram);
- err |= __put_user (s.bufferram, &info->bufferram);
- err |= __put_user (s.totalswap, &info->totalswap);
- err |= __put_user (s.freeswap, &info->freeswap);
- err |= __put_user (s.procs, &info->procs);
- if (err)
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_sched_rr_get_interval(pid_t pid,
- struct timespec *interval);
-
-asmlinkage long
-sys32_sched_rr_get_interval(__kernel_pid_t32 pid, struct timespec32 *interval)
-{
- struct timespec t;
- int ret;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_sched_rr_get_interval(pid, &t);
- set_fs (old_fs);
- if (put_user (t.tv_sec, &interval->tv_sec) ||
- __put_user (t.tv_nsec, &interval->tv_nsec))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_sigprocmask(int how, old_sigset_t *set,
- old_sigset_t *oset);
-
-asmlinkage long
-sys32_sigprocmask(int how, old_sigset_t32 *set, old_sigset_t32 *oset)
-{
- old_sigset_t s;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (set && get_user (s, set)) return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL);
- set_fs (old_fs);
- if (ret) return ret;
- if (oset && put_user (s, oset)) return -EFAULT;
- return 0;
-}
-
-extern asmlinkage long sys_sigpending(old_sigset_t *set);
-
-asmlinkage long
-sys32_sigpending(old_sigset_t32 *set)
-{
- old_sigset_t s;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_sigpending(&s);
- set_fs (old_fs);
- if (put_user (s, set)) return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_rt_sigpending(sigset_t *set, size_t sigsetsize);
-
-asmlinkage long
-sys32_rt_sigpending(sigset_t32 *set, __kernel_size_t32 sigsetsize)
-{
- sigset_t s;
- sigset_t32 s32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_rt_sigpending(&s, sigsetsize);
- set_fs (old_fs);
- if (!ret) {
- switch (_NSIG_WORDS) {
- case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
- case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
- case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
- case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
- }
- if (copy_to_user (set, &s32, sizeof(sigset_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-siginfo_t32 *
-siginfo64to32(siginfo_t32 *d, siginfo_t *s)
-{
- memset(d, 0, sizeof(siginfo_t32));
- d->si_signo = s->si_signo;
- d->si_errno = s->si_errno;
- d->si_code = s->si_code;
- if (s->si_signo >= SIGRTMIN) {
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- /* XXX: Ouch, how to find this out??? */
- d->si_int = s->si_int;
- } else switch (s->si_signo) {
- /* XXX: What about POSIX1.b timers */
- case SIGCHLD:
- d->si_pid = s->si_pid;
- d->si_status = s->si_status;
- d->si_utime = s->si_utime;
- d->si_stime = s->si_stime;
- break;
- case SIGSEGV:
- case SIGBUS:
- case SIGFPE:
- case SIGILL:
- d->si_addr = (long)(s->si_addr);
- /* XXX: Do we need to translate this from ia64 to ia32 traps? */
- d->si_trapno = s->si_trapno;
- break;
- case SIGPOLL:
- d->si_band = s->si_band;
- d->si_fd = s->si_fd;
- break;
- default:
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- break;
- }
- return d;
-}
-
-siginfo_t *
-siginfo32to64(siginfo_t *d, siginfo_t32 *s)
-{
- d->si_signo = s->si_signo;
- d->si_errno = s->si_errno;
- d->si_code = s->si_code;
- if (s->si_signo >= SIGRTMIN) {
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- /* XXX: Ouch, how to find this out??? */
- d->si_int = s->si_int;
- } else switch (s->si_signo) {
- /* XXX: What about POSIX1.b timers */
- case SIGCHLD:
- d->si_pid = s->si_pid;
- d->si_status = s->si_status;
- d->si_utime = s->si_utime;
- d->si_stime = s->si_stime;
- break;
- case SIGSEGV:
- case SIGBUS:
- case SIGFPE:
- case SIGILL:
- d->si_addr = (void *)A(s->si_addr);
- /* XXX: Do we need to translate this from ia32 to ia64 traps? */
- d->si_trapno = s->si_trapno;
- break;
- case SIGPOLL:
- d->si_band = s->si_band;
- d->si_fd = s->si_fd;
- break;
- default:
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- break;
- }
- return d;
-}
-
-extern asmlinkage long
-sys_rt_sigtimedwait(const sigset_t *uthese, siginfo_t *uinfo,
- const struct timespec *uts, size_t sigsetsize);
-
-asmlinkage long
-sys32_rt_sigtimedwait(sigset_t32 *uthese, siginfo_t32 *uinfo,
- struct timespec32 *uts, __kernel_size_t32 sigsetsize)
-{
- sigset_t s;
- sigset_t32 s32;
- struct timespec t;
- int ret;
- mm_segment_t old_fs = get_fs();
- siginfo_t info;
- siginfo_t32 info32;
-
- if (copy_from_user (&s32, uthese, sizeof(sigset_t32)))
- return -EFAULT;
- switch (_NSIG_WORDS) {
- case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
- case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
- case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
- case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
- }
- if (uts) {
- ret = get_user (t.tv_sec, &uts->tv_sec);
- ret |= __get_user (t.tv_nsec, &uts->tv_nsec);
- if (ret)
- return -EFAULT;
- }
- set_fs (KERNEL_DS);
- ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
- set_fs (old_fs);
- if (ret >= 0 && uinfo) {
- if (copy_to_user (uinfo, siginfo64to32(&info32, &info),
- sizeof(siginfo_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-extern asmlinkage long
-sys_rt_sigqueueinfo(int pid, int sig, siginfo_t *uinfo);
-
-asmlinkage long
-sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
-{
- siginfo_t info;
- siginfo_t32 info32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (copy_from_user (&info32, uinfo, sizeof(siginfo_t32)))
- return -EFAULT;
- /* XXX: Is this correct? */
- siginfo32to64(&info, &info32);
- set_fs (KERNEL_DS);
- ret = sys_rt_sigqueueinfo(pid, sig, &info);
- set_fs (old_fs);
- return ret;
-}
-
-extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
-
-asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
-{
- uid_t sruid, seuid;
-
- sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
- seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
- return sys_setreuid(sruid, seuid);
-}
-
-extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
-
-asmlinkage long
-sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
- __kernel_uid_t32 suid)
-{
- uid_t sruid, seuid, ssuid;
-
- sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
- seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
- ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
- return sys_setresuid(sruid, seuid, ssuid);
-}
-
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
-
-asmlinkage long
-sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
- __kernel_uid_t32 *suid)
-{
- uid_t a, b, c;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_getresuid(&a, &b, &c);
- set_fs (old_fs);
- if (put_user (a, ruid) || put_user (b, euid) || put_user (c, suid))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
-
-asmlinkage long
-sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
-{
- gid_t srgid, segid;
-
- srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
- segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
- return sys_setregid(srgid, segid);
-}
-
-extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
-
-asmlinkage long
-sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
- __kernel_gid_t32 sgid)
-{
- gid_t srgid, segid, ssgid;
-
- srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
- segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
- ssgid = (sgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)sgid);
- return sys_setresgid(srgid, segid, ssgid);
-}
-
-extern asmlinkage long sys_getgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_getgroups(gidsetsize, gl);
- set_fs (old_fs);
- if (gidsetsize && ret > 0 && ret <= NGROUPS)
- for (i = 0; i < ret; i++, grouplist++)
- if (__put_user (gl[i], grouplist))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- if ((unsigned) gidsetsize > NGROUPS)
- return -EINVAL;
- for (i = 0; i < gidsetsize; i++, grouplist++)
- if (__get_user (gl[i], grouplist))
- return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_setgroups(gidsetsize, gl);
- set_fs (old_fs);
- return ret;
-}
-
-
-/* XXX These as well... */
-extern __inline__ struct socket *
-socki_lookup(struct inode *inode)
-{
- return &inode->u.socket_i;
-}
-
-extern __inline__ struct socket *
-sockfd_lookup(int fd, int *err)
-{
- struct file *file;
- struct inode *inode;
-
- if (!(file = fget(fd)))
- {
- *err = -EBADF;
- return NULL;
- }
-
- inode = file->f_dentry->d_inode;
- if (!inode->i_sock || !socki_lookup(inode))
- {
- *err = -ENOTSOCK;
- fput(file);
- return NULL;
- }
-
- return socki_lookup(inode);
-}
-
-struct msghdr32 {
- u32 msg_name;
- int msg_namelen;
- u32 msg_iov;
- __kernel_size_t32 msg_iovlen;
- u32 msg_control;
- __kernel_size_t32 msg_controllen;
- unsigned msg_flags;
-};
-
-struct cmsghdr32 {
- __kernel_size_t32 cmsg_len;
- int cmsg_level;
- int cmsg_type;
-};
-
-/* Bleech... */
-#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) \
- __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
-#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) \
- cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
-
-#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
-
-#define CMSG32_DATA(cmsg) \
- ((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
-#define CMSG32_SPACE(len) \
- (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
-#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
-
-#define __CMSG32_FIRSTHDR(ctl,len) ((len) >= sizeof(struct cmsghdr32) ? \
- (struct cmsghdr32 *)(ctl) : \
- (struct cmsghdr32 *)NULL)
-#define CMSG32_FIRSTHDR(msg) \
- __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
-
-__inline__ struct cmsghdr32 *
-__cmsg32_nxthdr(void *__ctl, __kernel_size_t __size,
- struct cmsghdr32 *__cmsg, int __cmsg_len)
-{
- struct cmsghdr32 * __ptr;
-
- __ptr = (struct cmsghdr32 *)(((unsigned char *) __cmsg) +
- CMSG32_ALIGN(__cmsg_len));
- if ((unsigned long)((char*)(__ptr+1) - (char *) __ctl) > __size)
- return NULL;
-
- return __ptr;
-}
-
-__inline__ struct cmsghdr32 *
-cmsg32_nxthdr (struct msghdr *__msg, struct cmsghdr32 *__cmsg, int __cmsg_len)
-{
- return __cmsg32_nxthdr(__msg->msg_control, __msg->msg_controllen,
- __cmsg, __cmsg_len);
-}
-
-static inline int
-iov_from_user32_to_kern(struct iovec *kiov, struct iovec32 *uiov32, int niov)
-{
- int tot_len = 0;
-
- while(niov > 0) {
- u32 len, buf;
-
- if(get_user(len, &uiov32->iov_len) ||
- get_user(buf, &uiov32->iov_base)) {
- tot_len = -EFAULT;
- break;
- }
- tot_len += len;
- kiov->iov_base = (void *)A(buf);
- kiov->iov_len = (__kernel_size_t) len;
- uiov32++;
- kiov++;
- niov--;
- }
- return tot_len;
-}
-
-static inline int
-msghdr_from_user32_to_kern(struct msghdr *kmsg, struct msghdr32 *umsg)
-{
- u32 tmp1, tmp2, tmp3;
- int err;
-
- err = get_user(tmp1, &umsg->msg_name);
- err |= __get_user(tmp2, &umsg->msg_iov);
- err |= __get_user(tmp3, &umsg->msg_control);
- if (err)
- return -EFAULT;
-
- kmsg->msg_name = (void *)A(tmp1);
- kmsg->msg_iov = (struct iovec *)A(tmp2);
- kmsg->msg_control = (void *)A(tmp3);
-
- err = get_user(kmsg->msg_namelen, &umsg->msg_namelen);
- err |= get_user(kmsg->msg_iovlen, &umsg->msg_iovlen);
- err |= get_user(kmsg->msg_controllen, &umsg->msg_controllen);
- err |= get_user(kmsg->msg_flags, &umsg->msg_flags);
-
- return err;
-}
-
-/* I've named the args so it is easy to tell whose space the pointers are in. */
-static int
-verify_iovec32(struct msghdr *kern_msg, struct iovec *kern_iov,
- char *kern_address, int mode)
-{
- int tot_len;
-
- if(kern_msg->msg_namelen) {
- if(mode == VERIFY_READ) {
- int err = move_addr_to_kernel(kern_msg->msg_name,
- kern_msg->msg_namelen,
- kern_address);
- if(err < 0)
- return err;
- }
- kern_msg->msg_name = kern_address;
- } else
- kern_msg->msg_name = NULL;
-
- if(kern_msg->msg_iovlen > UIO_FASTIOV) {
- kern_iov = kmalloc(kern_msg->msg_iovlen * sizeof(struct iovec),
- GFP_KERNEL);
- if(!kern_iov)
- return -ENOMEM;
- }
-
- tot_len = iov_from_user32_to_kern(kern_iov,
- (struct iovec32 *)kern_msg->msg_iov,
- kern_msg->msg_iovlen);
- if(tot_len >= 0)
- kern_msg->msg_iov = kern_iov;
- else if(kern_msg->msg_iovlen > UIO_FASTIOV)
- kfree(kern_iov);
-
- return tot_len;
-}
-
-/* There is a lot of hair here because the alignment rules (and
- * thus placement) of cmsg headers and length are different for
- * 32-bit apps. -DaveM
- */
-static int
-cmsghdr_from_user32_to_kern(struct msghdr *kmsg, unsigned char *stackbuf,
- int stackbuf_size)
-{
- struct cmsghdr32 *ucmsg;
- struct cmsghdr *kcmsg, *kcmsg_base;
- __kernel_size_t32 ucmlen;
- __kernel_size_t kcmlen, tmp;
-
- kcmlen = 0;
- kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
- ucmsg = CMSG32_FIRSTHDR(kmsg);
- while(ucmsg != NULL) {
- if(get_user(ucmlen, &ucmsg->cmsg_len))
- return -EFAULT;
-
- /* Catch bogons. */
- if(CMSG32_ALIGN(ucmlen) <
- CMSG32_ALIGN(sizeof(struct cmsghdr32)))
- return -EINVAL;
- if((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control)
- + ucmlen) > kmsg->msg_controllen)
- return -EINVAL;
-
- tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
- CMSG_ALIGN(sizeof(struct cmsghdr)));
- kcmlen += tmp;
- ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
- }
- if(kcmlen == 0)
- return -EINVAL;
-
- /* The kcmlen holds the 64-bit version of the control length.
- * It may not be modified as we do not stick it into the kmsg
- * until we have successfully copied over all of the data
- * from the user.
- */
- if(kcmlen > stackbuf_size)
- kcmsg_base = kcmsg = kmalloc(kcmlen, GFP_KERNEL);
- if(kcmsg == NULL)
- return -ENOBUFS;
-
- /* Now copy them over neatly. */
- memset(kcmsg, 0, kcmlen);
- ucmsg = CMSG32_FIRSTHDR(kmsg);
- while(ucmsg != NULL) {
- __get_user(ucmlen, &ucmsg->cmsg_len);
- tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
- CMSG_ALIGN(sizeof(struct cmsghdr)));
- kcmsg->cmsg_len = tmp;
- __get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
- __get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
-
- /* Copy over the data. */
- if(copy_from_user(CMSG_DATA(kcmsg),
- CMSG32_DATA(ucmsg),
- (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg)))))
- goto out_free_efault;
-
- /* Advance. */
- kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
- ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
- }
-
- /* Ok, looks like we made it. Hook it up and return success. */
- kmsg->msg_control = kcmsg_base;
- kmsg->msg_controllen = kcmlen;
- return 0;
-
-out_free_efault:
- if(kcmsg_base != (struct cmsghdr *)stackbuf)
- kfree(kcmsg_base);
- return -EFAULT;
-}
-
-static void
-put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
-{
- struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
- struct cmsghdr32 cmhdr;
- int cmlen = CMSG32_LEN(len);
-
- if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
- kmsg->msg_flags |= MSG_CTRUNC;
- return;
- }
-
- if(kmsg->msg_controllen < cmlen) {
- kmsg->msg_flags |= MSG_CTRUNC;
- cmlen = kmsg->msg_controllen;
- }
- cmhdr.cmsg_level = level;
- cmhdr.cmsg_type = type;
- cmhdr.cmsg_len = cmlen;
-
- if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
- return;
- if(copy_to_user(CMSG32_DATA(cm), data,
- cmlen - sizeof(struct cmsghdr32)))
- return;
- cmlen = CMSG32_SPACE(len);
- kmsg->msg_control += cmlen;
- kmsg->msg_controllen -= cmlen;
-}
-
-static void scm_detach_fds32(struct msghdr *kmsg, struct scm_cookie *scm)
-{
- struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
- int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
- / sizeof(int);
- int fdnum = scm->fp->count;
- struct file **fp = scm->fp->fp;
- int *cmfptr;
- int err = 0, i;
-
- if (fdnum < fdmax)
- fdmax = fdnum;
-
- for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
- i < fdmax;
- i++, cmfptr++) {
- int new_fd;
- err = get_unused_fd();
- if (err < 0)
- break;
- new_fd = err;
- err = put_user(new_fd, cmfptr);
- if (err) {
- put_unused_fd(new_fd);
- break;
- }
- /* Bump the usage count and install the file. */
- fp[i]->f_count++;
- current->files->fd[new_fd] = fp[i];
- }
-
- if (i > 0) {
- int cmlen = CMSG32_LEN(i * sizeof(int));
- if (!err)
- err = put_user(SOL_SOCKET, &cm->cmsg_level);
- if (!err)
- err = put_user(SCM_RIGHTS, &cm->cmsg_type);
- if (!err)
- err = put_user(cmlen, &cm->cmsg_len);
- if (!err) {
- cmlen = CMSG32_SPACE(i * sizeof(int));
- kmsg->msg_control += cmlen;
- kmsg->msg_controllen -= cmlen;
- }
- }
- if (i < fdnum)
- kmsg->msg_flags |= MSG_CTRUNC;
-
- /*
- * All of the files that fit in the message have had their
- * usage counts incremented, so we just free the list.
- */
- __scm_destroy(scm);
-}
-
-/* In these cases we (currently) can just copy to data over verbatim
- * because all CMSGs created by the kernel have well defined types which
- * have the same layout in both the 32-bit and 64-bit API. One must add
- * some special cased conversions here if we start sending control messages
- * with incompatible types.
- *
- * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
- * we do our work. The remaining cases are:
- *
- * SOL_IP IP_PKTINFO struct in_pktinfo 32-bit clean
- * IP_TTL int 32-bit clean
- * IP_TOS __u8 32-bit clean
- * IP_RECVOPTS variable length 32-bit clean
- * IP_RETOPTS variable length 32-bit clean
- * (these last two are clean because the types are defined
- * by the IPv4 protocol)
- * IP_RECVERR struct sock_extended_err +
- * struct sockaddr_in 32-bit clean
- * SOL_IPV6 IPV6_RECVERR struct sock_extended_err +
- * struct sockaddr_in6 32-bit clean
- * IPV6_PKTINFO struct in6_pktinfo 32-bit clean
- * IPV6_HOPLIMIT int 32-bit clean
- * IPV6_FLOWINFO u32 32-bit clean
- * IPV6_HOPOPTS ipv6 hop exthdr 32-bit clean
- * IPV6_DSTOPTS ipv6 dst exthdr(s) 32-bit clean
- * IPV6_RTHDR ipv6 routing exthdr 32-bit clean
- * IPV6_AUTHHDR ipv6 auth exthdr 32-bit clean
- */
-static void
-cmsg32_recvmsg_fixup(struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
-{
- unsigned char *workbuf, *wp;
- unsigned long bufsz, space_avail;
- struct cmsghdr *ucmsg;
-
- bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
- space_avail = kmsg->msg_controllen + bufsz;
- wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
- if(workbuf == NULL)
- goto fail;
-
- /* To make this more sane we assume the kernel sends back properly
- * formatted control messages. Because of how the kernel will truncate
- * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
- */
- ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
- while(((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
- struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
- int clen64, clen32;
-
- /* UCMSG is the 64-bit format CMSG entry in user-space.
- * KCMSG32 is within the kernel space temporary buffer
- * we use to convert into a 32-bit style CMSG.
- */
- __get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
- __get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
- __get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
-
- clen64 = kcmsg32->cmsg_len;
- copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
- clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
- clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
- CMSG32_ALIGN(sizeof(struct cmsghdr32)));
- kcmsg32->cmsg_len = clen32;
-
- ucmsg = (struct cmsghdr *) (((char *)ucmsg) +
- CMSG_ALIGN(clen64));
- wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
- }
-
- /* Copy back fixed up data, and adjust pointers. */
- bufsz = (wp - workbuf);
- copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz);
-
- kmsg->msg_control = (struct cmsghdr *)
- (((char *)orig_cmsg_uptr) + bufsz);
- kmsg->msg_controllen = space_avail - bufsz;
-
- kfree(workbuf);
- return;
-
-fail:
- /* If we leave the 64-bit format CMSG chunks in there,
- * the application could get confused and crash. So to
- * ensure greater recovery, we report no CMSGs.
- */
- kmsg->msg_controllen += bufsz;
- kmsg->msg_control = (void *) orig_cmsg_uptr;
-}
-
-asmlinkage long
-sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
-{
- struct socket *sock;
- char address[MAX_SOCK_ADDR];
- struct iovec iov[UIO_FASTIOV];
- unsigned char ctl[sizeof(struct cmsghdr) + 20];
- unsigned char *ctl_buf = ctl;
- struct msghdr kern_msg;
- int err, total_len;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
- err = verify_iovec32(&kern_msg, iov, address, VERIFY_READ);
- if (err < 0)
- goto out;
- total_len = err;
-
- if(kern_msg.msg_controllen) {
- err = cmsghdr_from_user32_to_kern(&kern_msg, ctl, sizeof(ctl));
- if(err)
- goto out_freeiov;
- ctl_buf = kern_msg.msg_control;
- }
- kern_msg.msg_flags = user_flags;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- if (sock->file->f_flags & O_NONBLOCK)
- kern_msg.msg_flags |= MSG_DONTWAIT;
- err = sock_sendmsg(sock, &kern_msg, total_len);
- sockfd_put(sock);
- }
-
- /* N.B. Use kfree here, as kern_msg.msg_controllen might change? */
- if(ctl_buf != ctl)
- kfree(ctl_buf);
-out_freeiov:
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- return err;
-}
-
-asmlinkage long
-sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
-{
- struct iovec iovstack[UIO_FASTIOV];
- struct msghdr kern_msg;
- char addr[MAX_SOCK_ADDR];
- struct socket *sock;
- struct iovec *iov = iovstack;
- struct sockaddr *uaddr;
- int *uaddr_len;
- unsigned long cmsg_ptr;
- int err, total_len, len = 0;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
-
- uaddr = kern_msg.msg_name;
- uaddr_len = &user_msg->msg_namelen;
- err = verify_iovec32(&kern_msg, iov, addr, VERIFY_WRITE);
- if (err < 0)
- goto out;
- total_len = err;
-
- cmsg_ptr = (unsigned long) kern_msg.msg_control;
- kern_msg.msg_flags = 0;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- struct scm_cookie scm;
-
- if (sock->file->f_flags & O_NONBLOCK)
- user_flags |= MSG_DONTWAIT;
- memset(&scm, 0, sizeof(scm));
- lock_kernel();
- err = sock->ops->recvmsg(sock, &kern_msg, total_len,
- user_flags, &scm);
- if(err >= 0) {
- len = err;
- if(!kern_msg.msg_control) {
- if(sock->passcred || scm.fp)
- kern_msg.msg_flags |= MSG_CTRUNC;
- if(scm.fp)
- __scm_destroy(&scm);
- } else {
- /* If recvmsg processing itself placed some
- * control messages into user space, it is
- * using 64-bit CMSG processing, so we need
- * to fix it up before we tack on more stuff.
- */
- if((unsigned long) kern_msg.msg_control
- != cmsg_ptr)
- cmsg32_recvmsg_fixup(&kern_msg,
- cmsg_ptr);
-
- /* Wheee... */
- if(sock->passcred)
- put_cmsg32(&kern_msg,
- SOL_SOCKET, SCM_CREDENTIALS,
- sizeof(scm.creds),
- &scm.creds);
- if(scm.fp != NULL)
- scm_detach_fds32(&kern_msg, &scm);
- }
- }
- unlock_kernel();
- sockfd_put(sock);
- }
-
- if(uaddr != NULL && err >= 0)
- err = move_addr_to_user(addr, kern_msg.msg_namelen, uaddr,
- uaddr_len);
- if(cmsg_ptr != 0 && err >= 0) {
- unsigned long ucmsg_ptr = ((unsigned long)kern_msg.msg_control);
- __kernel_size_t32 uclen = (__kernel_size_t32) (ucmsg_ptr
- - cmsg_ptr);
- err |= __put_user(uclen, &user_msg->msg_controllen);
- }
- if(err >= 0)
- err = __put_user(kern_msg.msg_flags, &user_msg->msg_flags);
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- if(err < 0)
- return err;
- return len;
-}
-
-extern void check_pending(int signum);
-
-#ifdef CONFIG_MODULES
-
-extern asmlinkage unsigned long sys_create_module(const char *name_user,
- size_t size);
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, __kernel_size_t32 size)
-{
- return sys_create_module(name_user, (size_t)size);
-}
-
-extern asmlinkage long sys_init_module(const char *name_user,
- struct module *mod_user);
-
-/* Hey, when you're trying to init module, take time and prepare us a nice 64bit
- * module structure, even if from 32bit modutils... Why to pollute kernel... :))
- */
-asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
-{
- return sys_init_module(name_user, mod_user);
-}
-
-extern asmlinkage long sys_delete_module(const char *name_user);
-
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return sys_delete_module(name_user);
-}
-
-struct module_info32 {
- u32 addr;
- u32 size;
- u32 flags;
- s32 usecount;
-};
-
-/* Query various bits about modules. */
-
-static inline long
-get_mod_name(const char *user_name, char **buf)
-{
- unsigned long page;
- long retval;
-
- if ((unsigned long)user_name >= TASK_SIZE
- && !segment_eq(get_fs (), KERNEL_DS))
- return -EFAULT;
-
- page = __get_free_page(GFP_KERNEL);
- if (!page)
- return -ENOMEM;
-
- retval = strncpy_from_user((char *)page, user_name, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE) {
- *buf = (char *)page;
- return retval;
- }
- retval = -ENAMETOOLONG;
- } else if (!retval)
- retval = -EINVAL;
-
- free_page(page);
- return retval;
-}
-
-static inline void
-put_mod_name(char *buf)
-{
- free_page((unsigned long)buf);
-}
-
-static __inline__ struct module *
-find_module(const char *name)
-{
- struct module *mod;
-
- for (mod = module_list; mod ; mod = mod->next) {
- if (mod->flags & MOD_DELETED)
- continue;
- if (!strcmp(mod->name, name))
- break;
- }
-
- return mod;
-}
-
-static int
-qm_modules(char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- struct module *mod;
- size_t nmod, space, len;
-
- nmod = space = 0;
-
- for (mod = module_list; mod->next != NULL; mod = mod->next, ++nmod) {
- len = strlen(mod->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, mod->name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nmod, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((mod = mod->next)->next != NULL)
- space += strlen(mod->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_deps(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t i, space, len;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (i = 0; i < mod->ndeps; ++i) {
- const char *dep_name = mod->deps[i].dep->name;
-
- len = strlen(dep_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, dep_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while (++i < mod->ndeps)
- space += strlen(mod->deps[i].dep->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_refs(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t nrefs, space, len;
- struct module_ref *ref;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (nrefs = 0, ref = mod->refs; ref ; ++nrefs, ref = ref->next_ref) {
- const char *ref_name = ref->ref->name;
-
- len = strlen(ref_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, ref_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nrefs, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((ref = ref->next_ref) != NULL)
- space += strlen(ref->ref->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static inline int
-qm_symbols(struct module *mod, char *buf, size_t bufsize,
- __kernel_size_t32 *ret)
-{
- size_t i, space, len;
- struct module_symbol *s;
- char *strings;
- unsigned *vals;
-
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = mod->nsyms * 2*sizeof(u32);
-
- i = len = 0;
- s = mod->syms;
-
- if (space > bufsize)
- goto calc_space_needed;
-
- if (!access_ok(VERIFY_WRITE, buf, space))
- return -EFAULT;
-
- bufsize -= space;
- vals = (unsigned *)buf;
- strings = buf+space;
-
- for (; i < mod->nsyms ; ++i, ++s, vals += 2) {
- len = strlen(s->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
-
- if (copy_to_user(strings, s->name, len)
- || __put_user(s->value, vals+0)
- || __put_user(space, vals+1))
- return -EFAULT;
-
- strings += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
+ __kernel_mode_t32 dir_mode;
+};
-calc_space_needed:
- for (; i < mod->nsyms; ++i, ++s)
- space += strlen(s->name)+1;
+static void *
+do_smb_super_data_conv(void *raw_data)
+{
+ struct smb_mount_data *s = (struct smb_mount_data *)raw_data;
+ struct smb_mount_data32 *s32 = (struct smb_mount_data32 *)raw_data;
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
+ s->version = s32->version;
+ s->mounted_uid = s32->mounted_uid;
+ s->uid = s32->uid;
+ s->gid = s32->gid;
+ s->file_mode = s32->file_mode;
+ s->dir_mode = s32->dir_mode;
+ return raw_data;
}
-static inline int
-qm_info(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
+static int
+copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel)
{
- int error = 0;
-
- if (mod->next == NULL)
- return -EINVAL;
-
- if (sizeof(struct module_info32) <= bufsize) {
- struct module_info32 info;
- info.addr = (unsigned long)mod;
- info.size = mod->size;
- info.flags = mod->flags;
- info.usecount = ((mod_member_present(mod, can_unload)
- && mod->can_unload)
- ? -1 : atomic_read(&mod->uc.usecount));
-
- if (copy_to_user(buf, &info, sizeof(struct module_info32)))
- return -EFAULT;
- } else
- error = -ENOSPC;
+ int i;
+ unsigned long page;
+ struct vm_area_struct *vma;
- if (put_user(sizeof(struct module_info32), ret))
+ *kernel = 0;
+ if(!user)
+ return 0;
+ vma = find_vma(current->mm, (unsigned long)user);
+ if(!vma || (unsigned long)user < vma->vm_start)
return -EFAULT;
-
- return error;
+ if(!(vma->vm_flags & VM_READ))
+ return -EFAULT;
+ i = vma->vm_end - (unsigned long) user;
+ if(PAGE_SIZE <= (unsigned long) i)
+ i = PAGE_SIZE - 1;
+ if(!(page = __get_free_page(GFP_KERNEL)))
+ return -ENOMEM;
+ if(copy_from_user((void *) page, user, i)) {
+ free_page(page);
+ return -EFAULT;
+ }
+ *kernel = page;
+ return 0;
}
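
The `copy_mount_stuff_to_kernel()` helper above clamps how much it copies from the user pointer: no further than the end of the caller's VMA, and at most `PAGE_SIZE - 1` bytes so a terminating NUL always fits in the destination page. A standalone sketch of just that length clamp (the function and parameter names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Clamp the number of bytes to copy from a user pointer: no further
 * than the end of its VMA, and at most page_size - 1 so a terminating
 * NUL always fits in the destination page. */
static size_t mount_copy_len(uintptr_t user, uintptr_t vm_end, size_t page_size)
{
    size_t len = vm_end - user;      /* bytes left in the mapping */
    if (len >= page_size)
        len = page_size - 1;         /* leave room for the NUL */
    return len;
}
```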
+extern asmlinkage long sys_mount(char * dev_name, char * dir_name, char * type,
+ unsigned long new_flags, void *data);
+
+#define SMBFS_NAME "smbfs"
+#define NCPFS_NAME "ncpfs"
+
asmlinkage long
-sys32_query_module(char *name_user, int which, char *buf,
- __kernel_size_t32 bufsize, u32 ret)
+sys32_mount(char *dev_name, char *dir_name, char *type,
+ unsigned long new_flags, u32 data)
{
- struct module *mod;
- int err;
+ unsigned long type_page;
+ int err, is_smb, is_ncp;
- lock_kernel();
- if (name_user == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list; mod->next != NULL; mod = mod->next)
- ;
+ if(!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ is_smb = is_ncp = 0;
+ err = copy_mount_stuff_to_kernel((const void *)type, &type_page);
+ if(err)
+ return err;
+ if(type_page) {
+ is_smb = !strcmp((char *)type_page, SMBFS_NAME);
+ is_ncp = !strcmp((char *)type_page, NCPFS_NAME);
+ }
+ if(!is_smb && !is_ncp) {
+ if(type_page)
+ free_page(type_page);
+ return sys_mount(dev_name, dir_name, type, new_flags,
+ (void *)AA(data));
} else {
- long namelen;
- char *name;
+ unsigned long dev_page, dir_page, data_page;
- if ((namelen = get_mod_name(name_user, &name)) < 0) {
- err = namelen;
- goto out;
- }
- err = -ENOENT;
- if (namelen == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list;
- mod->next != NULL;
- mod = mod->next) ;
- } else if ((mod = find_module(name)) == NULL) {
- put_mod_name(name);
+ err = copy_mount_stuff_to_kernel((const void *)dev_name,
+ &dev_page);
+ if(err)
goto out;
- }
- put_mod_name(name);
- }
-
- switch (which)
- {
- case 0:
- err = 0;
- break;
- case QM_MODULES:
- err = qm_modules(buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_DEPS:
- err = qm_deps(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_REFS:
- err = qm_refs(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_SYMBOLS:
- err = qm_symbols(mod, buf, bufsize,
- (__kernel_size_t32 *)AA(ret));
- break;
- case QM_INFO:
- err = qm_info(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- default:
- err = -EINVAL;
- break;
+ err = copy_mount_stuff_to_kernel((const void *)dir_name,
+ &dir_page);
+ if(err)
+ goto dev_out;
+ err = copy_mount_stuff_to_kernel((const void *)AA(data),
+ &data_page);
+ if(err)
+ goto dir_out;
+ if(is_ncp)
+ do_ncp_super_data_conv((void *)data_page);
+ else if(is_smb)
+ do_smb_super_data_conv((void *)data_page);
+ else
+ panic("The problem is here...");
+ err = do_mount((char *)dev_page, (char *)dir_page,
+ (char *)type_page, new_flags,
+ (void *)data_page);
+ if(data_page)
+ free_page(data_page);
+ dir_out:
+ if(dir_page)
+ free_page(dir_page);
+ dev_out:
+ if(dev_page)
+ free_page(dev_page);
+ out:
+ if(type_page)
+ free_page(type_page);
+ return err;
}
-out:
- unlock_kernel();
- return err;
}
-struct kernel_sym32 {
- u32 value;
- char name[60];
-};
-
-extern asmlinkage long sys_get_kernel_syms(struct kernel_sym *table);
+extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
-asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym32 *table)
+asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
{
- int len, i;
- struct kernel_sym *tbl;
- mm_segment_t old_fs;
+ uid_t sruid, seuid;
- len = sys_get_kernel_syms(NULL);
- if (!table) return len;
- tbl = kmalloc (len * sizeof (struct kernel_sym), GFP_KERNEL);
- if (!tbl) return -ENOMEM;
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- sys_get_kernel_syms(tbl);
- set_fs (old_fs);
- for (i = 0; i < len; i++, table += sizeof (struct kernel_sym32)) {
- if (put_user (tbl[i].value, &table->value) ||
- copy_to_user (table->name, tbl[i].name, 60))
- break;
- }
- kfree (tbl);
- return i;
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ return sys_setreuid(sruid, seuid);
}
-#else /* CONFIG_MODULES */
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, size_t size)
-{
- return -ENOSYS;
-}
+extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
+sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
+ __kernel_uid_t32 suid)
{
- return -ENOSYS;
-}
+ uid_t sruid, seuid, ssuid;
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return -ENOSYS;
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
+ return sys_setresuid(sruid, seuid, ssuid);
}
+extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
+
asmlinkage long
-sys32_query_module(const char *name_user, int which, char *buf, size_t bufsize,
- size_t *ret)
+sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
{
- /* Let the program know about the new interface. Not that
- it'll do them much good. */
- if (which == 0)
- return 0;
+ gid_t srgid, segid;
- return -ENOSYS;
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ return sys_setregid(srgid, segid);
}
+extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
+
asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym *table)
+sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
+ __kernel_gid_t32 sgid)
{
- return -ENOSYS;
-}
+ gid_t srgid, segid, ssgid;
-#endif /* CONFIG_MODULES */
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ ssgid = (sgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)sgid);
+ return sys_setresgid(srgid, segid, ssgid);
+}
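
The four `sys32_setre*id`/`sys32_setres*id` wrappers added above all apply the same conversion rule: the 16-bit `-1` "don't change this id" sentinel must widen to a 32-bit `-1`, not to `0x0000ffff`. That rule in isolation, with illustrative stand-in type names rather than the kernel's:

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t id16_t;  /* stand-in for __kernel_uid_t32 / __kernel_gid_t32 */
typedef uint32_t id32_t;  /* stand-in for the native uid_t / gid_t */

/* Widen a 16-bit uid/gid argument, preserving the -1 "don't change"
 * sentinel: 0xffff must become 0xffffffff, not stay 0x0000ffff. */
static id32_t widen_id(id16_t id)
{
    return (id == (id16_t)-1) ? (id32_t)-1 : (id32_t)id;
}
```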
/* Stuff for NFS server syscalls... */
struct nfsctl_svc32 {
@@ -4820,154 +4293,6 @@
return err;
}
-asmlinkage long sys_utimes(char *, struct timeval *);
-
-asmlinkage long
-sys32_utimes(char *filename, struct timeval32 *tvs)
-{
- char *kfilename;
- struct timeval ktvs[2];
- mm_segment_t old_fs;
- int ret;
-
- kfilename = getname32(filename);
- ret = PTR_ERR(kfilename);
- if (!IS_ERR(kfilename)) {
- if (tvs) {
- if (get_tv32(&ktvs[0], tvs) ||
- get_tv32(&ktvs[1], 1+tvs))
- return -EFAULT;
- }
-
- old_fs = get_fs();
- set_fs(KERNEL_DS);
- ret = sys_utimes(kfilename, &ktvs[0]);
- set_fs(old_fs);
-
- putname(kfilename);
- }
- return ret;
-}
-
-/* These are here just in case some old ia32 binary calls it. */
-asmlinkage long
-sys32_pause(void)
-{
- current->state = TASK_INTERRUPTIBLE;
- schedule();
- return -ERESTARTNOHAND;
-}
-
-/* PCI config space poking. */
-extern asmlinkage long sys_pciconfig_read(unsigned long bus,
- unsigned long dfn,
- unsigned long off,
- unsigned long len,
- unsigned char *buf);
-
-extern asmlinkage long sys_pciconfig_write(unsigned long bus,
- unsigned long dfn,
- unsigned long off,
- unsigned long len,
- unsigned char *buf);
-
-asmlinkage long
-sys32_pciconfig_read(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
-{
- return sys_pciconfig_read((unsigned long) bus,
- (unsigned long) dfn,
- (unsigned long) off,
- (unsigned long) len,
- (unsigned char *)AA(ubuf));
-}
-
-asmlinkage long
-sys32_pciconfig_write(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
-{
- return sys_pciconfig_write((unsigned long) bus,
- (unsigned long) dfn,
- (unsigned long) off,
- (unsigned long) len,
- (unsigned char *)AA(ubuf));
-}
-
-extern asmlinkage long sys_prctl(int option, unsigned long arg2,
- unsigned long arg3, unsigned long arg4,
- unsigned long arg5);
-
-asmlinkage long
-sys32_prctl(int option, u32 arg2, u32 arg3, u32 arg4, u32 arg5)
-{
- return sys_prctl(option,
- (unsigned long) arg2,
- (unsigned long) arg3,
- (unsigned long) arg4,
- (unsigned long) arg5);
-}
-
-
-extern asmlinkage ssize_t sys_pread(unsigned int fd, char * buf,
- size_t count, loff_t pos);
-
-extern asmlinkage ssize_t sys_pwrite(unsigned int fd, const char * buf,
- size_t count, loff_t pos);
-
-typedef __kernel_ssize_t32 ssize_t32;
-
-asmlinkage ssize_t32
-sys32_pread(unsigned int fd, char *ubuf, __kernel_size_t32 count,
- u32 poshi, u32 poslo)
-{
- return sys_pread(fd, ubuf, count,
- ((loff_t)AA(poshi) << 32) | AA(poslo));
-}
-
-asmlinkage ssize_t32
-sys32_pwrite(unsigned int fd, char *ubuf, __kernel_size_t32 count,
- u32 poshi, u32 poslo)
-{
- return sys_pwrite(fd, ubuf, count,
- ((loff_t)AA(poshi) << 32) | AA(poslo));
-}
-
-
-extern asmlinkage long sys_personality(unsigned long);
-
-asmlinkage long
-sys32_personality(unsigned long personality)
-{
- int ret;
- if (current->personality == PER_LINUX32 && personality == PER_LINUX)
- personality = PER_LINUX32;
- ret = sys_personality(personality);
- if (ret == PER_LINUX32)
- ret = PER_LINUX;
- return ret;
-}
-
-extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offset,
- size_t count);
-
-asmlinkage long
-sys32_sendfile(int out_fd, int in_fd, __kernel_off_t32 *offset, s32 count)
-{
- mm_segment_t old_fs = get_fs();
- int ret;
- off_t of;
-
- if (offset && get_user(of, offset))
- return -EFAULT;
-
- set_fs(KERNEL_DS);
- ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
- set_fs(old_fs);
-
- if (!ret && offset && put_user(of, offset))
- return -EFAULT;
-
- return ret;
-}
-
/* Handle adjtimex compatability. */
struct timex32 {
@@ -5041,4 +4366,4 @@
return ret;
}
-#endif // NOTYET
+#endif /* NOTYET */
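
Several of the removed wrappers in this file (`sys32_pread`, `sys32_pwrite`) rebuild a 64-bit `loff_t` from the two 32-bit halves the ia32 ABI passes in separate registers. The splice is just a shift-or; a minimal sketch (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Rebuild the 64-bit file offset that the removed sys32_pread()/
 * sys32_pwrite() wrappers receive as two 32-bit register halves
 * from an ILP32 caller. */
static int64_t join_loff(uint32_t poshi, uint32_t poslo)
{
    return (int64_t)(((uint64_t)poshi << 32) | poslo);
}
```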
diff -urN linux-2.4.13/arch/ia64/kernel/Makefile linux-2.4.13-lia/arch/ia64/kernel/Makefile
--- linux-2.4.13/arch/ia64/kernel/Makefile Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/Makefile Wed Oct 10 17:55:55 2001
@@ -16,7 +16,7 @@
obj-y := acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_ia64.o irq_lsapic.o ivt.o \
machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
-obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
+obj-$(CONFIG_IA64_GENERIC) += iosapic.o
obj-$(CONFIG_IA64_DIG) += iosapic.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_EFI_VARS) += efivars.o
diff -urN linux-2.4.13/arch/ia64/kernel/acpi.c linux-2.4.13-lia/arch/ia64/kernel/acpi.c
--- linux-2.4.13/arch/ia64/kernel/acpi.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/acpi.c Thu Oct 4 00:21:39 2001
@@ -9,7 +9,7 @@
* Copyright (C) 2000 Hewlett-Packard Co.
* Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 2000 Intel Corp.
- * Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
+ * Copyright (C) 2000,2001 J.I. Lee <jung-ik.lee@intel.com>
* ACPI based kernel configuration manager.
* ACPI 2.0 & IA64 ext 0.71
*/
@@ -34,6 +34,9 @@
#undef ACPI_DEBUG /* Guess what this does? */
+/* global array to record platform interrupt vectors for generic int routing */
+int platform_irq_list[ACPI_MAX_PLATFORM_IRQS];
+
/* These are ugly but will be reclaimed by the kernel */
int __initdata available_cpus;
int __initdata total_cpus;
@@ -42,7 +45,9 @@
void (*pm_power_off) (void);
asm (".weak iosapic_register_legacy_irq");
+asm (".weak iosapic_register_platform_irq");
asm (".weak iosapic_init");
+asm (".weak iosapic_version");
const char *
acpi_get_sysname (void)
@@ -55,6 +60,8 @@
return "hpsim";
# elif defined (CONFIG_IA64_SGI_SN1)
return "sn1";
+# elif defined (CONFIG_IA64_SGI_SN2)
+ return "sn2";
# elif defined (CONFIG_IA64_DIG)
return "dig";
# else
@@ -65,6 +72,25 @@
}
/*
+ * Interrupt routing API for device drivers.
+ * Provides the interrupt vector for a generic platform event
+ * (currently only CPEI implemented)
+ */
+int
+acpi_request_vector(u32 int_type)
+{
+ int vector = -1;
+
+ if (int_type < ACPI_MAX_PLATFORM_IRQS) {
+ /* correctable platform error interrupt */
+ vector = platform_irq_list[int_type];
+ } else
+ printk("acpi_request_vector(): invalid interrupt type\n");
+
+ return vector;
+}
+
+/*
* Configure legacy IRQ information.
*/
static void __init
@@ -139,15 +165,93 @@
}
/*
- * Info on platform interrupt sources: NMI. PMI, INIT, etc.
+ * Extract iosapic info from madt (again) to determine which iosapic
+ * this platform interrupt resides in
+ */
+static int __init
+acpi20_which_iosapic (int global_vector, acpi_madt_t *madt, u32 *irq_base, char **iosapic_address)
+{
+ acpi_entry_iosapic_t *iosapic;
+ char *p, *end;
+ int ver, max_pin;
+
+ p = (char *) (madt + 1);
+ end = p + (madt->header.length - sizeof(acpi_madt_t));
+
+ while (p < end) {
+ switch (*p) {
+ case ACPI20_ENTRY_IO_SAPIC:
+ /* collect IOSAPIC info for platform int use later */
+ iosapic = (acpi_entry_iosapic_t *)p;
+ *irq_base = iosapic->irq_base;
+ *iosapic_address = ioremap(iosapic->address, 0);
+ /* is this the iosapic we're looking for? */
+ ver = iosapic_version(*iosapic_address);
+ max_pin = (ver >> 16) & 0xff;
+ if ((global_vector - *irq_base) <= max_pin)
+ return 0; /* found it! */
+ break;
+ default:
+ break;
+ }
+ p += p[1];
+ }
+ return 1;
+}
+
+/*
+ * Info on platform interrupt sources: NMI, PMI, INIT, etc.
*/
static void __init
-acpi20_platform (char *p)
+acpi20_platform (char *p, acpi_madt_t *madt)
{
+ int vector;
+ u32 irq_base;
+ char *iosapic_address;
+ unsigned long polarity = 0, trigger = 0;
acpi20_entry_platform_src_t *plat = (acpi20_entry_platform_src_t *) p;
printk("PLATFORM: IOSAPIC %x -> Vector %x on CPU %.04u:%.04u\n",
plat->iosapic_vector, plat->global_vector, plat->eid, plat->id);
+
+ /* record platform interrupt vectors for generic int routing code */
+
+ if (!iosapic_register_platform_irq) {
+ printk("acpi20_platform(): no ACPI platform IRQ support\n");
+ return;
+ }
+
+ /* extract polarity and trigger info from flags */
+ switch (plat->flags) {
+ case 0x5: polarity = 1; trigger = 1; break;
+ case 0x7: polarity = 0; trigger = 1; break;
+ case 0xd: polarity = 1; trigger = 0; break;
+ case 0xf: polarity = 0; trigger = 0; break;
+ default:
+ printk("acpi20_platform(): unknown flags 0x%x\n", plat->flags);
+ break;
+ }
+
+ /* which iosapic does this IRQ belong to? */
+ if (acpi20_which_iosapic(plat->global_vector, madt, &irq_base, &iosapic_address)) {
+ printk("acpi20_platform(): I/O SAPIC not found!\n");
+ return;
+ }
+
+ /*
+ * get vector assignment for this IRQ, set attributes, and program the IOSAPIC
+ * routing table
+ */
+ vector = iosapic_register_platform_irq(plat->int_type,
+ plat->global_vector,
+ plat->iosapic_vector,
+ plat->eid,
+ plat->id,
+ polarity,
+ trigger,
+ irq_base,
+ iosapic_address);
+ platform_irq_list[plat->int_type] = vector;
}
/*
@@ -173,8 +277,10 @@
static void __init
acpi20_parse_madt (acpi_madt_t *madt)
{
- acpi_entry_iosapic_t *iosapic;
+ acpi_entry_iosapic_t *iosapic = NULL;
+ acpi20_entry_lsapic_t *lsapic = NULL;
char *p, *end;
+ int i;
/* Base address of IPI Message Block */
if (madt->lapic_address) {
@@ -186,23 +292,27 @@
p = (char *) (madt + 1);
end = p + (madt->header.length - sizeof(acpi_madt_t));
+ /* Initialize platform interrupt vector array */
+ for (i = 0; i < ACPI_MAX_PLATFORM_IRQS; i++)
+ platform_irq_list[i] = -1;
+
/*
- * Splitted entry parsing to ensure ordering.
+ * Split-up entry parsing to ensure ordering.
*/
-
while (p < end) {
switch (*p) {
- case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE:
+ case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE:
printk("ACPI 2.0 MADT: LOCAL APIC Override\n");
acpi20_lapic_addr_override(p);
break;
- case ACPI20_ENTRY_LOCAL_SAPIC:
+ case ACPI20_ENTRY_LOCAL_SAPIC:
printk("ACPI 2.0 MADT: LOCAL SAPIC\n");
+ lsapic = (acpi20_entry_lsapic_t *) p;
acpi20_lsapic(p);
break;
- case ACPI20_ENTRY_IO_SAPIC:
+ case ACPI20_ENTRY_IO_SAPIC:
iosapic = (acpi_entry_iosapic_t *) p;
if (iosapic_init)
/*
@@ -218,26 +328,25 @@
);
break;
- case ACPI20_ENTRY_PLATFORM_INT_SOURCE:
+ case ACPI20_ENTRY_PLATFORM_INT_SOURCE:
printk("ACPI 2.0 MADT: PLATFORM INT SOURCE\n");
- acpi20_platform(p);
+ acpi20_platform(p, madt);
break;
- case ACPI20_ENTRY_LOCAL_APIC:
+ case ACPI20_ENTRY_LOCAL_APIC:
printk("ACPI 2.0 MADT: LOCAL APIC entry\n"); break;
- case ACPI20_ENTRY_IO_APIC:
+ case ACPI20_ENTRY_IO_APIC:
printk("ACPI 2.0 MADT: IO APIC entry\n"); break;
- case ACPI20_ENTRY_NMI_SOURCE:
+ case ACPI20_ENTRY_NMI_SOURCE:
printk("ACPI 2.0 MADT: NMI SOURCE entry\n"); break;
- case ACPI20_ENTRY_LOCAL_APIC_NMI:
+ case ACPI20_ENTRY_LOCAL_APIC_NMI:
printk("ACPI 2.0 MADT: LOCAL APIC NMI entry\n"); break;
- case ACPI20_ENTRY_INT_SRC_OVERRIDE:
+ case ACPI20_ENTRY_INT_SRC_OVERRIDE:
break;
- default:
+ default:
printk("ACPI 2.0 MADT: unknown entry skip\n"); break;
break;
}
-
p += p[1];
}
@@ -245,16 +354,35 @@
end = p + (madt->header.length - sizeof(acpi_madt_t));
while (p < end) {
+ switch (*p) {
+ case ACPI20_ENTRY_LOCAL_APIC:
+ if (lsapic) break;
+ printk("ACPI 2.0 MADT: LOCAL APIC entry\n");
+ /* parse local apic if there's no local Sapic */
+ break;
+ case ACPI20_ENTRY_IO_APIC:
+ if (iosapic) break;
+ printk("ACPI 2.0 MADT: IO APIC entry\n");
+ /* parse ioapic if there's no ioSapic */
+ break;
+ default:
+ break;
+ }
+ p += p[1];
+ }
+ p = (char *) (madt + 1);
+ end = p + (madt->header.length - sizeof(acpi_madt_t));
+
+ while (p < end) {
switch (*p) {
- case ACPI20_ENTRY_INT_SRC_OVERRIDE:
+ case ACPI20_ENTRY_INT_SRC_OVERRIDE:
printk("ACPI 2.0 MADT: INT SOURCE Override\n");
acpi_legacy_irq(p);
break;
- default:
+ default:
break;
}
-
p += p[1];
}
diff -urN linux-2.4.13/arch/ia64/kernel/efi.c linux-2.4.13-lia/arch/ia64/kernel/efi.c
--- linux-2.4.13/arch/ia64/kernel/efi.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/efi.c Thu Oct 4 00:21:39 2001
@@ -482,5 +482,7 @@
static void __exit
efivars_exit(void)
{
+#ifdef CONFIG_PROC_FS
remove_proc_entry(efi_dir->name, NULL);
+#endif
}
diff -urN linux-2.4.13/arch/ia64/kernel/efi_stub.S linux-2.4.13-lia/arch/ia64/kernel/efi_stub.S
--- linux-2.4.13/arch/ia64/kernel/efi_stub.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/efi_stub.S Thu Oct 4 00:21:39 2001
@@ -1,8 +1,8 @@
/*
* EFI call stub.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * David Mosberger <davidm@hpl.hp.com>
*
* This stub allows us to make EFI calls in physical mode with interrupts
* turned off. We need this because we can't call SetVirtualMap() until
@@ -68,17 +68,17 @@
;;
andcm r16=loc3,r16 // get psr with IT, DT, and RT bits cleared
mov out3=in4
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret0: mov out4=in5
mov out5=in6
mov out6=in7
- br.call.sptk.few rp=b6 // call the EFI function
+ br.call.sptk.many rp=b6 // call the EFI function
.ret1: mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2: mov ar.rsc=loc4 // restore RSE configuration
mov ar.pfs=loc1
mov rp=loc0
mov gp=loc2
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(efi_call_phys)
diff -urN linux-2.4.13/arch/ia64/kernel/efivars.c linux-2.4.13-lia/arch/ia64/kernel/efivars.c
--- linux-2.4.13/arch/ia64/kernel/efivars.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/efivars.c Wed Oct 10 17:40:37 2001
@@ -65,6 +65,7 @@
MODULE_AUTHOR("Matt Domsch <Matt_Domsch@Dell.com>");
MODULE_DESCRIPTION("/proc interface to EFI Variables");
+MODULE_LICENSE("GPL");
#define EFIVARS_VERSION "0.03 2001-Apr-20"
@@ -276,21 +277,20 @@
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
- spin_lock(&efivars_lock);
MOD_INC_USE_COUNT;
var_data = kmalloc(size, GFP_KERNEL);
if (!var_data) {
MOD_DEC_USE_COUNT;
- spin_unlock(&efivars_lock);
return -ENOMEM;
}
if (copy_from_user(var_data, buffer, size)) {
MOD_DEC_USE_COUNT;
- spin_unlock(&efivars_lock);
+ kfree(var_data);
return -EFAULT;
}
+ spin_lock(&efivars_lock);
/* Since the data ptr we've currently got is probably for
a different variable find the right variable.
diff -urN linux-2.4.13/arch/ia64/kernel/entry.S linux-2.4.13-lia/arch/ia64/kernel/entry.S
--- linux-2.4.13/arch/ia64/kernel/entry.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/entry.S Wed Oct 24 18:13:32 2001
@@ -4,7 +4,7 @@
* Kernel entry points.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Asit Mallick <Asit.K.Mallick@intel.com>
@@ -15,7 +15,7 @@
* kernel stack. This allows us to handle interrupts without changing
* to physical mode.
*
- * Jonathan Nickin <nicklin@missioncriticallinux.com>
+ * Jonathan Nicklin <nicklin@missioncriticallinux.com>
* Patrick O'Rourke <orourke@missioncriticallinux.com>
* 11/07/2000
*/
@@ -55,7 +55,7 @@
mov out1=in1 // argv
mov out2=in2 // envp
add out3=16,sp // regs
- br.call.sptk.few rp=sys_execve
+ br.call.sptk.many rp=sys_execve
.ret0: cmp4.ge p6,p7=r8,r0
mov ar.pfs=loc1 // restore ar.pfs
sxt4 r8=r8 // return 64-bit result
@@ -64,7 +64,7 @@
(p6) cmp.ne pKern,pUser=r0,r0 // a successful execve() lands us in user-mode...
mov rp=loc0
(p6) mov ar.pfs=r0 // clear ar.pfs on success
-(p7) br.ret.sptk.few rp
+(p7) br.ret.sptk.many rp
/*
* In theory, we'd have to zap this state only to prevent leaking of
@@ -85,7 +85,7 @@
ldf.fill f26=[sp]; ldf.fill f27=[sp]; mov f28=f0
ldf.fill f29=[sp]; ldf.fill f30=[sp]; mov f31=f0
mov ar.lc=0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_execve)
GLOBAL_ENTRY(sys_clone2)
@@ -99,7 +99,7 @@
mov out3=in2
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
- br.call.sptk.few rp=do_fork
+ br.call.sptk.many rp=do_fork
.ret1: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
@@ -118,7 +118,7 @@
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
- br.call.sptk.few rp=do_fork
+ br.call.sptk.many rp=do_fork
.ret2: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
@@ -143,7 +143,7 @@
shr.u r26=r20,KERNEL_PG_SHIFT
mov r16=KERNEL_PG_NUM
;;
- cmp.ne p6,p7=r26,r16 // check >= 64M && < 128M
+ cmp.ne p6,p7=r26,r16 // check whether r26 != KERNEL_PG_NUM
adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
;;
/*
@@ -151,12 +151,13 @@
* again.
*/
(p6) cmp.eq p7,p6=r26,r27
-(p6) br.cond.dpnt.few .map
+(p6) br.cond.dpnt .map
;;
-.done: ld8 sp=[r21] // load kernel stack pointer of new task
+.done:
(p6) ssm psr.ic // if we had to map, reenable the psr.ic bit FIRST!!!
;;
(p6) srlz.d
+ ld8 sp=[r21] // load kernel stack pointer of new task
mov IA64_KR(CURRENT)=r20 // update "current" application register
mov r8=r13 // return pointer to previously running task
mov r13=in0 // set "current" pointer
@@ -167,7 +168,7 @@
#ifdef CONFIG_SMP
sync.i // ensure "fc"s done by this CPU are visible on other CPUs
#endif
- br.ret.sptk.few rp // boogie on out in new context
+ br.ret.sptk.many rp // boogie on out in new context
.map:
rsm psr.i | psr.ic
@@ -184,7 +185,7 @@
mov IA64_KR(CURRENT_STACK)=r26 // remember last page we mapped...
;;
itr.d dtr[r25]=r23 // wire in new mapping...
- br.cond.sptk.many .done
+ br.cond.sptk .done
END(ia64_switch_to)
/*
@@ -212,24 +213,18 @@
.save @priunat,r17
mov r17=ar.unat // preserve caller's
.body
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r3=80,sp
;;
lfetch.fault.excl.nt1 [r3],128
-#endif
mov ar.rsc=0 // put RSE in mode: enforced lazy, little endian, pl 0
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r2=16+128,sp
;;
lfetch.fault.excl.nt1 [r2],128
lfetch.fault.excl.nt1 [r3],128
-#endif
adds r14=SW(R4)+16,sp
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
;;
lfetch.fault.excl [r2]
lfetch.fault.excl [r3]
-#endif
adds r15=SW(R5)+16,sp
;;
mov r18=ar.fpsr // preserve fpsr
@@ -309,7 +304,7 @@
st8 [r2]=r20 // save ar.bspstore
st8 [r3]=r21 // save predicate registers
mov ar.rsc=3 // put RSE back into eager mode, pl 0
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
END(save_switch_stack)
/*
@@ -321,11 +316,9 @@
ENTRY(load_switch_stack)
.prologue
.altrp b7
- .body
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+ .body
lfetch.fault.nt1 [sp]
-#endif
adds r2=SW(AR_BSPSTORE)+16,sp
adds r3=SW(AR_UNAT)+16,sp
mov ar.rsc=0 // put RSE into enforced lazy mode
@@ -426,7 +419,7 @@
;;
(p6) st4 [r2]=r8
(p6) mov r8=-1
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_syscall)
/*
@@ -441,11 +434,11 @@
.body
mov loc2=b6
;;
- br.call.sptk.few rp=syscall_trace
+ br.call.sptk.many rp=syscall_trace
.ret3: mov rp=loc0
mov ar.pfs=loc1
mov b6=loc2
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(invoke_syscall_trace)
/*
@@ -462,21 +455,21 @@
GLOBAL_ENTRY(ia64_trace_syscall)
PT_REGS_UNWIND_INFO(0)
- br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
-.ret6: br.call.sptk.few rp=b6 // do the syscall
+ br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch syscall args
+.ret6: br.call.sptk.many rp=b6 // do the syscall
strace_check_retval:
cmp.lt p6,p0=r8,r0 // syscall failed?
adds r2=PT(R8)+16,sp // r2 = &pt_regs.r8
adds r3=PT(R10)+16,sp // r3 = &pt_regs.r10
mov r10=0
-(p6) br.cond.sptk.few strace_error // syscall failed ->
+(p6) br.cond.sptk strace_error // syscall failed ->
;; // avoid RAW on r10
strace_save_retval:
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
ia64_strace_leave_kernel:
- br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.rety: br.cond.sptk.many ia64_leave_kernel
+ br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch return value
+.rety: br.cond.sptk ia64_leave_kernel
strace_error:
ld8 r3=[r2] // load pt_regs.r8
@@ -487,7 +480,7 @@
;;
(p6) mov r10=-1
(p6) mov r8=r9
- br.cond.sptk.few strace_save_retval
+ br.cond.sptk strace_save_retval
END(ia64_trace_syscall)
GLOBAL_ENTRY(ia64_ret_from_clone)
@@ -497,7 +490,7 @@
* Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
* address of the previously executing task.
*/
- br.call.sptk.few rp=invoke_schedule_tail
+ br.call.sptk.many rp=ia64_invoke_schedule_tail
.ret8:
adds r2=IA64_TASK_PTRACE_OFFSET,r13
;;
@@ -505,7 +498,7 @@
;;
mov r8=0
tbit.nz p6,p0=r2,PT_TRACESYS_BIT
-(p6) br strace_check_retval
+(p6) br.cond.spnt strace_check_retval
;; // added stop bits to prevent r8 dependency
END(ia64_ret_from_clone)
// fall through
@@ -519,7 +512,7 @@
(p6) st8.spill [r2]=r8 // store return value in slot for r8 and set unat bit
.mem.offset 8,0
(p6) st8.spill [r3]=r0 // clear error indication in slot for r10 and set unat bit
-(p7) br.cond.spnt.few handle_syscall_error // handle potential syscall failure
+(p7) br.cond.spnt handle_syscall_error // handle potential syscall failure
END(ia64_ret_from_syscall)
// fall through
GLOBAL_ENTRY(ia64_leave_kernel)
@@ -527,22 +520,22 @@
lfetch.fault [sp]
movl r14=.restart
;;
- MOVBR(.ret.sptk,rp,r14,.restart)
+ mov.ret.sptk rp=r14,.restart
.restart:
adds r17=IA64_TASK_NEED_RESCHED_OFFSET,r13
adds r18=IA64_TASK_SIGPENDING_OFFSET,r13
#ifdef CONFIG_PERFMON
- adds r19=IA64_TASK_PFM_NOTIFY_OFFSET,r13
+ adds r19=IA64_TASK_PFM_MUST_BLOCK_OFFSET,r13
#endif
;;
#ifdef CONFIG_PERFMON
- ld8 r19=[r19] // load current->task.pfm_notify
+(pUser) ld8 r19=[r19] // load current->thread.pfm_must_block
#endif
- ld8 r17=[r17] // load current->need_resched
- ld4 r18=[r18] // load current->sigpending
+(pUser) ld8 r17=[r17] // load current->need_resched
+(pUser) ld4 r18=[r18] // load current->sigpending
;;
#ifdef CONFIG_PERFMON
- cmp.ne p9,p0=r19,r0 // current->task.pfm_notify != 0?
+(pUser) cmp.ne.unc p9,p0=r19,r0 // current->thread.pfm_must_block != 0?
#endif
(pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0?
(pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0?
@@ -550,7 +543,7 @@
adds r2=PT(R8)+16,r12
adds r3=PT(R9)+16,r12
#ifdef CONFIG_PERFMON
-(p9) br.call.spnt.many b7=pfm_overflow_notify
+(p9) br.call.spnt.many b7=pfm_block_on_overflow
#endif
#if __GNUC__ < 3
(p7) br.call.spnt.many b7=invoke_schedule
@@ -650,13 +643,13 @@
movl r17=PERCPU_ADDR+IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET
;;
ld4 r17=[r17] // r17 = cpu_data->phys_stacked_size_p8
-(pKern) br.cond.dpnt.few skip_rbs_switch
+(pKern) br.cond.dpnt skip_rbs_switch
/*
* Restore user backing store.
*
* NOTE: alloc, loadrs, and cover can't be predicated.
*/
-(pNonSys) br.cond.dpnt.few dont_preserve_current_frame
+(pNonSys) br.cond.dpnt dont_preserve_current_frame
cover // add current frame into dirty partition
;;
mov r19=ar.bsp // get new backing store pointer
@@ -687,7 +680,7 @@
shladd in0=loc1,3,r17
mov in1=0
;;
- .align 32
+// .align 32 // gas-2.11.90 is unable to generate a stop bit after .align
rse_clear_invalid:
// cycle 0
{ .mii
@@ -706,7 +699,7 @@
}{ .mib
mov loc3=0
mov loc4=0
-(pRecurse) br.call.sptk.few b6=rse_clear_invalid
+(pRecurse) br.call.sptk.many b6=rse_clear_invalid
}{ .mfi // cycle 2
mov loc5=0
@@ -715,7 +708,7 @@
}{ .mib
mov loc6=0
mov loc7=0
-(pReturn) br.ret.sptk.few b6
+(pReturn) br.ret.sptk.many b6
}
# undef pRecurse
# undef pReturn
@@ -761,24 +754,24 @@
;;
.mem.offset 0,0; st8.spill [r2]=r9 // store errno in pt_regs.r8 and set unat bit
.mem.offset 8,0; st8.spill [r3]=r10 // store error indication in pt_regs.r10 and set unat bit
- br.cond.sptk.many ia64_leave_kernel
+ br.cond.sptk ia64_leave_kernel
END(handle_syscall_error)
/*
* Invoke schedule_tail(task) while preserving in0-in7, which may be needed
* in case a system call gets restarted.
*/
-ENTRY(invoke_schedule_tail)
+GLOBAL_ENTRY(ia64_invoke_schedule_tail)
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,1,0
mov loc0=rp
mov out0=r8 // Address of previous task
;;
- br.call.sptk.few rp=schedule_tail
+ br.call.sptk.many rp=schedule_tail
.ret11: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
-END(invoke_schedule_tail)
+END(ia64_invoke_schedule_tail)
#if __GNUC__ < 3
@@ -797,7 +790,7 @@
mov loc0=rp
;;
.body
- br.call.sptk.few rp=schedule
+ br.call.sptk.many rp=schedule
.ret14: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
@@ -824,7 +817,7 @@
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
.body
- br.call.sptk.few rp=ia64_do_signal
+ br.call.sptk.many rp=ia64_do_signal
.ret15: .restore sp
adds sp=16,sp // pop scratch stack space
;;
@@ -849,7 +842,7 @@
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
.body
- br.call.sptk.few rp=ia64_rt_sigsuspend
+ br.call.sptk.many rp=ia64_rt_sigsuspend
.ret17: .restore sp
adds sp=16,sp // pop scratch stack space
;;
@@ -871,15 +864,15 @@
cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
;;
adds out0=16,sp // out0 = &sigscratch
- br.call.sptk.few rp=ia64_rt_sigreturn
+ br.call.sptk.many rp=ia64_rt_sigreturn
.ret19: .restore sp 0
adds sp=16,sp
;;
ld8 r9=[sp] // load new ar.unat
- MOVBR(.sptk,b7,r8,ia64_leave_kernel)
+ mov.sptk b7=r8,ia64_leave_kernel
;;
mov ar.unat=r9
- br b7
+ br.many b7
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
@@ -890,7 +883,7 @@
mov r16=r0
.prologue
DO_SAVE_SWITCH_STACK
- br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
+ br.call.sptk.many rp=ia64_handle_unaligned // stack frame setup in ivt
.ret21: .body
DO_LOAD_SWITCH_STACK
br.cond.sptk.many rp // goes to ia64_leave_kernel
@@ -920,14 +913,14 @@
adds out0=16,sp // &info
mov out1=r13 // current
adds out2=16+EXTRA_FRAME_SIZE,sp // &switch_stack
- br.call.sptk.few rp=unw_init_frame_info
+ br.call.sptk.many rp=unw_init_frame_info
1: adds out0=16,sp // &info
mov b6=loc2
mov loc2=gp // save gp across indirect function call
;;
ld8 gp=[in0]
mov out1=in1 // arg
- br.call.sptk.few rp=b6 // invoke the callback function
+ br.call.sptk.many rp=b6 // invoke the callback function
1: mov gp=loc2 // restore gp
// For now, we don't allow changing registers from within
@@ -1026,7 +1019,7 @@
data8 sys_setpriority
data8 sys_statfs
data8 sys_fstatfs
- data8 ia64_ni_syscall // 1105
+ data8 sys_gettid // 1105
data8 sys_semget
data8 sys_semop
data8 sys_semctl
@@ -1137,7 +1130,7 @@
data8 sys_clone2
data8 sys_getdents64
data8 sys_getunwind // 1215
- data8 ia64_ni_syscall
+ data8 sys_readahead
data8 ia64_ni_syscall
data8 ia64_ni_syscall
data8 ia64_ni_syscall
diff -urN linux-2.4.13/arch/ia64/kernel/entry.h linux-2.4.13-lia/arch/ia64/kernel/entry.h
--- linux-2.4.13/arch/ia64/kernel/entry.h Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/entry.h Thu Oct 4 00:21:39 2001
@@ -1,12 +1,5 @@
#include <linux/config.h>
-/* XXX fixme */
-#if defined(CONFIG_ITANIUM_B1_SPECIFIC)
-# define MOVBR(type,br,gr,lbl) mov br=gr
-#else
-# define MOVBR(type,br,gr,lbl) mov##type br=gr,lbl
-#endif
-
/*
* Preserved registers that are shared between code in ivt.S and entry.S. Be
* careful not to step on these!
@@ -62,7 +55,7 @@
;; \
.fframe IA64_SWITCH_STACK_SIZE; \
adds sp=-IA64_SWITCH_STACK_SIZE,sp; \
- MOVBR(.ret.sptk,b7,r28,1f); \
+ mov.ret.sptk b7=r28,1f; \
SWITCH_STACK_SAVES(0); \
br.cond.sptk.many save_switch_stack; \
1:
@@ -71,7 +64,7 @@
movl r28=1f; \
;; \
invala; \
- MOVBR(.ret.sptk,b7,r28,1f); \
+ mov.ret.sptk b7=r28,1f; \
br.cond.sptk.many load_switch_stack; \
1: .restore sp; \
adds sp=IA64_SWITCH_STACK_SIZE,sp
diff -urN linux-2.4.13/arch/ia64/kernel/fw-emu.c linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c
--- linux-2.4.13/arch/ia64/kernel/fw-emu.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c Wed Oct 24 18:13:46 2001
@@ -174,6 +174,43 @@
" ;;\n"
" mov ar.lc=r9\n"
" mov r8=r0\n"
+" ;;\n"
+"1: cmp.eq p6,p7\x15,r28 /* PAL_PERF_MON_INFO */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" mov r8=0 /* status = 0 */\n"
+" movl r9 =0x12082004 /* generic=4 width2 retired=8 cycles\x18 */\n"
+" mov r10=0 /* reserved */\n"
+" mov r11=0 /* reserved */\n"
+" mov r16=0xffff /* implemented PMC */\n"
+" mov r17=0xffff /* implemented PMD */\n"
+" add r18=8,r29 /* second index */\n"
+" ;;\n"
+" st8 [r29]=r16,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r17,16 /* store implemented PMD */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" mov r16=0xf0 /* cycles count capable PMC */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" mov r17=0x10 /* retired bundles capable PMC */\n"
+" ;;\n"
+" st8 [r29]=r16,16 /* store cycles capable */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r17,16 /* store retired bundle capable */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
"1: br.cond.sptk.few rp\n"
"stacked:\n"
" br.ret.sptk.few rp\n"
@@ -414,11 +451,6 @@
#ifdef CONFIG_IA64_SDV
strcpy(sal_systab->oem_id, "Intel");
strcpy(sal_systab->product_id, "SDV");
-#endif
-
-#ifdef CONFIG_IA64_SGI_SN1_SIM
- strcpy(sal_systab->oem_id, "SGI");
- strcpy(sal_systab->product_id, "SN1");
#endif
/* fill in an entry point: */
diff -urN linux-2.4.13/arch/ia64/kernel/gate.S linux-2.4.13-lia/arch/ia64/kernel/gate.S
--- linux-2.4.13/arch/ia64/kernel/gate.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/gate.S Thu Oct 4 00:21:39 2001
@@ -3,7 +3,7 @@
* region. For now, it contains the signal trampoline code only.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -18,7 +18,6 @@
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
# define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET)
# define ARG2_OFF (16 + IA64_SIGFRAME_ARG2_OFFSET)
-# define RBS_BASE_OFF (16 + IA64_SIGFRAME_RBS_BASE_OFFSET)
# define SIGHANDLER_OFF (16 + IA64_SIGFRAME_HANDLER_OFFSET)
# define SIGCONTEXT_OFF (16 + IA64_SIGFRAME_SIGCONTEXT_OFFSET)
@@ -32,6 +31,8 @@
# define PR_OFF IA64_SIGCONTEXT_PR_OFFSET
# define RP_OFF IA64_SIGCONTEXT_B0_OFFSET
# define SP_OFF IA64_SIGCONTEXT_R12_OFFSET
+# define RBS_BASE_OFF IA64_SIGCONTEXT_RBS_BASE_OFFSET
+# define LOADRS_OFF IA64_SIGCONTEXT_LOADRS_OFFSET
# define base0 r2
# define base1 r3
/*
@@ -73,34 +74,37 @@
.vframesp SP_OFF+SIGCONTEXT_OFF
.body
- .prologue
+ .label_state 1
+
adds base0=SIGHANDLER_OFF,sp
- adds base1=RBS_BASE_OFF,sp
+ adds base1=RBS_BASE_OFF+SIGCONTEXT_OFF,sp
br.call.sptk.many rp=1f
1:
ld8 r17=[base0],(ARG0_OFF-SIGHANDLER_OFF) // get pointer to signal handler's plabel
- ld8 r15=[base1],(ARG1_OFF-RBS_BASE_OFF) // get address of new RBS base (or NULL)
+ ld8 r15=[base1] // get address of new RBS base (or NULL)
cover // push args in interrupted frame onto backing store
;;
+ cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
+ mov.m r9=ar.bsp // fetch ar.bsp
+ .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+(p8) br.cond.spnt setup_rbs // yup -> (clobbers r14, r15, and r16)
+back_from_setup_rbs:
+
.save ar.pfs, r8
alloc r8=ar.pfs,0,0,3,0 // get CFM0, EC0, and CPL0 into r8
ld8 out0=[base0],16 // load arg0 (signum)
+ adds base1=(ARG1_OFF-(RBS_BASE_OFF+SIGCONTEXT_OFF)),base1
;;
ld8 out1=[base1] // load arg1 (siginfop)
ld8 r10=[r17],8 // get signal handler entry point
;;
ld8 out2=[base0] // load arg2 (sigcontextp)
ld8 gp=[r17] // get signal handler's global pointer
- cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
- mov.m r17=ar.bsp // fetch ar.bsp
- .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
-(p8) br.cond.spnt.few setup_rbs // yup -> (clobbers r14 and r16)
-back_from_setup_rbs:
adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
;;
.spillsp ar.bsp, BSP_OFF+SIGCONTEXT_OFF
- st8 [base0]=r17,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
+ st8 [base0]=r9,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
dep r8=0,r8,38,26 // clear EC0, CPL0 and reserved bits
adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
;;
@@ -123,7 +127,7 @@
;;
stf.spill [base0]=f14,32
stf.spill [base1]=f15,32
- br.call.sptk.few rp=b6 // call the signal handler
+ br.call.sptk.many rp=b6 // call the signal handler
.ret0: adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
;;
ld8 r15=[base0],(CFM_OFF-BSP_OFF) // fetch sc_ar_bsp and advance to CFM_OFF
@@ -131,7 +135,7 @@
;;
ld8 r8=[base0] // restore (perhaps modified) CFM0, EC0, and CPL0
cmp.ne p8,p0=r14,r15 // do we need to restore the rbs?
-(p8) br.cond.spnt.few restore_rbs // yup -> (clobbers r14 and r16)
+(p8) br.cond.spnt restore_rbs // yup -> (clobbers r14 and r16)
;;
back_from_restore_rbs:
adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
@@ -154,30 +158,52 @@
mov r15=__NR_rt_sigreturn
break __BREAK_SYSCALL
+ .body
+ .copy_state 1
setup_rbs:
- flushrs // must be first in insn
mov ar.rsc=0 // put RSE into enforced lazy mode
- adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
- mov r14=ar.rnat // get rnat as updated by flushrs
- mov ar.bspstore=r15 // set new register backing store area
+ .save ar.rnat, r16
+ mov r16=ar.rnat // save RNaT before switching backing store area
+ adds r14=(RNAT_OFF+SIGCONTEXT_OFF),sp
+
+ mov ar.bspstore=r15 // switch over to new register backing store area
;;
.spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
- st8 [r16]=r14 // save sc_ar_rnat
+ st8 [r14]=r16 // save sc_ar_rnat
+ adds r14=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+
+ mov.m r16=ar.bsp // sc_loadrs <- (new bsp - new bspstore) << 16
+ ;;
+ invala
+ sub r15=r16,r15
+ ;;
+ shl r15=r15,16
+ ;;
+ st8 [r14]=r15 // save sc_loadrs
mov ar.rsc=0xf // set RSE into eager mode, pl 3
- invala // invalidate ALAT
- br.cond.sptk.many back_from_setup_rbs
+ br.cond.sptk back_from_setup_rbs
+ .prologue
+ .copy_state 1
+ .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+ .body
restore_rbs:
- flushrs
- mov ar.rsc=0 // put RSE into enforced lazy mode
+ alloc r2=ar.pfs,0,0,0,0 // alloc null frame
+ adds r16=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ ld8 r14=[r16]
adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
+ mov ar.rsc=r14 // put RSE into enforced lazy mode
ld8 r14=[r16] // get new rnat
- mov ar.bspstore=r15 // set old register backing store area
;;
- mov ar.rnat=r14 // establish new rnat
+ loadrs // restore dirty partition
+ ;;
+ mov ar.bspstore=r15 // switch back to old register backing store area
+ ;;
+ mov ar.rnat=r14 // restore RNaT
mov ar.rsc=0xf // (will be restored later on from sc_ar_rsc)
// invala not necessary as that will happen when returning to user-mode
- br.cond.sptk.many back_from_restore_rbs
+ br.cond.sptk back_from_restore_rbs
END(ia64_sigtramp)
diff -urN linux-2.4.13/arch/ia64/kernel/head.S linux-2.4.13-lia/arch/ia64/kernel/head.S
--- linux-2.4.13/arch/ia64/kernel/head.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/head.S Thu Oct 4 00:21:39 2001
@@ -6,8 +6,8 @@
* entry point.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 2001 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Intel Corp.
@@ -86,7 +86,8 @@
/*
* Switch into virtual mode:
*/
- movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN)
+ movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN \
+ |IA64_PSR_DI)
;;
mov cr.ipsr=r16
movl r17=1f
@@ -183,31 +184,31 @@
alloc r2=ar.pfs,0,0,2,0
movl out0=alive_msg
;;
- br.call.sptk.few rp=early_printk
+ br.call.sptk.many rp=early_printk
1: // force new bundle
#endif /* CONFIG_IA64_EARLY_PRINTK */
#ifdef CONFIG_SMP
-(isAP) br.call.sptk.few rp=start_secondary
+(isAP) br.call.sptk.many rp=start_secondary
.ret0:
-(isAP) br.cond.sptk.few self
+(isAP) br.cond.sptk self
#endif
// This is executed by the bootstrap processor (bsp) only:
#ifdef CONFIG_IA64_FW_EMU
// initialize PAL & SAL emulator:
- br.call.sptk.few rp=sys_fw_init
+ br.call.sptk.many rp=sys_fw_init
.ret1:
#endif
- br.call.sptk.few rp=start_kernel
+ br.call.sptk.many rp=start_kernel
.ret2: addl r3=@ltoff(halt_msg),gp
;;
alloc r2=ar.pfs,8,0,2,0
;;
ld8 out0=[r3]
- br.call.sptk.few b0=console_print
-self: br.sptk.few self // endless loop
+ br.call.sptk.many b0=console_print
+self: br.sptk.many self // endless loop
END(_start)
GLOBAL_ENTRY(ia64_save_debug_regs)
@@ -218,7 +219,7 @@
add r19=IA64_NUM_DBG_REGS*8,in0
;;
1: mov r16=dbr[r18]
-#if defined(CONFIG_ITANIUM_C0_SPECIFIC)
+#ifdef CONFIG_ITANIUM
;;
srlz.d
#endif
@@ -227,17 +228,15 @@
;;
st8.nta [in0]=r16,8
st8.nta [r19]=r17,8
- br.cloop.sptk.few 1b
+ br.cloop.sptk.many 1b
;;
mov ar.lc=r20 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_save_debug_regs)
GLOBAL_ENTRY(ia64_load_debug_regs)
alloc r16=ar.pfs,1,0,0,0
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
lfetch.nta [in0]
-#endif
mov r20=ar.lc // preserve ar.lc
add r19=IA64_NUM_DBG_REGS*8,in0
mov ar.lc=IA64_NUM_DBG_REGS-1
@@ -248,15 +247,15 @@
add r18=1,r18
;;
mov dbr[r18]=r16
-#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) || defined(CONFIG_ITANIUM_C0_SPECIFIC)
+#ifdef CONFIG_ITANIUM
;;
- srlz.d
+ srlz.d // Errata 132 (NoFix status)
#endif
mov ibr[r18]=r17
- br.cloop.sptk.few 1b
+ br.cloop.sptk.many 1b
;;
mov ar.lc=r20 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_load_debug_regs)
GLOBAL_ENTRY(__ia64_save_fpu)
@@ -406,7 +405,7 @@
;;
stf.spill.nta [in0]=f126,32
stf.spill.nta [ r3]=f127,32
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_save_fpu)
GLOBAL_ENTRY(__ia64_load_fpu)
@@ -556,7 +555,7 @@
;;
ldf.fill.nta f126=[in0],32
ldf.fill.nta f127=[ r3],32
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_load_fpu)
GLOBAL_ENTRY(__ia64_init_fpu)
@@ -690,7 +689,7 @@
;;
ldf.fill f126=[sp]
mov f127=f0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_init_fpu)
/*
@@ -738,7 +737,7 @@
rfi // must be last insn in group
;;
1: mov rp=r14
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_switch_mode)
#ifdef CONFIG_IA64_BRL_EMU
@@ -752,7 +751,7 @@
alloc r16=ar.pfs,1,0,0,0; \
mov reg=r32; \
;; \
- br.ret.sptk rp; \
+ br.ret.sptk.many rp; \
END(ia64_set_##reg)
SET_REG(b1);
@@ -816,12 +815,11 @@
;;
cmp.ne p15,p0=tmp,r0
mov tmp=ar.itc
-(p15) br.cond.sptk.few .retry // lock is still busy
+(p15) br.cond.sptk .retry // lock is still busy
;;
// try acquiring lock (we know ar.ccv is still zero!):
mov tmp=1
;;
- IA64_SEMFIX_INSN
cmpxchg4.acq tmp=[r31],tmp,ar.ccv
;;
cmp.eq p15,p0=tmp,r0
diff -urN linux-2.4.13/arch/ia64/kernel/ia64_ksyms.c linux-2.4.13-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-2.4.13/arch/ia64/kernel/ia64_ksyms.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/ia64_ksyms.c Thu Oct 4 00:21:39 2001
@@ -145,4 +145,3 @@
#include <linux/proc_fs.h>
extern struct proc_dir_entry *efi_dir;
EXPORT_SYMBOL(efi_dir);
-
diff -urN linux-2.4.13/arch/ia64/kernel/iosapic.c linux-2.4.13-lia/arch/ia64/kernel/iosapic.c
--- linux-2.4.13/arch/ia64/kernel/iosapic.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/iosapic.c Thu Oct 4 00:21:39 2001
@@ -53,6 +53,7 @@
#include <asm/acpi-ext.h>
#include <asm/acpikcfg.h>
#include <asm/delay.h>
+#include <asm/hw_irq.h>
#include <asm/io.h>
#include <asm/iosapic.h>
#include <asm/machvec.h>
@@ -325,7 +326,7 @@
set_affinity: iosapic_set_affinity
};
-static unsigned int
+unsigned int
iosapic_version (char *addr)
{
/*
@@ -342,6 +343,113 @@
}
/*
+ * ACPI can describe IOSAPIC interrupts via static tables and namespace
+ * methods. This provides an interface to register those interrupts and
+ * program the IOSAPIC RTE.
+ */
+int
+iosapic_register_irq (u32 global_vector, unsigned long polarity, unsigned long
+ edge_triggered, u32 base_irq, char *iosapic_address)
+{
+ irq_desc_t *idesc;
+ struct hw_interrupt_type *irq_type;
+ int vector;
+
+ vector = iosapic_irq_to_vector(global_vector);
+ if (vector < 0)
+ vector = ia64_alloc_irq();
+
+ /* fill in information from this vector's IOSAPIC */
+ iosapic_irq[vector].addr = iosapic_address;
+ iosapic_irq[vector].base_irq = base_irq;
+ iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq;
+ iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+
+ if (edge_triggered) {
+ iosapic_irq[vector].trigger = IOSAPIC_EDGE;
+ irq_type = &irq_type_iosapic_edge;
+ } else {
+ iosapic_irq[vector].trigger = IOSAPIC_LEVEL;
+ irq_type = &irq_type_iosapic_level;
+ }
+
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type) {
+ if (idesc->handler != &no_irq_type)
+ printk("iosapic_register_irq(): changing vector 0x%02x from"
+ "%s to %s\n", vector, idesc->handler->typename, irq_type->typename);
+ idesc->handler = irq_type;
+ }
+
+ printk("IOSAPIC %x(%s,%s) -> Vector %x\n", global_vector,
+ (polarity ? "high" : "low"), (edge_triggered ? "edge" : "level"), vector);
+
+ /* program the IOSAPIC routing table */
+ set_rte(vector, (ia64_get_lid() >> 16) & 0xffff);
+ return vector;
+}
+
+/*
+ * ACPI calls this when it finds an entry for a platform interrupt.
+ * Note that the irq_base and IOSAPIC address must be set in iosapic_init().
+ */
+int
+iosapic_register_platform_irq (u32 int_type, u32 global_vector, u32 iosapic_vector,
+ u16 eid, u16 id, unsigned long polarity,
+ unsigned long edge_triggered, u32 base_irq, char *iosapic_address)
+{
+ struct hw_interrupt_type *irq_type;
+ irq_desc_t *idesc;
+ int vector;
+
+ switch (int_type) {
+ case ACPI20_ENTRY_PIS_CPEI:
+ vector = IA64_PCE_VECTOR;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+ break;
+ case ACPI20_ENTRY_PIS_INIT:
+ vector = ia64_alloc_irq();
+ iosapic_irq[vector].dmode = IOSAPIC_INIT;
+ break;
+ default:
+ printk("iosapic_register_platform_irq(): invalid int type\n");
+ return -1;
+ }
+
+ /* fill in information from this vector's IOSAPIC */
+ iosapic_irq[vector].addr = iosapic_address;
+ iosapic_irq[vector].base_irq = base_irq;
+ iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq;
+ iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW;
+
+ if (edge_triggered) {
+ iosapic_irq[vector].trigger = IOSAPIC_EDGE;
+ irq_type = &irq_type_iosapic_edge;
+ } else {
+ iosapic_irq[vector].trigger = IOSAPIC_LEVEL;
+ irq_type = &irq_type_iosapic_level;
+ }
+
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type) {
+ if (idesc->handler != &no_irq_type)
+ printk("iosapic_register_platform_irq(): changing vector 0x%02x from"
+ "%s to %s\n", vector, idesc->handler->typename, irq_type->typename);
+ idesc->handler = irq_type;
+ }
+
+ printk("PLATFORM int %x: IOSAPIC %x(%s,%s) -> Vector %x CPU %.02u:%.02u\n",
+ int_type, global_vector, (polarity ? "high" : "low"),
+ (edge_triggered ? "edge" : "level"), vector, eid, id);
+
+ /* program the IOSAPIC routing table */
+ set_rte(vector, ((id << 8) | eid) & 0xffff);
+ return vector;
+}
+
+
+/*
* ACPI calls this when it finds an entry for a legacy ISA interrupt. Note that the
* irq_base and IOSAPIC address must be set in iosapic_init().
*/
@@ -436,7 +544,7 @@
/* the interrupt route is for another controller... */
continue;
- if (irq < 16)
+ if (pcat_compat && (irq < 16))
vector = isa_irq_to_vector(irq);
else {
vector = iosapic_irq_to_vector(irq);
@@ -515,6 +623,23 @@
printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> 0x%02x\n",
dev->bus->number, PCI_SLOT(dev->devfn), pin, vector);
dev->irq = vector;
+
+#ifdef CONFIG_SMP
+ /*
+ * For platforms that do not support interrupt redirect
+ * via the XTP interface, we can round-robin the PCI
+ * device interrupts to the processors
+ */
+ if (!(smp_int_redirect & SMP_IRQ_REDIRECTION)) {
+ static int cpu_index = 0;
+
+ set_rte(vector, cpu_physical_id(cpu_index) & 0xffff);
+
+ cpu_index++;
+ if (cpu_index >= smp_num_cpus)
+ cpu_index = 0;
+ }
+#endif
}
}
/*
diff -urN linux-2.4.13/arch/ia64/kernel/irq.c linux-2.4.13-lia/arch/ia64/kernel/irq.c
--- linux-2.4.13/arch/ia64/kernel/irq.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/irq.c Thu Oct 4 00:21:39 2001
@@ -33,6 +33,7 @@
#include <linux/irq.h>
#include <linux/proc_fs.h>
+#include <asm/atomic.h>
#include <asm/io.h>
#include <asm/smp.h>
#include <asm/system.h>
@@ -121,7 +122,10 @@
end_none
};
-volatile unsigned long irq_err_count;
+atomic_t irq_err_count;
+#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG)
+atomic_t irq_mis_count;
+#endif
/*
* Generic, controller-independent functions:
@@ -164,14 +168,17 @@
p += sprintf(p, "%10u ",
nmi_count(cpu_logical_map(j)));
p += sprintf(p, "\n");
-#if defined(CONFIG_SMP) && defined(__i386__)
+#if defined(CONFIG_SMP) && defined(CONFIG_X86)
p += sprintf(p, "LOC: ");
for (j = 0; j < smp_num_cpus; j++)
p += sprintf(p, "%10u ",
apic_timer_irqs[cpu_logical_map(j)]);
p += sprintf(p, "\n");
#endif
- p += sprintf(p, "ERR: %10lu\n", irq_err_count);
+ p += sprintf(p, "ERR: %10u\n", atomic_read(&irq_err_count));
+#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG)
+ p += sprintf(p, "MIS: %10u\n", atomic_read(&irq_mis_count));
+#endif
return p - buf;
}
@@ -183,7 +190,7 @@
#ifdef CONFIG_SMP
unsigned int global_irq_holder = NO_PROC_ID;
-volatile unsigned long global_irq_lock; /* long for set_bit --RR */
+unsigned volatile long global_irq_lock; /* pedantic: long for set_bit --RR */
extern void show_stack(unsigned long* esp);
@@ -201,14 +208,14 @@
printk(" %d",bh_count(i));
printk(" ]\nStack dumps:");
-#if defined(__ia64__)
+#if defined(CONFIG_IA64)
/*
* We can't unwind the stack of another CPU without access to
* the registers of that CPU. And sending an IPI when we're
* in a potentially wedged state doesn't sound like a smart
* idea.
*/
-#elif defined(__i386__)
+#elif defined(CONFIG_X86)
for(i=0;i< smp_num_cpus;i++) {
unsigned long esp;
if(i==cpu)
@@ -261,7 +268,7 @@
/*
* We have to allow irqs to arrive between __sti and __cli
*/
-# ifdef __ia64__
+# ifdef CONFIG_IA64
# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop 0")
# else
# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop")
@@ -331,6 +338,9 @@
/* Uhhuh.. Somebody else got it. Wait.. */
do {
do {
+#ifdef CONFIG_X86
+ rep_nop();
+#endif
} while (test_bit(0,&global_irq_lock));
} while (test_and_set_bit(0,&global_irq_lock));
}
@@ -364,7 +374,7 @@
{
unsigned int flags;
-#ifdef __ia64__
+#ifdef CONFIG_IA64
__save_flags(flags);
if (flags & IA64_PSR_I) {
__cli();
@@ -403,7 +413,7 @@
int cpu = smp_processor_id();
__save_flags(flags);
-#ifdef __ia64__
+#ifdef CONFIG_IA64
local_enabled = (flags & IA64_PSR_I) != 0;
#else
local_enabled = (flags >> EFLAGS_IF_SHIFT) & 1;
@@ -476,13 +486,19 @@
return status;
}
-/*
- * Generic enable/disable code: this just calls
- * down into the PIC-specific version for the actual
- * hardware disable after having gotten the irq
- * controller lock.
+/**
+ * disable_irq_nosync - disable an irq without waiting
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables and Enables are
+ * nested.
+ * Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
+ *
+ * This function may be called from IRQ context.
*/
-void inline disable_irq_nosync(unsigned int irq)
+
+inline void disable_irq_nosync(unsigned int irq)
{
irq_desc_t *desc = irq_desc(irq);
unsigned long flags;
@@ -495,10 +511,19 @@
spin_unlock_irqrestore(&desc->lock, flags);
}
-/*
- * Synchronous version of the above, making sure the IRQ is
- * no longer running on any other IRQ..
+/**
+ * disable_irq - disable an irq and wait for completion
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Enables and Disables are
+ * nested.
+ * This function waits for any pending IRQ handlers for this interrupt
+ * to complete before returning. If you use this function while
+ * holding a resource the IRQ handler may need you will deadlock.
+ *
+ * This function may be called - with care - from IRQ context.
*/
+
void disable_irq(unsigned int irq)
{
disable_irq_nosync(irq);
@@ -512,6 +537,17 @@
#endif
}
+/**
+ * enable_irq - enable handling of an irq
+ * @irq: Interrupt to enable
+ *
+ * Undoes the effect of one call to disable_irq(). If this
+ * matches the last disable, processing of interrupts on this
+ * IRQ line is re-enabled.
+ *
+ * This function may be called from IRQ context.
+ */
+
void enable_irq(unsigned int irq)
{
irq_desc_t *desc = irq_desc(irq);
@@ -533,7 +569,8 @@
desc->depth--;
break;
case 0:
- printk("enable_irq() unbalanced from %p\n", (void *) __builtin_return_address(0));
+ printk("enable_irq(%u) unbalanced from %p\n",
+ irq, (void *) __builtin_return_address(0));
}
spin_unlock_irqrestore(&desc->lock, flags);
}
@@ -626,11 +663,41 @@
desc->handler->end(irq);
spin_unlock(&desc->lock);
}
- if (local_softirq_pending())
- do_softirq();
return 1;
}
+/**
+ * request_irq - allocate an interrupt line
+ * @irq: Interrupt line to allocate
+ * @handler: Function to be called when the IRQ occurs
+ * @irqflags: Interrupt type flags
+ * @devname: An ascii name for the claiming device
+ * @dev_id: A cookie passed back to the handler function
+ *
+ * This call allocates interrupt resources and enables the
+ * interrupt line and IRQ handling. From the point this
+ * call is made your handler function may be invoked. Since
+ * your handler function must clear any interrupt the board
+ * raises, you must take care both to initialise your hardware
+ * and to set up the interrupt handler in the right order.
+ *
+ * Dev_id must be globally unique. Normally the address of the
+ * device data structure is used as the cookie. Since the handler
+ * receives this value it makes sense to use it.
+ *
+ * If your interrupt is shared you must pass a non NULL dev_id
+ * as this is required when freeing the interrupt.
+ *
+ * Flags:
+ *
+ * SA_SHIRQ Interrupt is shared
+ *
+ * SA_INTERRUPT Disable local interrupts while processing
+ *
+ * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ *
+ */
+
int request_irq(unsigned int irq,
void (*handler)(int, void *, struct pt_regs *),
unsigned long irqflags,
@@ -676,6 +743,24 @@
return retval;
}
+/**
+ * free_irq - free an interrupt
+ * @irq: Interrupt line to free
+ * @dev_id: Device identity to free
+ *
+ * Remove an interrupt handler. The handler is removed and if the
+ * interrupt line is no longer in use by any driver it is disabled.
+ * On a shared IRQ the caller must ensure the interrupt is disabled
+ * on the card it drives before calling this function. The function
+ * does not return until any executing interrupts for this IRQ
+ * have completed.
+ *
+ * This function may be called from interrupt context.
+ *
+ * Bugs: Attempting to free an irq in a handler for the same irq hangs
+ * the machine.
+ */
+
void free_irq(unsigned int irq, void *dev_id)
{
irq_desc_t *desc;
@@ -726,6 +811,17 @@
* with "IRQ_WAITING" cleared and the interrupt
* disabled.
*/
+
+static DECLARE_MUTEX(probe_sem);
+
+/**
+ * probe_irq_on - begin an interrupt autodetect
+ *
+ * Commence probing for an interrupt. The interrupts are scanned
+ * and a mask of potential interrupt lines is returned.
+ *
+ */
+
unsigned long probe_irq_on(void)
{
unsigned int i;
@@ -733,6 +829,7 @@
unsigned long val;
unsigned long delay;
+ down(&probe_sem);
/*
* something may have generated an irq long ago and we want to
* flush such a longstanding irq before considering it as spurious.
@@ -799,10 +896,19 @@
return val;
}
-/*
- * Return a mask of triggered interrupts (this
- * can handle only legacy ISA interrupts).
+/**
+ * probe_irq_mask - scan a bitmap of interrupt lines
+ * @val: mask of interrupts to consider
+ *
+ * Scan the ISA bus interrupt lines and return a bitmap of
+ * active interrupts. The interrupt probe logic state is then
+ * returned to its previous value.
+ *
+ * Note: we need to scan all the irq's even though we will
+ * only return ISA irq numbers - just so that we reset them
+ * all to a known state.
*/
+
unsigned int probe_irq_mask(unsigned long val)
{
int i;
@@ -825,14 +931,29 @@
}
spin_unlock_irq(&desc->lock);
}
+ up(&probe_sem);
return mask & val;
}
-/*
- * Return the one interrupt that triggered (this can
- * handle any interrupt source)
+/**
+ * probe_irq_off - end an interrupt autodetect
+ * @val: mask of potential interrupts (unused)
+ *
+ * Scans the unused interrupt lines and returns the line which
+ * appears to have triggered the interrupt. If no interrupt was
+ * found then zero is returned. If more than one interrupt is
+ * found then minus the first candidate is returned to indicate
+ * their is doubt.
+ *
+ * The interrupt probe logic state is returned to its previous
+ * value.
+ *
+ * BUGS: When used in a module (which arguably shouldn't happen)
+ * nothing prevents two IRQ probe callers from overlapping. The
+ * results of this are non-optimal.
*/
+
int probe_irq_off(unsigned long val)
{
int i, irq_found, nr_irqs;
@@ -857,6 +978,7 @@
}
spin_unlock_irq(&desc->lock);
}
+ up(&probe_sem);
if (nr_irqs > 1)
irq_found = -irq_found;
@@ -911,7 +1033,7 @@
if (!shared) {
desc->depth = 0;
- desc->status &= ~IRQ_DISABLED;
+ desc->status &= ~(IRQ_DISABLED | IRQ_AUTODETECT | IRQ_WAITING);
desc->handler->startup(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
@@ -922,20 +1044,9 @@
static struct proc_dir_entry * root_irq_dir;
static struct proc_dir_entry * irq_dir [NR_IRQS];
-static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
-
-static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
#define HEX_DIGITS 8
-static int irq_affinity_read_proc (char *page, char **start, off_t off,
- int count, int *eof, void *data)
-{
- if (count < HEX_DIGITS+1)
- return -EINVAL;
- return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
-}
-
static unsigned int parse_hex_value (const char *buffer,
unsigned long count, unsigned long *ret)
{
@@ -973,6 +1084,20 @@
return 0;
}
+#if CONFIG_SMP
+
+static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
+
+static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
+
+static int irq_affinity_read_proc (char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+{
+ if (count < HEX_DIGITS+1)
+ return -EINVAL;
+ return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
+}
+
static int irq_affinity_write_proc (struct file *file, const char *buffer,
unsigned long count, void *data)
{
@@ -984,7 +1109,6 @@
err = parse_hex_value(buffer, count, &new_value);
-#if CONFIG_SMP
/*
* Do not allow disabling IRQs completely - it's a too easy
* way to make the system unusable accidentally :-) At least
@@ -992,7 +1116,6 @@
*/
if (!(new_value & cpu_online_map))
return -EINVAL;
-#endif
irq_affinity[irq] = new_value;
irq_desc(irq)->handler->set_affinity(irq, new_value);
@@ -1000,6 +1123,8 @@
return full_count;
}
+#endif /* CONFIG_SMP */
+
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
@@ -1027,7 +1152,6 @@
static void register_irq_proc (unsigned int irq)
{
- struct proc_dir_entry *entry;
char name [MAX_NAMELEN];
if (!root_irq_dir || (irq_desc(irq)->handler == &no_irq_type))
@@ -1039,15 +1163,22 @@
/* create /proc/irq/1234 */
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
- /* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
-
- entry->nlink = 1;
- entry->data = (void *)(long)irq;
- entry->read_proc = irq_affinity_read_proc;
- entry->write_proc = irq_affinity_write_proc;
+#if CONFIG_SMP
+ {
+ struct proc_dir_entry *entry;
+ /* create /proc/irq/1234/smp_affinity */
+ entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
+
+ if (entry) {
+ entry->nlink = 1;
+ entry->data = (void *)(long)irq;
+ entry->read_proc = irq_affinity_read_proc;
+ entry->write_proc = irq_affinity_write_proc;
+ }
- smp_affinity_entry[irq] = entry;
+ smp_affinity_entry[irq] = entry;
+ }
+#endif
}
unsigned long prof_cpu_mask = -1;
@@ -1062,6 +1193,9 @@
/* create /proc/irq/prof_cpu_mask */
entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
+
+ if (!entry)
+ return;
entry->nlink = 1;
entry->data = (void *)&prof_cpu_mask;
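An aside on the pattern these /proc hunks introduce (not part of the patch): create_proc_entry() can return NULL, so every registration is now guarded before the entry's fields are filled in. A minimal userspace sketch of the same defensive shape, with all names hypothetical:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical miniature of a 2.4-era proc_dir_entry. */
struct proc_entry {
	int nlink;
	void *data;
};

/* Stand-in for create_proc_entry(): may return NULL on failure. */
static struct proc_entry *fake_create_entry(int fail)
{
	return fail ? NULL : calloc(1, sizeof(struct proc_entry));
}

/* Returns the entry on success, NULL on failure.  Filling in the
 * fields only inside the NULL check is exactly the guard the patch
 * adds around create_proc_entry(). */
static struct proc_entry *register_entry(int fail, void *data)
{
	struct proc_entry *entry = fake_create_entry(fail);

	if (entry) {		/* guard added by the patch */
		entry->nlink = 1;
		entry->data = data;
	}
	return entry;
}
```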
diff -urN linux-2.4.13/arch/ia64/kernel/irq_ia64.c linux-2.4.13-lia/arch/ia64/kernel/irq_ia64.c
--- linux-2.4.13/arch/ia64/kernel/irq_ia64.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/irq_ia64.c Thu Oct 4 00:21:39 2001
@@ -1,9 +1,9 @@
/*
* linux/arch/ia64/kernel/irq.c
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 6/10/99: Updated to bring in sync with x86 version to facilitate
* support for SMP and different interrupt controllers.
@@ -131,6 +131,13 @@
ia64_eoi();
vector = ia64_get_ivr();
}
+ /*
+ * This must be done *after* the ia64_eoi(). For example, the keyboard softirq
+ * handler needs to be able to wait for further keyboard interrupts, which can't
+ * come through until ia64_eoi() has been done.
+ */
+ if (local_softirq_pending())
+ do_softirq();
}
#ifdef CONFIG_SMP
diff -urN linux-2.4.13/arch/ia64/kernel/ivt.S linux-2.4.13-lia/arch/ia64/kernel/ivt.S
--- linux-2.4.13/arch/ia64/kernel/ivt.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/ivt.S Wed Oct 10 17:58:45 2001
@@ -2,8 +2,8 @@
* arch/ia64/kernel/ivt.S
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1998-2001 David Mosberger <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger <davidm@hpl.hp.com>
*
* 00/08/23 Asit Mallick <asit.k.mallick@intel.com> TLB handling for SMP
* 00/12/20 David Mosberger-Tang <davidm@hpl.hp.com> DTLB/ITLB handler now uses virtual PT.
@@ -157,7 +157,7 @@
;;
(p10) itc.i r18 // insert the instruction TLB entry
(p11) itc.d r18 // insert the data TLB entry
-(p6) br.spnt.many page_fault // handle bad address/page not present (page fault)
+(p6) br.cond.spnt.many page_fault // handle bad address/page not present (page fault)
mov cr.ifa=r22
/*
@@ -213,7 +213,7 @@
;;
mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
;;
itc.i r18
;;
@@ -251,7 +251,7 @@
;;
mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
;;
itc.d r18
;;
@@ -286,7 +286,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk.many itlb_fault
+(p8) br.cond.dptk itlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
shr.u r18=r16,57 // move address bit 61 to bit 4
@@ -297,7 +297,7 @@
dep r19=r17,r19,0,12 // insert PTE control bits into r19
;;
or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6
-(p8) br.cond.spnt.many page_fault
+(p8) br.cond.spnt page_fault
;;
itc.i r19 // insert the TLB entry
mov pr=r31,-1
@@ -324,7 +324,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk.many dtlb_fault
+(p8) br.cond.dptk dtlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on?
@@ -333,7 +333,7 @@
;;
andcm r18=0x10,r18 // bit 4=~address-bit(61)
cmp.ne p8,p0=r0,r23
-(p8) br.cond.spnt.many page_fault
+(p8) br.cond.spnt page_fault
dep r21=-1,r21,IA64_PSR_ED_BIT,1
dep r19=r17,r19,0,12 // insert PTE control bits into r19
@@ -429,7 +429,7 @@
;;
(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
mov b0=r30
br.sptk.many b0 // return to continuation point
END(nested_dtlb_miss)
@@ -534,15 +534,6 @@
;;
1: ld8 r18=[r17]
;;
-# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC)
- /*
- * Erratum 85 (Access bit fault could be reported before page not present fault)
- * If the PTE is indicates the page is not present, then just turn this into a
- * page fault.
- */
- tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.sptk page_fault // page wasn't present
-# endif
mov ar.ccv=r18 // set compare value for cmpxchg
or r25=_PAGE_A,r18 // set the accessed bit
;;
@@ -564,15 +555,6 @@
;;
1: ld8 r18=[r17]
;;
-# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC)
- /*
- * Erratum 85 (Access bit fault could be reported before page not present fault)
- * If the PTE is indicates the page is not present, then just turn this into a
- * page fault.
- */
- tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.sptk page_fault // page wasn't present
-# endif
or r18=_PAGE_A,r18 // set the accessed bit
mov b0=r29 // restore b0
;;
@@ -640,7 +622,7 @@
mov r31=pr // prepare to save predicates
;;
cmp.eq p0,p7=r16,r17 // is this a system call? (p7 <- false, if so)
-(p7) br.cond.spnt.many non_syscall
+(p7) br.cond.spnt non_syscall
SAVE_MIN // uses r31; defines r2:
@@ -656,7 +638,7 @@
adds r3=8,r2 // set up second base pointer for SAVE_REST
;;
SAVE_REST
- br.call.sptk rp=demine_args // clear NaT bits in (potential) syscall args
+ br.call.sptk.many rp=demine_args // clear NaT bits in (potential) syscall args
mov r3=255
adds r15=-1024,r15 // r15 contains the syscall number---subtract 1024
@@ -698,7 +680,7 @@
st8 [r16]=r18 // store new value for cr.isr
(p8) br.call.sptk.many b6=b6 // ignore this return addr
- br.cond.sptk.many ia64_trace_syscall
+ br.cond.sptk ia64_trace_syscall
// NOT REACHED
END(break_fault)
@@ -811,8 +793,8 @@
mov b6=r8
;;
cmp.ne p6,p0=0,r8
-(p6) br.call.dpnt b6=b6 // call returns to ia64_leave_kernel
- br.sptk ia64_leave_kernel
+(p6) br.call.dpnt.many b6=b6 // call returns to ia64_leave_kernel
+ br.sptk.many ia64_leave_kernel
END(dispatch_illegal_op_fault)
.align 1024
@@ -855,30 +837,30 @@
adds r15=IA64_PT_REGS_R1_OFFSET + 16,sp
;;
cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
- st8 [r15]=r8 // save orignal EAX in r1 (IA32 procs don't use the GP)
+ st8 [r15]=r8 // save original EAX in r1 (IA32 procs don't use the GP)
;;
alloc r15=ar.pfs,0,0,6,0 // must be first in an insn group
;;
- ld4 r8=[r14],8 // r8 = EAX (syscall number)
- mov r15=222 // sys_vfork - last implemented system call
+ ld4 r8=[r14],8 // r8 = eax (syscall number)
+ mov r15=230 // number of entries in ia32 system call table
;;
- cmp.leu.unc p6,p7=r8,r15
- ld4 out1=[r14],8 // r9 = ecx
+ cmp.ltu.unc p6,p7=r8,r15
+ ld4 out1=[r14],8 // r9 = ecx
;;
- ld4 out2=[r14],8 // r10 = edx
+ ld4 out2=[r14],8 // r10 = edx
;;
- ld4 out0=[r14] // r11 = ebx
+ ld4 out0=[r14] // r11 = ebx
adds r14=(IA64_PT_REGS_R8_OFFSET-(8*3)) + 16,sp
;;
- ld4 out5=[r14],8 // r13 = ebp
+ ld4 out5=[r14],8 // r13 = ebp
;;
- ld4 out3=[r14],8 // r14 = esi
+ ld4 out3=[r14],8 // r14 = esi
adds r2=IA64_TASK_PTRACE_OFFSET,r13 // r2 = &current->ptrace
;;
- ld4 out4=[r14] // R15 = edi
+ ld4 out4=[r14] // r15 = edi
movl r16=ia32_syscall_table
;;
-(p6) shladd r16=r8,3,r16 // Force ni_syscall if not valid syscall number
+(p6) shladd r16=r8,3,r16 // force ni_syscall if not valid syscall number
ld8 r2=[r2] // r2 = current->ptrace
;;
ld8 r16=[r16]
@@ -889,12 +871,12 @@
;;
mov rp=r15
(p8) br.call.sptk.many b6=b6
- br.cond.sptk.many ia32_trace_syscall
+ br.cond.sptk ia32_trace_syscall
non_ia32_syscall:
alloc r15=ar.pfs,0,0,2,0
- mov out0=r14 // interrupt #
- add out1=16,sp // pointer to pt_regs
+ mov out0=r14 // interrupt #
+ add out1=16,sp // pointer to pt_regs
;; // avoid WAW on CFM
br.call.sptk.many rp=ia32_bad_interrupt
.ret1: movl r15=ia64_leave_kernel
@@ -1085,7 +1067,7 @@
mov r31=pr
;;
cmp4.eq p6,p0=0,r16
-(p6) br.sptk dispatch_illegal_op_fault
+(p6) br.sptk.many dispatch_illegal_op_fault
;;
mov r19=24 // fault number
br.sptk.many dispatch_to_fault_handler
diff -urN linux-2.4.13/arch/ia64/kernel/mca.c linux-2.4.13-lia/arch/ia64/kernel/mca.c
--- linux-2.4.13/arch/ia64/kernel/mca.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/mca.c Wed Oct 10 17:42:06 2001
@@ -3,12 +3,20 @@
* Purpose: Generic MCA handling layer
*
* Updated for latest kernel
+ * Copyright (C) 2001 Intel
+ * Copyright (C) Fred Lewis (frederick.v.lewis@intel.com)
+ *
* Copyright (C) 2000 Intel
* Copyright (C) Chuck Fleckenstein (cfleck@co.intel.com)
*
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) Vijay Chander(vijay@engr.sgi.com)
*
+ * 01/01/03 F. Lewis Added setup of CMCI and CPEI IRQs, logging of corrected
+ * platform errors, completed code for logging of
+ * corrected & uncorrected machine check errors, and
+ * updated for conformance with Nov. 2000 revision of the
+ * SAL 3.0 spec.
* 00/03/29 C. Fleckenstein Fixed PAL/SAL update issues, began MCA bug fixes, logging issues,
* added min save state dump, added INIT handler.
*/
@@ -16,6 +24,7 @@
#include <linux/types.h>
#include <linux/init.h>
#include <linux/sched.h>
+#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/smp_lock.h>
@@ -27,8 +36,10 @@
#include <asm/mca.h>
#include <asm/irq.h>
-#include <asm/machvec.h>
+#include <asm/hw_irq.h>
+#include <asm/acpi-ext.h>
+#undef MCA_PRT_XTRA_DATA
typedef struct ia64_fptr {
unsigned long fp;
@@ -38,22 +49,67 @@
ia64_mc_info_t ia64_mc_info;
ia64_mca_sal_to_os_state_t ia64_sal_to_os_handoff_state;
ia64_mca_os_to_sal_state_t ia64_os_to_sal_handoff_state;
-u64 ia64_mca_proc_state_dump[256];
+u64 ia64_mca_proc_state_dump[512];
u64 ia64_mca_stack[1024];
u64 ia64_mca_stackframe[32];
u64 ia64_mca_bspstore[1024];
u64 ia64_init_stack[INIT_TASK_SIZE] __attribute__((aligned(16)));
-static void ia64_mca_cmc_vector_setup(int enable,
- int_vector_t cmc_vector);
static void ia64_mca_wakeup_ipi_wait(void);
static void ia64_mca_wakeup(int cpu);
static void ia64_mca_wakeup_all(void);
-static void ia64_log_init(int,int);
-static void ia64_log_get(int,int, prfunc_t);
-static void ia64_log_clear(int,int,int, prfunc_t);
+static void ia64_log_init(int);
extern void ia64_monarch_init_handler (void);
extern void ia64_slave_init_handler (void);
+extern struct hw_interrupt_type irq_type_iosapic_level;
+
+static struct irqaction cmci_irqaction = {
+ handler: ia64_mca_cmc_int_handler,
+ flags: SA_INTERRUPT,
+ name: "cmc_hndlr"
+};
+
+static struct irqaction mca_rdzv_irqaction = {
+ handler: ia64_mca_rendez_int_handler,
+ flags: SA_INTERRUPT,
+ name: "mca_rdzv"
+};
+
+static struct irqaction mca_wkup_irqaction = {
+ handler: ia64_mca_wakeup_int_handler,
+ flags: SA_INTERRUPT,
+ name: "mca_wkup"
+};
+
+static struct irqaction mca_cpe_irqaction = {
+ handler: ia64_mca_cpe_int_handler,
+ flags: SA_INTERRUPT,
+ name: "cpe_hndlr"
+};
+
+/*
+ * ia64_mca_log_sal_error_record
+ *
+ * This function retrieves a specified error record type from SAL, sends it to
+ * the system log, and notifies SALs to clear the record from its non-volatile
+ * memory.
+ *
+ * Inputs : sal_info_type (Type of error record MCA/CMC/CPE/INIT)
+ * Outputs : None
+ */
+void
+ia64_mca_log_sal_error_record(int sal_info_type)
+{
+ /* Get the MCA error record */
+ if (!ia64_log_get(sal_info_type, (prfunc_t)printk))
+ return; // no record retrieved
+
+ /* Log the error record */
+ ia64_log_print(sal_info_type, (prfunc_t)printk);
+
+ /* Clear the CMC SAL logs now that they have been logged */
+ ia64_sal_clear_state_info(sal_info_type);
+}
/*
* hack for now, add platform dependent handlers
@@ -67,10 +123,14 @@
}
void
-cmci_handler_platform (int cmc_irq, void *arg, struct pt_regs *ptregs)
+ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs)
{
+ IA64_MCA_DEBUG("ia64_mca_cpe_int_handler: received interrupt. vector = %#x\n", cpe_irq);
+ /* Get the CMC error record and log it */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE);
}
+
/*
* This routine will be used to deal with platform specific handling
* of the init, i.e. drop into the kernel debugger on server machine,
@@ -81,17 +141,72 @@
init_handler_platform (struct pt_regs *regs)
{
/* if a kernel debugger is available call it here else just dump the registers */
+
show_regs(regs); /* dump the state info */
+ while (1); /* hang city if no debugger */
}
+/*
+ * ia64_mca_init_platform
+ *
+ * External entry for platform specific MCA initialization.
+ *
+ * Inputs
+ * None
+ *
+ * Outputs
+ * None
+ */
void
-log_print_platform ( void *cur_buff_ptr, prfunc_t prfunc)
+ia64_mca_init_platform (void)
{
+
}
+/*
+ * ia64_mca_check_errors
+ *
+ * External entry to check for error records which may have been posted by SAL
+ * for a prior failure which resulted in a machine shutdown before the
+ * error could be logged. This function must be called after the filesystem
+ * is initialized.
+ *
+ * Inputs : None
+ *
+ * Outputs : None
+ */
void
-ia64_mca_init_platform (void)
+ia64_mca_check_errors (void)
{
+ /*
+ * If there is an MCA error record pending, get it and log it.
+ */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA);
+}
+
+/*
+ * ia64_mca_register_cpev
+ *
+ * Register the corrected platform error vector with SAL.
+ *
+ * Inputs
+ * cpev Corrected Platform Error Vector number
+ *
+ * Outputs
+ * None
+ */
+static void
+ia64_mca_register_cpev (int cpev)
+{
+ /* Register the CPE interrupt vector with SAL */
+ if (ia64_sal_mc_set_params(SAL_MC_PARAM_CPE_INT, SAL_MC_PARAM_MECHANISM_INT, cpev, 0, 0)) {
+ printk("ia64_mca_platform_init: failed to register Corrected "
+ "Platform Error interrupt vector with SAL.\n");
+ return;
+ }
+
+ IA64_MCA_DEBUG("ia64_mca_platform_init: corrected platform error "
+ "vector %#x setup and enabled\n", cpev);
}
#endif /* PLATFORM_MCA_HANDLERS */
@@ -140,30 +255,36 @@
&& !ia64_pmss_dump_bank0))
printk("\n");
}
- /* hang city for now, until we include debugger or copy to ptregs to show: */
- while (1);
}
/*
* ia64_mca_cmc_vector_setup
- * Setup the correctable machine check vector register in the processor
+ *
+ * Setup the corrected machine check vector register in the processor and
+ * unmask interrupt. This function is invoked on a per-processor basis.
+ *
* Inputs
- * Enable (1 - enable cmc interrupt , 0 - disable)
- * CMC handler entry point (if enabled)
+ * None
*
* Outputs
* None
*/
-static void
-ia64_mca_cmc_vector_setup(int enable,
- int_vector_t cmc_vector)
+void
+ia64_mca_cmc_vector_setup (void)
{
cmcv_reg_t cmcv;
cmcv.cmcv_regval = 0;
- cmcv.cmcv_mask = enable;
- cmcv.cmcv_vector = cmc_vector;
+ cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */
+ cmcv.cmcv_vector = IA64_CMC_VECTOR;
ia64_set_cmcv(cmcv.cmcv_regval);
+
+ IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d corrected "
+ "machine check vector %#x setup and enabled.\n",
+ smp_processor_id(), IA64_CMC_VECTOR);
+
+ IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d CMCV = %#016lx\n",
+ smp_processor_id(), ia64_get_cmcv());
}
@@ -174,26 +295,58 @@
void
mca_test(void)
{
- slpi_buf.slpi_valid.slpi_psi = 1;
- slpi_buf.slpi_valid.slpi_cache_check = 1;
- slpi_buf.slpi_valid.slpi_tlb_check = 1;
- slpi_buf.slpi_valid.slpi_bus_check = 1;
- slpi_buf.slpi_valid.slpi_minstate = 1;
- slpi_buf.slpi_valid.slpi_bank1_gr = 1;
- slpi_buf.slpi_valid.slpi_br = 1;
- slpi_buf.slpi_valid.slpi_cr = 1;
- slpi_buf.slpi_valid.slpi_ar = 1;
- slpi_buf.slpi_valid.slpi_rr = 1;
- slpi_buf.slpi_valid.slpi_fr = 1;
+ slpi_buf.valid.psi_static_struct = 1;
+ slpi_buf.valid.num_cache_check = 1;
+ slpi_buf.valid.num_tlb_check = 1;
+ slpi_buf.valid.num_bus_check = 1;
+ slpi_buf.valid.processor_static_info.minstate = 1;
+ slpi_buf.valid.processor_static_info.br = 1;
+ slpi_buf.valid.processor_static_info.cr = 1;
+ slpi_buf.valid.processor_static_info.ar = 1;
+ slpi_buf.valid.processor_static_info.rr = 1;
+ slpi_buf.valid.processor_static_info.fr = 1;
ia64_os_mca_dispatch();
}
#endif /* #if defined(MCA_TEST) */
+
+/*
+ * verify_guid
+ *
+ * Compares a test guid to a target guid and returns result.
+ *
+ * Inputs
+ * test_guid * (ptr to guid to be verified)
+ * target_guid * (ptr to standard guid to be verified against)
+ *
+ * Outputs
+ * 0 (test verifies against target)
+ * non-zero (test guid does not verify)
+ */
+static int
+verify_guid (efi_guid_t *test, efi_guid_t *target)
+{
+ int rc;
+
+ if ((rc = memcmp((void *)test, (void *)target, sizeof(efi_guid_t)))) {
+ IA64_MCA_DEBUG("ia64_mca_print: invalid guid = "
+ "{ %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, "
+ "%#02x, %#02x, %#02x, %#02x, } } \n ",
+ test->data1, test->data2, test->data3, test->data4[0],
+ test->data4[1], test->data4[2], test->data4[3],
+ test->data4[4], test->data4[5], test->data4[6],
+ test->data4[7]);
+ }
+
+ return rc;
+}
+
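The verify_guid() helper added above is, at heart, a memcmp over the 16-byte EFI GUID. A self-contained sketch of the same comparison, with the struct layout inferred from the debug format string (data1/data2/data3/data4[8]) and therefore an assumption:

```c
#include <assert.h>
#include <string.h>

/* Assumed EFI GUID layout, matching the patch's debug printout:
 * one 32-bit field, two 16-bit fields, eight bytes. */
typedef struct {
	unsigned int   data1;
	unsigned short data2;
	unsigned short data3;
	unsigned char  data4[8];
} guid_t;

/* Same contract as the patch's verify_guid(): returns 0 when the
 * GUIDs match, non-zero otherwise. */
static int guid_compare(const guid_t *test, const guid_t *target)
{
	return memcmp(test, target, sizeof(guid_t));
}
```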
/*
* ia64_mca_init
- * Do all the mca specific initialization on a per-processor basis.
+ *
+ * Do all the system level mca specific initialization.
*
* 1. Register spinloop and wakeup request interrupt vectors
*
@@ -201,77 +354,80 @@
*
* 3. Register OS_INIT handler entry point
*
- * 4. Initialize CMCV register to enable/disable CMC interrupt on the
- * processor and hook a handler in the platform-specific ia64_mca_init.
+ * 4. Initialize MCA/CMC/INIT related log buffers maintained by the OS.
*
- * 5. Initialize MCA/CMC/INIT related log buffers maintained by the OS.
+ * Note that this initialization is done very early before some kernel
+ * services are available.
*
- * Inputs
- * None
- * Outputs
- * None
+ * Inputs : None
+ *
+ * Outputs : None
*/
void __init
ia64_mca_init(void)
{
ia64_fptr_t *mon_init_ptr = (ia64_fptr_t *)ia64_monarch_init_handler;
ia64_fptr_t *slave_init_ptr = (ia64_fptr_t *)ia64_slave_init_handler;
+ ia64_fptr_t *mca_hldlr_ptr = (ia64_fptr_t *)ia64_os_mca_dispatch;
int i;
+ s64 rc;
- IA64_MCA_DEBUG("ia64_mca_init : begin\n");
+ IA64_MCA_DEBUG("ia64_mca_init: begin\n");
/* Clear the Rendez checkin flag for all cpus */
- for(i = 0 ; i < IA64_MAXCPUS; i++)
+ for(i = 0 ; i < NR_CPUS; i++)
ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
- /* NOTE : The actual irqs for the rendez, wakeup and
- * cmc interrupts are requested in the platform-specific
- * mca initialization code.
- */
/*
* Register the rendezvous spinloop and wakeup mechanism with SAL
*/
/* Register the rendezvous interrupt vector with SAL */
- if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
- SAL_MC_PARAM_MECHANISM_INT,
- IA64_MCA_RENDEZ_VECTOR,
- IA64_MCA_RENDEZ_TIMEOUT,
- 0))
+ if ((rc = ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
+ SAL_MC_PARAM_MECHANISM_INT,
+ IA64_MCA_RENDEZ_VECTOR,
+ IA64_MCA_RENDEZ_TIMEOUT,
+ 0)))
+ {
+ printk("ia64_mca_init: Failed to register rendezvous interrupt "
+ "with SAL. rc = %ld\n", rc);
return;
+ }
/* Register the wakeup interrupt vector with SAL */
- if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
- SAL_MC_PARAM_MECHANISM_INT,
- IA64_MCA_WAKEUP_VECTOR,
- 0,
- 0))
+ if ((rc = ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
+ SAL_MC_PARAM_MECHANISM_INT,
+ IA64_MCA_WAKEUP_VECTOR,
+ 0, 0)))
+ {
+ printk("ia64_mca_init: Failed to register wakeup interrupt with SAL. rc = %ld\n",
+ rc);
return;
+ }
- IA64_MCA_DEBUG("ia64_mca_init : registered mca rendezvous spinloop and wakeup mech.\n");
- /*
- * Setup the correctable machine check vector
- */
- ia64_mca_cmc_vector_setup(IA64_CMC_INT_ENABLE, IA64_CMC_VECTOR);
-
- IA64_MCA_DEBUG("ia64_mca_init : correctable mca vector setup done\n");
+ IA64_MCA_DEBUG("ia64_mca_init: registered mca rendezvous spinloop and wakeup mech.\n");
- ia64_mc_info.imi_mca_handler = __pa(ia64_os_mca_dispatch);
+ ia64_mc_info.imi_mca_handler = __pa(mca_hldlr_ptr->fp);
/*
* XXX - disable SAL checksum by setting size to 0; should be
* __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
*/
ia64_mc_info.imi_mca_handler_size = 0;
- /* Register the os mca handler with SAL */
- if (ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
- ia64_mc_info.imi_mca_handler,
- __pa(ia64_get_gp()),
- ia64_mc_info.imi_mca_handler_size,
- 0,0,0))
+ /* Register the os mca handler with SAL */
+ if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
+ ia64_mc_info.imi_mca_handler,
+ mca_hldlr_ptr->gp,
+ ia64_mc_info.imi_mca_handler_size,
+ 0, 0, 0)))
+ {
+ printk("ia64_mca_init: Failed to register os mca handler with SAL. rc = %ld\n",
+ rc);
return;
+ }
- IA64_MCA_DEBUG("ia64_mca_init : registered os mca handler with SAL\n");
+ IA64_MCA_DEBUG("ia64_mca_init: registered os mca handler with SAL at 0x%lx, gp = 0x%lx\n",
+ ia64_mc_info.imi_mca_handler, mca_hldlr_ptr->gp);
/*
* XXX - disable SAL checksum by setting size to 0, should be
@@ -282,53 +438,87 @@
ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp);
ia64_mc_info.imi_slave_init_handler_size = 0;
- IA64_MCA_DEBUG("ia64_mca_init : os init handler at %lx\n",ia64_mc_info.imi_monarch_init_handler);
+ IA64_MCA_DEBUG("ia64_mca_init: os init handler at %lx\n",
+ ia64_mc_info.imi_monarch_init_handler);
/* Register the os init handler with SAL */
- if (ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
- ia64_mc_info.imi_monarch_init_handler,
- __pa(ia64_get_gp()),
- ia64_mc_info.imi_monarch_init_handler_size,
- ia64_mc_info.imi_slave_init_handler,
- __pa(ia64_get_gp()),
- ia64_mc_info.imi_slave_init_handler_size))
+ if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
+ ia64_mc_info.imi_monarch_init_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_monarch_init_handler_size,
+ ia64_mc_info.imi_slave_init_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_slave_init_handler_size)))
+ {
+ printk("ia64_mca_init: Failed to register m/s init handlers with SAL. rc = %ld\n",
+ rc);
+ return;
+ }
+ IA64_MCA_DEBUG("ia64_mca_init: registered os init handler with SAL\n");
- return;
+ /*
+ * Configure the CMCI vector and handler. Interrupts for CMC are
+ * per-processor, so AP CMC interrupts are setup in smp_callin() (smp.c).
+ */
+ register_percpu_irq(IA64_CMC_VECTOR, &cmci_irqaction);
+ ia64_mca_cmc_vector_setup(); /* Setup vector on BSP & enable */
- IA64_MCA_DEBUG("ia64_mca_init : registered os init handler with SAL\n");
+ /* Setup the MCA rendezvous interrupt vector */
+ register_percpu_irq(IA64_MCA_RENDEZ_VECTOR, &mca_rdzv_irqaction);
+
+ /* Setup the MCA wakeup interrupt vector */
+ register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction);
+
+ /* Setup the CPE interrupt vector */
+ {
+ irq_desc_t *desc;
+ unsigned int irq;
+ int cpev = acpi_request_vector(ACPI20_ENTRY_PIS_CPEI);
+
+ if (cpev >= 0) {
+ for (irq = 0; irq < NR_IRQS; ++irq)
+ if (irq_to_vector(irq) == cpev) {
+ desc = irq_desc(irq);
+ desc->status |= IRQ_PER_CPU;
+ desc->handler = &irq_type_iosapic_level;
+ setup_irq(irq, &mca_cpe_irqaction);
+ }
+ ia64_mca_register_cpev(cpev);
+ } else
+ printk("ia64_mca_init: Failed to get routed CPEI vector from ACPI.\n");
+ }
/* Initialize the areas set aside by the OS to buffer the
* platform/processor error states for MCA/INIT/CMC
* handling.
*/
- ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM);
- ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM);
- ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM);
-
- ia64_mca_init_platform();
-
- IA64_MCA_DEBUG("ia64_mca_init : platform-specific mca handling setup done\n");
+ ia64_log_init(SAL_INFO_TYPE_MCA);
+ ia64_log_init(SAL_INFO_TYPE_INIT);
+ ia64_log_init(SAL_INFO_TYPE_CMC);
+ ia64_log_init(SAL_INFO_TYPE_CPE);
#if defined(MCA_TEST)
mca_test();
#endif /* #if defined(MCA_TEST) */
printk("Mca related initialization done\n");
+
+#if 0 // Too early in initialization -- error log is lost
+ /* Do post-failure MCA error logging */
+ ia64_mca_check_errors();
+#endif // Too early in initialization -- error log is lost
}
/*
* ia64_mca_wakeup_ipi_wait
+ *
* Wait for the inter-cpu interrupt to be sent by the
* monarch processor once it is done with handling the
* MCA.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_wakeup_ipi_wait(void)
@@ -339,16 +529,16 @@
do {
switch(irr_num) {
- case 0:
+ case 0:
irr = ia64_get_irr0();
break;
- case 1:
+ case 1:
irr = ia64_get_irr1();
break;
- case 2:
+ case 2:
irr = ia64_get_irr2();
break;
- case 3:
+ case 3:
irr = ia64_get_irr3();
break;
}
@@ -357,26 +547,28 @@
/*
* ia64_mca_wakeup
+ *
* Send an inter-cpu interrupt to wake-up a particular cpu
* and mark that cpu to be out of rendez.
- * Inputs
- * cpuid
- * Outputs
- * None
+ *
+ * Inputs : cpuid
+ * Outputs : None
*/
void
ia64_mca_wakeup(int cpu)
{
platform_send_ipi(cpu, IA64_MCA_WAKEUP_VECTOR, IA64_IPI_DM_INT, 0);
ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
+
}
+
/*
* ia64_mca_wakeup_all
+ *
* Wakeup all the cpus which have rendez'ed previously.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_wakeup_all(void)
@@ -389,15 +581,16 @@
ia64_mca_wakeup(cpu);
}
+
/*
* ia64_mca_rendez_interrupt_handler
+ *
* This is handler used to put slave processors into spinloop
* while the monarch processor does the mca handling and later
* wake each slave up once the monarch is done.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_rendez_int_handler(int rendez_irq, void *arg, struct pt_regs *ptregs)
@@ -423,23 +616,22 @@
/* Enable all interrupts */
restore_flags(flags);
-
-
}
/*
* ia64_mca_wakeup_int_handler
+ *
* The interrupt handler for processing the inter-cpu interrupt to the
* slave cpu which was spinning in the rendez loop.
* Since this spinning is done by turning off the interrupts and
* polling on the wakeup-interrupt bit in the IRR, there is
* nothing useful to be done in the handler.
- * Inputs
- * wakeup_irq (Wakeup-interrupt bit)
+ *
+ * Inputs : wakeup_irq (Wakeup-interrupt bit)
* arg (Interrupt handler specific argument)
* ptregs (Exception frame at the time of the interrupt)
- * Outputs
+ * Outputs : None
*
*/
void
@@ -450,16 +642,16 @@
/*
* ia64_return_to_sal_check
+ *
* This is function called before going back from the OS_MCA handler
* to the OS_MCA dispatch code which finally takes the control back
* to the SAL.
* The main purpose of this routine is to setup the OS_MCA to SAL
* return state which can be used by the OS_MCA dispatch code
* just before going back to SAL.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
@@ -474,11 +666,13 @@
ia64_os_to_sal_handoff_state.imots_sal_check_ra =
ia64_sal_to_os_handoff_state.imsto_sal_check_ra;
- /* For now ignore the MCA */
- ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_CORRECTED;
+ /* Cold Boot for uncorrectable MCA */
+ ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_COLD_BOOT;
}
+
/*
* ia64_mca_ucmc_handler
+ *
* This is uncorrectable machine check handler called from OS_MCA
* dispatch code which is in turn called from SAL_CHECK().
* This is the place where the core of OS MCA handling is done.
@@ -487,93 +681,92 @@
* monarch processor. Once the monarch is done with MCA handling
* further MCA logging is enabled by clearing logs.
* Monarch also has the duty of sending wakeup-IPIs to pull the
- * slave processors out of rendez. spinloop.
- * Inputs
- * None
- * Outputs
- * None
+ * slave processors out of rendezvous spinloop.
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_ucmc_handler(void)
{
+#if 0 /* stubbed out @FVL */
+ /*
+ * Attempting to log a DBE error Causes "reserved register/field panic"
+ * in printk.
+ */
- /* Get the MCA processor log */
- ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- /* Get the MCA platform log */
- ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
-
- ia64_log_print(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ /* Get the MCA error record and log it */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA);
+#endif /* stubbed out @FVL */
/*
- * Do some error handling - Platform-specific mca handler is called at this point
+ * Do Platform-specific mca error handling if required.
*/
-
mca_handler_platform() ;
- /* Clear the SAL MCA logs */
- ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, 1, printk);
- ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, 1, printk);
-
- /* Wakeup all the processors which are spinning in the rendezvous
- * loop.
+ /*
+ * Wakeup all the processors which are spinning in the rendezvous
+ * loop.
*/
ia64_mca_wakeup_all();
+
+ /* Return to SAL */
ia64_return_to_sal_check();
}
/*
* ia64_mca_cmc_int_handler
- * This is correctable machine check interrupt handler.
+ *
+ * This is the corrected machine check interrupt handler.
* Right now the logs are extracted and displayed in a well-defined
* format.
+ *
* Inputs
- * None
+ * interrupt number
+ * client data arg ptr
+ * saved registers ptr
+ *
* Outputs
* None
*/
void
ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs)
{
- /* Get the CMC processor log */
- ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- /* Get the CMC platform log */
- ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
-
-
- ia64_log_print(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- cmci_handler_platform(cmc_irq, arg, ptregs);
+ IA64_MCA_DEBUG("ia64_mca_cmc_int_handler: received interrupt vector = %#x on CPU %d\n",
+ cmc_irq, smp_processor_id());
- /* Clear the CMC SAL logs now that they have been saved in the OS buffer */
- ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC);
+ /* Get the CMC error record and log it */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CMC);
}
/*
* IA64_MCA log support
*/
#define IA64_MAX_LOGS 2 /* Double-buffering for nested MCAs */
-#define IA64_MAX_LOG_TYPES 3 /* MCA, CMC, INIT */
-#define IA64_MAX_LOG_SUBTYPES 2 /* Processor, Platform */
+#define IA64_MAX_LOG_TYPES 4 /* MCA, INIT, CMC, CPE */
-typedef struct ia64_state_log_s {
+typedef struct ia64_state_log_s
+{
spinlock_t isl_lock;
int isl_index;
- ia64_psilog_t isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */
+ ia64_err_rec_t isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */
} ia64_state_log_t;
-static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES][IA64_MAX_LOG_SUBTYPES];
+static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES];
-#define IA64_LOG_LOCK_INIT(it, sit) spin_lock_init(&ia64_state_log[it][sit].isl_lock)
-#define IA64_LOG_LOCK(it, sit) spin_lock_irqsave(&ia64_state_log[it][sit].isl_lock, s)
-#define IA64_LOG_UNLOCK(it, sit) spin_unlock_irqrestore(&ia64_state_log[it][sit].isl_lock,\
- s)
-#define IA64_LOG_NEXT_INDEX(it, sit) ia64_state_log[it][sit].isl_index
-#define IA64_LOG_CURR_INDEX(it, sit) 1 - ia64_state_log[it][sit].isl_index
-#define IA64_LOG_INDEX_INC(it, sit) \
- ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index
-#define IA64_LOG_INDEX_DEC(it, sit) \
- ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index
-#define IA64_LOG_NEXT_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_NEXT_INDEX(it,sit)]))
-#define IA64_LOG_CURR_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_CURR_INDEX(it,sit)]))
+/* Note: Some of these macros assume IA64_MAX_LOGS is always 2. Should be */
+/* fixed. @FVL */
+#define IA64_LOG_LOCK_INIT(it) spin_lock_init(&ia64_state_log[it].isl_lock)
+#define IA64_LOG_LOCK(it) spin_lock_irqsave(&ia64_state_log[it].isl_lock, s)
+#define IA64_LOG_UNLOCK(it) spin_unlock_irqrestore(&ia64_state_log[it].isl_lock,s)
+#define IA64_LOG_NEXT_INDEX(it) ia64_state_log[it].isl_index
+#define IA64_LOG_CURR_INDEX(it) 1 - ia64_state_log[it].isl_index
+#define IA64_LOG_INDEX_INC(it) \
+ ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index
+#define IA64_LOG_INDEX_DEC(it) \
+ ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index
+#define IA64_LOG_NEXT_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)]))
+#define IA64_LOG_CURR_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)]))
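[Editorial aside: the macros above implement a two-entry ping-pong scheme — with IA64_MAX_LOGS == 2 the "next" index flips between 0 and 1 and the most recently written slot is always 1 - next. A minimal standalone sketch of that scheme (names here are illustrative, not from the kernel source):]

```c
#include <assert.h>

/* Two-slot ping-pong log, a sketch of what the IA64_LOG_* macros do.
 * With exactly two slots, "current" (last written) is always 1 - next. */
enum { MAX_LOGS = 2 };

struct state_log {
	int next;                   /* slot the next record will land in */
	unsigned long log[MAX_LOGS];
};

static int curr_index(const struct state_log *s)
{
	return 1 - s->next;         /* mirrors IA64_LOG_CURR_INDEX */
}

/* Store a record and advance: mirrors writing IA64_LOG_NEXT_BUFFER
 * and then IA64_LOG_INDEX_INC. */
static void log_store(struct state_log *s, unsigned long rec)
{
	s->log[s->next] = rec;
	s->next = 1 - s->next;      /* flip 0 <-> 1 */
}
```

Note the flip also explains why IA64_LOG_INDEX_INC and IA64_LOG_INDEX_DEC above are identical: with two slots, increment and decrement are the same operation.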
/*
* C portion of the OS INIT handler
@@ -584,123 +777,217 @@
*
* Returns:
* 0 if SAL must warm boot the System
- * 1 if SAL must retrun to interrupted context using PAL_MC_RESUME
+ * 1 if SAL must return to interrupted context using PAL_MC_RESUME
*
*/
-
void
ia64_init_handler (struct pt_regs *regs)
{
sal_log_processor_info_t *proc_ptr;
- ia64_psilog_t *plog_ptr;
+ ia64_err_rec_t *plog_ptr;
printk("Entered OS INIT handler\n");
/* Get the INIT processor log */
- ia64_log_get(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- /* Get the INIT platform log */
- ia64_log_get(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
+ if (!ia64_log_get(SAL_INFO_TYPE_INIT, (prfunc_t)printk))
+ return; // no record retrieved
#ifdef IA64_DUMP_ALL_PROC_INFO
- ia64_log_print(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ ia64_log_print(SAL_INFO_TYPE_INIT, (prfunc_t)printk);
#endif
/*
* get pointer to min state save area
*
*/
- plog_ptr=(ia64_psilog_t *)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT,
- SAL_SUB_INFO_TYPE_PROCESSOR);
- proc_ptr = &plog_ptr->devlog.proclog;
+ plog_ptr=(ia64_err_rec_t *)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT);
+ proc_ptr = &plog_ptr->proc_err;
- ia64_process_min_state_save(&proc_ptr->slpi_min_state_area,regs);
-
- init_handler_platform(regs); /* call platform specific routines */
+ ia64_process_min_state_save(&proc_ptr->processor_static_info.min_state_area,
+ regs);
/* Clear the INIT SAL logs now that they have been saved in the OS buffer */
ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT);
+
+ init_handler_platform(regs); /* call platform specific routines */
+}
+
+/*
+ * ia64_log_prt_guid
+ *
+ * Print a formatted GUID.
+ *
+ * Inputs : p_guid (ptr to the GUID)
+ * prfunc (print function)
+ * Outputs : None
+ *
+ */
+void
+ia64_log_prt_guid (efi_guid_t *p_guid, prfunc_t prfunc)
+{
+ prfunc("GUID = { %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, "
+ "%#02x, %#02x, %#02x, %#02x, } } \n ", p_guid->data1,
+ p_guid->data2, p_guid->data3, p_guid->data4[0], p_guid->data4[1],
+ p_guid->data4[2], p_guid->data4[3], p_guid->data4[4],
+ p_guid->data4[5], p_guid->data4[6], p_guid->data4[7]);
+}
+
+static void
+ia64_log_hexdump(unsigned char *p, unsigned long n_ch, prfunc_t prfunc)
+{
+ int i, j;
+
+ if (!p)
+ return;
+
+ for (i = 0; i < n_ch;) {
+ prfunc("%p ", (void *)p);
+ for (j = 0; (j < 16) && (i < n_ch); i++, j++, p++) {
+ prfunc("%02x ", *p);
+ }
+ prfunc("\n");
+ }
+}
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+
+static void
+ia64_log_prt_record_header (sal_log_record_header_t *rh, prfunc_t prfunc)
+{
+ prfunc("SAL RECORD HEADER: Record buffer = %p, header size = %ld\n",
+ (void *)rh, sizeof(sal_log_record_header_t));
+ ia64_log_hexdump((unsigned char *)rh, sizeof(sal_log_record_header_t),
+ (prfunc_t)prfunc);
+ prfunc("Total record length = %d\n", rh->len);
+ ia64_log_prt_guid(&rh->platform_guid, prfunc);
+ prfunc("End of SAL RECORD HEADER\n");
+}
+
+static void
+ia64_log_prt_section_header (sal_log_section_hdr_t *sh, prfunc_t prfunc)
+{
+ prfunc("SAL SECTION HEADER: Record buffer = %p, header size = %ld\n",
+ (void *)sh, sizeof(sal_log_section_hdr_t));
+ ia64_log_hexdump((unsigned char *)sh, sizeof(sal_log_section_hdr_t),
+ (prfunc_t)prfunc);
+ prfunc("Length of section & header = %d\n", sh->len);
+ ia64_log_prt_guid(&sh->guid, prfunc);
+ prfunc("End of SAL SECTION HEADER\n");
}
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
/*
* ia64_log_init
* Reset the OS ia64 log buffer
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE})
* Outputs : None
*/
void
-ia64_log_init(int sal_info_type, int sal_sub_info_type)
+ia64_log_init(int sal_info_type)
{
- IA64_LOG_LOCK_INIT(sal_info_type, sal_sub_info_type);
- IA64_LOG_NEXT_INDEX(sal_info_type, sal_sub_info_type) = 0;
- memset(IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type), 0,
- sizeof(ia64_psilog_t) * IA64_MAX_LOGS);
+ IA64_LOG_LOCK_INIT(sal_info_type);
+ IA64_LOG_NEXT_INDEX(sal_info_type) = 0;
+ memset(IA64_LOG_NEXT_BUFFER(sal_info_type), 0,
+ sizeof(ia64_err_rec_t) * IA64_MAX_LOGS);
}
/*
* ia64_log_get
+ *
* Get the current MCA log from SAL and copy it into the OS log buffer.
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
- * Outputs : None
+ *
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE})
+ * prfunc (fn ptr of log output function)
+ * Outputs : size (total record length)
*
*/
-void
-ia64_log_get(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc)
+u64
+ia64_log_get(int sal_info_type, prfunc_t prfunc)
{
- sal_log_header_t *log_buffer;
- int s,total_len=0;
-
- IA64_LOG_LOCK(sal_info_type, sal_sub_info_type);
+ sal_log_record_header_t *log_buffer;
+ u64 total_len = 0;
+ int s;
+ IA64_LOG_LOCK(sal_info_type);
/* Get the process state information */
- log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type);
-
- if (!(total_len=ia64_sal_get_state_info(sal_info_type,(u64 *)log_buffer)))
- prfunc("ia64_mca_log_get : Getting processor log failed\n");
-
- IA64_MCA_DEBUG("ia64_log_get: retrieved %d bytes of error information\n",total_len);
+ log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type);
- IA64_LOG_INDEX_INC(sal_info_type, sal_sub_info_type);
-
- IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type);
+ total_len = ia64_sal_get_state_info(sal_info_type, (u64 *)log_buffer);
+ if (total_len) {
+ IA64_LOG_INDEX_INC(sal_info_type);
+ IA64_LOG_UNLOCK(sal_info_type);
+ IA64_MCA_DEBUG("ia64_log_get: SAL error record type %d retrieved. "
+ "Record length = %ld\n", sal_info_type, total_len);
+ return total_len;
+ } else {
+ IA64_LOG_UNLOCK(sal_info_type);
+ prfunc("ia64_log_get: Failed to retrieve SAL error record type %d\n",
+ sal_info_type);
+ return 0;
+ }
}
/*
- * ia64_log_clear
- * Clear the current MCA log from SAL and dpending on the clear_os_buffer flags
- * clear the OS log buffer also
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
- * clear_os_buffer
+ * ia64_log_prt_oem_data
+ *
+ * Print OEM specific data if included.
+ *
+ * Inputs : header_len (length passed in section header)
+ * sect_len (default length of section type)
+ * p_data (ptr to data)
* prfunc (print function)
* Outputs : None
*
*/
void
-ia64_log_clear(int sal_info_type, int sal_sub_info_type, int clear_os_buffer, prfunc_t prfunc)
+ia64_log_prt_oem_data (int header_len, int sect_len, u8 *p_data, prfunc_t prfunc)
{
- if (ia64_sal_clear_state_info(sal_info_type))
- prfunc("ia64_mca_log_get : Clearing processor log failed\n");
-
- if (clear_os_buffer) {
- sal_log_header_t *log_buffer;
- int s;
-
- IA64_LOG_LOCK(sal_info_type, sal_sub_info_type);
+ int oem_data_len, i;
- /* Get the process state information */
- log_buffer = IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type);
-
- memset(log_buffer, 0, sizeof(ia64_psilog_t));
-
- IA64_LOG_INDEX_DEC(sal_info_type, sal_sub_info_type);
-
- IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type);
+ if ((oem_data_len = header_len - sect_len) > 0) {
+ prfunc(" OEM Specific Data:");
+ for (i = 0; i < oem_data_len; i++, p_data++)
+ prfunc(" %02x", *p_data);
}
+ prfunc("\n");
+}
+/*
+ * ia64_log_rec_header_print
+ *
+ * Log info from the SAL error record header.
+ *
+ * Inputs : lh * (ptr to SAL log error record header)
+ * prfunc (fn ptr of log output function to use)
+ * Outputs : None
+ */
+void
+ia64_log_rec_header_print (sal_log_record_header_t *lh, prfunc_t prfunc)
+{
+ char str_buf[32];
+
+ sprintf(str_buf, "%2d.%02d",
+ (lh->revision.major >> 4) * 10 + (lh->revision.major & 0xf),
+ (lh->revision.minor >> 4) * 10 + (lh->revision.minor & 0xf));
+ prfunc("+Err Record ID: %d SAL Rev: %s\n", lh->id, str_buf);
+ sprintf(str_buf, "%02d/%02d/%04d/ %02d:%02d:%02d",
+ (lh->timestamp.slh_month >> 4) * 10 +
+ (lh->timestamp.slh_month & 0xf),
+ (lh->timestamp.slh_day >> 4) * 10 +
+ (lh->timestamp.slh_day & 0xf),
+ (lh->timestamp.slh_century >> 4) * 1000 +
+ (lh->timestamp.slh_century & 0xf) * 100 +
+ (lh->timestamp.slh_year >> 4) * 10 +
+ (lh->timestamp.slh_year & 0xf),
+ (lh->timestamp.slh_hour >> 4) * 10 +
+ (lh->timestamp.slh_hour & 0xf),
+ (lh->timestamp.slh_minute >> 4) * 10 +
+ (lh->timestamp.slh_minute & 0xf),
+ (lh->timestamp.slh_second >> 4) * 10 +
+ (lh->timestamp.slh_second & 0xf));
+ prfunc("+Time: %s Severity %d\n", str_buf, lh->severity);
}
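[Editorial aside: the timestamp and revision fields decoded above are BCD-encoded — each byte carries two decimal digits, one per nibble. A small sketch of the same (x >> 4) * 10 + (x & 0xf) conversion, with the century/year pair combined into a four-digit year exactly as in ia64_log_rec_header_print:]

```c
#include <assert.h>

/* Decode one BCD byte: high nibble is the tens digit, low nibble the
 * ones digit.  Same arithmetic as the prints in the record header. */
static int bcd_to_int(unsigned char bcd)
{
	return (bcd >> 4) * 10 + (bcd & 0x0f);
}

/* The SAL timestamp splits the year across two BCD bytes (century and
 * year-within-century); combine them as the kernel code does. */
static int bcd_year(unsigned char century, unsigned char year)
{
	return bcd_to_int(century) * 100 + bcd_to_int(year);
}
```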
/*
@@ -729,6 +1016,33 @@
prfunc("+ %s[%d] 0x%lx\n", reg_prefix, i, regs[i]);
}
+/*
+ * ia64_log_processor_fp_regs_print
+ * Print the contents of the saved floating point register(s) in the format
+ * <reg_prefix>[<index>] <value>
+ *
+ * Inputs: ia64_fpreg (Register save buffer)
+ * reg_num (# of registers)
+ * reg_class (application/banked/control/bank1_general)
+ * reg_prefix (ar/br/cr/b1_gr)
+ * Outputs: None
+ *
+ */
+void
+ia64_log_processor_fp_regs_print (struct ia64_fpreg *regs,
+ int reg_num,
+ char *reg_class,
+ char *reg_prefix,
+ prfunc_t prfunc)
+{
+ int i;
+
+ prfunc("+%s Registers\n", reg_class);
+ for (i = 0; i < reg_num; i++)
+ prfunc("+ %s[%d] 0x%lx%016lx\n", reg_prefix, i, regs[i].u.bits[1],
+ regs[i].u.bits[0]);
+}
+
static char *pal_mesi_state[] = {
"Invalid",
"Shared",
@@ -754,69 +1068,91 @@
/*
* ia64_log_cache_check_info_print
* Display the machine check information related to cache error(s).
- * Inputs : i (Multiple errors are logged, i - index of logged error)
- * info (Machine check info logged by the PAL and later
+ * Inputs: i (Multiple errors are logged, i - index of logged error)
+ * cc_info * (Ptr to cache check info logged by the PAL and later
* captured by the SAL)
- * target_addr (Address which caused the cache error)
- * Outputs : None
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
*/
void
-ia64_log_cache_check_info_print(int i,
- pal_cache_check_info_t info,
- u64 target_addr,
- prfunc_t prfunc)
+ia64_log_cache_check_info_print (int i,
+ sal_log_mod_error_info_t *cache_check_info,
+ prfunc_t prfunc)
{
+ pal_cache_check_info_t *info;
+ u64 target_addr;
+
+ if (!cache_check_info->valid.check_info) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid cache_check_info[%d]\n", i);
+ return; /* If check info data not valid, skip it */
+ }
+
+ info = (pal_cache_check_info_t *)&cache_check_info->check_info;
+ target_addr = cache_check_info->target_identifier;
+
prfunc("+ Cache check info[%d]\n+", i);
- prfunc(" Level: L%d",info.level);
- if (info.mv)
- prfunc(" ,Mesi: %s",pal_mesi_state[info.mesi]);
- prfunc(" ,Index: %d,", info.index);
- if (info.ic)
- prfunc(" ,Cache: Instruction");
- if (info.dc)
- prfunc(" ,Cache: Data");
- if (info.tl)
- prfunc(" ,Line: Tag");
- if (info.dl)
- prfunc(" ,Line: Data");
- prfunc(" ,Operation: %s,", pal_cache_op[info.op]);
- if (info.wv)
- prfunc(" ,Way: %d,", info.way);
- if (info.tv)
- prfunc(" ,Target Addr: 0x%lx", target_addr);
- if (info.mc)
- prfunc(" ,MC: Corrected");
+ prfunc(" Level: L%d,",info->level);
+ if (info->mv)
+ prfunc(" Mesi: %s,",pal_mesi_state[info->mesi]);
+ prfunc(" Index: %d,", info->index);
+ if (info->ic)
+ prfunc(" Cache: Instruction,");
+ if (info->dc)
+ prfunc(" Cache: Data,");
+ if (info->tl)
+ prfunc(" Line: Tag,");
+ if (info->dl)
+ prfunc(" Line: Data,");
+ prfunc(" Operation: %s,", pal_cache_op[info->op]);
+ if (info->wv)
+ prfunc(" Way: %d,", info->way);
+ if (cache_check_info->valid.target_identifier)
+ /* Hope target address is saved in target_identifier */
+ if (info->tv)
+ prfunc(" Target Addr: 0x%lx,", target_addr);
+ if (info->mc)
+ prfunc(" MC: Corrected");
prfunc("\n");
}
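[Editorial aside: the handler above reinterprets a raw 64-bit check-info word from PAL through a bitfield struct (pal_cache_check_info_t). A portable sketch of the same decode using explicit masks — the bit positions below are purely hypothetical, chosen for illustration; the real layout is defined by the PAL specification:]

```c
#include <assert.h>
#include <stdint.h>

/* Decode individual fields out of a packed check-info word.  Bit
 * positions are made up for this sketch; see the PAL spec for the
 * real pal_cache_check_info_t layout. */
static unsigned cc_level(uint64_t raw) { return (unsigned)(raw & 0x3); }        /* cache level */
static unsigned cc_ic(uint64_t raw)    { return (unsigned)((raw >> 2) & 1); }   /* i-cache hit */
static unsigned cc_dc(uint64_t raw)    { return (unsigned)((raw >> 3) & 1); }   /* d-cache hit */
static unsigned cc_mc(uint64_t raw)    { return (unsigned)((raw >> 4) & 1); }   /* corrected */
```

The kernel uses a bitfield struct cast instead of shifts, which is terser but relies on the compiler's bitfield layout matching the architected word.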
/*
* ia64_log_tlb_check_info_print
* Display the machine check information related to tlb error(s).
- * Inputs : i (Multiple errors are logged, i - index of logged error)
- * info (Machine check info logged by the PAL and later
+ * Inputs: i (Multiple errors are logged, i - index of logged error)
+ * tlb_info * (Ptr to machine check info logged by the PAL and later
* captured by the SAL)
- * Outputs : None
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
*/
-
void
-ia64_log_tlb_check_info_print(int i,
- pal_tlb_check_info_t info,
- prfunc_t prfunc)
+ia64_log_tlb_check_info_print (int i,
+ sal_log_mod_error_info_t *tlb_check_info,
+ prfunc_t prfunc)
+
{
+ pal_tlb_check_info_t *info;
+
+ if (!tlb_check_info->valid.check_info) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid tlb_check_info[%d]\n", i);
+ return; /* If check info data not valid, skip it */
+ }
+
+ info = (pal_tlb_check_info_t *)&tlb_check_info->check_info;
+
prfunc("+ TLB Check Info [%d]\n+", i);
- if (info.itc)
+ if (info->itc)
prfunc(" Failure: Instruction Translation Cache");
- if (info.dtc)
+ if (info->dtc)
prfunc(" Failure: Data Translation Cache");
- if (info.itr) {
+ if (info->itr) {
prfunc(" Failure: Instruction Translation Register");
- prfunc(" ,Slot: %d", info.tr_slot);
+ prfunc(" ,Slot: %d", info->tr_slot);
}
- if (info.dtr) {
+ if (info->dtr) {
prfunc(" Failure: Data Translation Register");
- prfunc(" ,Slot: %d", info.tr_slot);
+ prfunc(" ,Slot: %d", info->tr_slot);
}
- if (info.mc)
+ if (info->mc)
prfunc(" ,MC: Corrected");
prfunc("\n");
}
@@ -824,159 +1160,719 @@
/*
* ia64_log_bus_check_info_print
* Display the machine check information related to bus error(s).
- * Inputs : i (Multiple errors are logged, i - index of logged error)
- * info (Machine check info logged by the PAL and later
+ * Inputs: i (Multiple errors are logged, i - index of logged error)
+ * bus_info * (Ptr to machine check info logged by the PAL and later
* captured by the SAL)
- * req_addr (Address of the requestor of the transaction)
- * resp_addr (Address of the responder of the transaction)
- * target_addr (Address where the data was to be delivered to or
- * obtained from)
- * Outputs : None
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
*/
void
-ia64_log_bus_check_info_print(int i,
- pal_bus_check_info_t info,
- u64 req_addr,
- u64 resp_addr,
- u64 targ_addr,
- prfunc_t prfunc)
-{
+ia64_log_bus_check_info_print (int i,
+ sal_log_mod_error_info_t *bus_check_info,
+ prfunc_t prfunc)
+{
+ pal_bus_check_info_t *info;
+ u64 req_addr; /* Address of the requestor of the transaction */
+ u64 resp_addr; /* Address of the responder of the transaction */
+ u64 targ_addr; /* Address where the data was to be delivered to */
+ /* or obtained from */
+
+ if (!bus_check_info->valid.check_info) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid bus_check_info[%d]\n", i);
+ return; /* If check info data not valid, skip it */
+ }
+
+ info = (pal_bus_check_info_t *)&bus_check_info->check_info;
+ req_addr = bus_check_info->requestor_identifier;
+ resp_addr = bus_check_info->responder_identifier;
+ targ_addr = bus_check_info->target_identifier;
+
prfunc("+ BUS Check Info [%d]\n+", i);
- prfunc(" Status Info: %d", info.bsi);
- prfunc(" ,Severity: %d", info.sev);
- prfunc(" ,Transaction Type: %d", info.type);
- prfunc(" ,Transaction Size: %d", info.size);
- if (info.cc)
+ prfunc(" Status Info: %d", info->bsi);
+ prfunc(" ,Severity: %d", info->sev);
+ prfunc(" ,Transaction Type: %d", info->type);
+ prfunc(" ,Transaction Size: %d", info->size);
+ if (info->cc)
prfunc(" ,Cache-cache-transfer");
- if (info.ib)
+ if (info->ib)
prfunc(" ,Error: Internal");
- if (info.eb)
+ if (info->eb)
prfunc(" ,Error: External");
- if (info.mc)
+ if (info->mc)
prfunc(" ,MC: Corrected");
- if (info.tv)
+ if (info->tv)
prfunc(" ,Target Address: 0x%lx", targ_addr);
- if (info.rq)
+ if (info->rq)
prfunc(" ,Requestor Address: 0x%lx", req_addr);
- if (info.tv)
+ if (info->tv)
prfunc(" ,Responder Address: 0x%lx", resp_addr);
prfunc("\n");
}
/*
+ * ia64_log_mem_dev_err_info_print
+ *
+ * Format and log the platform memory device error record section data.
+ *
+ * Inputs: mem_dev_err_info * (Ptr to memory device error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_mem_dev_err_info_print (sal_log_mem_dev_err_info_t *mdei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Mem Error Detail: ");
+
+ if (mdei->valid.error_status)
+ prfunc(" Error Status: %#lx,", mdei->error_status);
+ if (mdei->valid.physical_addr)
+ prfunc(" Physical Address: %#lx,", mdei->physical_addr);
+ if (mdei->valid.addr_mask)
+ prfunc(" Address Mask: %#lx,", mdei->addr_mask);
+ if (mdei->valid.node)
+ prfunc(" Node: %d,", mdei->node);
+ if (mdei->valid.card)
+ prfunc(" Card: %d,", mdei->card);
+ if (mdei->valid.module)
+ prfunc(" Module: %d,", mdei->module);
+ if (mdei->valid.bank)
+ prfunc(" Bank: %d,", mdei->bank);
+ if (mdei->valid.device)
+ prfunc(" Device: %d,", mdei->device);
+ if (mdei->valid.row)
+ prfunc(" Row: %d,", mdei->row);
+ if (mdei->valid.column)
+ prfunc(" Column: %d,", mdei->column);
+ if (mdei->valid.bit_position)
+ prfunc(" Bit Position: %d,", mdei->bit_position);
+ if (mdei->valid.target_id)
+ prfunc(" Target Address: %#lx,", mdei->target_id);
+ if (mdei->valid.requestor_id)
+ prfunc(" Requestor Address: %#lx,", mdei->requestor_id);
+ if (mdei->valid.responder_id)
+ prfunc(" Responder Address: %#lx,", mdei->responder_id);
+ if (mdei->valid.bus_spec_data)
+ prfunc(" Bus Specific Data: %#lx,", mdei->bus_spec_data);
+ prfunc("\n");
+
+ if (mdei->valid.oem_id) {
+ u8 *p_data = &(mdei->oem_id[0]);
+ int i;
+
+ prfunc(" OEM Memory Controller ID:");
+ for (i = 0; i < 16; i++, p_data++)
+ prfunc(" %02x", *p_data);
+ prfunc("\n");
+ }
+
+ if (mdei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)mdei->header.len,
+ (int)sizeof(sal_log_mem_dev_err_info_t) - 1,
+ &(mdei->oem_data[0]), prfunc);
+ }
+}
+
+/*
+ * ia64_log_sel_dev_err_info_print
+ *
+ * Format and log the platform SEL device error record section data.
+ *
+ * Inputs: sel_dev_err_info * (Ptr to the SEL device error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_sel_dev_err_info_print (sal_log_sel_dev_err_info_t *sdei,
+ prfunc_t prfunc)
+{
+ int i;
+
+ prfunc("+ SEL Device Error Detail: ");
+
+ if (sdei->valid.record_id)
+ prfunc(" Record ID: %#x", sdei->record_id);
+ if (sdei->valid.record_type)
+ prfunc(" Record Type: %#x", sdei->record_type);
+ prfunc(" Time Stamp: ");
+ for (i = 0; i < 4; i++)
+ prfunc("%1d", sdei->timestamp[i]);
+ if (sdei->valid.generator_id)
+ prfunc(" Generator ID: %#x", sdei->generator_id);
+ if (sdei->valid.evm_rev)
+ prfunc(" Message Format Version: %#x", sdei->evm_rev);
+ if (sdei->valid.sensor_type)
+ prfunc(" Sensor Type: %#x", sdei->sensor_type);
+ if (sdei->valid.sensor_num)
+ prfunc(" Sensor Number: %#x", sdei->sensor_num);
+ if (sdei->valid.event_dir)
+ prfunc(" Event Direction Type: %#x", sdei->event_dir);
+ if (sdei->valid.event_data1)
+ prfunc(" Data1: %#x", sdei->event_data1);
+ if (sdei->valid.event_data2)
+ prfunc(" Data2: %#x", sdei->event_data2);
+ if (sdei->valid.event_data3)
+ prfunc(" Data3: %#x", sdei->event_data3);
+ prfunc("\n");
+
+}
+
+/*
+ * ia64_log_pci_bus_err_info_print
+ *
+ * Format and log the platform PCI bus error record section data.
+ *
+ * Inputs: pci_bus_err_info * (Ptr to the PCI bus error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_pci_bus_err_info_print (sal_log_pci_bus_err_info_t *pbei,
+ prfunc_t prfunc)
+{
+ prfunc("+ PCI Bus Error Detail: ");
+
+ if (pbei->valid.err_status)
+ prfunc(" Error Status: %#lx", pbei->err_status);
+ if (pbei->valid.err_type)
+ prfunc(" Error Type: %#x", pbei->err_type);
+ if (pbei->valid.bus_id)
+ prfunc(" Bus ID: %#x", pbei->bus_id);
+ if (pbei->valid.bus_address)
+ prfunc(" Bus Address: %#lx", pbei->bus_address);
+ if (pbei->valid.bus_data)
+ prfunc(" Bus Data: %#lx", pbei->bus_data);
+ if (pbei->valid.bus_cmd)
+ prfunc(" Bus Command: %#lx", pbei->bus_cmd);
+ if (pbei->valid.requestor_id)
+ prfunc(" Requestor ID: %#lx", pbei->requestor_id);
+ if (pbei->valid.responder_id)
+ prfunc(" Responder ID: %#lx", pbei->responder_id);
+ if (pbei->valid.target_id)
+ prfunc(" Target ID: %#lx", pbei->target_id);
+ prfunc("\n");
+
+ if (pbei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)pbei->header.len,
+ (int)sizeof(sal_log_pci_bus_err_info_t) - 1,
+ &(pbei->oem_data[0]), prfunc);
+ }
+}
+
+/*
+ * ia64_log_smbios_dev_err_info_print
+ *
+ * Format and log the platform SMBIOS device error record section data.
+ *
+ * Inputs: smbios_dev_err_info * (Ptr to the SMBIOS device error record
+ * section returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_smbios_dev_err_info_print (sal_log_smbios_dev_err_info_t *sdei,
+ prfunc_t prfunc)
+{
+ u8 i;
+
+ prfunc("+ SMBIOS Device Error Detail: ");
+
+ if (sdei->valid.event_type)
+ prfunc(" Event Type: %#x", sdei->event_type);
+ if (sdei->valid.time_stamp) {
+ prfunc(" Time Stamp: ");
+ for (i = 0; i < 6; i++)
+ prfunc("%d", sdei->time_stamp[i]);
+ }
+ if ((sdei->valid.data) && (sdei->valid.length)) {
+ prfunc(" Data: ");
+ for (i = 0; i < sdei->length; i++)
+ prfunc(" %02x", sdei->data[i]);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_pci_comp_err_info_print
+ *
+ * Format and log the platform PCI component error record section data.
+ *
+ * Inputs: pci_comp_err_info * (Ptr to the PCI component error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_pci_comp_err_info_print(sal_log_pci_comp_err_info_t *pcei,
+ prfunc_t prfunc)
+{
+ u32 n_mem_regs, n_io_regs;
+ u64 i, n_pci_data;
+ u64 *p_reg_data;
+ u8 *p_oem_data;
+
+ prfunc("+ PCI Component Error Detail: ");
+
+ if (pcei->valid.err_status)
+ prfunc(" Error Status: %#lx\n", pcei->err_status);
+ if (pcei->valid.comp_info)
+ prfunc(" Component Info: Vendor Id = %#x, Device Id = %#x,"
+ " Class Code = %#x, Seg/Bus/Dev/Func = %d/%d/%d/%d\n",
+ pcei->comp_info.vendor_id, pcei->comp_info.device_id,
+ pcei->comp_info.class_code, pcei->comp_info.seg_num,
+ pcei->comp_info.bus_num, pcei->comp_info.dev_num,
+ pcei->comp_info.func_num);
+
+ n_mem_regs = (pcei->valid.num_mem_regs) ? pcei->num_mem_regs : 0;
+ n_io_regs = (pcei->valid.num_io_regs) ? pcei->num_io_regs : 0;
+ p_reg_data = &(pcei->reg_data_pairs[0]);
+ p_oem_data = (u8 *)p_reg_data +
+ (n_mem_regs + n_io_regs) * 2 * sizeof(u64);
+ n_pci_data = p_oem_data - (u8 *)pcei;
+
+ if (n_pci_data > pcei->header.len) {
+ prfunc(" Invalid PCI Component Error Record format: length = %d, "
+ " Size PCI Data = %ld, Num Mem-Map/IO-Map Regs = %d/%d\n",
+ pcei->header.len, n_pci_data, n_mem_regs, n_io_regs);
+ return;
+ }
+
+ if (n_mem_regs) {
+ prfunc(" Memory Mapped Registers\n Address \tValue\n");
+ for (i = 0; i < pcei->num_mem_regs; i++) {
+ prfunc(" %#lx %#lx\n", p_reg_data[0], p_reg_data[1]);
+ p_reg_data += 2;
+ }
+ }
+ if (n_io_regs) {
+ prfunc(" I/O Mapped Registers\n Address \tValue\n");
+ for (i = 0; i < pcei->num_io_regs; i++) {
+ prfunc(" %#lx %#lx\n", p_reg_data[0], p_reg_data[1]);
+ p_reg_data += 2;
+ }
+ }
+ if (pcei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)pcei->header.len, n_pci_data,
+ p_oem_data, prfunc);
+ prfunc("\n");
+ }
+}
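[Editorial aside: the PCI component record above is variable-length — a fixed header, then num_mem_regs + num_io_regs (address, value) u64 pairs, then OEM data up to header.len. A sketch of the same layout arithmetic and the sanity check against the declared record length (struct name and fields are illustrative):]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative variable-length record: fixed fields followed by
 * 2 u64s per register, then OEM data out to len bytes total. */
struct fake_pci_comp_rec {
	uint32_t len;           /* total record length in bytes */
	uint32_t num_mem_regs;
	uint32_t num_io_regs;
	uint32_t pad;
	uint64_t reg_data_pairs[];  /* (address, value) pairs */
};

/* Byte offset of the OEM data, or -1 if the register counts would
 * overrun the declared length -- the same sanity check as the
 * kernel's "n_pci_data > pcei->header.len" test. */
static long oem_data_offset(const struct fake_pci_comp_rec *r)
{
	unsigned long fixed = sizeof(*r);
	unsigned long pairs = (unsigned long)(r->num_mem_regs + r->num_io_regs)
			      * 2 * sizeof(uint64_t);
	unsigned long off = fixed + pairs;

	return (off > r->len) ? -1 : (long)off;
}
```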
+
+/*
+ * ia64_log_plat_specific_err_info_print
+ *
+ * Format and log the platform specific error record section data.
+ *
+ * Inputs: plat_specific_err_info * (Ptr to the platform specific error
+ * record section returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_plat_specific_err_info_print (sal_log_plat_specific_err_info_t *psei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Platform Specific Error Detail: ");
+
+ if (psei->valid.err_status)
+ prfunc(" Error Status: %#lx", psei->err_status);
+ if (psei->valid.guid) {
+ prfunc(" GUID: ");
+ ia64_log_prt_guid(&psei->guid, prfunc);
+ }
+ if (psei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)psei->header.len,
+ (int)sizeof(sal_log_plat_specific_err_info_t) - 1,
+ &(psei->oem_data[0]), prfunc);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_host_ctlr_err_info_print
+ *
+ * Format and log the platform host controller error record section data.
+ *
+ * Inputs: host_ctlr_err_info * (Ptr to the host controller error record
+ * section returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_host_ctlr_err_info_print (sal_log_host_ctlr_err_info_t *hcei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Host Controller Error Detail: ");
+
+ if (hcei->valid.err_status)
+ prfunc(" Error Status: %#lx", hcei->err_status);
+ if (hcei->valid.requestor_id)
+ prfunc(" Requestor ID: %#lx", hcei->requestor_id);
+ if (hcei->valid.responder_id)
+ prfunc(" Responder ID: %#lx", hcei->responder_id);
+ if (hcei->valid.target_id)
+ prfunc(" Target ID: %#lx", hcei->target_id);
+ if (hcei->valid.bus_spec_data)
+ prfunc(" Bus Specific Data: %#lx", hcei->bus_spec_data);
+ if (hcei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)hcei->header.len,
+ (int)sizeof(sal_log_host_ctlr_err_info_t) - 1,
+ &(hcei->oem_data[0]), prfunc);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_plat_bus_err_info_print
+ *
+ * Format and log the platform bus error record section data.
+ *
+ * Inputs: plat_bus_err_info * (Ptr to the platform bus error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_plat_bus_err_info_print (sal_log_plat_bus_err_info_t *pbei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Platform Bus Error Detail: ");
+
+ if (pbei->valid.err_status)
+ prfunc(" Error Status: %#lx", pbei->err_status);
+ if (pbei->valid.requestor_id)
+ prfunc(" Requestor ID: %#lx", pbei->requestor_id);
+ if (pbei->valid.responder_id)
+ prfunc(" Responder ID: %#lx", pbei->responder_id);
+ if (pbei->valid.target_id)
+ prfunc(" Target ID: %#lx", pbei->target_id);
+ if (pbei->valid.bus_spec_data)
+ prfunc(" Bus Specific Data: %#lx", pbei->bus_spec_data);
+ if (pbei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)pbei->header.len,
+ (int)sizeof(sal_log_plat_bus_err_info_t) - 1,
+ &(pbei->oem_data[0]), prfunc);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_proc_dev_err_info_print
+ *
+ * Display the processor device error record.
+ *
+ * Inputs: sal_log_processor_info_t * (Ptr to processor device error record
+ * section body).
+ * prfunc (fn ptr of print function to be used
+ * for output).
+ * Outputs: None
+ */
+void
+ia64_log_proc_dev_err_info_print (sal_log_processor_info_t *slpi,
+ prfunc_t prfunc)
+{
+ size_t d_len = slpi->header.len - sizeof(sal_log_section_hdr_t);
+ sal_processor_static_info_t *spsi;
+ int i;
+ sal_log_mod_error_info_t *p_data;
+
+ prfunc("+Processor Device Error Info Section\n");
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ {
+ char *p_data = (char *)&slpi->valid;
+
+ prfunc("SAL_PROC_DEV_ERR SECTION DATA: Data buffer = %p, "
+ "Data size = %ld\n", (void *)p_data, d_len);
+ ia64_log_hexdump(p_data, d_len, prfunc);
+ prfunc("End of SAL_PROC_DEV_ERR SECTION DATA\n");
+ }
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if (slpi->valid.proc_error_map)
+ prfunc(" Processor Error Map: %#lx\n", slpi->proc_error_map);
+
+ if (slpi->valid.proc_state_param)
+ prfunc(" Processor State Param: %#lx\n", slpi->proc_state_parameter);
+
+ if (slpi->valid.proc_cr_lid)
+ prfunc(" Processor LID: %#lx\n", slpi->proc_cr_lid);
+
+ /*
+ * Note: March 2001 SAL spec states that if the number of elements in any
+ * of the MOD_ERROR_INFO_STRUCT arrays is zero, the entire array is
+ * absent. Also, current implementations only allocate space for the number of
+ * elements used. So we walk the data pointer from here on.
+ */
+ p_data = &slpi->cache_check_info[0];
+
+ /* Print the cache check information if any */
+ for (i = 0 ; i < slpi->valid.num_cache_check; i++, p_data++)
+ ia64_log_cache_check_info_print(i, p_data, prfunc);
+
+ /* Print the tlb check information if any */
+ for (i = 0 ; i < slpi->valid.num_tlb_check; i++, p_data++)
+ ia64_log_tlb_check_info_print(i, p_data, prfunc);
+
+ /* Print the bus check information if any */
+ for (i = 0 ; i < slpi->valid.num_bus_check; i++, p_data++)
+ ia64_log_bus_check_info_print(i, p_data, prfunc);
+
+ /* Print the reg file check information if any */
+ for (i = 0 ; i < slpi->valid.num_reg_file_check; i++, p_data++)
+ ia64_log_hexdump((u8 *)p_data, sizeof(sal_log_mod_error_info_t),
+ prfunc); /* Just hex dump for now */
+
+ /* Print the ms check information if any */
+ for (i = 0 ; i < slpi->valid.num_ms_check; i++, p_data++)
+ ia64_log_hexdump((u8 *)p_data, sizeof(sal_log_mod_error_info_t),
+ prfunc); /* Just hex dump for now */
+
+ /* Print CPUID registers if any */
+ if (slpi->valid.cpuid_info) {
+ u64 *p = (u64 *)p_data;
+
+ prfunc(" CPUID Regs: %#lx %#lx %#lx %#lx\n", p[0], p[1], p[2], p[3]);
+ p_data++;
+ }
+
+ /* Print processor static info if any */
+ if (slpi->valid.psi_static_struct) {
+ spsi = (sal_processor_static_info_t *)p_data;
+
+ /* Print branch register contents if valid */
+ if (spsi->valid.br)
+ ia64_log_processor_regs_print(spsi->br, 8, "Branch", "br",
+ prfunc);
+
+ /* Print control register contents if valid */
+ if (spsi->valid.cr)
+ ia64_log_processor_regs_print(spsi->cr, 128, "Control", "cr",
+ prfunc);
+
+ /* Print application register contents if valid */
+ if (spsi->valid.ar)
+ ia64_log_processor_regs_print(spsi->ar, 128, "Application",
+ "ar", prfunc);
+
+ /* Print region register contents if valid */
+ if (spsi->valid.rr)
+ ia64_log_processor_regs_print(spsi->rr, 8, "Region", "rr",
+ prfunc);
+
+ /* Print floating-point register contents if valid */
+ if (spsi->valid.fr)
+ ia64_log_processor_fp_regs_print(spsi->fr, 128, "Floating-point", "fr",
+ prfunc);
+ }
+}
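[Editorial aside: per the spec note in the function above, the MOD_ERROR_INFO arrays are packed back-to-back with only the entries actually used, so the code advances one cursor pointer across the cache, TLB, and bus check entries in order. A minimal sketch of that cursor-style walk (entry type and the summing payload are illustrative):]

```c
#include <assert.h>
#include <stdint.h>

struct fake_mod_err { uint64_t data; };   /* stand-in for sal_log_mod_error_info_t */

/* Walk one pointer across three packed, back-to-back arrays, the way
 * ia64_log_proc_dev_err_info_print walks p_data.  Summing stands in
 * for the per-entry print calls. */
static uint64_t walk_checks(const struct fake_mod_err *p,
			    int n_cache, int n_tlb, int n_bus)
{
	uint64_t sum = 0;
	int i;

	for (i = 0; i < n_cache; i++, p++) sum += p->data;  /* cache checks */
	for (i = 0; i < n_tlb; i++, p++)   sum += p->data;  /* tlb checks */
	for (i = 0; i < n_bus; i++, p++)   sum += p->data;  /* bus checks */
	return sum;
}
```

The key point is that no array has a fixed base: each starts wherever the previous one ended, so skipping a count would misalign every later section.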
+
+/*
* ia64_log_processor_info_print
+ *
* Display the processor-specific information logged by PAL as a part
* of MCA or INIT or CMC.
- * Inputs : lh (Pointer of the sal log header which specifies the format
- * of SAL state info as specified by the SAL spec).
+ *
+ * Inputs : lh (Pointer of the sal log header which specifies the
+ * format of SAL state info as specified by the SAL spec).
+ * prfunc (fn ptr of print function to be used for output).
* Outputs : None
*/
void
-ia64_log_processor_info_print(sal_log_header_t *lh, prfunc_t prfunc)
+ia64_log_processor_info_print(sal_log_record_header_t *lh, prfunc_t prfunc)
{
- sal_log_processor_info_t *slpi;
- int i;
+ sal_log_section_hdr_t *slsh;
+ int n_sects;
+ int ercd_pos;
if (!lh)
return;
- if (lh->slh_log_type != SAL_SUB_INFO_TYPE_PROCESSOR)
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_record_header(lh, prfunc);
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if ((ercd_pos = sizeof(sal_log_record_header_t)) >= lh->len) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
+ "truncated SAL CMC error record. len = %d\n",
+ lh->len);
return;
+ }
- slpi = (sal_log_processor_info_t *)((char *)lh+sizeof(sal_log_header_t)); /* point to proc info */
+ /* Print record header info */
+ ia64_log_rec_header_print(lh, prfunc);
- if (!slpi) {
- prfunc("No Processor Error Log found\n");
- return;
+ for (n_sects = 0; (ercd_pos < lh->len); n_sects++, ercd_pos += slsh->len) {
+ /* point to next section header */
+ slsh = (sal_log_section_hdr_t *)((char *)lh + ercd_pos);
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_section_header(slsh, prfunc);
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if (verify_guid((void *)&slsh->guid, (void *)&(SAL_PROC_DEV_ERR_SECT_GUID))) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n");
+ continue;
+ }
+
+ /*
+ * Now process processor device error record section
+ */
+ ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh,
+ printk);
}
- /* Print branch register contents if valid */
- if (slpi->slpi_valid.slpi_br)
- ia64_log_processor_regs_print(slpi->slpi_br, 8, "Branch", "br", prfunc);
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
+ "found %d sections in SAL CMC error record. len = %d\n",
+ n_sects, lh->len);
+ if (!n_sects) {
+ prfunc("No Processor Device Error Info Section found\n");
+ return;
+ }
+}
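[Aside, not part of the patch: the section walk above starts just past the record header and advances by each section's own `len` field. A minimal user-space sketch of the same walk; `rec_hdr`/`sect_hdr` are hypothetical stand-ins for the SAL types, which carry many more fields.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for sal_log_record_header_t / sal_log_section_hdr_t;
 * only the length fields matter for the walk. */
struct rec_hdr  { unsigned int len; };  /* total record length, header included */
struct sect_hdr { unsigned int len; };  /* this section's length, header included */

/* Count sections the way ia64_log_processor_info_print() iterates them:
 * start past the record header, then step by each section's own length. */
static int count_sections(const char *rec)
{
    const struct rec_hdr *lh = (const struct rec_hdr *)rec;
    unsigned int pos = sizeof(struct rec_hdr);
    int n = 0;

    if (pos >= lh->len)     /* truncated record, like the len check above */
        return 0;
    while (pos < lh->len) {
        const struct sect_hdr *s = (const struct sect_hdr *)(rec + pos);
        n++;
        pos += s->len;
    }
    return n;
}
```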
- /* Print control register contents if valid */
- if (slpi->slpi_valid.slpi_cr)
- ia64_log_processor_regs_print(slpi->slpi_cr, 128, "Control", "cr", prfunc);
+/*
+ * ia64_log_platform_info_print
+ *
+ * Format and Log the SAL Platform Error Record.
+ *
+ * Inputs : lh (Pointer to the sal error record header with format
+ * specified by the SAL spec).
+ * prfunc (fn ptr of log output function to use)
+ * Outputs : None
+ */
+void
+ia64_log_platform_info_print (sal_log_record_header_t *lh, prfunc_t prfunc)
+{
+ sal_log_section_hdr_t *slsh;
+ int n_sects;
+ int ercd_pos;
- /* Print application register contents if valid */
- if (slpi->slpi_valid.slpi_ar)
- ia64_log_processor_regs_print(slpi->slpi_br, 128, "Application", "ar", prfunc);
+ if (!lh)
+ return;
- /* Print region register contents if valid */
- if (slpi->slpi_valid.slpi_rr)
- ia64_log_processor_regs_print(slpi->slpi_rr, 8, "Region", "rr", prfunc);
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_record_header(lh, prfunc);
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if ((ercd_pos = sizeof(sal_log_record_header_t)) >= lh->len) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
+ "truncated SAL error record. len = %d\n",
+ lh->len);
+ return;
+ }
- /* Print floating-point register contents if valid */
- if (slpi->slpi_valid.slpi_fr)
- ia64_log_processor_regs_print(slpi->slpi_fr, 128, "Floating-point", "fr",
- prfunc);
+ /* Print record header info */
+ ia64_log_rec_header_print(lh, prfunc);
- /* Print the cache check information if any*/
- for (i = 0 ; i < MAX_CACHE_ERRORS; i++)
- ia64_log_cache_check_info_print(i,
- slpi->slpi_cache_check_info[i].slpi_cache_check,
- slpi->slpi_cache_check_info[i].slpi_target_address,
- prfunc);
- /* Print the tlb check information if any*/
- for (i = 0 ; i < MAX_TLB_ERRORS; i++)
- ia64_log_tlb_check_info_print(i,slpi->slpi_tlb_check_info[i], prfunc);
+ for (n_sects = 0; (ercd_pos < lh->len); n_sects++, ercd_pos += slsh->len) {
+ /* point to next section header */
+ slsh = (sal_log_section_hdr_t *)((char *)lh + ercd_pos);
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_section_header(slsh, prfunc);
+
+ if (efi_guidcmp(slsh->guid, SAL_PROC_DEV_ERR_SECT_GUID) != 0) {
+ size_t d_len = slsh->len - sizeof(sal_log_section_hdr_t);
+ char *p_data = (char *)&((sal_log_mem_dev_err_info_t *)slsh)->valid;
+
+ prfunc("Start of Platform Err Data Section: Data buffer = %p, "
+ "Data size = %ld\n", (void *)p_data, d_len);
+ ia64_log_hexdump(p_data, d_len, prfunc);
+ prfunc("End of Platform Err Data Section\n");
+ }
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
- /* Print the bus check information if any*/
- for (i = 0 ; i < MAX_BUS_ERRORS; i++)
- ia64_log_bus_check_info_print(i,
- slpi->slpi_bus_check_info[i].slpi_bus_check,
- slpi->slpi_bus_check_info[i].slpi_requestor_addr,
- slpi->slpi_bus_check_info[i].slpi_responder_addr,
- slpi->slpi_bus_check_info[i].slpi_target_addr,
- prfunc);
+ /*
+ * Now process CPE error record section
+ */
+ if (efi_guidcmp(slsh->guid, SAL_PROC_DEV_ERR_SECT_GUID) == 0) {
+ ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_MEM_DEV_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Memory Device Error Info Section\n");
+ ia64_log_mem_dev_err_info_print((sal_log_mem_dev_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SEL_DEV_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform SEL Device Error Info Section\n");
+ ia64_log_sel_dev_err_info_print((sal_log_sel_dev_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_BUS_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform PCI Bus Error Info Section\n");
+ ia64_log_pci_bus_err_info_print((sal_log_pci_bus_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SMBIOS_DEV_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform SMBIOS Device Error Info Section\n");
+ ia64_log_smbios_dev_err_info_print((sal_log_smbios_dev_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_COMP_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform PCI Component Error Info Section\n");
+ ia64_log_pci_comp_err_info_print((sal_log_pci_comp_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SPECIFIC_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Specific Error Info Section\n");
+ ia64_log_plat_specific_err_info_print((sal_log_plat_specific_err_info_t *)
+ slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_HOST_CTLR_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Host Controller Error Info Section\n");
+ ia64_log_host_ctlr_err_info_print((sal_log_host_ctlr_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_BUS_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Bus Error Info Section\n");
+ ia64_log_plat_bus_err_info_print((sal_log_plat_bus_err_info_t *)slsh,
+ prfunc);
+ } else {
+ IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n");
+ continue;
+ }
+ }
+ IA64_MCA_DEBUG("ia64_mca_log_print: found %d sections in SAL error record. len = %d\n",
+ n_sects, lh->len);
+ if (!n_sects) {
+ prfunc("No Platform Error Info Sections found\n");
+ return;
+ }
}
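[Aside, not part of the patch: the chain of `efi_guidcmp()` tests above is a plain GUID dispatch, with the final `else` logging "unsupported record section" and continuing. A hedged sketch of the same routing, using `memcmp` in place of `efi_guidcmp()` and a made-up 16-byte `guid_t`:]

```c
#include <string.h>

/* 16-byte GUID, mirroring efi_guid_t only in size; layout here is a guess. */
typedef struct { unsigned char b[16]; } guid_t;

/* Stand-in for efi_guidcmp(): returns 0 when equal, like memcmp. */
static int guid_cmp(guid_t a, guid_t b)
{
    return memcmp(&a, &b, sizeof(guid_t));
}

/* Route a section to a handler index by GUID; -1 plays the role of the
 * "unsupported record section" branch that just logs and continues. */
static int route_section(guid_t g, const guid_t *known, int n_known)
{
    int i;
    for (i = 0; i < n_known; i++)
        if (guid_cmp(g, known[i]) == 0)
            return i;
    return -1;
}
```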
/*
* ia64_log_print
- * Display the contents of the OS error log information
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ *
+ * Displays the contents of the OS error log information
+ *
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE})
+ * prfunc (fn ptr of log output function to use)
* Outputs : None
*/
void
-ia64_log_print(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc)
+ia64_log_print(int sal_info_type, prfunc_t prfunc)
{
- char *info_type, *sub_info_type;
-
switch(sal_info_type) {
- case SAL_INFO_TYPE_MCA:
- info_type = "MCA";
+ case SAL_INFO_TYPE_MCA:
+ prfunc("+BEGIN HARDWARE ERROR STATE AT MCA\n");
+ ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc);
+ prfunc("+END HARDWARE ERROR STATE AT MCA\n");
break;
- case SAL_INFO_TYPE_INIT:
- info_type = "INIT";
+ case SAL_INFO_TYPE_INIT:
+ prfunc("+MCA INIT ERROR LOG (UNIMPLEMENTED)\n");
break;
- case SAL_INFO_TYPE_CMC:
- info_type = "CMC";
+ case SAL_INFO_TYPE_CMC:
+ prfunc("+BEGIN HARDWARE ERROR STATE AT CMC\n");
+ ia64_log_processor_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc);
+ prfunc("+END HARDWARE ERROR STATE AT CMC\n");
break;
- default:
- info_type = "UNKNOWN";
+ case SAL_INFO_TYPE_CPE:
+ prfunc("+BEGIN HARDWARE ERROR STATE AT CPE\n");
+ ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc);
+ prfunc("+END HARDWARE ERROR STATE AT CPE\n");
break;
- }
-
- switch(sal_sub_info_type) {
- case SAL_SUB_INFO_TYPE_PROCESSOR:
- sub_info_type = "PROCESSOR";
- break;
- case SAL_SUB_INFO_TYPE_PLATFORM:
- sub_info_type = "PLATFORM";
- break;
- default:
- sub_info_type = "UNKNOWN";
+ default:
+ prfunc("+MCA UNKNOWN ERROR LOG (UNIMPLEMENTED)\n");
break;
}
-
- prfunc("+BEGIN HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type);
- if (sal_sub_info_type == SAL_SUB_INFO_TYPE_PROCESSOR)
- ia64_log_processor_info_print(
- IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type),
- prfunc);
- else
- log_print_platform(IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type),prfunc);
- prfunc("+END HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type);
}
diff -urN linux-2.4.13/arch/ia64/kernel/mca_asm.S linux-2.4.13-lia/arch/ia64/kernel/mca_asm.S
--- linux-2.4.13/arch/ia64/kernel/mca_asm.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/mca_asm.S Thu Oct 4 00:21:39 2001
@@ -9,6 +9,7 @@
//
#include <linux/config.h>
+#include <asm/asmmacro.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
@@ -23,7 +24,7 @@
#include "minstate.h"
/*
- * SAL_TO_OS_MCA_HANDOFF_STATE
+ * SAL_TO_OS_MCA_HANDOFF_STATE (SAL 3.0 spec)
* 1. GR1 = OS GP
* 2. GR8 = PAL_PROC physical address
* 3. GR9 = SAL_PROC physical address
@@ -33,6 +34,7 @@
*/
#define SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(_tmp) \
movl _tmp=ia64_sal_to_os_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
st8 [_tmp]=r1,0x08;; \
st8 [_tmp]=r8,0x08;; \
st8 [_tmp]=r9,0x08;; \
@@ -41,47 +43,29 @@
st8 [_tmp]=r12,0x08;;
/*
- * OS_MCA_TO_SAL_HANDOFF_STATE
- * 1. GR8 = OS_MCA status
- * 2. GR9 = SAL GP (physical)
- * 3. GR22 = New min state save area pointer
+ * OS_MCA_TO_SAL_HANDOFF_STATE (SAL 3.0 spec)
+ * 1. GR8 = OS_MCA return status
+ * 2. GR9 = SAL GP (physical)
+ * 3. GR10 = 0/1 returning same/new context
+ * 4. GR22 = New min state save area pointer
+ * returns ptr to SAL rtn save loc in _tmp
*/
-#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \
- movl _tmp=ia64_os_to_sal_handoff_state;; \
- DATA_VA_TO_PA(_tmp);; \
- ld8 r8=[_tmp],0x08;; \
- ld8 r9=[_tmp],0x08;; \
- ld8 r22=[_tmp],0x08;;
-
-/*
- * BRANCH
- * Jump to the instruction referenced by
- * "to_label".
- * Branch is taken only if the predicate
- * register "p" is true.
- * "ip" is the address of the instruction
- * located at "from_label".
- * "temp" is a scratch register like r2
- * "adjust" needed for HP compiler.
- * A screwup somewhere with constant arithmetic.
- */
-#define BRANCH(to_label, temp, p, adjust) \
-100: (p) mov temp=ip; \
- ;; \
- (p) adds temp=to_label-100b,temp;\
- ;; \
- (p) adds temp=adjust,temp; \
- ;; \
- (p) mov b1=temp ; \
- (p) br b1
+#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \
+ movl _tmp=ia64_os_to_sal_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
+ ld8 r8=[_tmp],0x08;; \
+ ld8 r9=[_tmp],0x08;; \
+ ld8 r10=[_tmp],0x08;; \
+ ld8 r22=[_tmp],0x08;; \
+ movl _tmp=ia64_sal_to_os_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
+ add _tmp=0x28,_tmp;; // point to SAL rtn save location
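[Aside, not part of the patch: the `add _tmp=0x28,_tmp` above points at the sixth 8-byte slot of the SAL-to-OS save area, i.e. the GR12 "return address to SAL_CHECK" the save macro stored last. A sketch of a mirroring C struct; the field names are guesses, not the kernel's `ia64_mca_sal_to_os_state_t`:]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of the handoff area the assembly fills with st8 at
 * 8-byte strides, in the order listed in the SAL_TO_OS comment block. */
struct sal_to_os_handoff {
    uint64_t os_gp;         /* GR1:  OS GP */
    uint64_t pal_proc;      /* GR8:  PAL_PROC physical address */
    uint64_t sal_proc;      /* GR9:  SAL_PROC physical address */
    uint64_t sal_gp;        /* GR10 */
    uint64_t rendez_state;  /* GR11 (name is a guess) */
    uint64_t sal_check_ra;  /* GR12: return address into SAL_CHECK, at 0x28 */
};
```

The `offsetof` of the last slot is exactly the 0x28 the macro adds, which is why the assembly and the C structure layout must stay in sync (as the "NOTE" in the dispatch code warns).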
.global ia64_os_mca_dispatch
.global ia64_os_mca_dispatch_end
.global ia64_sal_to_os_handoff_state
.global ia64_os_to_sal_handoff_state
- .global ia64_os_mca_ucmc_handler
.global ia64_mca_proc_state_dump
- .global ia64_mca_proc_state_restore
.global ia64_mca_stack
.global ia64_mca_stackframe
.global ia64_mca_bspstore
@@ -100,7 +84,7 @@
#endif /* #if defined(MCA_TEST) */
// Save the SAL to OS MCA handoff state as defined
- // by SAL SPEC 2.5
+ // by SAL SPEC 3.0
// NOTE : The order in which the state gets saved
// is dependent on the way the C-structure
// for ia64_mca_sal_to_os_state_t has been
@@ -110,15 +94,20 @@
// LOG PROCESSOR STATE INFO FROM HERE ON..
;;
begin_os_mca_dump:
- BRANCH(ia64_os_mca_proc_state_dump, r2, p0, 0x0)
- ;;
+ br ia64_os_mca_proc_state_dump;;
+
ia64_os_mca_done_dump:
// Setup new stack frame for OS_MCA handling
- movl r2=ia64_mca_bspstore // local bspstore area location in r2
- movl r3=ia64_mca_stackframe // save stack frame to memory in r3
+ movl r2=ia64_mca_bspstore;; // local bspstore area location in r2
+ DATA_VA_TO_PA(r2);;
+ movl r3=ia64_mca_stackframe;; // save stack frame to memory in r3
+ DATA_VA_TO_PA(r3);;
rse_switch_context(r6,r3,r2);; // RSC management in this new context
movl r12=ia64_mca_stack;;
+ mov r2=8*1024;; // stack size must be same as c array
+ add r12=r2,r12;; // stack base @ bottom of array
+ DATA_VA_TO_PA(r12);;
// Enter virtual mode from physical mode
VIRTUAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_begin, r4)
@@ -127,7 +116,7 @@
// call our handler
movl r2=ia64_mca_ucmc_handler;;
mov b6=r2;;
- br.call.sptk.few b0=b6
+ br.call.sptk.many b0=b6;;
.ret0:
// Revert back to physical mode before going back to SAL
PHYSICAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_end, r4)
@@ -135,9 +124,9 @@
#if defined(MCA_TEST)
// Pretend that we are in interrupt context
- mov r2=psr
- dep r2=0, r2, PSR_IC, 2;
- mov psr.l = r2
+ mov r2=psr;;
+ dep r2=0, r2, PSR_IC, 2;;
+ mov psr.l = r2;;
#endif /* #if defined(MCA_TEST) */
// restore the original stack frame here
@@ -152,15 +141,14 @@
mov r8=gp
;;
begin_os_mca_restore:
- BRANCH(ia64_os_mca_proc_state_restore, r2, p0, 0x0)
- ;;
+ br ia64_os_mca_proc_state_restore;;
ia64_os_mca_done_restore:
;;
// branch back to SALE_CHECK
OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(r2)
ld8 r3=[r2];;
- mov b0=r3 // SAL_CHECK return address
+ mov b0=r3;; // SAL_CHECK return address
br b0
;;
ia64_os_mca_dispatch_end:
@@ -178,8 +166,10 @@
//--
ia64_os_mca_proc_state_dump:
-// Get and save GR0-31 from Proc. Min. State Save Area to SAL PSI
+// Save bank 1 GRs 16-31 which will be used by c-language code when we switch
+// to virtual addressing mode.
movl r2=ia64_mca_proc_state_dump;; // Os state dump area
+ DATA_VA_TO_PA(r2) // convert to physical address
// save ar.NaT
mov r5=ar.unat // ar.unat
@@ -250,16 +240,16 @@
// if PSR.ic=0, reading interruption registers causes an illegal operation fault
mov r3=psr;;
tbit.nz.unc p6,p0=r3,PSR_IC;; // PSI Valid Log bit pos. test
-(p6) st8 [r2]=r0,9*8+160 // increment by 168 byte inc.
+(p6) st8 [r2]=r0,9*8+160 // increment by 232 byte inc.
begin_skip_intr_regs:
- BRANCH(SkipIntrRegs, r9, p6, 0x0)
- ;;
+(p6) br SkipIntrRegs;;
+
add r4=8,r2 // duplicate r2 in r4
add r6=2*8,r2 // duplicate r2 in r6
mov r3=cr16 // cr.ipsr
mov r5=cr17 // cr.isr
- mov r7=r0;; // cr.ida => cr18
+ mov r7=r0;; // cr.ida => cr18 (reserved)
st8 [r2]=r3,3*8
st8 [r4]=r5,3*8
st8 [r6]=r7,3*8;;
@@ -394,8 +384,7 @@
br.cloop.sptk.few cStRR
;;
end_os_mca_dump:
- BRANCH(ia64_os_mca_done_dump, r2, p0, -0x10)
- ;;
+ br ia64_os_mca_done_dump;;
//EndStub//////////////////////////////////////////////////////////////////////
@@ -484,11 +473,10 @@
// if PSR.ic=1, reading interruption registers causes an illegal operation fault
mov r3=psr;;
tbit.nz.unc p6,p0=r3,PSR_IC;; // PSI Valid Log bit pos. test
-(p6) st8 [r2]=r0,9*8+160 // increment by 160 byte inc.
+(p6) st8 [r2]=r0,9*8+160 // increment by 232 byte inc.
begin_rskip_intr_regs:
- BRANCH(rSkipIntrRegs, r9, p6, 0x0)
- ;;
+(p6) br rSkipIntrRegs;;
add r4=8,r2 // duplicate r2 in r4
add r6=2*8,r2;; // duplicate r2 in r4
@@ -498,7 +486,7 @@
ld8 r7=[r6],3*8;;
mov cr16=r3 // cr.ipsr
mov cr17=r5 // cr.isr is read only
-// mov cr18=r7;; // cr.ida
+// mov cr18=r7;; // cr.ida (reserved - don't restore)
ld8 r3=[r2],3*8
ld8 r5=[r4],3*8
@@ -629,8 +617,8 @@
mov ar.lc=r5
;;
end_os_mca_restore:
- BRANCH(ia64_os_mca_done_restore, r2, p0, -0x20)
- ;;
+ br ia64_os_mca_done_restore;;
+
//EndStub//////////////////////////////////////////////////////////////////////
// ok, the issue here is that we need to save state information so
@@ -660,12 +648,7 @@
// 6. GR12 = Return address to location within SAL_INIT procedure
- .text
- .align 16
-.global ia64_monarch_init_handler
-.proc ia64_monarch_init_handler
-ia64_monarch_init_handler:
-
+GLOBAL_ENTRY(ia64_monarch_init_handler)
#if defined(CONFIG_SMP) && defined(SAL_MPINIT_WORKAROUND)
//
// work around SAL bug that sends all processors to monarch entry
@@ -741,13 +724,12 @@
adds out0=16,sp // out0 = pointer to pt_regs
;;
- br.call.sptk.few rp=ia64_init_handler
+ br.call.sptk.many rp=ia64_init_handler
.ret1:
return_from_init:
br.sptk return_from_init
-
- .endp
+END(ia64_monarch_init_handler)
//
// SAL to OS entry point for INIT on the slave processor
@@ -755,14 +737,6 @@
// as a part of ia64_mca_init.
//
- .text
- .align 16
-.global ia64_slave_init_handler
-.proc ia64_slave_init_handler
-ia64_slave_init_handler:
-
-
-slave_init_spin_me:
- br.sptk slave_init_spin_me
- ;;
- .endp
+GLOBAL_ENTRY(ia64_slave_init_handler)
+1: br.sptk 1b
+END(ia64_slave_init_handler)
diff -urN linux-2.4.13/arch/ia64/kernel/pal.S linux-2.4.13-lia/arch/ia64/kernel/pal.S
--- linux-2.4.13/arch/ia64/kernel/pal.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/pal.S Thu Oct 4 00:21:39 2001
@@ -4,8 +4,9 @@
*
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * David Mosberger <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 05/22/2000 eranian Added support for stacked register calls
* 05/24/2000 eranian Added support for physical mode static calls
@@ -31,7 +32,7 @@
movl r2=pal_entry_point
;;
st8 [r2]=in0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_pal_handler_init)
/*
@@ -41,7 +42,7 @@
*/
GLOBAL_ENTRY(ia64_pal_default_handler)
mov r8=-1
- br.cond.sptk.few rp
+ br.cond.sptk.many rp
END(ia64_pal_default_handler)
/*
@@ -79,13 +80,13 @@
;;
(p6) srlz.i
mov rp = r8
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
1: mov psr.l = loc3
mov ar.pfs = loc1
mov rp = loc0
;;
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_static)
/*
@@ -120,7 +121,7 @@
mov rp = loc0
;;
srlz.d // serialize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_stacked)
/*
@@ -173,13 +174,13 @@
or loc3=loc3,r17 // add in psr the bits to set
;;
andcm r16=loc3,r16 // removes bits to clear from psr
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret1: mov rp = r8 // install return address (physical)
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
1:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2:
mov psr.l = loc3 // restore init PSR
@@ -188,7 +189,7 @@
;;
mov ar.rsc=loc4 // restore RSE configuration
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_phys_static)
/*
@@ -227,13 +228,13 @@
mov b7 = loc2 // install target to branch reg
;;
andcm r16=loc3,r16 // removes bits to clear from psr
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret6:
br.call.sptk.many rp=b7 // now make the call
.ret7:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret8: mov psr.l = loc3 // restore init PSR
mov ar.pfs = loc1
@@ -241,6 +242,6 @@
;;
mov ar.rsc=loc4 // restore RSE configuration
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_phys_stacked)
diff -urN linux-2.4.13/arch/ia64/kernel/palinfo.c linux-2.4.13-lia/arch/ia64/kernel/palinfo.c
--- linux-2.4.13/arch/ia64/kernel/palinfo.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/palinfo.c Wed Oct 24 18:14:08 2001
@@ -6,12 +6,13 @@
* Intel IA-64 Architecture Software Developer's Manual v1.0.
*
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 05/26/2000 S.Eranian initial release
* 08/21/2000 S.Eranian updated to July 2000 PAL specs
* 02/05/2001 S.Eranian fixed module support
+ * 10/23/2001 S.Eranian updated pal_perf_mon_info bug fixes
*/
#include <linux/config.h>
#include <linux/types.h>
@@ -32,8 +33,9 @@
MODULE_AUTHOR("Stephane Eranian <eranian@hpl.hp.com>");
MODULE_DESCRIPTION("/proc interface to IA-64 PAL");
+MODULE_LICENSE("GPL");
-#define PALINFO_VERSION "0.4"
+#define PALINFO_VERSION "0.5"
#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
@@ -606,15 +608,6 @@
if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) return 0;
-#ifdef IA64_PAL_PERF_MON_INFO_BUG
- /*
- * This bug has been fixed in PAL 2.2.9 and higher
- */
- pm_buffer[5]=0x3;
- pm_info.pal_perf_mon_info_s.cycles = 0x12;
- pm_info.pal_perf_mon_info_s.retired = 0x08;
-#endif
-
p += sprintf(p, "PMC/PMD pairs : %d\n" \
"Counter width : %d bits\n" \
"Cycle event number : %d\n" \
@@ -636,6 +629,14 @@
p = bitregister_process(p, pm_buffer+8, 256);
p += sprintf(p, "\nRetired bundles count capable : ");
+
+#ifdef CONFIG_ITANIUM
+ /*
+ * PAL_PERF_MON_INFO reports that only PMC4 can be used to count CPU_CYCLES
+ * which is wrong, both PMC4 and PMD5 support it.
+ */
+ if (pm_buffer[12] == 0x10) pm_buffer[12]=0x30;
+#endif
p = bitregister_process(p, pm_buffer+12, 256);
diff -urN linux-2.4.13/arch/ia64/kernel/pci.c linux-2.4.13-lia/arch/ia64/kernel/pci.c
--- linux-2.4.13/arch/ia64/kernel/pci.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/pci.c Thu Oct 4 00:21:39 2001
@@ -38,6 +38,10 @@
#define DBG(x...)
#endif
+#ifdef CONFIG_IA64_MCA
+extern void ia64_mca_check_errors( void );
+#endif
+
/*
* This interrupt-safe spinlock protects all accesses to PCI
* configuration space.
@@ -122,6 +126,10 @@
# define PCI_BUSES_TO_SCAN 255
int i;
+#ifdef CONFIG_IA64_MCA
+ ia64_mca_check_errors(); /* For post-failure MCA error logging */
+#endif
+
platform_pci_fixup(0); /* phase 0 initialization (before PCI bus has been scanned) */
printk("PCI: Probing PCI hardware\n");
@@ -194,4 +202,40 @@
pcibios_setup (char *str)
{
return NULL;
+}
+
+int
+pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma,
+ enum pci_mmap_state mmap_state, int write_combine)
+{
+ /*
+ * I/O space cannot be accessed via normal processor loads and stores on this
+ * platform.
+ */
+ if (mmap_state == pci_mmap_io)
+ /*
+ * XXX we could relax this for I/O spaces for which ACPI indicates that
+ * the space is 1-to-1 mapped. But at the moment, we don't support
+ * multiple PCI address spaces and the legacy I/O space is not 1-to-1
+ * mapped, so this is moot.
+ */
+ return -EINVAL;
+
+ /*
+ * Leave vm_pgoff as-is, the PCI space address is the physical address on this
+ * platform.
+ */
+ vma->vm_flags |= (VM_SHM | VM_LOCKED | VM_IO);
+
+ if (write_combine)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ else
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+ if (remap_page_range(vma->vm_start, vma->vm_pgoff << PAGE_SHIFT,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot))
+ return -EAGAIN;
+
+ return 0;
}
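[Aside, not part of the patch: `pci_mmap_page_range()` leaves `vm_pgoff` as-is because on this platform the PCI space address equals the physical address, so the byte address handed to `remap_page_range()` is just `vm_pgoff << PAGE_SHIFT`. A trivial sketch of that arithmetic; the shift of 14 (16KB pages) is only an assumption for illustration, since ia64 kernels are configurable here:]

```c
#include <stdint.h>

#define PAGE_SHIFT_GUESS 14  /* assumed 16KB pages; an illustration, not a fact */

/* vm_pgoff holds a page frame number; convert it to a physical byte address
 * the way the remap_page_range() call above does. */
static uint64_t phys_from_pgoff(uint64_t pgoff)
{
    return pgoff << PAGE_SHIFT_GUESS;
}
```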
diff -urN linux-2.4.13/arch/ia64/kernel/perfmon.c linux-2.4.13-lia/arch/ia64/kernel/perfmon.c
--- linux-2.4.13/arch/ia64/kernel/perfmon.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/perfmon.c Thu Oct 4 00:21:39 2001
@@ -38,7 +38,7 @@
#ifdef CONFIG_PERFMON
-#define PFM_VERSION "0.2"
+#define PFM_VERSION "0.3"
#define PFM_SMPL_HDR_VERSION 1
#define PMU_FIRST_COUNTER 4 /* first generic counter */
@@ -52,6 +52,7 @@
#define PFM_DISABLE 0xa6 /* freeze only */
#define PFM_RESTART 0xcf
#define PFM_CREATE_CONTEXT 0xa7
+#define PFM_DESTROY_CONTEXT 0xa8
/*
* Those 2 are just meant for debugging. I considered using sysctl() for
* that but it is a little bit too pervasive. This solution is at least
@@ -60,6 +61,8 @@
#define PFM_DEBUG_ON 0xe0
#define PFM_DEBUG_OFF 0xe1
+#define PFM_DEBUG_BASE PFM_DEBUG_ON
+
/*
* perfmon API flags
@@ -68,7 +71,8 @@
#define PFM_FL_INHERIT_ONCE 0x01 /* clone pfm_context only once across fork() */
#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */
#define PFM_FL_SMPL_OVFL_NOBLOCK 0x04 /* do not block on sampling buffer overflow */
-#define PFM_FL_SYSTEMWIDE 0x08 /* create a systemwide context */
+#define PFM_FL_SYSTEM_WIDE 0x08 /* create a system wide context */
+#define PFM_FL_EXCL_INTR 0x10 /* exclude interrupt from system wide monitoring */
/*
* PMC API flags
@@ -87,7 +91,7 @@
#endif
#define PMC_IS_IMPL(i) (i < pmu_conf.num_pmcs && pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1))))
-#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
+#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
#define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
#define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
@@ -197,7 +201,8 @@
unsigned int noblock:1; /* block/don't block on overflow with notification */
unsigned int system:1; /* do system wide monitoring */
unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */
- unsigned int reserved:27;
+ unsigned int exclintr:1;/* exclude interrupts from system wide monitoring */
+ unsigned int reserved:26;
} pfm_context_flags_t;
typedef struct pfm_context {
@@ -207,26 +212,33 @@
unsigned long ctx_iear_counter; /* which PMD holds I-EAR */
unsigned long ctx_btb_counter; /* which PMD holds BTB */
- pid_t ctx_notify_pid; /* who to notify on overflow */
- int ctx_notify_sig; /* XXX: SIGPROF or other */
- pfm_context_flags_t ctx_flags; /* block/noblock */
- pid_t ctx_creator; /* pid of creator (debug) */
- unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
- unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+ spinlock_t ctx_notify_lock;
+ pfm_context_flags_t ctx_flags; /* block/noblock */
+ int ctx_notify_sig; /* XXX: SIGPROF or other */
+ struct task_struct *ctx_notify_task; /* who to notify on overflow */
+ struct task_struct *ctx_creator; /* pid of creator (debug) */
+
+ unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
+ unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+
+ struct semaphore ctx_restart_sem; /* use for blocking notification mode */
- struct semaphore ctx_restart_sem; /* use for blocking notification mode */
+ unsigned long ctx_used_pmds[4]; /* bitmask of used PMD (speedup ctxsw) */
+ unsigned long ctx_used_pmcs[4]; /* bitmask of used PMC (speedup ctxsw) */
pfm_counter_t ctx_pmds[IA64_NUM_PMD_COUNTERS]; /* XXX: size should be dynamic */
+
} pfm_context_t;
+#define CTX_USED_PMD(ctx,n) (ctx)->ctx_used_pmds[(n)>>6] |= 1<< ((n) % 64)
+#define CTX_USED_PMC(ctx,n) (ctx)->ctx_used_pmcs[(n)>>6] |= 1<< ((n) % 64)
+
#define ctx_fl_inherit ctx_flags.inherit
#define ctx_fl_noblock ctx_flags.noblock
#define ctx_fl_system ctx_flags.system
#define ctx_fl_frozen ctx_flags.frozen
+#define ctx_fl_exclintr ctx_flags.exclintr
-#define CTX_IS_DEAR(c,n) ((c)->ctx_dear_counter == (n))
-#define CTX_IS_IEAR(c,n) ((c)->ctx_iear_counter == (n))
-#define CTX_IS_BTB(c,n) ((c)->ctx_btb_counter == (n))
#define CTX_OVFL_NOBLOCK(c) ((c)->ctx_fl_noblock == 1)
#define CTX_INHERIT_MODE(c) ((c)->ctx_fl_inherit)
#define CTX_HAS_SMPL(c) ((c)->ctx_smpl_buf != NULL)
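[Aside, not part of the patch: `CTX_USED_PMD`/`CTX_USED_PMC` spread a register index across four 64-bit words, word `n>>6` and bit `n%64`. A user-space sketch of the same bitmap; note it uses `1UL` so shifts of 32..63 are well defined, whereas the patch's plain `1<<` would depend on `int` width for the high bits:]

```c
/* Same idea as CTX_USED_PMD/CTX_USED_PMC: mark register n in a 256-bit
 * map of four 64-bit words (speeds up context switch by skipping
 * untouched registers). */
#define MARK_USED(bm, n) ((bm)[(n) >> 6] |= 1UL << ((n) % 64))

static int is_used(const unsigned long *bm, int n)
{
    return (int)((bm[n >> 6] >> (n % 64)) & 1);
}
```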
@@ -234,17 +246,15 @@
static pmu_config_t pmu_conf;
/* for debug only */
-static unsigned long pfm_debug=0; /* 0= nodebug, >0= debug output on */
+static int pfm_debug=0; /* 0= nodebug, >0= debug output on */
+
#define DBprintk(a) \
do { \
- if (pfm_debug >0) { printk(__FUNCTION__" "); printk a; } \
+ if (pfm_debug >0) { printk(__FUNCTION__" %d: ", __LINE__); printk a; } \
} while (0);
-static void perfmon_softint(unsigned long ignored);
static void ia64_reset_pmu(void);
-DECLARE_TASKLET(pfm_tasklet, perfmon_softint, 0);
-
/*
* structure used to pass information between the interrupt handler
* and the tasklet.
@@ -256,26 +266,42 @@
unsigned long bitvect; /* which counters have overflowed */
} notification_info_t;
-#define notification_is_invalid(i) (i->to_pid < 2)
-/* will need to be cache line padded */
-static notification_info_t notify_info[NR_CPUS];
+typedef struct {
+ unsigned long pfs_proc_sessions;
+ unsigned long pfs_sys_session; /* can only be 0/1 */
+ unsigned long pfs_dfl_dcr; /* XXX: hack */
+ unsigned int pfs_pp;
+} pfm_session_t;
-/*
- * We force cache line alignment to avoid false sharing
- * given that we have one entry per CPU.
- */
-static struct {
+struct {
struct task_struct *owner;
} ____cacheline_aligned pmu_owners[NR_CPUS];
-/* helper macros */
+
+
+/*
+ * helper macros
+ */
#define SET_PMU_OWNER(t) do { pmu_owners[smp_processor_id()].owner = (t); } while(0);
#define PMU_OWNER() pmu_owners[smp_processor_id()].owner
+#ifdef CONFIG_SMP
+#define PFM_CAN_DO_LAZY() (smp_num_cpus==1 && pfs_info.pfs_sys_session==0)
+#else
+#define PFM_CAN_DO_LAZY() (pfs_info.pfs_sys_session==0)
+#endif
+
+static void pfm_lazy_save_regs (struct task_struct *ta);
+
/* for debug only */
static struct proc_dir_entry *perfmon_dir;
/*
+ * XXX: hack to indicate that a system wide monitoring session is active
+ */
+static pfm_session_t pfs_info;
+
+/*
* finds the number of PM(C|D) registers given
* the bitvector returned by PAL
*/
@@ -339,8 +365,7 @@
static inline unsigned long
kvirt_to_pa(unsigned long adr)
{
- __u64 pa;
- __asm__ __volatile__ ("tpa %0 = %1" : "=r"(pa) : "r"(adr) : "memory");
+ __u64 pa = ia64_tpa(adr);
DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa));
return pa;
}
@@ -568,25 +593,44 @@
static int
pfx_is_sane(pfreq_context_t *pfx)
{
+ int ctx_flags;
+
/* valid signal */
- if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return 0;
+ //if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return -EINVAL;
+ if (pfx->notify_sig !=0 && pfx->notify_sig != SIGPROF) return -EINVAL;
/* cannot send to process 1, 0 means do not notify */
- if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return 0;
+ if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return -EINVAL;
+
+ ctx_flags = pfx->flags;
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+#ifdef CONFIG_SMP
+ if (smp_num_cpus > 1) {
+ printk("perfmon: system wide monitoring on SMP not yet supported\n");
+ return -EINVAL;
+ }
+#endif
+ if ((ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) == 0) {
+ printk("perfmon: system wide monitoring cannot use blocking notification mode\n");
+ return -EINVAL;
+ }
+ }
/* probably more to add here */
- return 1;
+ return 0;
}
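[Aside, not part of the patch: this hunk changes `pfx_is_sane()` from a boolean (1 sane / 0 not) to the kernel's usual 0-or-negative-errno convention, so callers can forward the specific error. A minimal user-space sketch of the same validation shape, with a simplified `check_notify()` standing in for the real function:]

```c
#include <errno.h>
#include <signal.h>

/* Return 0 on success, -EINVAL on failure, mirroring the convention the
 * patched pfx_is_sane() adopts. */
static int check_notify(int sig, int pid)
{
    /* only "no signal" or SIGPROF is accepted, as in the patched check */
    if (sig != 0 && sig != SIGPROF)
        return -EINVAL;
    /* cannot notify pid 1; pid 0 means "do not notify" */
    if (pid < 0 || pid == 1)
        return -EINVAL;
    return 0;
}
```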
static int
-pfm_context_create(struct task_struct *task, int flags, perfmon_req_t *req)
+pfm_context_create(int flags, perfmon_req_t *req)
{
pfm_context_t *ctx;
+ struct task_struct *task = NULL;
perfmon_req_t tmp;
void *uaddr = NULL;
- int ret = -EFAULT;
+ int ret;
int ctx_flags;
+ pid_t pid;
/* to go away */
if (flags) {
@@ -595,48 +639,156 @@
if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
+ ret = pfx_is_sane(&tmp.pfr_ctx);
+ if (ret < 0) return ret;
+
ctx_flags = tmp.pfr_ctx.flags;
- /* not yet supported */
- if (ctx_flags & PFM_FL_SYSTEMWIDE) return -EINVAL;
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+ /*
+ * XXX: This is not AT ALL SMP safe
+ */
+ if (pfs_info.pfs_proc_sessions > 0) return -EBUSY;
+ if (pfs_info.pfs_sys_session > 0) return -EBUSY;
+
+ pfs_info.pfs_sys_session = 1;
- if (!pfx_is_sane(&tmp.pfr_ctx)) return -EINVAL;
+ } else if (pfs_info.pfs_sys_session >0) {
+ /* no per-process monitoring while there is a system wide session */
+ return -EBUSY;
+ } else
+ pfs_info.pfs_proc_sessions++;
ctx = pfm_context_alloc();
- if (!ctx) return -ENOMEM;
+ if (!ctx) goto error;
+
+ /* record the creator (debug only) */
+ ctx->ctx_creator = current;
+
+ pid = tmp.pfr_ctx.notify_pid;
+
+ spin_lock_init(&ctx->ctx_notify_lock);
+
+ if (pid == current->pid) {
+ ctx->ctx_notify_task = task = current;
+ current->thread.pfm_context = ctx;
+
+ atomic_set(&current->thread.pfm_notifiers_check, 1);
+
+ } else if (pid!=0) {
+ read_lock(&tasklist_lock);
+
+ task = find_task_by_pid(pid);
+ if (task) {
+ /*
+ * record who to notify
+ */
+ ctx->ctx_notify_task = task;
+
+ /*
+ * make visible
+ * must be done inside critical section
+ *
+ * if the initialization does not go through it is still
+ * okay because child will do the scan for nothing which
+ * won't hurt.
+ */
+ current->thread.pfm_context = ctx;
+
+ /*
+ * will cause task to check on exit for monitored
+ * processes that would notify it. see release_thread()
+ * Note: the scan MUST be done in release_thread(), once the
+ * task has been detached from the tasklist, otherwise you are
+ * exposed to race conditions.
+ */
+ atomic_add(1, &task->thread.pfm_notifiers_check);
+ }
+ read_unlock(&tasklist_lock);
+ }
- /* record who the creator is (for debug) */
- ctx->ctx_creator = task->pid;
+ /*
+ * notification process does not exist
+ */
+ if (pid != 0 && task == NULL) {
+ ret = -EINVAL;
+ goto buffer_error;
+ }
- ctx->ctx_notify_pid = tmp.pfr_ctx.notify_pid;
ctx->ctx_notify_sig = SIGPROF; /* siginfo imposes a fixed signal */
if (tmp.pfr_ctx.smpl_entries) {
DBprintk((" sampling entries=%ld\n",tmp.pfr_ctx.smpl_entries));
- if ((ret=pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs, tmp.pfr_ctx.smpl_entries, &uaddr)) ) goto buffer_error;
+
+ ret = pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs,
+ tmp.pfr_ctx.smpl_entries, &uaddr);
+ if (ret<0) goto buffer_error;
+
tmp.pfr_ctx.smpl_vaddr = uaddr;
}
/* initialization of context's flags */
- ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
- ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
- ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEMWIDE) ? 1: 0;
- ctx->ctx_fl_frozen = 0;
+ ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
+ ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
+ ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0;
+ ctx->ctx_fl_exclintr = (ctx_flags & PFM_FL_EXCL_INTR) ? 1: 0;
+ ctx->ctx_fl_frozen = 0;
+
+ /*
+ * Keep track of the pmds we want to sample
+ * XXX: may be we don't need to save/restore the DEAR/IEAR pmds
+ * but we do need the BTB for sure. This is because of a hardware
+ * buffer of 1 only for non-BTB pmds.
+ */
+ ctx->ctx_used_pmds[0] = tmp.pfr_ctx.smpl_regs;
+ ctx->ctx_used_pmcs[0] = 1; /* always save/restore PMC[0] */
sema_init(&ctx->ctx_restart_sem, 0); /* init this semaphore to locked */
- if (copy_to_user(req, &tmp, sizeof(tmp))) goto buffer_error;
- DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",(void *)ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
- DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+ if (copy_to_user(req, &tmp, sizeof(tmp))) {
+ ret = -EFAULT;
+ goto buffer_error;
+ }
+
+ DBprintk((" context=%p, pid=%d notify_sig %d notify_task=%p\n",(void *)ctx, current->pid, ctx->ctx_notify_sig, ctx->ctx_notify_task));
+ DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, current->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+
+ /*
+ * when no notification is required, we can make this visible at the last moment
+ */
+ if (pid == 0) current->thread.pfm_context = ctx;
+
+ /*
+ * by default, we always include interrupts for system wide
+ * DCR.pp is set by default to zero by kernel in cpu_init()
+ */
+ if (ctx->ctx_fl_system) {
+ if (ctx->ctx_fl_exclintr == 0) {
+ unsigned long dcr = ia64_get_dcr();
+
+ ia64_set_dcr(dcr|IA64_DCR_PP);
+ /*
+ * keep track of the kernel default value
+ */
+ pfs_info.pfs_dfl_dcr = dcr;
- /* link with task */
- task->thread.pfm_context = ctx;
+ DBprintk((" dcr.pp is set\n"));
+ }
+ }
return 0;
buffer_error:
- vfree(ctx);
-
+ pfm_context_free(ctx);
+error:
+ /*
+ * undo session reservation
+ */
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+ pfs_info.pfs_sys_session = 0;
+ } else {
+ pfs_info.pfs_proc_sessions--;
+ }
return ret;
}
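The error paths above unwind in reverse order of setup: buffer_error frees the context, then error undoes the session reservation taken before allocation. A hedged user-space sketch of this goto-unwind idiom (the failure-injection flags are hypothetical, purely to exercise both labels; the counter stands in for pfs_proc_sessions):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Illustrative counter standing in for pfs_info.pfs_proc_sessions. */
static int proc_sessions;

/* Sketch of the goto-based unwind used by pfm_context_create() above:
 * reserve the session first, then undo the reservation on every
 * failure path taken after that point. */
static int create_context(int fail_alloc, int fail_copy)
{
	int ret;
	void *ctx;

	proc_sessions++;			/* session reservation */

	ctx = fail_alloc ? NULL : malloc(16);
	if (!ctx) { ret = -ENOMEM; goto error; }

	if (fail_copy) { ret = -EFAULT; goto buffer_error; }

	free(ctx);				/* keep the sketch leak-free */
	proc_sessions--;			/* sketch only: release on success too */
	return 0;

buffer_error:
	free(ctx);				/* analogue of pfm_context_free() */
error:
	proc_sessions--;			/* undo session reservation */
	return ret;
}
```

The key property is that each label cleans up exactly the state established before the corresponding goto, so the labels stack in reverse order of acquisition.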
@@ -656,8 +808,20 @@
/* upper part is ignored on rval */
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
+
+ /*
+ * we must reset the BTB index (clears pmd16.full) to make
+ * sure we do not report the same branches twice.
+ * The non-blocking case is handled in update_counters()
+ */
+ if (cnum == ctx->ctx_btb_counter) {
+ DBprintk(("resetting PMD16\n"));
+ ia64_set_pmd(16, 0);
+ }
}
}
+ /* just in case ! */
+ ctx->ctx_ovfl_regs = 0;
}
static int
@@ -695,20 +859,23 @@
} else if (PMC_IS_BTB(&tmp.pfr_reg.reg_value)) {
ctx->ctx_btb_counter = cnum;
}
-
+#if 0
if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
+#endif
}
-
+ /* keep track of what we use */
+ CTX_USED_PMC(ctx, cnum);
ia64_set_pmc(cnum, tmp.pfr_reg.reg_value);
- DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags));
+
+ DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x used_pmcs=0x%lx\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags, ctx->ctx_used_pmcs[0]));
}
/*
* we have to set this here even though we haven't necessarily started monitoring
* because we may be context switched out
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
return 0;
}
@@ -741,25 +908,32 @@
ctx->ctx_pmds[k].val = tmp.pfr_reg.reg_value & ~pmu_conf.perf_ovfl_val;
ctx->ctx_pmds[k].smpl_rval = tmp.pfr_reg.reg_smpl_reset;
ctx->ctx_pmds[k].ovfl_rval = tmp.pfr_reg.reg_ovfl_reset;
+
+ if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
+ ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
}
+ /* keep track of what we use */
+ CTX_USED_PMD(ctx, cnum);
/* writes to unimplemented part is ignored, so this is safe */
ia64_set_pmd(cnum, tmp.pfr_reg.reg_value);
/* to go away */
ia64_srlz_d();
- DBprintk((" setting PMD[%ld]: pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx\n",
+ DBprintk((" setting PMD[%ld]: ovfl_notify=%d pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx used_pmds=0x%lx\n",
cnum,
+ PMD_OVFL_NOTIFY(ctx, cnum - PMU_FIRST_COUNTER),
ctx->ctx_pmds[k].val,
ctx->ctx_pmds[k].ovfl_rval,
ctx->ctx_pmds[k].smpl_rval,
- ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+ ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val,
+ ctx->ctx_used_pmds[0]));
}
/*
* we have to set this here even though we haven't necessarily started monitoring
* because we may be context switched out
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
return 0;
}
@@ -783,6 +957,8 @@
/* XXX: ctx locking may be required here */
for (i = 0; i < count; i++, req++) {
+ unsigned long reg_val = ~0, ctx_val = ~0;
+
if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL;
@@ -791,23 +967,25 @@
if (ta == current) {
val = ia64_get_pmd(tmp.pfr_reg.reg_num);
} else {
- val = th->pmd[tmp.pfr_reg.reg_num];
+ val = reg_val = th->pmd[tmp.pfr_reg.reg_num];
}
val &= pmu_conf.perf_ovfl_val;
/*
* lower part of .val may not be zero, so this must be an addition because of
* the residual count (see update_counters).
*/
- val += ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
+ val += ctx_val = ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
} else {
/* for now */
if (ta != current) return -EINVAL;
+ ia64_srlz_d();
val = ia64_get_pmd(tmp.pfr_reg.reg_num);
}
tmp.pfr_reg.reg_value = val;
- DBprintk((" reading PMD[%ld]=0x%lx\n", tmp.pfr_reg.reg_num, val));
+ DBprintk((" reading PMD[%ld]=0x%lx reg=0x%lx ctx_val=0x%lx pmc=0x%lx\n",
+ tmp.pfr_reg.reg_num, val, reg_val, ctx_val, ia64_get_pmc(tmp.pfr_reg.reg_num)));
if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
}
@@ -822,7 +1000,7 @@
void *sem = &ctx->ctx_restart_sem;
if (task == current) {
- DBprintk((" restartig self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
+ DBprintk((" restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
pfm_reset_regs(ctx);
@@ -871,6 +1049,23 @@
return 0;
}
+/*
+ * system-wide mode: propagate activation/deactivation throughout the tasklist
+ *
+ * XXX: does not work for SMP, of course
+ */
+static void
+pfm_process_tasklist(int cmd)
+{
+ struct task_struct *p;
+ struct pt_regs *regs;
+
+ for_each_task(p) {
+ regs = (struct pt_regs *)((unsigned long)p + IA64_STK_OFFSET);
+ regs--;
+ ia64_psr(regs)->pp = cmd;
+ }
+}
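pfm_process_tasklist() simply walks every task and flips its saved psr.pp bit, so the next return to user mode picks up the new privileged-monitoring state. A toy analogue, assuming a fixed array of tasks in place of for_each_task() (the array and flag are stand-ins, not kernel structures):

```c
#include <assert.h>

#define NTASKS 4

/* Toy stand-in for the per-task psr.pp bit toggled by
 * pfm_process_tasklist() above (hypothetical, not the kernel API). */
static int psr_pp[NTASKS];

/* analogue of for_each_task(p): visit every task and set its pp bit */
static void process_tasklist(int cmd)
{
	for (int i = 0; i < NTASKS; i++)
		psr_pp[i] = cmd;
}
```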
static int
do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs)
@@ -881,19 +1076,26 @@
memset(&tmp, 0, sizeof(tmp));
+ if (ctx == NULL && cmd != PFM_CREATE_CONTEXT && cmd < PFM_DEBUG_BASE) {
+ DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
+ return -EINVAL;
+ }
+
switch (cmd) {
case PFM_CREATE_CONTEXT:
/* a context has already been defined */
if (ctx) return -EBUSY;
- /* may be a temporary limitation */
+ /*
+ * cannot directly create a context in another process
+ */
if (task != current) return -EINVAL;
if (req == NULL || count != 1) return -EINVAL;
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- return pfm_context_create(task, flags, req);
+ return pfm_context_create(flags, req);
case PFM_WRITE_PMCS:
/* we don't quite support this right now */
@@ -901,10 +1103,6 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_write_pmcs(task, req, count);
case PFM_WRITE_PMDS:
@@ -913,45 +1111,41 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_WRITE_PMDS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_write_pmds(task, req, count);
case PFM_START:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- if (!ctx) {
- DBprintk((" PFM_START: no context for task %d\n", task->pid));
- return -EINVAL;
- }
+ if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());
SET_PMU_OWNER(current);
/* will start monitoring right after rfi */
ia64_psr(regs)->up = 1;
+ ia64_psr(regs)->pp = 1;
+
+ if (ctx->ctx_fl_system) {
+ pfm_process_tasklist(1);
+ pfs_info.pfs_pp = 1;
+ }
/*
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
ia64_set_pmc(0, 0);
ia64_srlz_d();
- break;
+ break;
case PFM_ENABLE:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- if (!ctx) {
- DBprintk((" PFM_ENABLE: no context for task %d\n", task->pid));
- return -EINVAL;
- }
+ if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());
/* reset all registers to stable quiet state */
ia64_reset_pmu();
@@ -969,7 +1163,7 @@
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
/* simply unfreeze */
ia64_set_pmc(0, 0);
@@ -983,54 +1177,41 @@
/* simply freeze */
ia64_set_pmc(0, 1);
ia64_srlz_d();
+ /*
+ * XXX: cannot really toggle IA64_THREAD_PM_VALID
+ * but context is still considered valid, so any
+ * read request would return something valid. Same
+ * thing when this task terminates (pfm_flush_regs()).
+ */
break;
case PFM_READ_PMDS:
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_READ_PMDS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_read_pmds(task, req, count);
case PFM_STOP:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- ia64_set_pmc(0, 1);
- ia64_srlz_d();
-
+ /* simply stop monitors, not PMU */
ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
- th->flags &= ~IA64_THREAD_PM_VALID;
-
- SET_PMU_OWNER(NULL);
-
- /* we probably will need some more cleanup here */
- break;
-
- case PFM_DEBUG_ON:
- printk(" debugging on\n");
- pfm_debug = 1;
- break;
+ if (ctx->ctx_fl_system) {
+ pfm_process_tasklist(0);
+ pfs_info.pfs_pp = 0;
+ }
- case PFM_DEBUG_OFF:
- printk(" debugging off\n");
- pfm_debug = 0;
break;
case PFM_RESTART: /* temporary, will most likely end up as a PFM_ENABLE */
- if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
printk(" PFM_RESTART not monitoring\n");
return -EINVAL;
}
- if (!ctx) {
- printk(" PFM_RESTART no ctx for %d\n", task->pid);
- return -EINVAL;
- }
if (CTX_OVFL_NOBLOCK(ctx) == 0 && ctx->ctx_fl_frozen == 0) {
printk("task %d without pmu_frozen set\n", task->pid);
return -EINVAL;
@@ -1038,6 +1219,37 @@
return pfm_do_restart(task); /* we only look at first entry */
+ case PFM_DESTROY_CONTEXT:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
+ /* first stop monitors */
+ ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
+
+ /* then freeze PMU */
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
+
+ /* don't save/restore on context switch */
+ if (ctx->ctx_fl_system == 0) task->thread.flags &= ~IA64_THREAD_PM_VALID;
+
+ SET_PMU_OWNER(NULL);
+
+ /* now free context and related state */
+ pfm_context_exit(task);
+ break;
+
+ case PFM_DEBUG_ON:
+ printk("perfmon debugging on\n");
+ pfm_debug = 1;
+ break;
+
+ case PFM_DEBUG_OFF:
+ printk("perfmon debugging off\n");
+ pfm_debug = 0;
+ break;
+
default:
DBprintk((" Unknown command 0x%x\n", cmd));
return -EINVAL;
@@ -1074,11 +1286,8 @@
/* XXX: pid interface is going away in favor of pfm context */
if (pid != current->pid) {
read_lock(&tasklist_lock);
- {
- child = find_task_by_pid(pid);
- if (child)
- get_task_struct(child);
- }
+
+ child = find_task_by_pid(pid);
if (!child) goto abort_call;
@@ -1101,93 +1310,44 @@
return ret;
}
-
-/*
- * This function is invoked on the exit path of the kernel. Therefore it must make sure
- * it does does modify the caller's input registers (in0-in7) in case of entry by system call
- * which can be restarted. That's why it's declared as a system call and all 8 possible args
- * are declared even though not used.
- */
#if __GNUC__ >= 3
void asmlinkage
-pfm_overflow_notify(void)
+pfm_block_on_overflow(void)
#else
void asmlinkage
-pfm_overflow_notify(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
+pfm_block_on_overflow(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
#endif
{
- struct task_struct *task;
struct thread_struct *th = &current->thread;
pfm_context_t *ctx = current->thread.pfm_context;
- struct siginfo si;
int ret;
/*
- * do some sanity checks first
- */
- if (!ctx) {
- printk("perfmon: process %d has no PFM context\n", current->pid);
- return;
- }
- if (ctx->ctx_notify_pid < 2) {
- printk("perfmon: process %d invalid notify_pid=%d\n", current->pid, ctx->ctx_notify_pid);
- return;
- }
-
- DBprintk((" current=%d ctx=%p bv=0%lx\n", current->pid, (void *)ctx, ctx->ctx_ovfl_regs));
- /*
* No matter what notify_pid is,
* we clear overflow, won't notify again
*/
- th->pfm_pend_notify = 0;
+ th->pfm_must_block = 0;
/*
- * When measuring in kernel mode and non-blocking fashion, it is possible to
- * get an overflow while executing this code. Therefore the state of pend_notify
- * and ovfl_regs can be altered. The important point is not to loose any notification.
- * It is fine to get called for nothing. To make sure we do collect as much state as
- * possible, update_counters() always uses |= to add bit to the ovfl_regs field.
- *
- * In certain cases, it is possible to come here, with ovfl_regs = 0;
- *
- * XXX: pend_notify and ovfl_regs could be merged maybe !
+ * do some sanity checks first
*/
- if (ctx->ctx_ovfl_regs == 0) {
- printk("perfmon: spurious overflow notification from pid %d\n", current->pid);
+ if (!ctx) {
+ printk("perfmon: process %d has no PFM context\n", current->pid);
return;
}
- read_lock(&tasklist_lock);
-
- task = find_task_by_pid(ctx->ctx_notify_pid);
-
- if (task) {
- si.si_signo = ctx->ctx_notify_sig;
- si.si_errno = 0;
- si.si_code = PROF_OVFL; /* goes to user */
- si.si_addr = NULL;
- si.si_pid = current->pid; /* who is sending */
- si.si_pfm_ovfl = ctx->ctx_ovfl_regs;
-
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
-
- /* must be done with tasklist_lock locked */
- ret = send_sig_info(ctx->ctx_notify_sig, &si, task);
- if (ret != 0) {
- DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_pid, ret));
- task = NULL; /* will cause return */
- }
- } else {
- printk("perfmon: notify_pid %d not found\n", ctx->ctx_notify_pid);
+ if (ctx->ctx_notify_task == 0) {
+ printk("perfmon: process %d has no task to notify\n", current->pid);
+ return;
}
- read_unlock(&tasklist_lock);
+ DBprintk((" current=%d task=%d\n", current->pid, ctx->ctx_notify_task->pid));
- /* now that we have released the lock handle error condition */
- if (!task || CTX_OVFL_NOBLOCK(ctx)) {
- /* we clear all pending overflow bits in noblock mode */
- ctx->ctx_ovfl_regs = 0;
+ /* should not happen */
+ if (CTX_OVFL_NOBLOCK(ctx)) {
+ printk("perfmon: process %d non-blocking ctx should not be here\n", current->pid);
return;
}
+
DBprintk((" CPU%d %d before sleep\n", smp_processor_id(), current->pid));
/*
@@ -1211,9 +1371,6 @@
pfm_reset_regs(ctx);
- /* now we can clear this mask */
- ctx->ctx_ovfl_regs = 0;
-
/*
* Unlock sampling buffer and reset index atomically
* XXX: not really needed when blocking
@@ -1232,84 +1389,14 @@
}
}
-static void
-perfmon_softint(unsigned long ignored)
-{
- notification_info_t *info;
- int my_cpu = smp_processor_id();
- struct task_struct *task;
- struct siginfo si;
-
- info = notify_info+my_cpu;
-
- DBprintk((" CPU%d current=%d to_pid=%d from_pid=%d bv=0x%lx\n", \
- smp_processor_id(), current->pid, info->to_pid, info->from_pid, info->bitvect));
-
- /* assumption check */
- if (info->from_pid == info->to_pid) {
- DBprintk((" Tasklet assumption error: from=%d tor=%d\n", info->from_pid, info->to_pid));
- return;
- }
-
- if (notification_is_invalid(info)) {
- DBprintk((" invalid notification information\n"));
- return;
- }
-
- /* sanity check */
- if (info->to_pid == 1) {
- DBprintk((" cannot notify init\n"));
- return;
- }
- /*
- * XXX: needs way more checks here to make sure we send to a task we have control over
- */
- read_lock(&tasklist_lock);
-
- task = find_task_by_pid(info->to_pid);
-
- DBprintk((" after find %p\n", (void *)task));
-
- if (task) {
- int ret;
-
- si.si_signo = SIGPROF;
- si.si_errno = 0;
- si.si_code = PROF_OVFL; /* goes to user */
- si.si_addr = NULL;
- si.si_pid = info->from_pid; /* who is sending */
- si.si_pfm_ovfl = info->bitvect;
-
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
-
- /* must be done with tasklist_lock locked */
- ret = send_sig_info(SIGPROF, &si, task);
- if (ret != 0)
- DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", info->to_pid, ret));
-
- /* invalidate notification */
- info->to_pid = info->from_pid = 0;
- info->bitvect = 0;
- }
-
- read_unlock(&tasklist_lock);
-
- DBprintk((" after unlock %p\n", (void *)task));
-
- if (!task) {
- printk("perfmon: CPU%d cannot find process %d\n", smp_processor_id(), info->to_pid);
- }
-}
-
/*
* main overflow processing routine.
* it can be called from the interrupt path or explicitly during the context switch code
* Return:
- * 0 : do not unfreeze the PMU
- * 1 : PMU can be unfrozen
+ * new value of pmc[0]. if 0x0 then unfreeze, else keep frozen
*/
-static unsigned long
-update_counters (struct task_struct *ta, u64 pmc0, struct pt_regs *regs)
+unsigned long
+update_counters (struct task_struct *task, u64 pmc0, struct pt_regs *regs)
{
unsigned long mask, i, cnum;
struct thread_struct *th;
@@ -1317,7 +1404,9 @@
unsigned long bv = 0;
int my_cpu = smp_processor_id();
int ret = 1, buffer_is_full = 0;
- int ovfl_is_smpl, can_notify, need_reset_pmd16=0;
+ int ovfl_has_long_recovery, can_notify, need_reset_pmd16=0;
+ struct siginfo si;
+
/*
* It is never safe to access the task for which the overflow interrupt is destinated
* using the current variable as the interrupt may occur in the middle of a context switch
@@ -1331,23 +1420,23 @@
* valid one, i.e. the one that caused the interrupt.
*/
- if (ta == NULL) {
+ if (task = NULL) {
DBprintk((" owners[%d]=NULL\n", my_cpu));
return 0x1;
}
- th = &ta->thread;
+ th = &task->thread;
ctx = th->pfm_context;
/*
* XXX: debug test
* Don't think this could happen given upfront tests
*/
- if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
- printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", ta->pid);
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
+ printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", task->pid);
return 0x1;
}
if (!ctx) {
- printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", ta->pid);
+ printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", task->pid);
return 0;
}
@@ -1355,16 +1444,21 @@
* sanity test. Should never happen
*/
if ((pmc0 & 0x1) == 0) {
- printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", ta->pid, pmc0);
+ printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", task->pid, pmc0);
return 0x0;
}
mask = pmc0 >> PMU_FIRST_COUNTER;
- DBprintk(("pmc0=0x%lx pid=%d\n", pmc0, ta->pid));
-
- DBprintk(("ctx is in %s mode\n", CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK"));
+ DBprintk(("pmc0=0x%lx pid=%d owner=%d iip=0x%lx, ctx is in %s mode used_pmds=0x%lx used_pmcs=0x%lx\n",
+ pmc0, task->pid, PMU_OWNER()->pid, regs->cr_iip,
+ CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK",
+ ctx->ctx_used_pmds[0],
+ ctx->ctx_used_pmcs[0]));
+ /*
+ * XXX: need to record sample only when an EAR/BTB has overflowed
+ */
if (CTX_HAS_SMPL(ctx)) {
pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
unsigned long *e, m, idx=0;
@@ -1372,11 +1466,15 @@
int j;
idx = ia64_fetch_and_add(1, &psb->psb_index);
- DBprintk((" trying to record index=%ld entries=%ld\n", idx, psb->psb_entries));
+ DBprintk((" recording index=%ld entries=%ld\n", idx, psb->psb_entries));
/*
* XXX: there is a small chance that we could run out of index before resetting
* but index is unsigned long, so it will take some time.....
+ * We use > instead of == because fetch_and_add() is off by one (see below)
+ *
+ * This case can happen in non-blocking mode or with multiple processes.
+ * For non-blocking, we need to reload and continue.
*/
if (idx > psb->psb_entries) {
buffer_is_full = 1;
@@ -1388,7 +1486,7 @@
h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size));
- h->pid = ta->pid;
+ h->pid = task->pid;
h->cpu = my_cpu;
h->rate = 0;
h->ip = regs ? regs->cr_iip : 0x0; /* where the fault happened */
@@ -1398,6 +1496,7 @@
h->stamp = perfmon_get_stamp();
e = (unsigned long *)(h+1);
+
/*
* selectively store PMDs in increasing index number
*/
@@ -1406,35 +1505,66 @@
if (PMD_IS_COUNTER(j))
*e = ctx->ctx_pmds[j-PMU_FIRST_COUNTER].val
+ (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val);
- else
+ else {
*e = ia64_get_pmd(j); /* slow */
+ }
DBprintk((" e=%p pmd%d =0x%lx\n", (void *)e, j, *e));
e++;
}
}
- /* make the new entry visible to user, needs to be atomic */
+ /*
+ * make the new entry visible to user, needs to be atomic
+ */
ia64_fetch_and_add(1, &psb->psb_hdr->hdr_count);
DBprintk((" index=%ld entries=%ld hdr_count=%ld\n", idx, psb->psb_entries, psb->psb_hdr->hdr_count));
-
- /* sampling buffer full ? */
+ /*
+ * sampling buffer full ?
+ */
if (idx == (psb->psb_entries-1)) {
- bv = mask;
+ /*
+ * will cause notification, cannot be 0
+ */
+ bv = mask << PMU_FIRST_COUNTER;
+
buffer_is_full = 1;
DBprintk((" sampling buffer full must notify bv=0x%lx\n", bv));
- if (!CTX_OVFL_NOBLOCK(ctx)) goto buffer_full;
+ /*
+ * we do not reload here, when context is blocking
+ */
+ if (!CTX_OVFL_NOBLOCK(ctx)) goto no_reload;
+
/*
* here, we have a full buffer but we are in non-blocking mode
- * so we need to reloads overflowed PMDs with sampling reset values
- * and restart
+ * so we need to reload overflowed PMDs with sampling reset values
+ * and restart right away.
*/
}
+ /* FALL THROUGH */
}
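Slot reservation in the sampling buffer relies on an atomic fetch-and-add on psb_index, with a reservation at or beyond psb_entries signalling a full buffer. A user-space sketch of the same idea using C11 atomics (ENTRIES and the -1 sentinel are assumptions of this sketch; note that ia64_fetch_and_add() returns the new value while C's atomic_fetch_add returns the old one, hence the +1):

```c
#include <assert.h>
#include <stdatomic.h>

#define ENTRIES 4	/* stands in for psb->psb_entries */

static atomic_long idx = -1;	/* start at -1 so the first reservation is slot 0 */

/* Reserve one sampling slot; returns the slot index, or -1 when the
 * buffer is full (the caller must then notify the monitor / reset). */
static long reserve_slot(void)
{
	long slot = atomic_fetch_add(&idx, 1) + 1;	/* +1: mimic new-value semantics */
	if (slot > ENTRIES - 1)
		return -1;	/* buffer full */
	return slot;
}
```

Because the increment is atomic, concurrent producers each get a distinct slot, and only the producer that lands on the last slot (or past it) sees the full condition, which matches the `idx == psb_entries-1` notification test above.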
reload_pmds:
- ovfl_is_smpl = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
- can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_pid;
+
+ /*
+ * in the case of a non-blocking context, we reload
+ * with the ovfl_rval when no user notification is taking place (short recovery)
+ * otherwise, when the buffer is full (which requires user interaction), we use
+ * smpl_rval, which is the long-recovery path (disturbance introduced by user execution).
+ *
+ * XXX: implies that when buffer is full then there is always notification.
+ */
+ ovfl_has_long_recovery = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
+
+ /*
+ * XXX: CTX_HAS_SMPL() should really be something like CTX_HAS_SMPL() and is activated, i.e.,
+ * one of the PMCs is configured for EAR/BTB.
+ *
+ * When sampling, we can only notify when the sampling buffer is full.
+ */
+ can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_task;
+
+ DBprintk((" ovfl_has_long_recovery=%d can_notify=%d\n", ovfl_has_long_recovery, can_notify));
for (i = 0, cnum = PMU_FIRST_COUNTER; mask ; cnum++, i++, mask >>= 1) {
@@ -1456,7 +1586,7 @@
DBprintk((" pmod[%ld].val=0x%lx pmd=0x%lx\n", i, ctx->ctx_pmds[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));
if (can_notify && PMD_OVFL_NOTIFY(ctx, i)) {
- DBprintk((" CPU%d should notify process %d with signal %d\n", my_cpu, ctx->ctx_notify_pid, ctx->ctx_notify_sig));
+ DBprintk((" CPU%d should notify task %p with signal %d\n", my_cpu, ctx->ctx_notify_task, ctx->ctx_notify_sig));
bv |= 1 << i;
} else {
DBprintk((" CPU%d PMD[%ld] overflow, no notification\n", my_cpu, cnum));
@@ -1467,93 +1597,150 @@
*/
/* writes to upper part are ignored, so this is safe */
- if (ovfl_is_smpl) {
- DBprintk((" CPU%d PMD[%ld] reloaded with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ if (ovfl_has_long_recovery) {
+ DBprintk((" CPU%d PMD[%ld] reload with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
} else {
- DBprintk((" CPU%d PMD[%ld] reloaded with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ DBprintk((" CPU%d PMD[%ld] reload with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
ia64_set_pmd(cnum, ctx->ctx_pmds[i].ovfl_rval);
}
}
if (cnum == ctx->ctx_btb_counter) need_reset_pmd16 = 1;
}
/*
- * In case of BTB, overflow
- * we need to reset the BTB index.
+ * In case of BTB overflow we need to reset the BTB index.
*/
if (need_reset_pmd16) {
DBprintk(("reset PMD16\n"));
ia64_set_pmd(16, 0);
}
-buffer_full:
- /* see pfm_overflow_notify() on details for why we use |= here */
- ctx->ctx_ovfl_regs |= bv;
- /* nobody to notify, return and unfreeze */
+no_reload:
+
+ /*
+ * some counters overflowed, but they did not require
+ * user notification, so after having reloaded them above
+ * we simply restart
+ */
if (!bv) return 0x0;
+ ctx->ctx_ovfl_regs = bv; /* keep track of what to reset when unblocking */
+ /*
+ * Now we know that:
+ * - we have some counters which overflowed (contains in bv)
+ * - someone has asked to be notified on overflow.
+ */
+
+
+ /*
+ * If the notification task is still present, then notify_task is
+ * non-NULL. It is cleared by that task if it ever exits before we do.
+ */
- if (ctx->ctx_notify_pid == ta->pid) {
- struct siginfo si;
+ if (ctx->ctx_notify_task) {
si.si_errno = 0;
si.si_addr = NULL;
- si.si_pid = ta->pid; /* who is sending */
-
+ si.si_pid = task->pid; /* who is sending */
si.si_signo = ctx->ctx_notify_sig; /* is SIGPROF */
si.si_code = PROF_OVFL; /* goes to user */
si.si_pfm_ovfl = bv;
+
/*
- * in this case, we don't stop the task, we let it go on. It will
- * necessarily go to the signal handler (if any) when it goes back to
- * user mode.
+ * when the target of the signal is not ourself, we have to be more
+ * careful. The notify_task may be cleared by the target task itself
+ * in release_thread(). We must ensure mutual exclusion here such that
+ * the signal is delivered (even to a dying task) safely.
*/
- DBprintk((" sending %d notification to self %d\n", si.si_signo, ta->pid));
-
- /* this call is safe in an interrupt handler */
- ret = send_sig_info(ctx->ctx_notify_sig, &si, ta);
- if (ret != 0)
- printk(" send_sig_info(process %d, SIGPROF)=%d\n", ta->pid, ret);
- /*
- * no matter if we block or not, we keep PMU frozen and do not unfreeze on ctxsw
- */
- ctx->ctx_fl_frozen = 1;
+ if (ctx->ctx_notify_task != current) {
+ /*
+ * grab the notification lock for this task
+ */
+ spin_lock(&ctx->ctx_notify_lock);
- } else {
-#if 0
/*
- * The tasklet is guaranteed to be scheduled for this CPU only
+ * now notify_task cannot be modified until we're done
+ * if NULL, then it got modified while we were in the handler
*/
- notify_info[my_cpu].to_pid = ctx->notify_pid;
- notify_info[my_cpu].from_pid = ta->pid; /* for debug only */
- notify_info[my_cpu].bitvect = bv;
- /* tasklet is inserted and active */
- tasklet_schedule(&pfm_tasklet);
-#endif
+ if (ctx->ctx_notify_task == NULL) {
+ spin_unlock(&ctx->ctx_notify_lock);
+ goto lost_notify;
+ }
/*
- * stored the vector of overflowed registers for use in notification
- * mark that a notification/blocking is pending (arm the trap)
+ * required by send_sig_info() to make sure the target
+ * task does not disappear on us.
*/
- th->pfm_pend_notify = 1;
+ read_lock(&tasklist_lock);
+ }
+ /*
+ * in this case, we don't stop the task, we let it go on. It will
+ * necessarily go to the signal handler (if any) when it goes back to
+ * user mode.
+ */
+ DBprintk((" %d sending %d notification to %d\n", task->pid, si.si_signo, ctx->ctx_notify_task->pid));
+
+
+ /*
+ * this call is safe in an interrupt handler, as is read_lock() on tasklist_lock
+ */
+ ret = send_sig_info(ctx->ctx_notify_sig, &si, ctx->ctx_notify_task);
+ if (ret != 0) printk(" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_task->pid, ret);
+ /*
+ * now undo the protections in order
+ */
+ if (ctx->ctx_notify_task != current) {
+ read_unlock(&tasklist_lock);
+ spin_unlock(&ctx->ctx_notify_lock);
+ }
/*
- * if we do block, then keep PMU frozen until restart
+ * if we block set the pfm_must_block bit
+ * when in block mode, we can effectively block only when the notified
+ * task is not self, otherwise we would deadlock.
+ * in this configuration, the notification is sent, the task will not
+ * block on the way back to user mode, but the PMU will be kept frozen
+ * until PFM_RESTART.
+ * Note that here there is still a race condition with notify_task
+ * possibly being nullified behind our back, but this is fine because
+ * it can only be changed to NULL, which, by construction, can only be
+ * done when notify_task != current. So if it was already different
+ * before, changing it to NULL will still maintain this invariant.
+ * Of course, when it is equal to current it cannot change at this point.
*/
- if (!CTX_OVFL_NOBLOCK(ctx)) ctx->ctx_fl_frozen = 1;
+ if (!CTX_OVFL_NOBLOCK(ctx) && ctx->ctx_notify_task != current) {
+ th->pfm_must_block = 1; /* will cause blocking */
+ }
+ } else {
+lost_notify:
+ DBprintk((" notification task has disappeared !\n"));
+ /*
+ * for a non-blocking context, we make sure we do not fall into the pfm_overflow_notify()
+ * trap. Also in the case of a blocking context with lost notify process, then we do not
+ * want to block either (even though it is interruptible). In this case, the PMU will be kept
+ * frozen and the process will run to completion without monitoring enabled.
+ *
+ * Of course, we cannot lose the notify process when self-monitoring.
+ */
+ th->pfm_must_block = 0;
- DBprintk((" process %d notify ovfl_regs=0x%lx\n", ta->pid, bv));
}
/*
- * keep PMU frozen (and overflowed bits cleared) when we have to stop,
- * otherwise return a resume 'value' for PMC[0]
- *
- * XXX: maybe that's enough to get rid of ctx_fl_frozen ?
+ * if we block, we keep the PMU frozen. If non-blocking, we restart.
+ * In the case of non-blocking where the notify process is lost, we also
+ * restart.
*/
- DBprintk((" will return pmc0=0x%x\n",ctx->ctx_fl_frozen ? 0x1 : 0x0));
+ if (!CTX_OVFL_NOBLOCK(ctx))
+ ctx->ctx_fl_frozen = 1;
+ else
+ ctx->ctx_fl_frozen = 0;
+
+ DBprintk((" reload pmc0=0x%x must_block=%ld\n",
+ ctx->ctx_fl_frozen ? 0x1 : 0x0, th->pfm_must_block));
+
return ctx->ctx_fl_frozen ? 0x1 : 0x0;
}
@@ -1595,10 +1782,17 @@
u64 pmc0 = ia64_get_pmc(0);
int i;
- p += sprintf(p, "PMC[0]=%lx\nPerfmon debug: %s\n", pmc0, pfm_debug ? "On" : "Off");
+ p += sprintf(p, "CPU%d.pmc[0]=%lx\nPerfmon debug: %s\n", smp_processor_id(), pmc0, pfm_debug ? "On" : "Off");
+ p += sprintf(p, "proc_sessions=%lu sys_sessions=%lu\n",
+ pfs_info.pfs_proc_sessions,
+ pfs_info.pfs_sys_session);
+
for(i=0; i < NR_CPUS; i++) {
- if (cpu_is_online(i))
- p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i].owner ? pmu_owners[i].owner->pid: 0);
+ if (cpu_is_online(i)) {
+ p += sprintf(p, "CPU%d.pmu_owner: %-6d\n",
+ i,
+ pmu_owners[i].owner ? pmu_owners[i].owner->pid: -1);
+ }
}
return p - page;
}
@@ -1648,8 +1842,8 @@
}
pmu_conf.perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
- pmu_conf.num_pmds = find_num_pm_regs(pmu_conf.impl_regs);
- pmu_conf.num_pmcs = find_num_pm_regs(&pmu_conf.impl_regs[4]);
+ pmu_conf.num_pmcs = find_num_pm_regs(pmu_conf.impl_regs);
+ pmu_conf.num_pmds = find_num_pm_regs(&pmu_conf.impl_regs[4]);
printk("perfmon: %d bits counters (max value 0x%lx)\n", pm_info.pal_perf_mon_info_s.width, pmu_conf.perf_ovfl_val);
printk("perfmon: %ld PMC/PMD pairs, %ld PMCs, %ld PMDs\n", pmu_conf.max_counters, pmu_conf.num_pmcs, pmu_conf.num_pmds);
@@ -1681,21 +1875,19 @@
ia64_srlz_d();
}
-/*
- * XXX: for system wide this function MUST never be called
- */
void
pfm_save_regs (struct task_struct *ta)
{
struct task_struct *owner;
+ pfm_context_t *ctx;
struct thread_struct *t;
u64 pmc0, psr;
+ unsigned long mask;
int i;
- if (ta == NULL) {
- panic(__FUNCTION__" task is NULL\n");
- }
- t = &ta->thread;
+ t = &ta->thread;
+ ctx = ta->thread.pfm_context;
+
/*
* We must make sure that we don't loose any potential overflow
* interrupt while saving PMU context. In this code, external
@@ -1715,7 +1907,7 @@
* in kernel.
* By now, we could still have an overflow interrupt in-flight.
*/
- __asm__ __volatile__ ("rum psr.up;;"::: "memory");
+ __asm__ __volatile__ ("rsm psr.up|psr.pp;;"::: "memory");
/*
* Mark the PMU as not owned
@@ -1744,7 +1936,6 @@
* next process does not start with monitoring on if not requested
*/
ia64_set_pmc(0, 1);
- ia64_srlz_d();
/*
* Check for overflow bits and proceed manually if needed
@@ -1755,94 +1946,111 @@
* next time the task exits from the kernel.
*/
if (pmc0 & ~0x1) {
- if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", (void *)owner, (void *)ta);
- printk(__FUNCTION__" Warning: pmc[0]=0x%lx explicit call\n", pmc0);
-
- pmc0 = update_counters(owner, pmc0, NULL);
+ update_counters(owner, pmc0, NULL);
/* we will save the updated version of pmc0 */
}
-
/*
* restore PSR for context switch to save
*/
__asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory");
+ /*
+ * we do not save registers if we can do lazy
+ */
+ if (PFM_CAN_DO_LAZY()) {
+ SET_PMU_OWNER(owner);
+ return;
+ }
/*
* XXX needs further optimization.
* Also must take holes into account
*/
- for (i=0; i< pmu_conf.num_pmds; i++) {
- t->pmd[i] = ia64_get_pmd(i);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
}
/* skip PMC[0], we handle it separately */
- for (i=1; i< pmu_conf.num_pmcs; i++) {
- t->pmc[i] = ia64_get_pmc(i);
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
}
-
/*
* Throughout this code we could have gotten an overflow interrupt. It is transformed
* into a spurious interrupt as soon as we give up pmu ownership.
*/
}
-void
-pfm_load_regs (struct task_struct *ta)
+static void
+pfm_lazy_save_regs (struct task_struct *ta)
{
- struct thread_struct *t = &ta->thread;
- pfm_context_t *ctx = ta->thread.pfm_context;
+ pfm_context_t *ctx;
+ struct thread_struct *t;
+ unsigned long mask;
int i;
+ DBprintk((" on [%d] by [%d]\n", ta->pid, current->pid));
+
+ t = &ta->thread;
+ ctx = ta->thread.pfm_context;
/*
* XXX needs further optimization.
* Also must take holes into account
*/
- for (i=0; i< pmu_conf.num_pmds; i++) {
- ia64_set_pmd(i, t->pmd[i]);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
}
-
- /* skip PMC[0] to avoid side effects */
- for (i=1; i< pmu_conf.num_pmcs; i++) {
- ia64_set_pmc(i, t->pmc[i]);
+
+ /* skip PMC[0], we handle it separately */
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
}
+ SET_PMU_OWNER(NULL);
+}
+
+void
+pfm_load_regs (struct task_struct *ta)
+{
+ struct thread_struct *t = &ta->thread;
+ pfm_context_t *ctx = ta->thread.pfm_context;
+ struct task_struct *owner;
+ unsigned long mask;
+ int i;
+
+ owner = PMU_OWNER();
+ if (owner == ta) goto skip_restore;
+ if (owner) pfm_lazy_save_regs(owner);
- /*
- * we first restore ownership of the PMU to the 'soon to be current'
- * context. This way, if, as soon as we unfreeze the PMU at the end
- * of this function, we get an interrupt, we attribute it to the correct
- * task
- */
SET_PMU_OWNER(ta);
-#if 0
- /*
- * check if we had pending overflow before context switching out
- * If so, we invoke the handler manually, i.e. simulate interrupt.
- *
- * XXX: given that we do not use the tasklet anymore to stop, we can
- * move this back to the pfm_save_regs() routine.
- */
- if (t->pmc[0] & ~0x1) {
- /* freeze set in pfm_save_regs() */
- DBprintk((" pmc[0]=0x%lx manual interrupt\n",t->pmc[0]));
- update_counters(ta, t->pmc[0], NULL);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) ia64_set_pmd(i, t->pmd[i]);
}
-#endif
+ /* skip PMC[0] to avoid side effects */
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) ia64_set_pmc(i, t->pmc[i]);
+ }
+skip_restore:
/*
* unfreeze only when possible
*/
if (ctx->ctx_fl_frozen == 0) {
ia64_set_pmc(0, 0);
ia64_srlz_d();
+ /* place where we potentially (kernel level) start monitoring again */
}
}
/*
* This function is called when a thread exits (from exit_thread()).
- * This is a simplified pfm_save_regs() that simply flushes hthe current
+ * This is a simplified pfm_save_regs() that simply flushes the current
* register state into the save area taking into account any pending
 * overflow. This time no notification is sent because the task is dying
 * anyway. The inline processing of overflows avoids losing some counts.
@@ -1933,12 +2141,20 @@
/* collect latest results */
ctx->ctx_pmds[i].val += ia64_get_pmd(j) & pmu_conf.perf_ovfl_val;
+ /*
+ * now everything is in ctx_pmds[] and we need
+ * to clear the saved context from save_regs() such that
+ * pfm_read_pmds() gets the correct value
+ */
+ ta->thread.pmd[j] = 0;
+
/* take care of overflow inline */
if (mask & 0x1) {
ctx->ctx_pmds[i].val += 1 + pmu_conf.perf_ovfl_val;
DBprintk((" PMD[%d] overflowed pmd=0x%lx pmds.val=0x%lx\n",
j, ia64_get_pmd(j), ctx->ctx_pmds[i].val));
}
+ mask >>=1;
}
}
@@ -1977,7 +2193,7 @@
/* clears all PMD registers */
for(i=0;i< pmu_conf.num_pmds; i++) {
- if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
+ if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
}
ia64_srlz_d();
}
@@ -1986,7 +2202,7 @@
* task is the newly created task
*/
int
-pfm_inherit(struct task_struct *task)
+pfm_inherit(struct task_struct *task, struct pt_regs *regs)
{
pfm_context_t *ctx = current->thread.pfm_context;
pfm_context_t *nctx;
@@ -1994,12 +2210,22 @@
int i, cnum;
/*
+ * bypass completely for system wide
+ */
+ if (pfs_info.pfs_sys_session) {
+ DBprintk((" enabling psr.pp for %d\n", task->pid));
+ ia64_psr(regs)->pp = pfs_info.pfs_pp;
+ return 0;
+ }
+
+ /*
* takes care of easiest case first
*/
if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_NONE) {
DBprintk((" removing PFM context for %d\n", task->pid));
task->thread.pfm_context = NULL;
- task->thread.pfm_pend_notify = 0;
+ task->thread.pfm_must_block = 0;
+ atomic_set(&task->thread.pfm_notifiers_check, 0);
/* copy_thread() clears IA64_THREAD_PM_VALID */
return 0;
}
@@ -2009,9 +2235,11 @@
/* copy content */
*nctx = *ctx;
- if (ctx->ctx_fl_inherit == PFM_FL_INHERIT_ONCE) {
+ if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_ONCE) {
nctx->ctx_fl_inherit = PFM_FL_INHERIT_NONE;
+ atomic_set(&task->thread.pfm_notifiers_check, 0);
DBprintk((" downgrading to INHERIT_NONE for %d\n", task->pid));
+ pfs_info.pfs_proc_sessions++;
}
/* initialize counters in new context */
@@ -2033,7 +2261,7 @@
sema_init(&nctx->ctx_restart_sem, 0); /* reset this semaphore to locked */
/* clear pending notification */
- th->pfm_pend_notify = 0;
+ th->pfm_must_block = 0;
/* link with new task */
th->pfm_context = nctx;
@@ -2052,7 +2280,10 @@
return 0;
}
-/* called from exit_thread() */
+/*
+ * called from release_thread(), at this point this task is not in the
+ * tasklist anymore
+ */
void
pfm_context_exit(struct task_struct *task)
{
@@ -2068,16 +2299,126 @@
pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
/* if only user left, then remove */
- DBprintk((" pid %d: task %d sampling psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
+ DBprintk((" [%d] [%d] psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
if (atomic_dec_and_test(&psb->psb_refcnt) ) {
rvfree(psb->psb_hdr, psb->psb_size);
vfree(psb);
- DBprintk((" pid %d: cleaning task %d sampling buffer\n", current->pid, task->pid ));
+ DBprintk((" [%d] cleaning [%d] sampling buffer\n", current->pid, task->pid ));
+ }
+ }
+ DBprintk((" [%d] cleaning [%d] pfm_context @%p\n", current->pid, task->pid, (void *)ctx));
+
+ /*
+ * To avoid having the notified task scan the entire process list
+ * when it exits (because it would have pfm_notifiers_check set), we
+ * decrease it by 1 to inform the task that one less task is going
+ * to send it a notification. Each new notifier increases this field by
+ * 1 in pfm_context_create(). Of course, there is a race condition between
+ * decreasing the value and the notified task exiting. The danger comes
+ * from the fact that we have a direct pointer to its task structure
+ * thereby bypassing the tasklist. We must make sure that if we have
+ * notify_task != NULL, the target task is still somewhat present. It may
+ * already be detached from the tasklist but that's okay. Note that it is
+ * okay if we 'miss the deadline' and the task scans the list for nothing;
+ * it will affect performance but not correctness. The correctness is ensured
+ * by using the notify_lock which prevents the notify_task from changing on us.
+ * Once holding this lock, if we see notify_task != NULL, then it will stay like
+ * that until we release the lock. If it is NULL already then we came too late.
+ */
+ spin_lock(&ctx->ctx_notify_lock);
+
+ if (ctx->ctx_notify_task) {
+ DBprintk((" [%d] [%d] atomic_sub on [%d] notifiers=%u\n", current->pid, task->pid,
+ ctx->ctx_notify_task->pid,
+ atomic_read(&ctx->ctx_notify_task->thread.pfm_notifiers_check)));
+
+ atomic_sub(1, &ctx->ctx_notify_task->thread.pfm_notifiers_check);
+ }
+
+ spin_unlock(&ctx->ctx_notify_lock);
+
+ if (ctx->ctx_fl_system) {
+ /*
+ * if interrupts were included (true by default), then reset
+ * to get the default value
+ */
+ if (ctx->ctx_fl_exclintr == 0) {
+ /*
+ * reload kernel default DCR value
+ */
+ ia64_set_dcr(pfs_info.pfs_dfl_dcr);
+ DBprintk((" restored dcr to 0x%lx\n", pfs_info.pfs_dfl_dcr));
}
+ /*
+ * free system wide session slot
+ */
+ pfs_info.pfs_sys_session = 0;
+ } else {
+ pfs_info.pfs_proc_sessions--;
}
- DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, task->pid, (void *)ctx));
+
pfm_context_free(ctx);
+ /*
+ * clean pfm state in thread structure,
+ */
+ task->thread.pfm_context = NULL;
+ task->thread.pfm_must_block = 0;
+ /* pfm_notifiers is cleaned in pfm_cleanup_notifiers() */
+
+}
+
+void
+pfm_cleanup_notifiers(struct task_struct *task)
+{
+ struct task_struct *p;
+ pfm_context_t *ctx;
+
+ DBprintk((" [%d] called\n", task->pid));
+
+ read_lock(&tasklist_lock);
+
+ for_each_task(p) {
+ /*
+ * It is safe to do the 2-step test here, because thread.pfm_context
+ * is cleaned up only in release_thread() and at that point
+ * the task has been detached from the tasklist which is an
+ * operation which uses the write_lock() on the tasklist_lock
+ * so it cannot run concurrently to this loop. So we have the
+ * guarantee that if we find p and it has a perfmon ctx then
+ * it is going to stay like this for the entire execution of this
+ * loop.
+ */
+ ctx = p->thread.pfm_context;
+
+ DBprintk((" [%d] scanning task [%d] ctx=%p\n", task->pid, p->pid, ctx));
+
+ if (ctx && ctx->ctx_notify_task == task) {
+ DBprintk((" trying for notifier %d in %d\n", task->pid, p->pid));
+ /*
+ * the spinlock is required to take care of a race condition
+ * with the send_sig_info() call. We must make sure that
+ * either the send_sig_info() completes using a valid task,
+ * or the notify_task is cleared before the send_sig_info()
+ * can pick up a stale value. Note that by the time this
+ * function is executed the 'task' is already detached from the
+ * tasklist. The problem is that the notifiers have a direct
+ * pointer to it. It is okay to send a signal to a task at this
+ * stage; it simply will have no effect. But it is better than sending
+ * to a completely destroyed task or, worse, to a new task using the same
+ * task_struct address.
+ */
+ spin_lock(&ctx->ctx_notify_lock);
+
+ ctx->ctx_notify_task = NULL;
+
+ spin_unlock(&ctx->ctx_notify_lock);
+
+ DBprintk((" done for notifier %d in %d\n", task->pid, p->pid));
+ }
+ }
+ read_unlock(&tasklist_lock);
+
}
#else /* !CONFIG_PERFMON */
diff -urN linux-2.4.13/arch/ia64/kernel/process.c linux-2.4.13-lia/arch/ia64/kernel/process.c
--- linux-2.4.13/arch/ia64/kernel/process.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/process.c Wed Oct 24 18:14:43 2001
@@ -63,7 +63,8 @@
{
unsigned long ip = regs->cr_iip + ia64_psr(regs)->ri;
- printk("\npsr : %016lx ifs : %016lx ip : [<%016lx>] %s\n",
+ printk("\nPid: %d, comm: %20s\n", current->pid, current->comm);
+ printk("psr : %016lx ifs : %016lx ip : [<%016lx>] %s\n",
regs->cr_ipsr, regs->cr_ifs, ip, print_tainted());
printk("unat: %016lx pfs : %016lx rsc : %016lx\n",
regs->ar_unat, regs->ar_pfs, regs->ar_rsc);
@@ -201,7 +202,7 @@
{
unsigned long rbs, child_rbs, rbs_size, stack_offset, stack_top, stack_used;
struct switch_stack *child_stack, *stack;
- extern char ia64_ret_from_clone;
+ extern char ia64_ret_from_clone, ia32_ret_from_clone;
struct pt_regs *child_ptregs;
int retval = 0;
@@ -250,7 +251,10 @@
child_ptregs->r12 = (unsigned long) (child_ptregs + 1); /* kernel sp */
child_ptregs->r13 = (unsigned long) p; /* set `current' pointer */
}
- child_stack->b0 = (unsigned long) &ia64_ret_from_clone;
+ if (IS_IA32_PROCESS(regs))
+ child_stack->b0 = (unsigned long) &ia32_ret_from_clone;
+ else
+ child_stack->b0 = (unsigned long) &ia64_ret_from_clone;
child_stack->ar_bspstore = child_rbs + rbs_size;
/* copy parts of thread_struct: */
@@ -285,9 +289,8 @@
ia32_save_state(p);
#endif
#ifdef CONFIG_PERFMON
- p->thread.pfm_pend_notify = 0;
if (p->thread.pfm_context)
- retval = pfm_inherit(p);
+ retval = pfm_inherit(p, child_ptregs);
#endif
return retval;
}
@@ -441,11 +444,24 @@
}
#ifdef CONFIG_PERFMON
+/*
+ * By the time we get here, the task is detached from the tasklist. This is important
+ * because it means that no other tasks can ever find it as a notified task, therefore
+ * there is no race condition between this code and let's say a pfm_context_create().
+ * Conversely, pfm_cleanup_notifiers() cannot try to access a task's pfm context if
+ * this other task is in the middle of its own pfm_context_exit() because it would already
+ * be out of the task list. Note that this case is very unlikely between a direct child
+ * and its parent (if it is the notified process) because of the way the exit is notified
+ * via SIGCHLD.
+ */
void
release_thread (struct task_struct *task)
{
if (task->thread.pfm_context)
pfm_context_exit(task);
+
+ if (atomic_read(&task->thread.pfm_notifiers_check) > 0)
+ pfm_cleanup_notifiers(task);
}
#endif
@@ -516,6 +532,29 @@
}
void
+cpu_halt (void)
+{
+ pal_power_mgmt_info_u_t power_info[8];
+ unsigned long min_power;
+ int i, min_power_state;
+
+ if (ia64_pal_halt_info(power_info) != 0)
+ return;
+
+ min_power_state = 0;
+ min_power = power_info[0].pal_power_mgmt_info_s.power_consumption;
+ for (i = 1; i < 8; ++i)
+ if (power_info[i].pal_power_mgmt_info_s.im
+ && power_info[i].pal_power_mgmt_info_s.power_consumption < min_power) {
+ min_power = power_info[i].pal_power_mgmt_info_s.power_consumption;
+ min_power_state = i;
+ }
+
+ while (1)
+ ia64_pal_halt(min_power_state);
+}
+
+void
machine_restart (char *restart_cmd)
{
(*efi.reset_system)(EFI_RESET_WARM, 0, 0, 0);
@@ -524,6 +563,7 @@
void
machine_halt (void)
{
+ cpu_halt();
}
void
@@ -531,4 +571,5 @@
{
if (pm_power_off)
pm_power_off();
+ machine_halt();
}
diff -urN linux-2.4.13/arch/ia64/kernel/ptrace.c linux-2.4.13-lia/arch/ia64/kernel/ptrace.c
--- linux-2.4.13/arch/ia64/kernel/ptrace.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/ptrace.c Wed Oct 10 17:43:07 2001
@@ -2,7 +2,7 @@
* Kernel support for the ptrace() and syscall tracing interfaces.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from the x86 and Alpha versions. Most of the code in here
* could actually be factored into a common set of routines.
@@ -794,11 +794,14 @@
*
* Make sure the single step bit is not set.
*/
-void ptrace_disable(struct task_struct *child)
+void
+ptrace_disable (struct task_struct *child)
{
+ struct ia64_psr *child_psr = ia64_psr(ia64_task_regs(child));
+
/* make sure the single step/take-branch tra bits are not set: */
- ia64_psr(pt)->ss = 0;
- ia64_psr(pt)->tb = 0;
+ child_psr->ss = 0;
+ child_psr->tb = 0;
/* Turn off flag indicating that the KRBS is sync'd with child's VM: */
child->thread.flags &= ~IA64_THREAD_KRBS_SYNCED;
@@ -809,7 +812,7 @@
long arg4, long arg5, long arg6, long arg7, long stack)
{
struct pt_regs *pt, *regs = (struct pt_regs *) &stack;
- unsigned long flags, urbs_end;
+ unsigned long urbs_end;
struct task_struct *child;
struct switch_stack *sw;
long ret;
@@ -855,6 +858,19 @@
if (child->p_pptr != current)
goto out_tsk;
+ if (request != PTRACE_KILL) {
+ if (child->state != TASK_STOPPED)
+ goto out_tsk;
+
+#ifdef CONFIG_SMP
+ while (child->has_cpu) {
+ if (child->state != TASK_STOPPED)
+ goto out_tsk;
+ barrier();
+ }
+#endif
+ }
+
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
@@ -925,7 +941,7 @@
child->ptrace &= ~PT_TRACESYS;
child->exit_code = data;
- /* make sure the single step/take-branch tra bits are not set: */
+ /* make sure the single step/taken-branch trap bits are not set: */
ia64_psr(pt)->ss = 0;
ia64_psr(pt)->tb = 0;
diff -urN linux-2.4.13/arch/ia64/kernel/sal.c linux-2.4.13-lia/arch/ia64/kernel/sal.c
--- linux-2.4.13/arch/ia64/kernel/sal.c Thu Jan 4 12:50:17 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/sal.c Thu Oct 4 00:21:39 2001
@@ -1,8 +1,8 @@
/*
* System Abstraction Layer (SAL) interface routines.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
*/
@@ -18,8 +18,6 @@
#include <asm/sal.h>
#include <asm/pal.h>
-#define SAL_DEBUG
-
spinlock_t sal_lock = SPIN_LOCK_UNLOCKED;
static struct {
@@ -122,10 +120,8 @@
switch (*p) {
case SAL_DESC_ENTRY_POINT:
ep = (struct ia64_sal_desc_entry_point *) p;
-#ifdef SAL_DEBUG
- printk("sal[%d] - entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
- i, ep->pal_proc, ep->sal_proc);
-#endif
+ printk("SAL: entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
+ ep->pal_proc, ep->sal_proc);
ia64_pal_handler_init(__va(ep->pal_proc));
ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp));
break;
@@ -138,17 +134,12 @@
#ifdef CONFIG_SMP
{
struct ia64_sal_desc_ap_wakeup *ap = (void *) p;
-# ifdef SAL_DEBUG
- printk("sal[%d] - wakeup type %x, 0x%lx\n",
- i, ap->mechanism, ap->vector);
-# endif
+
switch (ap->mechanism) {
case IA64_SAL_AP_EXTERNAL_INT:
ap_wakeup_vector = ap->vector;
-# ifdef SAL_DEBUG
printk("SAL: AP wakeup using external interrupt "
"vector 0x%lx\n", ap_wakeup_vector);
-# endif
break;
default:
@@ -163,21 +154,13 @@
struct ia64_sal_desc_platform_feature *pf = (void *) p;
printk("SAL: Platform features ");
-#ifdef CONFIG_IA64_HAVE_IRQREDIR
- /*
- * Early versions of SAL say we don't have
- * IRQ redirection, even though we do...
- */
- pf->feature_mask |= (1 << 1);
-#endif
-
if (pf->feature_mask & (1 << 0))
printk("BusLock ");
if (pf->feature_mask & (1 << 1)) {
printk("IRQ_Redirection ");
#ifdef CONFIG_SMP
- if (no_int_routing)
+ if (no_int_routing)
smp_int_redirect &= ~SMP_IRQ_REDIRECTION;
else
smp_int_redirect |= SMP_IRQ_REDIRECTION;
diff -urN linux-2.4.13/arch/ia64/kernel/setup.c linux-2.4.13-lia/arch/ia64/kernel/setup.c
--- linux-2.4.13/arch/ia64/kernel/setup.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/setup.c Thu Oct 4 00:21:39 2001
@@ -534,10 +534,13 @@
/*
* Initialize default control register to defer all speculative faults. The
* kernel MUST NOT depend on a particular setting of these bits (in other words,
- * the kernel must have recovery code for all speculative accesses).
+ * the kernel must have recovery code for all speculative accesses). Turn on
+ * dcr.lc as per recommendation by the architecture team. Most IA-32 apps
+ * shouldn't be affected by this (moral: keep your ia32 locks aligned and you'll
+ * be fine).
*/
ia64_set_dcr( IA64_DCR_DM | IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_DR
- | IA64_DCR_DA | IA64_DCR_DD);
+ | IA64_DCR_DA | IA64_DCR_DD | IA64_DCR_LC);
#ifndef CONFIG_SMP
ia64_set_fpu_owner(0);
#endif
diff -urN linux-2.4.13/arch/ia64/kernel/sigframe.h linux-2.4.13-lia/arch/ia64/kernel/sigframe.h
--- linux-2.4.13/arch/ia64/kernel/sigframe.h Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/sigframe.h Thu Oct 4 00:21:52 2001
@@ -1,3 +1,9 @@
+struct sigscratch {
+ unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
+ unsigned long pad;
+ struct pt_regs pt;
+};
+
struct sigframe {
/*
* Place signal handler args where user-level unwinder can find them easily.
@@ -7,10 +13,11 @@
unsigned long arg0; /* signum */
unsigned long arg1; /* siginfo pointer */
unsigned long arg2; /* sigcontext pointer */
+ /*
+ * End of architected state.
+ */
- unsigned long rbs_base; /* base of new register backing store (or NULL) */
void *handler; /* pointer to the plabel of the signal handler */
-
struct siginfo info;
struct sigcontext sc;
};
diff -urN linux-2.4.13/arch/ia64/kernel/signal.c linux-2.4.13-lia/arch/ia64/kernel/signal.c
--- linux-2.4.13/arch/ia64/kernel/signal.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/signal.c Thu Oct 4 00:21:52 2001
@@ -2,7 +2,7 @@
* Architecture-specific signal handling support.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from i386 and Alpha versions.
*/
@@ -39,12 +39,6 @@
# define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0])
#endif
-struct sigscratch {
- unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
- unsigned long pad;
- struct pt_regs pt;
-};
-
extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* forward decl */
long
@@ -55,6 +49,10 @@
/* XXX: Don't preclude handling different sized sigset_t's. */
if (sigsetsize != sizeof(sigset_t))
return -EINVAL;
+
+ if (!access_ok(VERIFY_READ, uset, sigsetsize))
+ return -EFAULT;
+
if (GET_SIGSET(&set, uset))
return -EFAULT;
@@ -73,15 +71,9 @@
* pre-set the correct error code here to ensure that the right values
* get saved in sigcontext by ia64_do_signal.
*/
-#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(&scr->pt)) {
- scr->pt.r8 = -EINTR;
- } else
-#endif
- {
- scr->pt.r8 = EINTR;
- scr->pt.r10 = -1;
- }
+ scr->pt.r8 = EINTR;
+ scr->pt.r10 = -1;
+
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
@@ -139,10 +131,9 @@
struct ia64_psr *psr = ia64_psr(&scr->pt);
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
- if (!psr->dfh) {
- psr->mfh = 0;
+ psr->mfh = 0; /* drop signal handler's fph contents... */
+ if (!psr->dfh)
__ia64_load_fpu(current->thread.fph);
- }
}
return err;
}
@@ -380,7 +371,8 @@
err = __put_user(sig, &frame->arg0);
err |= __put_user(&frame->info, &frame->arg1);
err |= __put_user(&frame->sc, &frame->arg2);
- err |= __put_user(new_rbs, &frame->rbs_base);
+ err |= __put_user(new_rbs, &frame->sc.sc_rbs_base);
+ err |= __put_user(0, &frame->sc.sc_loadrs); /* initialize to zero */
err |= __put_user(ka->sa.sa_handler, &frame->handler);
err |= copy_siginfo_to_user(&frame->info, info);
@@ -460,6 +452,7 @@
long
ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
{
+ struct signal_struct *sig;
struct k_sigaction *ka;
siginfo_t info;
long restart = in_syscall;
@@ -571,8 +564,8 @@
case SIGSTOP:
current->state = TASK_STOPPED;
current->exit_code = signr;
- if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags
- & SA_NOCLDSTOP))
+ sig = current->p_pptr->sig;
+ if (sig && !(sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP))
notify_parent(current, SIGCHLD);
schedule();
continue;
diff -urN linux-2.4.13/arch/ia64/kernel/smp.c linux-2.4.13-lia/arch/ia64/kernel/smp.c
--- linux-2.4.13/arch/ia64/kernel/smp.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/smp.c Wed Oct 10 18:50:56 2001
@@ -48,6 +48,7 @@
#include <asm/sal.h>
#include <asm/system.h>
#include <asm/unistd.h>
+#include <asm/mca.h>
/* The 'big kernel lock' */
spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
@@ -70,20 +71,18 @@
#define IPI_CALL_FUNC 0
#define IPI_CPU_STOP 1
-#ifndef CONFIG_ITANIUM_PTCG
-# define IPI_FLUSH_TLB 2
-#endif /*!CONFIG_ITANIUM_PTCG */
static void
stop_this_cpu (void)
{
+ extern void cpu_halt (void);
/*
* Remove this CPU:
*/
clear_bit(smp_processor_id(), &cpu_online_map);
max_xtp();
__cli();
- for (;;);
+ cpu_halt();
}
void
@@ -136,49 +135,6 @@
stop_this_cpu();
break;
-#ifndef CONFIG_ITANIUM_PTCG
- case IPI_FLUSH_TLB:
- {
- extern unsigned long flush_start, flush_end, flush_nbits, flush_rid;
- extern atomic_t flush_cpu_count;
- unsigned long saved_rid = ia64_get_rr(flush_start);
- unsigned long end = flush_end;
- unsigned long start = flush_start;
- unsigned long nbits = flush_nbits;
-
- /*
- * Current CPU may be running with different RID so we need to
- * reload the RID of flushed address. Purging the translation
- * also needs ALAT invalidation; we do not need "invala" here
- * since it is done in ia64_leave_kernel.
- */
- ia64_srlz_d();
- if (saved_rid != flush_rid) {
- ia64_set_rr(flush_start, flush_rid);
- ia64_srlz_d();
- }
-
- do {
- /*
- * Purge local TLB entries.
- */
- __asm__ __volatile__ ("ptc.l %0,%1" ::
- "r"(start), "r"(nbits<<2) : "memory");
- start += (1UL << nbits);
- } while (start < end);
-
- ia64_insn_group_barrier();
- ia64_srlz_i(); /* srlz.i implies srlz.d */
-
- if (saved_rid != flush_rid) {
- ia64_set_rr(flush_start, saved_rid);
- ia64_srlz_d();
- }
- atomic_dec(&flush_cpu_count);
- break;
- }
-#endif /* !CONFIG_ITANIUM_PTCG */
-
default:
printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which);
break;
@@ -228,30 +184,6 @@
platform_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0);
}
-#ifndef CONFIG_ITANIUM_PTCG
-
-void
-smp_send_flush_tlb (void)
-{
- send_IPI_allbutself(IPI_FLUSH_TLB);
-}
-
-void
-smp_resend_flush_tlb (void)
-{
- int i;
-
- /*
- * Really need a null IPI but since this rarely should happen & since this code
- * will go away, lets not add one.
- */
- for (i = 0; i < smp_num_cpus; ++i)
- if (i != smp_processor_id())
- smp_send_reschedule(i);
-}
-
-#endif /* !CONFIG_ITANIUM_PTCG */
-
void
smp_flush_tlb_all (void)
{
@@ -277,10 +209,6 @@
{
struct call_data_struct data;
int cpus = 1;
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- unsigned long timeout;
-#endif
if (cpuid == smp_processor_id()) {
printk(__FUNCTION__" trying to call self\n");
@@ -295,26 +223,15 @@
atomic_set(&data.finished, 0);
spin_lock_bh(&call_lock);
- call_data = &data;
-
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- resend:
- send_IPI_single(cpuid, IPI_CALL_FUNC);
- /* Wait for response */
- timeout = jiffies + HZ;
- while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
- barrier();
- if (atomic_read(&data.started) != cpus)
- goto resend;
-#else
+ call_data = &data;
+ mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */
send_IPI_single(cpuid, IPI_CALL_FUNC);
/* Wait for response */
while (atomic_read(&data.started) != cpus)
barrier();
-#endif
+
if (wait)
while (atomic_read(&data.finished) != cpus)
barrier();
@@ -348,10 +265,6 @@
{
struct call_data_struct data;
int cpus = smp_num_cpus-1;
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- unsigned long timeout;
-#endif
if (!cpus)
return 0;
@@ -364,27 +277,14 @@
atomic_set(&data.finished, 0);
spin_lock_bh(&call_lock);
- call_data = &data;
-
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- resend:
- /* Send a message to all other CPUs and wait for them to respond */
- send_IPI_allbutself(IPI_CALL_FUNC);
- /* Wait for response */
- timeout = jiffies + HZ;
- while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
- barrier();
- if (atomic_read(&data.started) != cpus)
- goto resend;
-#else
+ call_data = &data;
+ mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */
send_IPI_allbutself(IPI_CALL_FUNC);
/* Wait for response */
while (atomic_read(&data.started) != cpus)
barrier();
-#endif
if (wait)
while (atomic_read(&data.finished) != cpus)
diff -urN linux-2.4.13/arch/ia64/kernel/smpboot.c linux-2.4.13-lia/arch/ia64/kernel/smpboot.c
--- linux-2.4.13/arch/ia64/kernel/smpboot.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/smpboot.c Thu Oct 4 00:21:39 2001
@@ -33,6 +33,7 @@
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/machvec.h>
+#include <asm/mca.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
@@ -42,6 +43,8 @@
#include <asm/system.h>
#include <asm/unistd.h>
+#define SMP_DEBUG 0
+
#if SMP_DEBUG
#define Dprintk(x...) printk(x)
#else
@@ -310,7 +313,7 @@
}
-void __init
+static void __init
smp_callin (void)
{
int cpuid, phys_id;
@@ -324,8 +327,7 @@
phys_id = hard_smp_processor_id();
if (test_and_set_bit(cpuid, &cpu_online_map)) {
- printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n",
- phys_id, cpuid);
+ printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n", phys_id, cpuid);
BUG();
}
@@ -341,6 +343,12 @@
* Get our bogomips.
*/
ia64_init_itm();
+
+#ifdef CONFIG_IA64_MCA
+ ia64_mca_cmc_vector_setup(); /* Setup vector on AP & enable */
+ ia64_mca_check_errors(); /* For post-failure MCA error logging */
+#endif
+
#ifdef CONFIG_PERFMON
perfmon_init_percpu();
#endif
@@ -364,14 +372,15 @@
{
extern int cpu_idle (void);
+ Dprintk("start_secondary: starting CPU 0x%x\n", hard_smp_processor_id());
efi_map_pal_code();
cpu_init();
smp_callin();
- Dprintk("CPU %d is set to go. \n", smp_processor_id());
+ Dprintk("CPU %d is set to go.\n", smp_processor_id());
while (!atomic_read(&smp_commenced))
;
- Dprintk("CPU %d is starting idle. \n", smp_processor_id());
+ Dprintk("CPU %d is starting idle.\n", smp_processor_id());
return cpu_idle();
}
@@ -415,7 +424,7 @@
unhash_process(idle);
init_tasks[cpu] = idle;
- Dprintk("Sending Wakeup Vector to AP 0x%x/0x%x.\n", cpu, sapicid);
+ Dprintk("Sending wakeup vector %u to AP 0x%x/0x%x.\n", ap_wakeup_vector, cpu, sapicid);
platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
@@ -424,7 +433,6 @@
*/
Dprintk("Waiting on callin_map ...");
for (timeout = 0; timeout < 100000; timeout++) {
- Dprintk(".");
if (test_bit(cpu, &cpu_callin_map))
break; /* It has booted */
udelay(100);
diff -urN linux-2.4.13/arch/ia64/kernel/sys_ia64.c linux-2.4.13-lia/arch/ia64/kernel/sys_ia64.c
--- linux-2.4.13/arch/ia64/kernel/sys_ia64.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/sys_ia64.c Thu Oct 4 00:21:39 2001
@@ -19,24 +19,29 @@
#include <asm/shmparam.h>
#include <asm/uaccess.h>
-#define COLOR_ALIGN(addr) (((addr) + SHMLBA - 1) & ~(SHMLBA - 1))
-
unsigned long
arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
{
- struct vm_area_struct * vmm;
long map_shared = (flags & MAP_SHARED);
+ unsigned long align_mask = PAGE_SIZE - 1;
+ struct vm_area_struct * vmm;
if (len > RGN_MAP_LIMIT)
return -ENOMEM;
if (!addr)
addr = TASK_UNMAPPED_BASE;
- if (map_shared)
- addr = COLOR_ALIGN(addr);
- else
- addr = PAGE_ALIGN(addr);
+ if (map_shared && (TASK_SIZE > 0xfffffffful))
+ /*
+ * For 64-bit tasks, align shared segments to 1MB to avoid potential
+ * performance penalty due to virtual aliasing (see ASDM). For 32-bit
+ * tasks, we prefer to avoid exhausting the address space too quickly by
+ * limiting alignment to a single page.
+ */
+ align_mask = SHMLBA - 1;
+
+ addr = (addr + align_mask) & ~align_mask;
for (vmm = find_vma(current->mm, addr); ; vmm = vmm->vm_next) {
/* At this point: (!vmm || addr < vmm->vm_end). */
@@ -46,9 +51,7 @@
return -ENOMEM;
if (!vmm || addr + len <= vmm->vm_start)
return addr;
- addr = vmm->vm_end;
- if (map_shared)
- addr = COLOR_ALIGN(addr);
+ addr = (vmm->vm_end + align_mask) & ~align_mask;
}
}
@@ -184,8 +187,10 @@
if (!file)
return -EBADF;
- if (!file->f_op || !file->f_op->mmap)
- return -ENODEV;
+ if (!file->f_op || !file->f_op->mmap) {
+ addr = -ENODEV;
+ goto out;
+ }
}
/*
@@ -194,22 +199,26 @@
*/
len = PAGE_ALIGN(len);
if (len == 0)
- return addr;
+ goto out;
/* don't permit mappings into unmapped space or the virtual page table of a region: */
roff = rgn_offset(addr);
- if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT)
- return -EINVAL;
+ if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT) {
+ addr = -EINVAL;
+ goto out;
+ }
/* don't permit mappings that would cross a region boundary: */
- if (rgn_index(addr) != rgn_index(addr + len))
- return -EINVAL;
+ if (rgn_index(addr) != rgn_index(addr + len)) {
+ addr = -EINVAL;
+ goto out;
+ }
down_write(&current->mm->mmap_sem);
addr = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
up_write(&current->mm->mmap_sem);
- if (file)
+out: if (file)
fput(file);
return addr;
}
diff -urN linux-2.4.13/arch/ia64/kernel/time.c linux-2.4.13-lia/arch/ia64/kernel/time.c
--- linux-2.4.13/arch/ia64/kernel/time.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/time.c Thu Oct 4 00:21:39 2001
@@ -145,6 +145,9 @@
tv->tv_usec = usec;
}
+/* XXX there should be a cleaner way for declaring an alias... */
+asm (".global get_fast_time; get_fast_time = do_gettimeofday");
+
static void
timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
diff -urN linux-2.4.13/arch/ia64/kernel/traps.c linux-2.4.13-lia/arch/ia64/kernel/traps.c
--- linux-2.4.13/arch/ia64/kernel/traps.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/traps.c Wed Oct 24 18:15:16 2001
@@ -1,20 +1,19 @@
/*
* Architecture-specific trap handling.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
*/
/*
- * The fpu_fault() handler needs to be able to access and update all
- * floating point registers. Those saved in pt_regs can be accessed
- * through that structure, but those not saved, will be accessed
- * directly. To make this work, we need to ensure that the compiler
- * does not end up using a preserved floating point register on its
- * own. The following achieves this by declaring preserved registers
- * that are not marked as "fixed" as global register variables.
+ * fp_emulate() needs to be able to access and update all floating point registers. Those
+ * saved in pt_regs can be accessed through that structure, but those not saved, will be
+ * accessed directly. To make this work, we need to ensure that the compiler does not end
+ * up using a preserved floating point register on its own. The following achieves this
+ * by declaring preserved registers that are not marked as "fixed" as global register
+ * variables.
*/
register double f2 asm ("f2"); register double f3 asm ("f3");
register double f4 asm ("f4"); register double f5 asm ("f5");
@@ -33,13 +32,17 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
+#include <linux/vt_kern.h> /* For unblank_screen() */
+#include <asm/hardirq.h>
#include <asm/ia32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/fpswa.h>
+extern spinlock_t timerlist_lock;
+
static fpswa_interface_t *fpswa_interface;
void __init
@@ -51,30 +54,74 @@
fpswa_interface = __va(ia64_boot_param->fpswa);
}
+/*
+ * Unlock any spinlocks which will prevent us from getting the message out (timerlist_lock
+ * is acquired through the console unblank code)
+ */
void
-die_if_kernel (char *str, struct pt_regs *regs, long err)
+bust_spinlocks (int yes)
{
- if (user_mode(regs)) {
-#if 0
- /* XXX for debugging only */
- printk ("!!die_if_kernel: %s(%d): %s %ld\n",
- current->comm, current->pid, str, err);
- show_regs(regs);
+ spin_lock_init(&timerlist_lock);
+ if (yes) {
+ oops_in_progress = 1;
+#ifdef CONFIG_SMP
+ global_irq_lock = 0; /* Many serial drivers do __global_cli() */
#endif
- return;
+ } else {
+ int loglevel_save = console_loglevel;
+#ifdef CONFIG_VT
+ unblank_screen();
+#endif
+ oops_in_progress = 0;
+ /*
+ * OK, the message is on the console. Now we call printk() without
+ * oops_in_progress set so that printk will give klogd a poke. Hold onto
+ * your hats...
+ */
+ console_loglevel = 15; /* NMI oopser may have shut the console up */
+ printk(" ");
+ console_loglevel = loglevel_save;
}
+}
- printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
-
- show_regs(regs);
+void
+die (const char *str, struct pt_regs *regs, long err)
+{
+ static struct {
+ spinlock_t lock;
+ int lock_owner;
+ int lock_owner_depth;
+ } die = {
+ lock: SPIN_LOCK_UNLOCKED,
+ lock_owner: -1,
+ lock_owner_depth: 0
+ };
- if (current->thread.flags & IA64_KERNEL_DEATH) {
- printk("die_if_kernel recursion detected.\n");
- sti();
- while (1);
+ if (die.lock_owner != smp_processor_id()) {
+ console_verbose();
+ spin_lock_irq(&die.lock);
+ die.lock_owner = smp_processor_id();
+ die.lock_owner_depth = 0;
+ bust_spinlocks(1);
}
- current->thread.flags |= IA64_KERNEL_DEATH;
- do_exit(SIGSEGV);
+
+ if (++die.lock_owner_depth < 3) {
+ printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
+ show_regs(regs);
+ } else
+ printk(KERN_ERR "Recursive die() failure, output suppressed\n");
+
+ bust_spinlocks(0);
+ die.lock_owner = -1;
+ spin_unlock_irq(&die.lock);
+ do_exit(SIGSEGV);
+}
+
+void
+die_if_kernel (char *str, struct pt_regs *regs, long err)
+{
+ if (!user_mode(regs))
+ die(str, regs, err);
}
void
@@ -169,14 +216,12 @@
}
/*
- * disabled_fph_fault() is called when a user-level process attempts
- * to access one of the registers f32..f127 when it doesn't own the
- * fp-high register partition. When this happens, we save the current
- * fph partition in the task_struct of the fpu-owner (if necessary)
- * and then load the fp-high partition of the current task (if
- * necessary). Note that the kernel has access to fph by the time we
- * get here, as the IVT's "Diabled FP-Register" handler takes care of
- * clearing psr.dfh.
+ * disabled_fph_fault() is called when a user-level process attempts to access f32..f127
+ * and it doesn't own the fp-high register partition. When this happens, we save the
+ * current fph partition in the task_struct of the fpu-owner (if necessary) and then load
+ * the fp-high partition of the current task (if necessary). Note that the kernel has
+ * access to fph by the time we get here, as the IVT's "Disabled FP-Register" handler takes
+ * care of clearing psr.dfh.
*/
static inline void
disabled_fph_fault (struct pt_regs *regs)
@@ -277,7 +322,7 @@
if (jiffies - last_time > 5*HZ)
fpu_swa_count = 0;
- if (++fpu_swa_count < 5) {
+ if ((++fpu_swa_count < 5) && !(current->thread.flags & IA64_THREAD_FPEMU_NOPRINT)) {
last_time = jiffies;
printk(KERN_WARNING "%s(%d): floating-point assist fault at ip %016lx\n",
current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri);
@@ -478,12 +523,12 @@
case 32: /* fp fault */
case 33: /* fp trap */
result = handle_fpu_swa((vector == 32) ? 1 : 0, regs, isr);
- if (result < 0) {
+ if ((result < 0) || (current->thread.flags & IA64_THREAD_FPEMU_SIGFPE)) {
siginfo.si_signo = SIGFPE;
siginfo.si_errno = 0;
siginfo.si_code = FPE_FLTINV;
siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
- force_sig(SIGFPE, current);
+ force_sig_info(SIGFPE, &siginfo, current);
}
return;
@@ -510,6 +555,10 @@
break;
case 46:
+#ifdef CONFIG_IA32_SUPPORT
+ if (ia32_intercept(regs, isr) == 0)
+ return;
+#endif
printk("Unexpected IA-32 intercept trap (Trap 46)\n");
printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n",
regs->cr_iip, ifa, isr, iim);
diff -urN linux-2.4.13/arch/ia64/kernel/unaligned.c linux-2.4.13-lia/arch/ia64/kernel/unaligned.c
--- linux-2.4.13/arch/ia64/kernel/unaligned.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/unaligned.c Wed Oct 24 18:15:29 2001
@@ -5,6 +5,8 @@
* Copyright (C) 1999-2000 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
+ * 2001/10/11 Fix unaligned access to rotating registers in s/w pipelined loops.
+ * 2001/08/13 Correct size of extended floats (float_fsz) from 16 to 10 bytes.
* 2001/01/17 Add support emulation of unaligned kernel accesses.
*/
#include <linux/kernel.h>
@@ -282,9 +284,19 @@
unsigned long rnats, nat_mask;
unsigned long on_kbs;
long sof = (regs->cr_ifs) & 0x7f;
+ long sor = 8 * ((regs->cr_ifs >> 14) & 0xf);
+ long rrb_gr = (regs->cr_ifs >> 18) & 0x7f;
+ long ridx;
+
+ if ((r1 - 32) > sor)
+ ridx = -sof + (r1 - 32);
+ else if ((r1 - 32) < (sor - rrb_gr))
+ ridx = -sof + (r1 - 32) + rrb_gr;
+ else
+ ridx = -sof + (r1 - 32) - (sor - rrb_gr);
- DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
- r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld ridx=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f, ridx);
if ((r1 - 32) >= sof) {
/* this should never happen, as the "rsvd register fault" has higher priority */
@@ -293,7 +305,7 @@
}
on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
- addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx);
if (addr >= kbs) {
/* the register is on the kernel backing store: easy... */
rnat_addr = ia64_rse_rnat_addr(addr);
@@ -318,12 +330,12 @@
return;
}
- bspstore = (unsigned long *) regs->ar_bspstore;
+ bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
bsp = ia64_rse_skip_regs(ubs_end, -sof);
- addr = ia64_rse_skip_regs(bsp, r1 - 32);
+ addr = ia64_rse_skip_regs(bsp, ridx + sof);
- DPRINT("ubs_end=%p bsp=%p addr=%px\n", (void *) ubs_end, (void *) bsp, (void *) addr);
+ DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
ia64_poke(current, sw, (unsigned long) ubs_end, (unsigned long) addr, val);
@@ -353,9 +365,19 @@
unsigned long rnats, nat_mask;
unsigned long on_kbs;
long sof = (regs->cr_ifs) & 0x7f;
+ long sor = 8 * ((regs->cr_ifs >> 14) & 0xf);
+ long rrb_gr = (regs->cr_ifs >> 18) & 0x7f;
+ long ridx;
+
+ if ((r1 - 32) > sor)
+ ridx = -sof + (r1 - 32);
+ else if ((r1 - 32) < (sor - rrb_gr))
+ ridx = -sof + (r1 - 32) + rrb_gr;
+ else
+ ridx = -sof + (r1 - 32) - (sor - rrb_gr);
- DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
- r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld ridx=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f, ridx);
if ((r1 - 32) >= sof) {
/* this should never happen, as the "rsvd register fault" has higher priority */
@@ -364,7 +386,7 @@
}
on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
- addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx);
if (addr >= kbs) {
/* the register is on the kernel backing store: easy... */
*val = *addr;
@@ -390,7 +412,7 @@
bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
bsp = ia64_rse_skip_regs(ubs_end, -sof);
- addr = ia64_rse_skip_regs(bsp, r1 - 32);
+ addr = ia64_rse_skip_regs(bsp, ridx + sof);
DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
@@ -908,7 +930,7 @@
* floating point operations sizes in bytes
*/
static const unsigned char float_fsz[4]={
- 16, /* extended precision (e) */
+ 10, /* extended precision (e) */
8, /* integer (8) */
4, /* single precision (s) */
8 /* double precision (d) */
@@ -978,11 +1000,11 @@
unsigned long len = float_fsz[ld.x6_sz];
/*
- * fr0 & fr1 don't need to be checked because Illegal Instruction
- * faults have higher priority than unaligned faults.
+ * fr0 & fr1 don't need to be checked because Illegal Instruction faults have
+ * higher priority than unaligned faults.
*
- * r0 cannot be found as the base as it would never generate an
- * unaligned reference.
+ * r0 cannot be found as the base as it would never generate an unaligned
+ * reference.
*/
/*
@@ -996,8 +1018,10 @@
* invalidate the ALAT entry and execute updates, if any.
*/
if (ld.x6_op != 0x2) {
- /* this assumes little-endian byte-order: */
-
+ /*
+ * This assumes little-endian byte-order. Note that there is no "ldfpe"
+ * instruction:
+ */
if (copy_from_user(&fpr_init[0], (void *) ifa, len)
|| copy_from_user(&fpr_init[1], (void *) (ifa + len), len))
return -1;
@@ -1337,7 +1361,7 @@
/*
* IMPORTANT:
- * Notice that the swictch statement DOES not cover all possible instructions
+ * Notice that the switch statement DOES not cover all possible instructions
* that DO generate unaligned references. This is made on purpose because for some
* instructions it DOES NOT make sense to try and emulate the access. Sometimes it
* is WRONG to try and emulate. Here is a list of instruction we don't emulate i.e.,
diff -urN linux-2.4.13/arch/ia64/kernel/unwind.c linux-2.4.13-lia/arch/ia64/kernel/unwind.c
--- linux-2.4.13/arch/ia64/kernel/unwind.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/unwind.c Thu Oct 4 00:21:39 2001
@@ -504,7 +504,7 @@
return 0;
}
-inline int
+int
unw_access_pr (struct unw_frame_info *info, unsigned long *val, int write)
{
unsigned long *addr;
diff -urN linux-2.4.13/arch/ia64/lib/clear_page.S linux-2.4.13-lia/arch/ia64/lib/clear_page.S
--- linux-2.4.13/arch/ia64/lib/clear_page.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/clear_page.S Thu Oct 4 00:21:39 2001
@@ -47,5 +47,5 @@
br.cloop.dptk.few 1b
;;
mov ar.lc = r2 // restore lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(clear_page)
diff -urN linux-2.4.13/arch/ia64/lib/clear_user.S linux-2.4.13-lia/arch/ia64/lib/clear_user.S
--- linux-2.4.13/arch/ia64/lib/clear_user.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/clear_user.S Thu Oct 4 00:21:39 2001
@@ -8,7 +8,7 @@
* r8: number of bytes that didn't get cleared due to a fault
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -62,11 +62,11 @@
;; // avoid WAW on CFM
adds tmp=-1,len // br.ctop is repeat/until
mov ret0=len // return value is length at this point
-(p6) br.ret.spnt.few rp
+(p6) br.ret.spnt.many rp
;;
cmp.lt p6,p0=16,len // if len > 16 then long memset
mov ar.lc=tmp // initialize lc for small count
-(p6) br.cond.dptk.few long_do_clear
+(p6) br.cond.dptk .long_do_clear
;; // WAR on ar.lc
//
// worst case 16 iterations, avg 8 iterations
@@ -79,7 +79,7 @@
1:
EX( .Lexit1, st1 [buf]=r0,1 )
adds len=-1,len // countdown length using len
- br.cloop.dptk.few 1b
+ br.cloop.dptk 1b
;; // avoid RAW on ar.lc
//
// .Lexit4: comes from byte by byte loop
@@ -87,7 +87,7 @@
.Lexit1:
mov ret0=len // faster than using ar.lc
mov ar.lc=saved_lc
- br.ret.sptk.few rp // end of short clear_user
+ br.ret.sptk.many rp // end of short clear_user
//
@@ -98,7 +98,7 @@
// instead of ret0 is due to the fact that the exception code
// changes the values of r8.
//
-long_do_clear:
+.long_do_clear:
tbit.nz p6,p0=buf,0 // odd alignment (for long_do_clear)
;;
EX( .Lexit3, (p6) st1 [buf]=r0,1 ) // 1-byte aligned
@@ -119,7 +119,7 @@
;;
cmp.eq p6,p0=r0,cnt
adds tmp=-1,cnt
-(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p6) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds buf2=8,buf // setup second base pointer
mov ar.lc=tmp
@@ -148,7 +148,7 @@
;; // needed to get len correct when error
st8 [buf2]=r0,16
adds len=-16,len
- br.cloop.dptk.few 2b
+ br.cloop.dptk 2b
;;
mov ar.lc=saved_lc
//
@@ -178,7 +178,7 @@
;;
EX( .Lexit2, (p7) st1 [buf]=r0 ) // only 1 byte left
mov ret0=r0 // success
- br.ret.dptk.few rp // end of most likely path
+ br.ret.sptk.many rp // end of most likely path
//
// Outlined error handling code
@@ -205,5 +205,5 @@
.Lexit3:
mov ret0=len
mov ar.lc=saved_lc
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(__do_clear_user)
diff -urN linux-2.4.13/arch/ia64/lib/copy_page.S linux-2.4.13-lia/arch/ia64/lib/copy_page.S
--- linux-2.4.13/arch/ia64/lib/copy_page.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/copy_page.S Thu Oct 4 00:21:39 2001
@@ -90,5 +90,5 @@
mov pr=saved_pr,0xffffffffffff0000 // restore predicates
mov ar.pfs=saved_pfs
mov ar.lc=saved_lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(copy_page)
diff -urN linux-2.4.13/arch/ia64/lib/copy_user.S linux-2.4.13-lia/arch/ia64/lib/copy_user.S
--- linux-2.4.13/arch/ia64/lib/copy_user.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/copy_user.S Thu Oct 4 00:21:39 2001
@@ -19,8 +19,8 @@
* ret0 0 in case of success. The number of bytes NOT copied in
* case of error.
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* Fixme:
* - handle the case where we have more than 16 bytes and the alignment
@@ -85,7 +85,7 @@
cmp.eq p8,p0=r0,len // check for zero length
.save ar.lc, saved_lc
mov saved_lc=ar.lc // preserve ar.lc (slow)
-(p8) br.ret.spnt.few rp // empty mempcy()
+(p8) br.ret.spnt.many rp // empty mempcy()
;;
add enddst=dst,len // first byte after end of source
add endsrc=src,len // first byte after end of destination
@@ -103,26 +103,26 @@
cmp.lt p10,p7=COPY_BREAK,len // if len > COPY_BREAK then long copy
xor tmp=src,dst // same alignment test prepare
-(p10) br.cond.dptk.few long_copy_user
+(p10) br.cond.dptk .long_copy_user
;; // RAW pr.rot/p16 ?
//
// Now we do the byte by byte loop with software pipeline
//
// p7 is necessarily false by now
1:
- EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 1b
;;
mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs // restore ar.ec
- br.ret.sptk.few rp // end of short memcpy
+ br.ret.sptk.many rp // end of short memcpy
//
// Not 8-byte aligned
//
-diff_align_copy_user:
+.diff_align_copy_user:
// At this point we know we have more than 16 bytes to copy
// and also that src and dest do _not_ have the same alignment.
and src2=0x7,src1 // src offset
@@ -153,7 +153,7 @@
// We know src1 is not 8-byte aligned in this case.
//
cmp.eq p14,p15=r0,dst2
-(p15) br.cond.spnt.few 1f
+(p15) br.cond.spnt 1f
;;
sub t1=8,src2
mov t2=src2
@@ -163,7 +163,7 @@
;;
sub lshift=64,rshift
;;
- br.cond.spnt.few word_copy_user
+ br.cond.spnt .word_copy_user
;;
1:
cmp.leu p14,p15=src2,dst2
@@ -192,15 +192,15 @@
mov ar.lc=cnt
;;
2:
- EX(failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 2b
;;
clrrrb
;;
-word_copy_user:
+.word_copy_user:
cmp.gtu p9,p0=16,len1
-(p9) br.cond.spnt.few 4f // if (16 > len1) skip 8-byte copy
+(p9) br.cond.spnt 4f // if (16 > len1) skip 8-byte copy
;;
shr.u cnt=len1,3 // number of 64-bit words
;;
@@ -232,24 +232,24 @@
#define EPI_1 p[PIPE_DEPTH-2]
#define SWITCH(pred, shift) cmp.eq pred,p0=shift,rshift
#define CASE(pred, shift) \
- (pred) br.cond.spnt.few copy_user_bit##shift
+ (pred) br.cond.spnt .copy_user_bit##shift
#define BODY(rshift) \
-copy_user_bit##rshift: \
+.copy_user_bit##rshift: \
1: \
- EX(failure_out,(EPI) st8 [dst1]=tmp,8); \
+ EX(.failure_out,(EPI) st8 [dst1]=tmp,8); \
(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
EX(3f,(p16) ld8 val1[0]=[src1],8); \
- br.ctop.dptk.few 1b; \
+ br.ctop.dptk 1b; \
;; \
- br.cond.sptk.few .diff_align_do_tail; \
+ br.cond.sptk.many .diff_align_do_tail; \
2: \
(EPI) st8 [dst1]=tmp,8; \
(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
3: \
(p16) mov val1[0]=r0; \
- br.ctop.dptk.few 2b; \
+ br.ctop.dptk 2b; \
;; \
- br.cond.sptk.few failure_in2
+ br.cond.sptk.many .failure_in2
//
// Since the instruction 'shrp' requires a fixed 128-bit value
@@ -301,25 +301,25 @@
mov ar.lc=len1
;;
5:
- EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// Beginning of long mempcy (i.e. > 16 bytes)
//
-long_copy_user:
+.long_copy_user:
tbit.nz p6,p7=src1,0 // odd alignement
and tmp=7,tmp
;;
cmp.eq p10,p8=r0,tmp
mov len1=len // copy because of rotation
-(p8) br.cond.dpnt.few diff_align_copy_user
+(p8) br.cond.dpnt .diff_align_copy_user
;;
// At this point we know we have more than 16 bytes to copy
// and also that both src and dest have the same alignment
@@ -327,11 +327,11 @@
// forward slowly until we reach 16byte alignment: no need to
// worry about reaching the end of buffer.
//
- EX(failure_in1,(p6) ld1 val1[0]=[src1],1) // 1-byte aligned
+ EX(.failure_in1,(p6) ld1 val1[0]=[src1],1) // 1-byte aligned
(p6) adds len1=-1,len1;;
tbit.nz p7,p0=src1,1
;;
- EX(failure_in1,(p7) ld2 val1[1]=[src1],2) // 2-byte aligned
+ EX(.failure_in1,(p7) ld2 val1[1]=[src1],2) // 2-byte aligned
(p7) adds len1=-2,len1;;
tbit.nz p8,p0=src1,2
;;
@@ -339,28 +339,28 @@
// Stop bit not required after ld4 because if we fail on ld4
// we have never executed the ld1, therefore st1 is not executed.
//
- EX(failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
+ EX(.failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
;;
- EX(failure_out,(p6) st1 [dst1]=val1[0],1)
+ EX(.failure_out,(p6) st1 [dst1]=val1[0],1)
tbit.nz p9,p0=src1,3
;;
//
// Stop bit not required after ld8 because if we fail on ld8
// we have never executed the ld2, therefore st2 is not executed.
//
- EX(failure_in1,(p9) ld8 val2[1]=[src1],8) // 8-byte aligned
- EX(failure_out,(p7) st2 [dst1]=val1[1],2)
+ EX(.failure_in1,(p9) ld8 val2[1]=[src1],8) // 8-byte aligned
+ EX(.failure_out,(p7) st2 [dst1]=val1[1],2)
(p8) adds len1=-4,len1
;;
- EX(failure_out, (p8) st4 [dst1]=val2[0],4)
+ EX(.failure_out, (p8) st4 [dst1]=val2[0],4)
(p9) adds len1=-8,len1;;
shr.u cnt=len1,4 // number of 128-bit (2x64bit) words
;;
- EX(failure_out, (p9) st8 [dst1]=val2[1],8)
+ EX(.failure_out, (p9) st8 [dst1]=val2[1],8)
tbit.nz p6,p0=len1,3
cmp.eq p7,p0=r0,cnt
adds tmp=-1,cnt // br.ctop is repeat/until
-(p7) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p7) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds src2=8,src1
adds dst2=8,dst1
@@ -370,12 +370,12 @@
// 16bytes/iteration
//
2:
- EX(failure_in3,(p16) ld8 val1[0]=[src1],16)
+ EX(.failure_in3,(p16) ld8 val1[0]=[src1],16)
(p16) ld8 val2[0]=[src2],16
- EX(failure_out, (EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16)
+ EX(.failure_out, (EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16)
(EPI) st8 [dst2]=val2[PIPE_DEPTH-1],16
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;; // RAW on src1 when fall through from loop
//
// Tail correction based on len only
@@ -384,29 +384,28 @@
// is 16 byte aligned AND we have less than 16 bytes to copy.
//
.dotail:
- EX(failure_in1,(p6) ld8 val1[0]=[src1],8) // at least 8 bytes
+ EX(.failure_in1,(p6) ld8 val1[0]=[src1],8) // at least 8 bytes
tbit.nz p7,p0=len1,2
;;
- EX(failure_in1,(p7) ld4 val1[1]=[src1],4) // at least 4 bytes
+ EX(.failure_in1,(p7) ld4 val1[1]=[src1],4) // at least 4 bytes
tbit.nz p8,p0=len1,1
;;
- EX(failure_in1,(p8) ld2 val2[0]=[src1],2) // at least 2 bytes
+ EX(.failure_in1,(p8) ld2 val2[0]=[src1],2) // at least 2 bytes
tbit.nz p9,p0=len1,0
;;
- EX(failure_out, (p6) st8 [dst1]=val1[0],8)
+ EX(.failure_out, (p6) st8 [dst1]=val1[0],8)
;;
- EX(failure_in1,(p9) ld1 val2[1]=[src1]) // only 1 byte left
+ EX(.failure_in1,(p9) ld1 val2[1]=[src1]) // only 1 byte left
mov ar.lc=saved_lc
;;
- EX(failure_out,(p7) st4 [dst1]=val1[1],4)
+ EX(.failure_out,(p7) st4 [dst1]=val1[1],4)
mov pr=saved_pr,0xffffffffffff0000
;;
- EX(failure_out, (p8) st2 [dst1]=val2[0],2)
+ EX(.failure_out, (p8) st2 [dst1]=val2[0],2)
mov ar.pfs=saved_pfs
;;
- EX(failure_out, (p9) st1 [dst1]=val2[1])
- br.ret.dptk.few rp
-
+ EX(.failure_out, (p9) st1 [dst1]=val2[1])
+ br.ret.sptk.many rp
//
@@ -433,32 +432,32 @@
// pipeline going. We can't really do this inline because
// p16 is always reset to 1 when lc > 0.
//
-failure_in_pipe1:
+.failure_in_pipe1:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
1:
(p16) mov val1[0]=r0
(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
- br.ctop.dptk.few 1b
+ br.ctop.dptk 1b
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// This is the case where the byte by byte copy fails on the load
// when we copy the head. We need to finish the pipeline and copy
// zeros for the rest of the destination. Since this happens
// at the top we still need to fill the body and tail.
-failure_in_pipe2:
+.failure_in_pipe2:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
2:
(p16) mov val1[0]=r0
(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;;
sub len=enddst,dst1,1 // precompute len
- br.cond.dptk.few failure_in1bis
+ br.cond.dptk.many .failure_in1bis
;;
//
@@ -533,9 +532,7 @@
// This means that we are in a situation similar the a fault in the
// head part. That's nice!
//
-failure_in1:
-// sub ret0=enddst,dst1 // number of bytes to zero, i.e. not copied
-// sub len=enddst,dst1,1
+.failure_in1:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
sub len=endsrc,src1,1
//
@@ -546,18 +543,17 @@
// calling side.
//
;;
-failure_in1bis: // from (failure_in3)
+.failure_in1bis: // from (.failure_in3)
mov ar.lc=len // Continue with a stupid byte store.
;;
5:
st1 [dst1]=r0,1
- br.cloop.dptk.few 5b
+ br.cloop.dptk 5b
;;
-skip_loop:
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// Here we simply restart the loop but instead
@@ -569,7 +565,7 @@
// we MUST use src1/endsrc here and not dst1/enddst because
// of the pipeline effect.
//
-failure_in3:
+.failure_in3:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
;;
2:
@@ -577,36 +573,36 @@
(p16) mov val2[0]=r0
(EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16
(EPI) st8 [dst2]=val2[PIPE_DEPTH-1],16
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;;
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
-(p6) br.cond.dptk.few failure_in1bis
+(p6) br.cond.dptk .failure_in1bis
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
-failure_in2:
+.failure_in2:
sub ret0=endsrc,src1
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
-(p6) br.cond.dptk.few failure_in1bis
+(p6) br.cond.dptk .failure_in1bis
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// handling of failures on stores: that's the easy part
//
-failure_out:
+.failure_out:
sub ret0=enddst,dst1
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(__copy_user)
diff -urN linux-2.4.13/arch/ia64/lib/do_csum.S linux-2.4.13-lia/arch/ia64/lib/do_csum.S
--- linux-2.4.13/arch/ia64/lib/do_csum.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/do_csum.S Thu Oct 4 00:21:39 2001
@@ -16,7 +16,6 @@
* back-to-back 8-byte words per loop. Clean up the initialization
* for the loop. Support the cases where load latency = 1 or 2.
* Set CONFIG_IA64_LOAD_LATENCY to 1 or 2 (default).
- *
*/
#include <asm/asmmacro.h>
@@ -130,7 +129,7 @@
;; // avoid WAW on CFM
mov tmp3=0x7 // a temporary mask/value
add tmp1=buf,len // last byte's address
-(p6) br.ret.spnt.few rp // return if true (hope we can avoid that)
+(p6) br.ret.spnt.many rp // return if true (hope we can avoid that)
and firstoff=7,buf // how many bytes off for first1 element
tbit.nz p15,p0=buf,0 // is buf an odd address ?
@@ -181,9 +180,9 @@
cmp.ltu p6,p0=result1[0],word1[0] // check the carry
;;
(p6) adds result1[0]=1,result1[0]
-(p8) br.cond.dptk.few do_csum_exit // if (within an 8-byte word)
+(p8) br.cond.dptk .do_csum_exit // if (within an 8-byte word)
;;
-(p11) br.cond.dptk.few do_csum16 // if (count is even)
+(p11) br.cond.dptk .do_csum16 // if (count is even)
;;
// Here count is odd.
ld8 word1[1]=[first1],8 // load an 8-byte word
@@ -196,14 +195,14 @@
;;
(p6) adds result1[0]=1,result1[0]
;;
-(p9) br.cond.sptk.few do_csum_exit // if (count = 1) exit
+(p9) br.cond.sptk .do_csum_exit // if (count = 1) exit
// Fall through to calculate the checksum, feeding result1[0] as
// the initial value in result1[0].
;;
//
// Calculate the checksum loading two 8-byte words per loop.
//
-do_csum16:
+.do_csum16:
mov saved_lc=ar.lc
shr.u count=count,1 // we do 16 bytes per loop
;;
@@ -225,7 +224,7 @@
;;
add first2=8,first1
;;
-(p9) br.cond.sptk.few do_csum_exit
+(p9) br.cond.sptk .do_csum_exit
;;
nop.m 0
nop.i 0
@@ -241,7 +240,7 @@
2:
(p16) ld8 word1[0]=[first1],16
(p16) ld8 word2[0]=[first2],16
- br.ctop.sptk.few 1b
+ br.ctop.sptk 1b
;;
// Since len is a 32-bit value, carry cannot be larger than
// a 64-bit value.
@@ -263,7 +262,7 @@
;;
(p6) adds result1[0]=1,result1[0]
;;
-do_csum_exit:
+.do_csum_exit:
movl tmp3=0xffffffff
;;
// XXX Fixme
@@ -299,7 +298,7 @@
;;
mov ar.lc=saved_lc
(p15) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
// I (Jun Nakajima) wrote an equivalent code (see below), but it was
// not much better than the original. So keep the original there so that
@@ -331,6 +330,6 @@
//(p15) mux1 ret0=ret0,@rev // reverse word
// ;;
//(p15) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
-// br.ret.sptk.few rp
+// br.ret.sptk.many rp
END(do_csum)
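[Editorial note: the `.do_csum_exit` path above folds the 64-bit accumulator down to a 16-bit checksum, using the `0xffffffff` mask. The same folding can be sketched in C (illustrative only; `fold_csum` is a hypothetical name, not a kernel symbol):]

```c
/* Fold a 64-bit one's-complement sum down to 16 bits, as the
 * do_csum exit path does with its 0xffffffff mask and shifts. */
static unsigned short fold_csum(unsigned long long sum)
{
	sum = (sum & 0xffffffffULL) + (sum >> 32);	/* fold 64 -> <=33 bits */
	sum = (sum & 0xffffffffULL) + (sum >> 32);	/* absorb the carry    */
	sum = (sum & 0xffff) + (sum >> 16);		/* fold 32 -> <=17 bits */
	sum = (sum & 0xffff) + (sum >> 16);		/* absorb the carry    */
	return (unsigned short) sum;
}
```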
diff -urN linux-2.4.13/arch/ia64/lib/idiv32.S linux-2.4.13-lia/arch/ia64/lib/idiv32.S
--- linux-2.4.13/arch/ia64/lib/idiv32.S Mon Oct 9 17:54:56 2000
+++ linux-2.4.13-lia/arch/ia64/lib/idiv32.S Thu Oct 4 00:21:39 2001
@@ -79,5 +79,5 @@
;;
#endif
getf.sig r8 = f6 // transfer result to result register
- br.ret.sptk rp
+ br.ret.sptk.many rp
END(NAME)
diff -urN linux-2.4.13/arch/ia64/lib/idiv64.S linux-2.4.13-lia/arch/ia64/lib/idiv64.S
--- linux-2.4.13/arch/ia64/lib/idiv64.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/idiv64.S Thu Oct 4 00:21:39 2001
@@ -89,5 +89,5 @@
#endif
getf.sig r8 = f17 // transfer result to result register
ldf.fill f17 = [sp]
- br.ret.sptk rp
+ br.ret.sptk.many rp
END(NAME)
diff -urN linux-2.4.13/arch/ia64/lib/memcpy.S linux-2.4.13-lia/arch/ia64/lib/memcpy.S
--- linux-2.4.13/arch/ia64/lib/memcpy.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/memcpy.S Thu Oct 4 00:21:39 2001
@@ -9,20 +9,14 @@
* Output:
* no return value
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
#include <asm/asmmacro.h>
-#if defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
-# define BRP(args...) nop.b 0
-#else
-# define BRP(args...) brp.loop.imp args
-#endif
-
GLOBAL_ENTRY(bcopy)
.regstk 3,0,0,0
mov r8=in0
@@ -103,8 +97,8 @@
cmp.ne p6,p0=t0,r0
mov src=in1 // copy because of rotation
-(p7) br.cond.spnt.few memcpy_short
-(p6) br.cond.spnt.few memcpy_long
+(p7) br.cond.spnt.few .memcpy_short
+(p6) br.cond.spnt.few .memcpy_long
;;
nop.m 0
;;
@@ -119,7 +113,7 @@
1: { .mib
(p[0]) ld8 val[0]=[src],8
nop.i 0
- BRP(1b, 2f)
+ brp.loop.imp 1b, 2f
}
2: { .mfb
(p[N-1])st8 [dst]=val[N-1],8
@@ -139,14 +133,14 @@
* issues, we want to avoid read-modify-write of entire words.
*/
.align 32
-memcpy_short:
+.memcpy_short:
adds cnt=-1,in2 // br.ctop is repeat/until
mov ar.ec=MEM_LAT
- BRP(1f, 2f)
+ brp.loop.imp 1f, 2f
;;
mov ar.lc=cnt
;;
- nop.m 0
+ nop.m 0
;;
nop.m 0
nop.i 0
@@ -163,7 +157,7 @@
1: { .mib
(p[0]) ld1 val[0]=[src],1
nop.i 0
- BRP(1b, 2f)
+ brp.loop.imp 1b, 2f
} ;;
2: { .mfb
(p[MEM_LAT-1])st1 [dst]=val[MEM_LAT-1],1
@@ -202,7 +196,7 @@
#define LOG_LOOP_SIZE 6
-memcpy_long:
+.memcpy_long:
alloc t3=ar.pfs,3,Nrot,0,Nrot // resize register frame
and t0=-8,src // t0 = src & ~7
and t2=7,src // t2 = src & 7
@@ -247,7 +241,7 @@
mov t4=ip
} ;;
and src2=-8,src // align source pointer
- adds t4=memcpy_loops-1b,t4
+ adds t4=.memcpy_loops-1b,t4
mov ar.ec=N
and t0=7,src // t0 = src & 7
@@ -266,7 +260,7 @@
mov pr=cnt,0x38 // set (p5,p4,p3) to # of bytes last-word bytes to copy
mov ar.lc=t2
;;
- nop.m 0
+ nop.m 0
;;
nop.m 0
nop.i 0
@@ -278,7 +272,7 @@
br.sptk.few b6
;;
-memcpy_tail:
+.memcpy_tail:
// At this point, (p5,p4,p3) are set to the number of bytes left to copy (which is
// less than 8) and t0 contains the last few bytes of the src buffer:
(p5) st4 [dst]=t0,4
@@ -300,7 +294,7 @@
1: { .mib \
(p[0]) ld8 val[0]=[src2],8; \
(p[MEM_LAT+3]) shrp w[0]=val[MEM_LAT+3],val[MEM_LAT+4-index],shift; \
- BRP(1b, 2f) \
+ brp.loop.imp 1b, 2f \
}; \
2: { .mfb \
(p[MEM_LAT+4]) st8 [dst]=w[1],8; \
@@ -311,8 +305,8 @@
ld8 val[N-1]=[src_end]; /* load last word (may be same as val[N]) */ \
;; \
shrp t0=val[N-1],val[N-index],shift; \
- br memcpy_tail
-memcpy_loops:
+ br .memcpy_tail
+.memcpy_loops:
COPY(0, 1) /* no point special casing this---it doesn't go any faster without shrp */
COPY(8, 0)
COPY(16, 0)
diff -urN linux-2.4.13/arch/ia64/lib/memset.S linux-2.4.13-lia/arch/ia64/lib/memset.S
--- linux-2.4.13/arch/ia64/lib/memset.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/memset.S Thu Oct 4 00:21:40 2001
@@ -43,11 +43,11 @@
adds tmp=-1,len // br.ctop is repeat/until
tbit.nz p6,p0=buf,0 // odd alignment
-(p8) br.ret.spnt.few rp
+(p8) br.ret.spnt.many rp
cmp.lt p7,p0=16,len // if len > 16 then long memset
mux1 val=val,@brcst // prepare value
-(p7) br.cond.dptk.few long_memset
+(p7) br.cond.dptk .long_memset
;;
mov ar.lc=tmp // initialize lc for small count
;; // avoid RAW and WAW on ar.lc
@@ -57,11 +57,11 @@
;; // avoid RAW on ar.lc
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.sptk.few rp // end of short memset
+ br.ret.sptk.many rp // end of short memset
// at this point we know we have more than 16 bytes to copy
// so we focus on alignment
-long_memset:
+.long_memset:
(p6) st1 [buf]=val,1 // 1-byte aligned
(p6) adds len=-1,len;; // sync because buf is modified
tbit.nz p6,p0=buf,1
@@ -80,7 +80,7 @@
;;
cmp.eq p6,p0=r0,cnt
adds tmp=-1,cnt
-(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p6) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds buf2=8,buf // setup second base pointer
mov ar.lc=tmp
@@ -104,5 +104,5 @@
mov ar.lc=saved_lc
;;
(p6) st1 [buf]=val // only 1 byte left
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(memset)
diff -urN linux-2.4.13/arch/ia64/lib/strlen.S linux-2.4.13-lia/arch/ia64/lib/strlen.S
--- linux-2.4.13/arch/ia64/lib/strlen.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strlen.S Thu Oct 4 00:21:40 2001
@@ -11,7 +11,7 @@
* does not count the \0
*
* Copyright (C) 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 09/24/99 S.Eranian add speculation recovery code
*/
@@ -116,7 +116,7 @@
ld8.s w[0]=[src],8 // speculatively load next to next
cmp.eq.and p6,p0=8,val1 // p6 = p6 and val1=8
cmp.eq.and p6,p0=8,val2 // p6 = p6 and mask=8
-(p6) br.wtop.dptk.few 1b // loop until p6 = 0
+(p6) br.wtop.dptk 1b // loop until p6 = 0
;;
//
// We must return try the recovery code iff
@@ -127,14 +127,14 @@
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
tnat.nz p6,p7=val1 // test NaT on val1
-(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT
;;
//
// if we come here p7 is true, i.e., initialized for // cmp
//
cmp.eq.and p7,p0=8,val1// val1=8?
tnat.nz.and p7,p0=val2 // test NaT if val2
-(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT
;;
(p8) mov val1=val2 // the other test got us out of the loop
(p8) adds src=-16,src // correct position when 3 ahead
@@ -146,7 +146,7 @@
;;
sub ret0=ret0,tmp // adjust
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of normal execution
+ br.ret.sptk.many rp // end of normal execution
//
// Outlined recovery code when speculation failed
@@ -165,7 +165,7 @@
// - today we restart from the beginning of the string instead
// of trying to continue where we left off.
//
-recover:
+.recover:
ld8 val=[base],8 // will fail if unrecoverable fault
;;
or val=val,mask // remask first bytes
@@ -180,7 +180,7 @@
czx1.r val1=val // search 0 byte from right
;;
cmp.eq p6,p0=8,val1 // val1=8 ?
-(p6) br.wtop.dptk.few 2b // loop until p6 = 0
+(p6) br.wtop.dptk 2b // loop until p6 = 0
;; // (avoid WAW on p63)
sub ret0=base,orig // distance from base
sub tmp=8,val1
@@ -188,5 +188,5 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of successful recovery code
+ br.ret.sptk.many rp // end of successful recovery code
END(strlen)
diff -urN linux-2.4.13/arch/ia64/lib/strlen_user.S linux-2.4.13-lia/arch/ia64/lib/strlen_user.S
--- linux-2.4.13/arch/ia64/lib/strlen_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strlen_user.S Thu Oct 4 00:21:40 2001
@@ -8,8 +8,8 @@
* ret0 0 in case of fault, strlen(buffer)+1 otherwise
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 01/19/99 S.Eranian heavily enhanced version (see details below)
* 09/24/99 S.Eranian added speculation recovery code
@@ -108,7 +108,7 @@
mov ar.ec=r0 // clear epilogue counter (saved in ar.pfs)
;;
add base=-16,src // keep track of aligned base
- chk.s v[1], recover // if already NaT, then directly skip to recover
+ chk.s v[1], .recover // if already NaT, then directly skip to recover
or v[1]=v[1],mask // now we have a safe initial byte pattern
;;
1:
@@ -130,14 +130,14 @@
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
tnat.nz p6,p7=val1 // test NaT on val1
-(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT
;;
//
// if we come here p7 is true, i.e., initialized for // cmp
//
cmp.eq.and p7,p0=8,val1// val1=8?
tnat.nz.and p7,p0=val2 // test NaT if val2
-(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT
;;
(p8) mov val1=val2 // val2 contains the value
(p8) adds src=-16,src // correct position when 3 ahead
@@ -149,7 +149,7 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of normal execution
+ br.ret.sptk.many rp // end of normal execution
//
// Outlined recovery code when speculation failed
@@ -162,7 +162,7 @@
// - today we restart from the beginning of the string instead
// of trying to continue where we left off.
//
-recover:
+.recover:
EX(.Lexit1, ld8 val=[base],8) // load the initial bytes
;;
or val=val,mask // remask first bytes
@@ -185,7 +185,7 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of successful recovery code
+ br.ret.sptk.many rp // end of successful recovery code
//
// We failed even on the normal load (called from exception handler)
@@ -194,5 +194,5 @@
mov ret0=0
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strlen_user)
diff -urN linux-2.4.13/arch/ia64/lib/strncpy_from_user.S linux-2.4.13-lia/arch/ia64/lib/strncpy_from_user.S
--- linux-2.4.13/arch/ia64/lib/strncpy_from_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strncpy_from_user.S Thu Oct 4 00:21:40 2001
@@ -40,5 +40,5 @@
(p6) mov r8=in2 // buffer filled up---return buffer length
(p7) sub r8=in1,r9,1 // return string length (excluding NUL character)
[.Lexit:]
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strncpy_from_user)
diff -urN linux-2.4.13/arch/ia64/lib/strnlen_user.S linux-2.4.13-lia/arch/ia64/lib/strnlen_user.S
--- linux-2.4.13/arch/ia64/lib/strnlen_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strnlen_user.S Thu Oct 4 00:21:40 2001
@@ -33,7 +33,7 @@
add r9=1,r9
;;
cmp.eq p6,p0=r8,r0
-(p6) br.dpnt.few .Lexit
+(p6) br.cond.dpnt .Lexit
br.cloop.dptk.few .Loop1
add r9=1,in1 // NUL not found---return N+1
@@ -41,5 +41,5 @@
.Lexit:
mov r8=r9
mov ar.lc=r16 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strnlen_user)
diff -urN linux-2.4.13/arch/ia64/mm/fault.c linux-2.4.13-lia/arch/ia64/mm/fault.c
--- linux-2.4.13/arch/ia64/mm/fault.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/mm/fault.c Thu Oct 4 00:21:40 2001
@@ -1,8 +1,8 @@
/*
* MMU fault handling support.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/sched.h>
#include <linux/kernel.h>
@@ -16,7 +16,7 @@
#include <asm/uaccess.h>
#include <asm/hardirq.h>
-extern void die_if_kernel (char *, struct pt_regs *, long);
+extern void die (char *, struct pt_regs *, long);
/*
* This routine is analogous to expand_stack() but instead grows the
@@ -46,16 +46,15 @@
void
ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
{
+ int signal = SIGSEGV, code = SEGV_MAPERR;
+ struct vm_area_struct *vma, *prev_vma;
struct mm_struct *mm = current->mm;
struct exception_fixup fix;
- struct vm_area_struct *vma, *prev_vma;
struct siginfo si;
- int signal = SIGSEGV;
unsigned long mask;
/*
- * If we're in an interrupt or have no user
- * context, we must not take the fault..
+ * If we're in an interrupt or have no user context, we must not take the fault..
*/
if (in_interrupt() || !mm)
goto no_context;
@@ -71,6 +70,8 @@
goto check_expansion;
good_area:
+ code = SEGV_ACCERR;
+
/* OK, we've got a good vm_area for this memory area. Check the access permissions: */
# define VM_READ_BIT 0
@@ -89,12 +90,13 @@
if ((vma->vm_flags & mask) != mask)
goto bad_area;
+ survive:
/*
* If for any reason at all we couldn't handle the fault, make
* sure we exit gracefully rather than endlessly redo the
* fault.
*/
- switch (handle_mm_fault(mm, vma, address, mask) != 0) {
+ switch (handle_mm_fault(mm, vma, address, mask)) {
case 1:
++current->min_flt;
break;
@@ -147,7 +149,7 @@
if (user_mode(regs)) {
si.si_signo = signal;
si.si_errno = 0;
- si.si_code = SI_KERNEL;
+ si.si_code = code;
si.si_addr = (void *) address;
force_sig_info(signal, &si, current);
return;
@@ -174,17 +176,29 @@
}
/*
- * Oops. The kernel tried to access some bad page. We'll have
- * to terminate things with extreme prejudice.
+ * Oops. The kernel tried to access some bad page. We'll have to terminate things
+ * with extreme prejudice.
*/
- printk(KERN_ALERT "Unable to handle kernel paging request at "
- "virtual address %016lx\n", address);
- die_if_kernel("Oops", regs, isr);
+ bust_spinlocks(1);
+
+ if (address < PAGE_SIZE)
+ printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference\n");
+ else
+ printk(KERN_ALERT "Unable to handle kernel paging request at "
+ "virtual address %016lx\n", address);
+ die("Oops", regs, isr);
+ bust_spinlocks(0);
do_exit(SIGKILL);
return;
out_of_memory:
up_read(&mm->mmap_sem);
+ if (current->pid == 1) {
+ current->policy |= SCHED_YIELD;
+ schedule();
+ down_read(&mm->mmap_sem);
+ goto survive;
+ }
printk("VM: killing process %s\n", current->comm);
if (user_mode(regs))
do_exit(SIGKILL);
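[Editorial note: the new `out_of_memory` path in fault.c treats init (pid 1) specially: rather than being killed, it yields the CPU and retries the fault via `goto survive`. A minimal C sketch of that policy decision (names hypothetical, not kernel code):]

```c
/* Sketch of the OOM policy the fault.c hunk adds: init (pid 1)
 * must survive, so it yields and retries; others are killed. */
enum oom_action { OOM_RETRY, OOM_KILL };

static enum oom_action oom_policy(int pid)
{
	if (pid == 1)		/* init: yield, re-take mmap_sem, retry fault */
		return OOM_RETRY;
	return OOM_KILL;	/* any other process is killed */
}
```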
diff -urN linux-2.4.13/arch/ia64/mm/init.c linux-2.4.13-lia/arch/ia64/mm/init.c
--- linux-2.4.13/arch/ia64/mm/init.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/ia64/mm/init.c Wed Oct 10 17:43:54 2001
@@ -167,13 +167,40 @@
}
void
-show_mem (void)
+show_mem(void)
{
int i, total = 0, reserved = 0;
int shared = 0, cached = 0;
printk("Mem-info:\n");
show_free_areas();
+
+#ifdef CONFIG_DISCONTIGMEM
+ {
+ pg_data_t *pgdat = pgdat_list;
+
+ printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
+ do {
+ printk("Node ID: %d\n", pgdat->node_id);
+ for(i = 0; i < pgdat->node_size; i++) {
+ if (PageReserved(pgdat->node_mem_map+i))
+ reserved++;
+ else if (PageSwapCache(pgdat->node_mem_map+i))
+ cached++;
+ else if (page_count(pgdat->node_mem_map + i))
+ shared += page_count(pgdat->node_mem_map + i) - 1;
+ }
+ printk("\t%d pages of RAM\n", pgdat->node_size);
+ printk("\t%d reserved pages\n", reserved);
+ printk("\t%d pages shared\n", shared);
+ printk("\t%d pages swap cached\n", cached);
+ pgdat = pgdat->node_next;
+ } while (pgdat);
+ printk("Total of %ld pages in page table cache\n", pgtable_cache_size);
+ show_buffers();
+ printk("%d free buffer pages\n", nr_free_buffer_pages());
+ }
+#else /* !CONFIG_DISCONTIGMEM */
printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
i = max_mapnr;
while (i-- > 0) {
@@ -191,6 +218,7 @@
printk("%d pages swap cached\n", cached);
printk("%ld pages in page table cache\n", pgtable_cache_size);
show_buffers();
+#endif /* !CONFIG_DISCONTIGMEM */
}
/*
diff -urN linux-2.4.13/arch/ia64/mm/tlb.c linux-2.4.13-lia/arch/ia64/mm/tlb.c
--- linux-2.4.13/arch/ia64/mm/tlb.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/mm/tlb.c Wed Oct 10 17:45:07 2001
@@ -2,7 +2,7 @@
* TLB support routines.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 08/02/00 A. Mallick <asit.k.mallick@intel.com>
* Modified RID allocation for SMP
@@ -41,89 +41,6 @@
};
/*
- * Seralize usage of ptc.g
- */
-spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED; /* see <asm/pgtable.h> */
-
-#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
-
-#include <linux/irq.h>
-
-unsigned long flush_end, flush_start, flush_nbits, flush_rid;
-atomic_t flush_cpu_count;
-
-/*
- * flush_tlb_no_ptcg is called with ptcg_lock locked
- */
-static inline void
-flush_tlb_no_ptcg (unsigned long start, unsigned long end, unsigned long nbits)
-{
- extern void smp_send_flush_tlb (void);
- unsigned long saved_tpr = 0;
- unsigned long flags;
-
- /*
- * Some times this is called with interrupts disabled and causes
- * dead-lock; to avoid this we enable interrupt and raise the TPR
- * to enable ONLY IPI.
- */
- __save_flags(flags);
- if (!(flags & IA64_PSR_I)) {
- saved_tpr = ia64_get_tpr();
- ia64_srlz_d();
- ia64_set_tpr(IA64_IPI_VECTOR - 16);
- ia64_srlz_d();
- local_irq_enable();
- }
-
- spin_lock(&ptcg_lock);
- flush_rid = ia64_get_rr(start);
- ia64_srlz_d();
- flush_start = start;
- flush_end = end;
- flush_nbits = nbits;
- atomic_set(&flush_cpu_count, smp_num_cpus - 1);
- smp_send_flush_tlb();
- /*
- * Purge local TLB entries. ALAT invalidation is done in ia64_leave_kernel.
- */
- do {
- asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
- start += (1UL << nbits);
- } while (start < end);
-
- ia64_srlz_i(); /* srlz.i implies srlz.d */
-
- /*
- * Wait for other CPUs to finish purging entries.
- */
-#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
- {
- extern void smp_resend_flush_tlb (void);
- unsigned long start = ia64_get_itc();
-
- while (atomic_read(&flush_cpu_count) > 0) {
- if ((ia64_get_itc() - start) > 400000UL) {
- smp_resend_flush_tlb();
- start = ia64_get_itc();
- }
- }
- }
-#else
- while (atomic_read(&flush_cpu_count)) {
- /* Nothing */
- }
-#endif
- if (!(flags & IA64_PSR_I)) {
- local_irq_disable();
- ia64_set_tpr(saved_tpr);
- ia64_srlz_d();
- }
-}
-
-#endif /* CONFIG_SMP && !CONFIG_ITANIUM_PTCG */
-
-/*
* Acquire the ia64_ctx.lock before calling this function!
*/
void
@@ -162,6 +79,26 @@
flush_tlb_all();
}
+static void
+ia64_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits)
+{
+ static spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;
+
+ /* HW requires global serialization of ptc.ga. */
+ spin_lock(&ptcg_lock);
+ {
+ do {
+ /*
+ * Flush ALAT entries also.
+ */
+ asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2)
+ : "memory");
+ start += (1UL << nbits);
+ } while (start < end);
+ }
+ spin_unlock(&ptcg_lock);
+}
+
void
__flush_tlb_all (void)
{
@@ -222,23 +159,15 @@
}
start &= ~((1UL << nbits) - 1);
-#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
- flush_tlb_no_ptcg(start, end, nbits);
-#else
- spin_lock(&ptcg_lock);
- do {
# ifdef CONFIG_SMP
- /*
- * Flush ALAT entries also.
- */
- asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2) : "memory");
+ platform_global_tlb_purge(start, end, nbits);
# else
+ do {
asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
-# endif
start += (1UL << nbits);
} while (start < end);
-#endif /* CONFIG_SMP && !defined(CONFIG_ITANIUM_PTCG) */
- spin_unlock(&ptcg_lock);
+# endif
+
ia64_insn_group_barrier();
ia64_srlz_i(); /* srlz.i implies srlz.d */
ia64_insn_group_barrier();
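[Editorial note: the new `ia64_global_tlb_purge()` above wraps the whole purge loop in one spinlock because the hardware requires global serialization of `ptc.ga`. The lock-around-the-purge-loop shape can be sketched in portable C (illustrative only; a pthread mutex stands in for the kernel spinlock, and a counter stands in for the TLB side effect):]

```c
#include <pthread.h>

static pthread_mutex_t ptcg_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long purged;	/* stands in for the ptc.ga side effect */

static void global_tlb_purge(unsigned long start, unsigned long end,
			     unsigned long nbits)
{
	/* only one CPU may issue global purges at a time */
	pthread_mutex_lock(&ptcg_lock);
	do {
		purged++;		/* kernel: ptc.ga + srlz.i here */
		start += 1UL << nbits;	/* advance by one purge granule */
	} while (start < end);
	pthread_mutex_unlock(&ptcg_lock);
}
```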
diff -urN linux-2.4.13/arch/ia64/sn/sn1/llsc4.c linux-2.4.13-lia/arch/ia64/sn/sn1/llsc4.c
--- linux-2.4.13/arch/ia64/sn/sn1/llsc4.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/sn/sn1/llsc4.c Thu Oct 4 00:21:40 2001
@@ -35,16 +35,6 @@
static int inttest=0;
#endif
-#ifdef IA64_SEMFIX_INSN
-#undef IA64_SEMFIX_INSN
-#endif
-#ifdef IA64_SEMFIX
-#undef IA64_SEMFIX
-#endif
-# define IA64_SEMFIX_INSN
-# define IA64_SEMFIX ""
-
-
/*
* Test parameter table for AUTOTEST
*/
@@ -192,7 +182,6 @@
printk (" llscfail \t%s\tForce a failure to test the trigger & error messages\n", fail_enabled ? "on" : "off");
printk (" llscselt \t%s\tSelective trigger on failures\n", selective_trigger ? "on" : "off");
printk (" llscblkadr \t%s\tDump data block addresses\n", dump_block_addrs_opt ? "on" : "off");
- printk (" SEMFIX: %s\n", IA64_SEMFIX);
printk ("\n");
}
__setup("autotest", autotest_enable);
diff -urN linux-2.4.13/arch/ia64/tools/print_offsets.c linux-2.4.13-lia/arch/ia64/tools/print_offsets.c
--- linux-2.4.13/arch/ia64/tools/print_offsets.c Tue Jul 31 10:30:09 2001
+++ linux-2.4.13-lia/arch/ia64/tools/print_offsets.c Thu Oct 4 00:21:52 2001
@@ -57,11 +57,8 @@
{ "IA64_TASK_PROCESSOR_OFFSET", offsetof (struct task_struct, processor) },
{ "IA64_TASK_THREAD_OFFSET", offsetof (struct task_struct, thread) },
{ "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) },
-#ifdef CONFIG_IA32_SUPPORT
- { "IA64_TASK_THREAD_SIGMASK_OFFSET",offsetof (struct task_struct, thread.un.sigmask) },
-#endif
#ifdef CONFIG_PERFMON
- { "IA64_TASK_PFM_NOTIFY_OFFSET", offsetof(struct task_struct, thread.pfm_pend_notify) },
+ { "IA64_TASK_PFM_MUST_BLOCK_OFFSET",offsetof(struct task_struct, thread.pfm_must_block) },
#endif
{ "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
{ "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) },
@@ -165,17 +162,18 @@
{ "IA64_SIGCONTEXT_FR6_OFFSET", offsetof (struct sigcontext, sc_fr[6]) },
{ "IA64_SIGCONTEXT_PR_OFFSET", offsetof (struct sigcontext, sc_pr) },
{ "IA64_SIGCONTEXT_R12_OFFSET", offsetof (struct sigcontext, sc_gr[12]) },
+ { "IA64_SIGCONTEXT_RBS_BASE_OFFSET",offsetof (struct sigcontext, sc_rbs_base) },
+ { "IA64_SIGCONTEXT_LOADRS_OFFSET", offsetof (struct sigcontext, sc_loadrs) },
{ "IA64_SIGFRAME_ARG0_OFFSET", offsetof (struct sigframe, arg0) },
{ "IA64_SIGFRAME_ARG1_OFFSET", offsetof (struct sigframe, arg1) },
{ "IA64_SIGFRAME_ARG2_OFFSET", offsetof (struct sigframe, arg2) },
- { "IA64_SIGFRAME_RBS_BASE_OFFSET", offsetof (struct sigframe, rbs_base) },
{ "IA64_SIGFRAME_HANDLER_OFFSET", offsetof (struct sigframe, handler) },
{ "IA64_SIGFRAME_SIGCONTEXT_OFFSET", offsetof (struct sigframe, sc) },
{ "IA64_CLONE_VFORK", CLONE_VFORK },
{ "IA64_CLONE_VM", CLONE_VM },
{ "IA64_CPU_IRQ_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.irq_count) },
{ "IA64_CPU_BH_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.bh_count) },
- { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET", offsetof (struct cpuinfo_ia64, phys_stacked_size_p8) },
+ { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET",offsetof (struct cpuinfo_ia64, phys_stacked_size_p8)},
};
static const char *tabs = "\t\t\t\t\t\t\t\t\t\t";
diff -urN linux-2.4.13/arch/parisc/kernel/traps.c linux-2.4.13-lia/arch/parisc/kernel/traps.c
--- linux-2.4.13/arch/parisc/kernel/traps.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/parisc/kernel/traps.c Wed Oct 24 18:17:29 2001
@@ -43,7 +43,6 @@
static inline void console_verbose(void)
{
- extern int console_loglevel;
console_loglevel = 15;
}
diff -urN linux-2.4.13/drivers/acpi/acpiconf.c linux-2.4.13-lia/drivers/acpi/acpiconf.c
--- linux-2.4.13/drivers/acpi/acpiconf.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/acpiconf.c Wed Oct 10 17:47:00 2001
@@ -0,0 +1,593 @@
+/*
+ * acpiconf.c - ACPI based kernel configuration
+ *
+ * Copyright (C) 2000-2001 Intel Corp.
+ * Copyright (C) 2000-2001 J.I. Lee <Jung-Ik.Lee@intel.com>
+ *
+ * Revision History:
+ * 9/15/2000 J.I.
+ * Major revision: for new ACPI initialization requirements
+ * 11/15/2000 J.I.
+ * Major revision: ACPI 2.0 tables support
+ * 04/23/2001 J.I.
+ * Rewrote functions to support multiple _PRTs of child P2Ps
+ * under root pci bus
+ */
+
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <asm/system.h>
+#include <asm/iosapic.h>
+#include <asm/efi.h>
+#include <asm/acpikcfg.h>
+#include "acpi.h"
+#include "osconf.h"
+#include "acpiconf.h"
+
+
+static int acpi_cf_initialized __initdata = 0;
+
+acpi_status __init
+acpi_cf_init (
+ void * rsdp
+ )
+{
+ acpi_status status;
+
+ acpi_os_bind_osd(ACPI_CF_PHASE_BOOTTIME);
+
+ status = acpi_initialize_subsystem ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:initialize_subsystem error=0x%x\n", status);
+ return status;
+ }
+ dprintk(("Acpi cfg:initialize_subsystem pass\n"));
+
+ status = acpi_load_tables ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:load firmware tables error=0x%x\n", status);
+ acpi_terminate();
+ return status;
+ }
+ dprintk(("Acpi cfg:load firmware tables pass\n"));
+
+ status = acpi_enable_subsystem (ACPI_FULL_INITIALIZATION);
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:enable_subsystem error=0x%x\n", status);
+ acpi_terminate();
+ return status;
+ }
+ dprintk(("Acpi cfg:enable_subsystem pass\n"));
+
+ acpi_cf_initialized++;
+
+ return AE_OK;
+}
+
+
+acpi_status __init
+acpi_cf_terminate ( void )
+{
+ acpi_status status;
+
+ if (! ACPI_CF_INITIALIZED()) {
+ acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
+ return AE_ERROR;
+ }
+
+ status = acpi_disable ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:disable fail=0x%x\n", status);
+ /* fall thru...*/
+ }
+
+ status = acpi_terminate ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:acpi terminate error=0x%x\n", status);
+ /* fall thru...*/
+ }
+
+ acpi_cf_cleanup();
+ acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
+
+ acpi_cf_initialized--;
+
+ return status;
+}
+
+
+acpi_status __init
+acpi_cf_get_pci_vectors (
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ )
+{
+ acpi_status status;
+ void *prts;
+
+ if (! ACPI_CF_INITIALIZED()) {
+ status = acpi_cf_init((void *)efi.acpi);
+ if (ACPI_FAILURE (status))
+ return status;
+ }
+
+ *vectors = NULL;
+ *num_pci_vectors = 0;
+
+ status = acpi_cf_get_prt (&prts);
+ if (ACPI_FAILURE (status)) {
+ printk("Acpi cfg: get prt fail\n");
+ return status;
+ }
+
+ status = acpi_cf_convert_prt_to_vectors (prts, vectors, num_pci_vectors);
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ if (ACPI_SUCCESS(status)) {
+ acpi_cf_print_pci_vectors (*vectors, *num_pci_vectors);
+ }
+#endif
+ printk("Acpi cfg: get PCI interrupt vectors %s\n",
+ (ACPI_SUCCESS(status))?"pass":"fail");
+
+ return status;
+}
+
+
+static pci_routing_table *pci_routing_tables[PCI_MAX_BUS] __initdata = {NULL};
+
+
+typedef struct _acpi_rpb {
+ NATIVE_UINT rpb_busnum;
+ NATIVE_UINT lastbusnum;
+ acpi_handle rpb_handle;
+} acpi_rpb_t;
+
+
+static acpi_status __init
+acpi_cf_evaluate_method (
+ acpi_handle handle,
+ UINT8 *method_name,
+ NATIVE_UINT *nuint
+ )
+{
+ UINT32 tnuint = 0;
+ acpi_status status;
+
+ acpi_buffer ret_buf;
+ acpi_object *ext_obj;
+ UINT8 buf[PATHNAME_MAX];
+
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) buf;
+
+ status = acpi_evaluate_object(handle, method_name, NULL, &ret_buf);
+ if (ACPI_FAILURE(status)) {
+ if (status == AE_NOT_FOUND) {
+ printk("Acpi cfg: no %s found\n", method_name);
+ } else {
+ printk("Acpi cfg: %s fail=0x%x\n", method_name, status);
+ }
+ } else {
+ ext_obj = (acpi_object *) ret_buf.pointer;
+
+ switch (ext_obj->type) {
+ case ACPI_TYPE_INTEGER:
+ tnuint = (NATIVE_UINT) ext_obj->integer.value;
+ break;
+ default:
+ printk("Acpi cfg: %s obj type incorrect\n", method_name);
+ status = AE_TYPE;
+ break;
+ }
+ }
+
+ *nuint = tnuint;
+ return (status);
+}
+
+
+static acpi_status __init
+acpi_cf_evaluate_PRT (
+ acpi_handle handle,
+ pci_routing_table **prt
+ )
+{
+ acpi_buffer acpi_buffer;
+ acpi_status status;
+
+ acpi_buffer.length = 0;
+ acpi_buffer.pointer = NULL;
+
+ status = acpi_get_irq_routing_table (handle, &acpi_buffer);
+
+ switch (status) {
+ case AE_BUFFER_OVERFLOW:
+ dprintk(("Acpi cfg: _PRT found. need %d bytes\n",
+ acpi_buffer.length));
+ break; /* found */
+ default:
+ printk("Acpi cfg: _PRT fail=0x%x\n", status);
+ case AE_NOT_FOUND:
+ return status;
+ }
+
+ *prt = (pci_routing_table *) acpi_os_callocate (acpi_buffer.length);
+ if (!*prt) {
+ printk("Acpi cfg: callocate %d bytes for _PRT fail\n",
+ acpi_buffer.length);
+ return AE_NO_MEMORY;
+ }
+ acpi_buffer.pointer = (void *) *prt;
+
+ status = acpi_get_irq_routing_table (handle, &acpi_buffer);
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg: _PRT fail=0x%x.\n", status);
+ acpi_os_free(prt);
+ }
+
+ return status;
+}
+
+static acpi_status __init
+acpi_cf_get_root_pci_callback (
+ acpi_handle handle,
+ UINT32 Level,
+ void *context,
+ void **retval
+ )
+{
+ NATIVE_UINT busnum = 0;
+ acpi_status status;
+ acpi_rpb_t rpb;
+ pci_routing_table *prt;
+
+ UINT8 path_name[PATHNAME_MAX];
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ acpi_buffer ret_buf;
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ /*
+ * get bus number of this pci root bridge
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__BBN, &busnum);
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s evaluate _BBN fail=0x%x\n",
+ path_name, status);
+ return (status);
+ }
+ printk("Acpi cfg:%s ROOT PCI bus %ld\n", path_name, busnum);
+
+ /*
+ * evaluate root pci bridge's _CRS for Bus number range for child P2P
+ * (bus min/max/len) - not yet.
+ */
+
+ /*
+ * get immediate _PRT of this root pci bridge if any
+ */
+ status = acpi_cf_evaluate_PRT (handle, &prt);
+ switch(status) {
+ case AE_NOT_FOUND:
+ break;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _PRT fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ dprintk(("Acpi cfg:%s bus %ld got _PRT\n", path_name, busnum));
+ acpi_cf_add_to_pci_routing_tables (busnum, prt);
+ break;
+ }
+
+
+ /*
+ * walk down this root pci bridge to get _PRTs if any
+ */
+ rpb.rpb_busnum = rpb.lastbusnum = busnum;
+ rpb.rpb_handle = handle;
+ status = acpi_walk_namespace ( ACPI_TYPE_DEVICE,
+ handle,
+ ACPI_UINT32_MAX,
+ acpi_cf_get_prt_callback,
+ &rpb,
+ NULL );
+ if (ACPI_FAILURE(status))
+ printk("Acpi cfg:%s walk namespace for _PRT error=0x%x\n",
+ path_name, status);
+
+ return (status);
+}
+
+
+/*
+ * handle _PRTs of immediate P2Ps of root pci.
+ */
+static acpi_status __init
+acpi_cf_associate_prt_to_bus (
+ acpi_handle handle,
+ acpi_rpb_t *rpb,
+ NATIVE_UINT *retbusnum,
+ NATIVE_UINT depth
+ )
+{
+ acpi_status status;
+ UINT32 segbus;
+ NATIVE_UINT devfn;
+ UINT8 bn;
+
+ UINT8 path_name[PATHNAME_MAX];
+ acpi_pci_id pci_id;
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ acpi_buffer ret_buf;
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ /*
+ * get devfn from _ADR
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__ADR, &devfn);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s _ADR fail=0x%x. Set busnum to %ld\n",
+ path_name, status, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s _ADR =0x%x\n", path_name, (UINT32)devfn));
+
+
+ /*
+ * access pci config space for bus number
+ * segbus = from rpb, devfn = from _ADR
+ */
+ pci_id.segment = 0;
+ pci_id.bus = (u16)(rpb->rpb_busnum & 0xffffffff);
+ pci_id.device = (u16)((devfn >> 16) & 0xffff);
+ pci_id.function = (u16)(devfn & 0xffff);
+
+ status = acpi_os_read_pci_configuration(&pci_id, PCI_PRIMARY_BUS,
+ &bn, 8);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+ path_name, status, (UINT32)rpb->rpb_busnum, (UINT32)devfn,
+ PCI_PRIMARY_BUS);
+ printk("Acpi cfg:%s Set busnum to %ld\n",
+ path_name, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s pribus %d\n", path_name, bn));
+
+
+ status = acpi_os_read_pci_configuration(&pci_id, PCI_SECONDARY_BUS,
+ &bn, 8);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+ path_name, status, (UINT32)rpb->rpb_busnum, (UINT32)devfn,
+ PCI_SECONDARY_BUS);
+ printk("Acpi cfg:%s Set busnum to %ld\n",
+ path_name, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s busnum %d\n", path_name, bn));
+
+ *retbusnum = (NATIVE_UINT)bn;
+ return AE_OK;
+}
+
+
+static acpi_status __init
+acpi_cf_get_prt (
+ void **prts
+ )
+{
+ acpi_status status;
+
+ status = acpi_get_devices ( PCI_ROOT_HID_STRING,
+ acpi_cf_get_root_pci_callback,
+ NULL,
+ NULL );
+
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:get_device PCI ROOT HID error=0x%x\n", status);
+ }
+
+ *prts = (void *)pci_routing_tables;
+
+ return status;
+}
+
+static acpi_status __init
+acpi_cf_get_prt_callback (
+ acpi_handle handle,
+ UINT32 Level,
+ void *context,
+ void **retval
+ )
+{
+ pci_routing_table *prt;
+ NATIVE_UINT busnum = 0;
+ NATIVE_UINT temp = 0x0F;
+ acpi_status status;
+
+ UINT8 path_name[PATHNAME_MAX];
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ acpi_buffer ret_buf;
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ status = acpi_cf_evaluate_PRT (handle, &prt);
+ switch(status) {
+ case AE_NOT_FOUND:
+ return AE_OK;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _PRT fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ }
+
+ /*
+ * evaluate _STA in case this device does not exist
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__STA, &temp);
+ switch(status) {
+ case AE_NOT_FOUND:
+ break;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _STA fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ if (!(temp & ACPI_STA_DEVICE_PRESENT)) {
+ dprintk(("Acpi cfg:%s not exist. _PRT discarded\n",
+ path_name));
+ acpi_os_free(prt);
+ return AE_OK;
+ }
+ break;
+ }
+
+ /*
+ * associate a bus number to this _PRT since
+ * this _PRT is not on root pci bridge
+ */
+ acpi_cf_associate_prt_to_bus(handle, context, &busnum, 0);
+
+ printk("Acpi cfg:%s busnum %ld got _PRT\n", path_name, busnum);
+ acpi_cf_add_to_pci_routing_tables (busnum, prt);
+
+ return AE_OK;
+}
+
+
+static void __init
+acpi_cf_add_to_pci_routing_tables (
+ NATIVE_UINT busnum,
+ pci_routing_table *prt
+ )
+{
+ if ( busnum >= PCI_MAX_BUS ) {
+ printk("Acpi cfg:invalid pci bus number %ld\n", busnum);
+ acpi_os_free(prt);
+ return;
+ }
+
+ if (pci_routing_tables[busnum]) {
+ printk("Acpi cfg:duplicate PRT for pci bus %ld. overiding...\n", busnum);
+ acpi_os_free(pci_routing_tables[busnum]);
+ }
+
+ pci_routing_tables[busnum] = prt;
+}
+
+
+#define DUMPVECTOR(pv) printk("PCI bus=0x%x id=0x%x pin=0x%x irq=0x%x\n", pv->bus, pv->pci_id, pv->pin, pv->irq);
+
+static acpi_status __init
+acpi_cf_convert_prt_to_vectors (
+ void *prts,
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ )
+{
+ struct pci_vector_struct *pvec;
+ pci_routing_table **pprts, *prt, *prtf;
+ int nvec = 0;
+ int i;
+
+
+ pprts = (pci_routing_table **)prts;
+
+ for ( i = 0; i < PCI_MAX_BUS; i++) {
+ prt = *pprts++;
+ if (prt) {
+ for ( ; prt->length > 0; nvec++) {
+ prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+ }
+ }
+ }
+
+ *num_pci_vectors = nvec;
+ *vectors = acpi_os_callocate (sizeof(struct pci_vector_struct) * nvec);
+ if (!*vectors) {
+ printk("Acpi cfg: callocate for pci_vector error\n");
+ return AE_NO_MEMORY;
+ }
+
+ pvec = *vectors;
+ pprts = (pci_routing_table **)prts;
+
+ for ( i = 0; i < PCI_MAX_BUS; i++) {
+ prt = prtf = *pprts++;
+ if (prt) {
+ for ( ; prt->length > 0; pvec++) {
+ pvec->bus = (UINT16)i;
+ pvec->pci_id = prt->address;
+ pvec->pin = (UINT8)prt->pin;
+ pvec->irq = (UINT8)prt->source_index;
+
+ prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+ }
+ acpi_os_free((void *)prtf);
+ }
+ }
+
+ return AE_OK;
+}
+
+
+void __init
+acpi_cf_cleanup ( void )
+{
+ /* nothing to free, pci_vectors are used by the kernel */
+}
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+void __init
+acpi_cf_print_pci_vectors (
+ struct pci_vector_struct *vectors,
+ int num_pci_vectors
+ )
+{
+ struct pci_vector_struct *pvec;
+ int i;
+
+ printk("number of PCI interrupt vectors = %d\n", num_pci_vectors);
+
+ pvec = vectors;
+ for (i = 0; i < num_pci_vectors; i++) {
+ DUMPVECTOR(pvec);
+ pvec++;
+ }
+}
+#endif
diff -urN linux-2.4.13/drivers/acpi/acpiconf.h linux-2.4.13-lia/drivers/acpi/acpiconf.h
--- linux-2.4.13/drivers/acpi/acpiconf.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/acpiconf.h Fri Oct 12 09:03:25 2001
@@ -0,0 +1,63 @@
+/*
+ * acpiconf.h - ACPI based kernel configuration
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ */
+
+#include <linux/init.h>
+
+#define PCI_MAX_BUS 0x100
+#define ACPI_STA_DEVICE_PRESENT 0x01
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+#define ACPI_CF_INITIALIZED() (acpi_cf_initialized > 0)
+#undef dprintk
+#define dprintk(a) printk a
+#else
+#define ACPI_CF_INITIALIZED() 1
+#undef dprintk
+#define dprintk(a)
+#endif
+
+
+extern
+void __init
+acpi_os_bind_osd(int acpi_phase);
+
+
+static
+acpi_status __init
+acpi_cf_get_prt (void **prts);
+
+
+static
+acpi_status __init
+acpi_cf_get_prt_callback (
+ acpi_handle handle,
+ UINT32 level,
+ void *context,
+ void **retval
+ );
+
+
+static
+void __init
+acpi_cf_add_to_pci_routing_tables (
+ NATIVE_UINT busnum,
+ pci_routing_table *prt
+ );
+
+
+static
+acpi_status __init
+acpi_cf_convert_prt_to_vectors (
+ void *prts,
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ );
+
+
+void __init
+acpi_cf_cleanup ( void );
+
diff -urN linux-2.4.13/drivers/acpi/hardware/hwacpi.c linux-2.4.13-lia/drivers/acpi/hardware/hwacpi.c
--- linux-2.4.13/drivers/acpi/hardware/hwacpi.c Mon Sep 24 15:06:41 2001
+++ linux-2.4.13-lia/drivers/acpi/hardware/hwacpi.c Thu Oct 4 00:21:40 2001
@@ -196,6 +196,7 @@
{
acpi_status status = AE_NO_HARDWARE_RESPONSE;
+ u32 retries = 20;
FUNCTION_TRACE ("Hw_set_mode");
@@ -220,11 +221,14 @@
/* Give the platform some time to react */
- acpi_os_stall (5000);
+ while (retries-- > 0) {
+ acpi_os_stall (5000);
- if (acpi_hw_get_mode () == mode) {
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
- status = AE_OK;
+ if (acpi_hw_get_mode () == mode) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
+ status = AE_OK;
+ break;
+ }
}
return_ACPI_STATUS (status);
diff -urN linux-2.4.13/drivers/acpi/include/actypes.h linux-2.4.13-lia/drivers/acpi/include/actypes.h
--- linux-2.4.13/drivers/acpi/include/actypes.h Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/actypes.h Thu Oct 4 00:21:40 2001
@@ -60,6 +60,7 @@
typedef int INT32;
typedef unsigned int UINT32;
typedef COMPILER_DEPENDENT_UINT64 UINT64;
+typedef long INT64;
typedef UINT64 NATIVE_UINT;
typedef INT64 NATIVE_INT;
diff -urN linux-2.4.13/drivers/acpi/include/acutils.h linux-2.4.13-lia/drivers/acpi/include/acutils.h
--- linux-2.4.13/drivers/acpi/include/acutils.h Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/acutils.h Wed Oct 24 18:17:40 2001
@@ -383,6 +383,7 @@
/* Method name strings */
#define METHOD_NAME__HID "_HID"
+#define METHOD_NAME__CID "_CID"
#define METHOD_NAME__UID "_UID"
#define METHOD_NAME__ADR "_ADR"
#define METHOD_NAME__STA "_STA"
@@ -396,6 +397,11 @@
NATIVE_CHAR *object_name,
acpi_namespace_node *device_node,
acpi_integer *address);
+
+acpi_status
+acpi_ut_execute_CID (
+ acpi_namespace_node *device_node,
+ ACPI_DEVICE_ID *cid);
acpi_status
acpi_ut_execute_HID (
diff -urN linux-2.4.13/drivers/acpi/include/platform/acgcc.h linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h
--- linux-2.4.13/drivers/acpi/include/platform/acgcc.h Wed Oct 24 10:17:44 2001
+++ linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h Wed Oct 24 18:17:50 2001
@@ -42,11 +42,32 @@
/*! [Begin] no source code translation */
+#include <linux/interrupt.h>
+
+#include <asm/processor.h>
#include <asm/pal.h>
#define halt() ia64_pal_halt_light() /* PAL_HALT[_LIGHT] */
#define safe_halt() ia64_pal_halt(1) /* PAL_HALT */
+static inline void
+wbinvd (void)
+{
+ unsigned long flags, vector, position = 0;
+ long status;
+
+ do {
+ ia64_clear_ic(flags);
+ status = ia64_pal_cache_flush(0x3, (PAL_CACHE_FLUSH_INVALIDATE
+ | PAL_CACHE_FLUSH_CHK_INTRS),
+ &position, &vector);
+ local_irq_restore(flags);
+ if (status == 1) {
+ ia64_eoi();
+ hw_resend_irq(NULL, vector);
+ }
+ } while (status == 1);
+}
#define ACPI_ACQUIRE_GLOBAL_LOCK(GLptr, Acq) \
do { \
diff -urN linux-2.4.13/drivers/acpi/namespace/nsxfobj.c linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c
--- linux-2.4.13/drivers/acpi/namespace/nsxfobj.c Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c Wed Oct 24 18:18:06 2001
@@ -588,6 +588,7 @@
acpi_namespace_node *node;
u32 flags;
ACPI_DEVICE_ID device_id;
+ ACPI_DEVICE_ID compatible_id;
ACPI_GET_DEVICES_INFO *info;
@@ -628,7 +629,17 @@
}
if (STRNCMP (device_id.buffer, info->hid, sizeof (device_id.buffer)) != 0) {
- return (AE_OK);
+ status = acpi_ut_execute_CID (node, &compatible_id);
+ if (status == AE_NOT_FOUND) {
+ return (AE_OK);
+ }
+ else if (ACPI_FAILURE (status)) {
+ return (AE_CTRL_DEPTH);
+ }
+
+ if (STRNCMP (compatible_id.buffer, info->hid, sizeof (compatible_id.buffer)) != 0) {
+ return (AE_OK);
+ }
}
}
diff -urN linux-2.4.13/drivers/acpi/os.c linux-2.4.13-lia/drivers/acpi/os.c
--- linux-2.4.13/drivers/acpi/os.c Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/os.c Thu Oct 4 00:21:40 2001
@@ -31,6 +31,8 @@
* - Fixed improper kernel_thread parameters
*/
+#include <linux/config.h>
+
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mm.h>
@@ -48,7 +50,8 @@
#ifdef _IA64
#include <asm/hw_irq.h>
-#endif
+#include <asm/delay.h>
+#endif
#define _COMPONENT ACPI_OS_SERVICES
MODULE_NAME ("os")
@@ -61,6 +64,33 @@
/*****************************************************************************
+ * Function Binding
+ *****************************************************************************/
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+#include "osconf.h"
+
+struct acpi_osd acpi_osd_rt = {
+ /* these are runtime osd entries that differ from boottime entries */
+ acpi_os_allocate_rt,
+ acpi_os_callocate_rt,
+ acpi_os_free_rt,
+ acpi_os_queue_for_execution_rt,
+ acpi_os_read_pci_configuration_rt,
+ acpi_os_write_pci_configuration_rt,
+ acpi_os_stall_rt
+};
+#else
+#define acpi_os_allocate_rt acpi_os_allocate
+#define acpi_os_callocate_rt acpi_os_callocate
+#define acpi_os_free_rt acpi_os_free
+#define acpi_os_queue_for_execution_rt acpi_os_queue_for_execution
+#define acpi_os_read_pci_configuration_rt acpi_os_read_pci_configuration
+#define acpi_os_write_pci_configuration_rt acpi_os_write_pci_configuration
+#define acpi_os_stall_rt acpi_os_stall
+#endif
+
+/*****************************************************************************
* Debugger Stuff
*****************************************************************************/
@@ -137,13 +167,13 @@
}
void *
-acpi_os_allocate(u32 size)
+acpi_os_allocate_rt(u32 size)
{
return kmalloc(size, GFP_KERNEL);
}
void *
-acpi_os_callocate(u32 size)
+acpi_os_callocate_rt(u32 size)
{
void *ptr = acpi_os_allocate(size);
if (ptr)
@@ -153,7 +183,7 @@
}
void
-acpi_os_free(void *ptr)
+acpi_os_free_rt(void *ptr)
{
kfree(ptr);
}
@@ -233,12 +263,105 @@
(*acpi_irq_handler)(acpi_irq_context);
}
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+struct irqaction acpiirqaction;
+/*
+ * code borrowed from request_irq and free_irq.
+ */
acpi_status
acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
{
-#ifdef _IA64
+ struct irqaction *act;
+ int retval;
+
+ if (irq >= NR_IRQS) {
+ printk("ACPI: install SCI handler fail: invalid irq%d\n", irq);
+ return AE_ERROR;
+ }
+
+ if (!handler) {
+ printk("ACPI: install SCI handler fail: invalid handler\n");
+ return AE_ERROR;
+ }
+
+ act = & acpiirqaction;
+
irq = isa_irq_to_vector(irq);
-#endif /*_IA64*/
+ acpi_irq_irq = irq;
+ acpi_irq_handler = handler;
+ acpi_irq_context = context;
+
+ act->handler = acpi_irq;
+ act->flags = SA_INTERRUPT | SA_SHIRQ;
+ act->mask = 0;
+ act->name = "acpi";
+ act->next = NULL;
+ act->dev_id = acpi_irq;
+
+ retval = setup_irq(irq, act);
+ if (retval) {
+ printk("ACPI: install SCI handler fail: setup_irq\n");
+ acpi_irq_handler = NULL;
+ return AE_ERROR;
+ }
+ printk("ACPI: install SCI %d handler pass\n", irq);
+
+ return AE_OK;
+}
+
+acpi_status
+acpi_os_remove_interrupt_handler(u32 irq, OSD_HANDLER handler)
+{
+ irq_desc_t *desc;
+ struct irqaction **p;
+ unsigned long flags;
+
+ if (!acpi_irq_handler)
+ return AE_OK;
+
+ irq = isa_irq_to_vector(irq);
+ if (irq != acpi_irq_irq) return AE_ERROR;
+
+ acpi_irq_handler = NULL;
+
+ desc = irq_desc(irq);
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ for (;;) {
+ struct irqaction * action = *p;
+ if (action) {
+ struct irqaction **pp = p;
+ p = &action->next;
+ if (action->dev_id != acpi_irq)
+ continue;
+
+ /* Found it - now remove it from the list of entries */
+ *pp = action->next;
+ if (!desc->action) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->shutdown(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+
+#ifdef CONFIG_SMP
+ /* Wait to make sure it's not being used on another CPU */
+ while (desc->status & IRQ_INPROGRESS)
+ barrier();
+#endif
+ return AE_OK;
+ }
+ printk("ACPI: Trying to free free IRQ%d\n",irq);
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return AE_OK;
+ }
+
+ return AE_OK;
+}
+
+#else
+acpi_status
+acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
+{
acpi_irq_irq = irq;
acpi_irq_handler = handler;
acpi_irq_context = context;
@@ -267,6 +390,7 @@
return AE_OK;
}
+#endif
/*
* Running in interpreter thread context, safe to sleep
@@ -280,7 +404,7 @@
}
void
-acpi_os_stall(u32 us)
+acpi_os_stall_rt(u32 us)
{
if (us > 10000) {
mdelay(us / 1000);
@@ -322,7 +446,7 @@
acpi_status
acpi_os_write_port(
ACPI_IO_ADDRESS port,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
switch (width)
@@ -375,7 +499,7 @@
acpi_status
acpi_os_write_memory(
ACPI_PHYSICAL_ADDRESS phys_addr,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
switch (width)
@@ -468,7 +592,7 @@
#else /*CONFIG_ACPI_PCI*/
acpi_status
-acpi_os_read_pci_configuration (
+acpi_os_read_pci_configuration_rt (
acpi_pci_id *pci_id,
u32 reg,
void *value,
@@ -502,10 +626,10 @@
}
acpi_status
-acpi_os_write_pci_configuration (
+acpi_os_write_pci_configuration_rt (
acpi_pci_id *pci_id,
u32 reg,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
int devfn = PCI_DEVFN(pci_id->device, pci_id->function);
@@ -620,6 +744,22 @@
acpi_os_free(dpc);
}
}
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+/*
+ * Queue for interpreter thread
+ */
+
+acpi_status
+acpi_os_queue_for_execution_rt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context)
+{
+ (*callback)(context);
+ return AE_OK;
+}
+#endif
acpi_status
acpi_os_queue_for_execution(
diff -urN linux-2.4.13/drivers/acpi/osconf.c linux-2.4.13-lia/drivers/acpi/osconf.c
--- linux-2.4.13/drivers/acpi/osconf.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/osconf.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,286 @@
+/*
+ * osconf.c - ACPI OS-dependent functions for Kernel Boot/Configuration time
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/pci.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/sal.h>
+#include <asm/delay.h>
+
+#include "acpi.h"
+#include "osconf.h"
+
+
+static void * __init acpi_os_allocate_bt(u32 size);
+static void * __init acpi_os_callocate_bt(u32 size);
+static void __init acpi_os_free_bt(void *ptr);
+static void __init acpi_os_stall_bt(u32 us);
+
+static acpi_status __init
+acpi_os_queue_for_execution_bt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context
+ );
+
+static acpi_status __init
+acpi_os_read_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+
+static acpi_status __init
+acpi_os_write_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+
+
+extern struct acpi_osd acpi_osd_rt;
+static struct acpi_osd acpi_osd_bt __initdata = {
+ /* these are boottime osd entries that differ from runtime entries */
+ acpi_os_allocate_bt,
+ acpi_os_callocate_bt,
+ acpi_os_free_bt,
+ acpi_os_queue_for_execution_bt,
+ acpi_os_read_pci_configuration_bt,
+ acpi_os_write_pci_configuration_bt,
+ acpi_os_stall_bt
+};
+static struct acpi_osd *acpi_osd = &acpi_osd_rt;
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+static void __init
+acpi_cf_bm_statistics( void );
+#endif
+
+void __init
+acpi_os_bind_osd(int acpi_phase)
+{
+ switch (acpi_phase) {
+ case ACPI_CF_PHASE_BOOTTIME:
+ acpi_osd = &acpi_osd_bt;
+ printk("Acpi cfg:bind to Boot time Acpi OSD\n");
+ break;
+ case ACPI_CF_PHASE_RUNTIME:
+ default:
+ acpi_osd = &acpi_osd_rt;
+ printk("Acpi cfg:bind to Run time Acpi OSD\n");
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+ acpi_cf_bm_statistics();
+#endif
+ break;
+ }
+}
+
+void *
+acpi_os_allocate(u32 size)
+{
+ return acpi_osd->allocate(size);
+}
+
+void *
+acpi_os_callocate(u32 size)
+{
+ return acpi_osd->callocate(size);
+}
+
+void
+acpi_os_free(void *ptr)
+{
+ acpi_osd->free(ptr);
+ return;
+}
+
+void
+acpi_os_stall(u32 us)
+{
+ acpi_osd->stall(us);
+ return;
+}
+
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width)
+{
+ return acpi_osd->read_pci_configuration(pci_id, reg, value, width);
+}
+
+
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width)
+{
+ return acpi_osd->write_pci_configuration(pci_id, reg, value, width);
+}
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+/*
+ * Let's profile bootmem usage to see how much we consume. J.I.
+ */
+static unsigned long bm_alloc_size __initdata = 0;
+static unsigned long bm_alloc_size_max __initdata = 0;
+static unsigned long bm_alloc_count_max __initdata = 0;
+static unsigned long bm_free_count_max __initdata = 0;
+
+static void __init
+acpi_cf_bm_checkin(void *ptr, u32 size)
+{
+ bm_alloc_count_max++;
+ bm_alloc_size += size;
+ if (bm_alloc_size > bm_alloc_size_max)
+ bm_alloc_size_max = bm_alloc_size;
+};
+
+static void __init
+acpi_cf_bm_checkout(void *ptr, u32 size)
+{
+ bm_free_count_max++;
+ bm_alloc_size -= size;
+};
+
+static void __init
+acpi_cf_bm_statistics( void )
+{
+ printk("Acpi cfg:bm_alloc_size_max =%ld bytes\n", bm_alloc_size_max);
+ printk("Acpi cfg:bm_alloc_count_max=%ld\n", bm_alloc_count_max);
+ printk("Acpi cfg:bm_free_count_max =%ld\n", bm_free_count_max);
+}
+#endif
+
+
+static void * __init
+acpi_os_allocate_bt(u32 size)
+{
+ void *ptr;
+
+ size += sizeof(unsigned long);
+ ptr = alloc_bootmem(size);
+
+ if (ptr) {
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+ acpi_cf_bm_checkin(ptr, size);
+#endif
+ *((unsigned long *)ptr) = (unsigned long)size;
+ ptr += sizeof(unsigned long);
+ }
+
+ return ptr;
+}
+
+static void * __init
+acpi_os_callocate_bt(u32 size)
+{
+ void *ptr = acpi_os_allocate_bt(size);
+
+ return ptr;
+}
+
+static void __init
+acpi_os_free_bt(void *ptr)
+{
+ unsigned long size;
+
+ ptr -= sizeof(size);
+ size = *((unsigned long *)ptr);
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+ acpi_cf_bm_checkout(ptr, (unsigned long)size);
+#endif
+ //if (size)
+ free_bootmem (__pa((unsigned long)ptr), (u32)size);
+}
+
+
+static void __init
+acpi_os_stall_bt(u32 us)
+{
+ unsigned long start = ia64_get_itc();
+ unsigned long cycles = us*733; /* XXX: 733 or 800 */
+ while (ia64_get_itc() - start < cycles)
+ /* skip */;
+}
+
+
+static acpi_status __init
+acpi_os_queue_for_execution_bt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context)
+{
+ /*
+ * run callback immediately
+ */
+ (*callback)(context);
+ return AE_OK;
+}
+
+
+static acpi_status __init
+acpi_os_read_pci_configuration_bt (
+ acpi_pci_id *pci_id,
+ u32 reg,
+ void *value,
+ u32 width)
+{
+ unsigned int devfn;
+ s64 status;
+ u64 lval;
+
+ devfn = PCI_DEVFN(pci_id->device, pci_id->function);
+
+ switch (width)
+ {
+ case 8:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, &lval);
+ *(u8*)value = (u8)lval;
+ break;
+ case 16:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, &lval);
+ *(u16*)value = (u16)lval;
+ break;
+ case 32:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, &lval);
+ *(u32*)value = (u32)lval;
+ break;
+ default:
+ BUG();
+ }
+
+ return status;
+}
+
+
+static acpi_status __init
+acpi_os_write_pci_configuration_bt (
+ acpi_pci_id *pci_id,
+ u32 reg,
+ NATIVE_UINT value,
+ u32 width)
+{
+ unsigned int devfn;
+ s64 status;
+
+ devfn = PCI_DEVFN(pci_id->device, pci_id->function);
+
+ switch (width)
+ {
+ case 8:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, value);
+ break;
+ case 16:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, value);
+ break;
+ case 32:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, value);
+ break;
+ default:
+ BUG();
+ }
+
+ return status;
+}
diff -urN linux-2.4.13/drivers/acpi/osconf.h linux-2.4.13-lia/drivers/acpi/osconf.h
--- linux-2.4.13/drivers/acpi/osconf.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/osconf.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,57 @@
+/*
+ * osconf.h - ACPI OS-dependent headers for Kernel Boot/Configuration time
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ */
+
+
+struct acpi_osd {
+ void * (*allocate)(u32 size);
+ void * (*callocate)(u32 size);
+ void (*free)(void *ptr);
+ acpi_status (*queue_for_exec)(u32 pri, OSD_EXECUTION_CALLBACK cb, void *context);
+ acpi_status (*read_pci_configuration)(acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+ acpi_status (*write_pci_configuration)(acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+ void (*stall)(u32 us);
+};
+
+
+#define PCI_CONFIG_ADDRESS(bus, devfn, where) \
+ (((u64) bus << 16) | ((u64) (devfn & 0xff) << 8) | (where & 0xff))
+
+#define ACPI_CF_PHASE_BOOTTIME 0x00
+#define ACPI_CF_PHASE_RUNTIME 0x01
+
+
+/* acpi_osd functions */
+void * acpi_os_allocate(u32 size);
+void * acpi_os_callocate(u32 size);
+void acpi_os_free(void *ptr);
+void acpi_os_stall(u32 us);
+
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
+
+
+/* acpi_osd_rt functions */
+extern void * acpi_os_allocate_rt(u32 size);
+extern void * acpi_os_callocate_rt(u32 size);
+extern void acpi_os_free_rt(void *ptr);
+extern void acpi_os_stall_rt(u32 us);
+
+extern acpi_status
+acpi_os_queue_for_execution_rt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context
+ );
+
+extern acpi_status
+acpi_os_read_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+
+extern acpi_status
+acpi_os_write_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
diff -urN linux-2.4.13/drivers/acpi/ospm/include/ec.h linux-2.4.13-lia/drivers/acpi/ospm/include/ec.h
--- linux-2.4.13/drivers/acpi/ospm/include/ec.h Mon Sep 24 15:06:44 2001
+++ linux-2.4.13-lia/drivers/acpi/ospm/include/ec.h Thu Oct 4 00:21:40 2001
@@ -167,14 +167,14 @@
acpi_status
ec_io_read (
EC_CONTEXT *ec,
- u32 io_port,
+ ACPI_IO_ADDRESS io_port,
u8 *data,
EC_EVENT wait_event);
acpi_status
ec_io_write (
EC_CONTEXT *ec,
- u32 io_port,
+ ACPI_IO_ADDRESS io_port,
u8 data,
EC_EVENT wait_event);
diff -urN linux-2.4.13/drivers/acpi/ospm/system/sm_osl.c linux-2.4.13-lia/drivers/acpi/ospm/system/sm_osl.c
--- linux-2.4.13/drivers/acpi/ospm/system/sm_osl.c Mon Sep 24 15:06:44 2001
+++ linux-2.4.13-lia/drivers/acpi/ospm/system/sm_osl.c Thu Oct 4 00:21:40 2001
@@ -33,7 +33,9 @@
#include <asm/uaccess.h>
#include <linux/acpi.h>
#include <asm/io.h>
+#ifndef __ia64__
#include <linux/mc146818rtc.h>
+#endif
#include <linux/delay.h>
#include <acpi.h>
@@ -278,6 +280,7 @@
int *eof,
void *context)
{
+#ifndef _IA64
char *str = page;
int len;
u32 sec,min,hr;
@@ -351,6 +354,9 @@
*start = page;
return len;
+#else
+ return 0;
+#endif
}
static int get_date_field(char **str, u32 *value)
@@ -381,6 +387,7 @@
unsigned long count,
void *data)
{
+#ifndef _IA64
char buf[30];
char *str = buf;
u32 sec,min,hr;
@@ -520,6 +527,9 @@
error = 0;
out:
return error ? error : count;
+#else
+ return 0;
+#endif
}
static int
diff -urN linux-2.4.13/drivers/acpi/utilities/uteval.c linux-2.4.13-lia/drivers/acpi/utilities/uteval.c
--- linux-2.4.13/drivers/acpi/utilities/uteval.c Mon Sep 24 15:06:47 2001
+++ linux-2.4.13-lia/drivers/acpi/utilities/uteval.c Wed Oct 24 18:18:19 2001
@@ -115,6 +115,93 @@
/*******************************************************************************
*
+ * FUNCTION: Acpi_ut_execute_CID
+ *
+ * PARAMETERS: Device_node - Node for the device
+ * *Cid - Where the CID is returned
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Executes the _CID control method that returns the compatible
+ * ID of the device.
+ *
+ * NOTE: Internal function, no parameter validation
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ut_execute_CID (
+ acpi_namespace_node *device_node,
+ ACPI_DEVICE_ID *cid)
+{
+ acpi_operand_object *obj_desc;
+ acpi_status status;
+
+
+ FUNCTION_TRACE ("Ut_execute_CID");
+
+
+ /* Execute the method */
+
+ status = acpi_ns_evaluate_relative (device_node,
+ METHOD_NAME__CID, NULL, &obj_desc);
+ if (ACPI_FAILURE (status)) {
+ if (status == AE_NOT_FOUND) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "_CID on %4.4s was not found\n",
+ &device_node->name));
+ }
+
+ else {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "_CID on %4.4s failed %s\n",
+ &device_node->name, acpi_format_exception (status)));
+ }
+
+ return_ACPI_STATUS (status);
+ }
+
+ /* Did we get a return object? */
+
+ if (!obj_desc) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "No object was returned from _CID\n"));
+ return_ACPI_STATUS (AE_TYPE);
+ }
+
+ /*
+ * A _CID can return either a Number (32 bit compressed EISA ID) or
+ * a string
+ */
+ if ((obj_desc->common.type != ACPI_TYPE_INTEGER) &&
+ (obj_desc->common.type != ACPI_TYPE_STRING)) {
+ status = AE_TYPE;
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Type returned from _CID not a number or string: %s(%X) \n",
+ acpi_ut_get_type_name (obj_desc->common.type), obj_desc->common.type));
+ }
+
+ else {
+ if (obj_desc->common.type == ACPI_TYPE_INTEGER) {
+ /* Convert the Numeric CID to string */
+
+ acpi_ex_eisa_id_to_string ((u32) obj_desc->integer.value, cid->buffer);
+ }
+
+ else {
+ /* Copy the String CID from the returned object */
+
+ STRNCPY(cid->buffer, obj_desc->string.pointer, sizeof(cid->buffer));
+ }
+ }
+
+
+ /* On exit, we must delete the return object */
+
+ acpi_ut_remove_reference (obj_desc);
+
+ return_ACPI_STATUS (status);
+}
+
+/*******************************************************************************
+ *
* FUNCTION: Acpi_ut_execute_HID
*
* PARAMETERS: Device_node - Node for the device
diff -urN linux-2.4.13/drivers/char/Config.in linux-2.4.13-lia/drivers/char/Config.in
--- linux-2.4.13/drivers/char/Config.in Wed Oct 24 10:17:45 2001
+++ linux-2.4.13-lia/drivers/char/Config.in Wed Oct 24 10:21:08 2001
@@ -207,6 +207,9 @@
dep_tristate '/dev/agpgart (AGP Support)' CONFIG_AGP $CONFIG_DRM_AGP
if [ "$CONFIG_AGP" != "n" ]; then
bool ' Intel 440LX/BX/GX and I815/I830M/I840/I850 support' CONFIG_AGP_INTEL
+ if [ "$CONFIG_IA64" != "n" ]; then
+ bool ' Intel 460GX support' CONFIG_AGP_I460
+ fi
bool ' Intel I810/I815/I830M (on-board) support' CONFIG_AGP_I810
bool ' VIA chipset support' CONFIG_AGP_VIA
bool ' AMD Irongate, 761, and 762 support' CONFIG_AGP_AMD
@@ -215,7 +218,17 @@
bool ' Serverworks LE/HE support' CONFIG_AGP_SWORKS
fi
-source drivers/char/drm/Config.in
+bool 'Direct Rendering Manager (XFree86 DRI support)' CONFIG_DRM
+
+if [ "$CONFIG_DRM" = "y" ]; then
+ bool ' Build drivers for new (XFree 4.1) DRM' CONFIG_DRM_NEW
+ if [ "$CONFIG_DRM_NEW" = "y" ]; then
+ source drivers/char/drm/Config.in
+ else
+ define_bool CONFIG_DRM_OLD y
+ source drivers/char/drm-4.0/Config.in
+ fi
+fi
if [ "$CONFIG_HOTPLUG" = "y" -a "$CONFIG_PCMCIA" != "n" ]; then
source drivers/char/pcmcia/Config.in
diff -urN linux-2.4.13/drivers/char/Makefile linux-2.4.13-lia/drivers/char/Makefile
--- linux-2.4.13/drivers/char/Makefile Wed Oct 24 10:17:45 2001
+++ linux-2.4.13-lia/drivers/char/Makefile Wed Oct 24 10:21:08 2001
@@ -25,7 +25,7 @@
misc.o pty.o random.o selection.o serial.o \
sonypi.o tty_io.o tty_ioctl.o generic_serial.o
-mod-subdirs := joystick ftape drm pcmcia
+mod-subdirs := joystick ftape drm pcmcia drm-4.0
list-multi :=
@@ -138,6 +138,7 @@
obj-$(CONFIG_MAGIC_SYSRQ) += sysrq.o
obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
+obj-$(CONFIG_SIM_SERIAL) += simserial.o
obj-$(CONFIG_ROCKETPORT) += rocket.o
obj-$(CONFIG_MOXA_SMARTIO) += mxser.o
obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
@@ -198,7 +199,8 @@
obj-$(CONFIG_QIC02_TAPE) += tpqic02.o
subdir-$(CONFIG_FTAPE) += ftape
-subdir-$(CONFIG_DRM) += drm
+subdir-$(CONFIG_DRM_NEW) += drm
+subdir-$(CONFIG_DRM_OLD) += drm-4.0
subdir-$(CONFIG_PCMCIA) += pcmcia
subdir-$(CONFIG_AGP) += agp
diff -urN linux-2.4.13/drivers/char/agp/agp.h linux-2.4.13-lia/drivers/char/agp/agp.h
--- linux-2.4.13/drivers/char/agp/agp.h Wed Oct 10 16:31:46 2001
+++ linux-2.4.13-lia/drivers/char/agp/agp.h Wed Oct 10 16:33:17 2001
@@ -84,8 +84,8 @@
void *dev_private_data;
struct pci_dev *dev;
gatt_mask *masks;
- unsigned long *gatt_table;
- unsigned long *gatt_table_real;
+ u32 *gatt_table;
+ u32 *gatt_table_real;
unsigned long scratch_page;
unsigned long gart_bus_addr;
unsigned long gatt_bus_addr;
@@ -111,6 +111,7 @@
void (*cleanup) (void);
void (*tlb_flush) (agp_memory *);
unsigned long (*mask_memory) (unsigned long, int);
+ unsigned long (*unmask_memory) (unsigned long);
void (*cache_flush) (void);
int (*create_gatt_table) (void);
int (*free_gatt_table) (void);
@@ -150,6 +151,10 @@
#define A_IDXFIX() (A_SIZE_FIX(agp_bridge.aperture_sizes) + i)
#define MAXKEY (4096 * 32)
+#ifndef max
+#define max(a,b) (((a)>(b))?(a):(b))
+#endif
+
#define AGPGART_MODULE_NAME "agpgart"
#define PFX AGPGART_MODULE_NAME ": "
@@ -209,6 +214,9 @@
#ifndef PCI_DEVICE_ID_INTEL_82443GX_1
#define PCI_DEVICE_ID_INTEL_82443GX_1 0x71a1
#endif
+#ifndef PCI_DEVICE_ID_INTEL_460GX
+#define PCI_DEVICE_ID_INTEL_460GX 0x84ea
+#endif
#ifndef PCI_DEVICE_ID_AMD_IRONGATE_0
#define PCI_DEVICE_ID_AMD_IRONGATE_0 0x7006
#endif
@@ -250,6 +258,15 @@
#define INTEL_AGPCTRL 0xb0
#define INTEL_NBXCFG 0x50
#define INTEL_ERRSTS 0x91
+
+/* Intel 460GX Registers */
+#define INTEL_I460_APBASE 0x10
+#define INTEL_I460_BAPBASE 0x98
+#define INTEL_I460_GXBCTL 0xa0
+#define INTEL_I460_AGPSIZ 0xa2
+#define INTEL_I460_ATTBASE 0xfe200000
+#define INTEL_I460_GATT_VALID (1UL << 24)
+#define INTEL_I460_GATT_COHERENT (1UL << 25)
/* intel i840 registers */
#define INTEL_I840_MCHCFG 0x50
diff -urN linux-2.4.13/drivers/char/agp/agpgart_be.c linux-2.4.13-lia/drivers/char/agp/agpgart_be.c
--- linux-2.4.13/drivers/char/agp/agpgart_be.c Wed Oct 10 16:31:46 2001
+++ linux-2.4.13-lia/drivers/char/agp/agpgart_be.c Wed Oct 10 16:33:17 2001
@@ -22,6 +22,7 @@
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
+ * 460GX support by Chris Ahna <christopher.j.ahna@intel.com>
*/
#include <linux/config.h>
#include <linux/version.h>
@@ -43,6 +44,9 @@
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/smplock.h>
#include <linux/agp_backend.h>
#include "agp.h"
@@ -60,7 +64,7 @@
EXPORT_SYMBOL(agp_backend_release);
static void flush_cache(void);
-
+
static struct agp_bridge_data agp_bridge;
static int agp_try_unsupported __initdata = 0;
@@ -205,19 +209,56 @@
agp_bridge.free_by_type(curr);
return;
}
- if (curr->page_count != 0) {
- for (i = 0; i < curr->page_count; i++) {
- curr->memory[i] &= ~(0x00000fff);
- agp_bridge.agp_destroy_page((unsigned long)
- phys_to_virt(curr->memory[i]));
+ if(agp_bridge.cant_use_aperture == 0) {
+ if (curr->page_count != 0) {
+ for (i = 0; i < curr->page_count; i++) {
+ curr->memory[i] = agp_bridge.unmask_memory(
+ curr->memory[i]);
+ agp_bridge.agp_destroy_page((unsigned long)
+ phys_to_virt(curr->memory[i]));
+ }
}
+ } else {
+ vfree(curr->vmptr);
}
+
agp_free_key(curr->key);
vfree(curr->memory);
kfree(curr);
MOD_DEC_USE_COUNT;
}
+#define IN_VMALLOC(_x) (((_x) >= VMALLOC_START) && ((_x) < VMALLOC_END))
+
+/*
+ * Look up and return the pte corresponding to addr. We only do this for
+ * agp_ioremap'ed addresses.
+ */
+static pte_t * agp_lookup_pte(unsigned long addr) {
+
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ if(!IN_VMALLOC(addr))
+ return NULL;
+
+ dir = pgd_offset_k(addr);
+ pmd = pmd_offset(dir, addr);
+
+ if(pmd) {
+ pte = pte_offset(pmd, addr);
+
+ if(pte) {
+ return pte;
+ } else {
+ return NULL;
+ }
+ } else {
+ return NULL;
+ }
+}
+
#define ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(unsigned long))
agp_memory *agp_allocate_memory(size_t page_count, u32 type)
@@ -247,24 +288,60 @@
scratch_pages = (page_count + ENTRIES_PER_PAGE - 1) / ENTRIES_PER_PAGE;
new = agp_create_memory(scratch_pages);
-
if (new == NULL) {
MOD_DEC_USE_COUNT;
return NULL;
}
- for (i = 0; i < page_count; i++) {
- new->memory[i] = agp_bridge.agp_alloc_page();
- if (new->memory[i] == 0) {
- /* Free this structure */
- agp_free_memory(new);
+ if(agp_bridge.cant_use_aperture == 0) {
+ for (i = 0; i < page_count; i++) {
+ new->memory[i] = agp_bridge.agp_alloc_page();
+
+ if (new->memory[i] == 0) {
+ /* Free this structure */
+ agp_free_memory(new);
+ return NULL;
+ }
+ new->memory[i] = agp_bridge.mask_memory(
+ virt_to_phys((void *) new->memory[i]),
+ type);
+ new->page_count++;
+ }
+ } else {
+ void *vmblock;
+ unsigned long vaddr, paddr;
+ pte_t *pte;
+
+ vmblock = __vmalloc(page_count << PAGE_SHIFT, GFP_KERNEL,
+#ifdef __ia64__
+ pgprot_writecombine(PAGE_KERNEL));
+#else
+ PAGE_KERNEL);
+#endif
+ if(vmblock == NULL) {
+ MOD_DEC_USE_COUNT;
return NULL;
}
- new->memory[i] = agp_bridge.mask_memory(
- virt_to_phys((void *) new->memory[i]),
- type);
- new->page_count++;
+
+ new->vmptr = vmblock;
+ vaddr = (unsigned long) vmblock;
+
+ for(i = 0; i < page_count; i++, vaddr += PAGE_SIZE) {
+ pte = agp_lookup_pte(vaddr);
+ if(pte == NULL) {
+ MOD_DEC_USE_COUNT;
+ return NULL;
+ }
+#ifdef __ia64__
+ paddr = pte_val(*pte) & _PFN_MASK;
+#else
+ paddr = pte_val(*pte) & PAGE_MASK;
+#endif
+ new->memory[i] = agp_bridge.mask_memory(paddr, type);
+ }
+
+ new->page_count = page_count;
}
return new;
@@ -353,12 +430,13 @@
curr->is_flushed = TRUE;
}
ret_val = agp_bridge.insert_memory(curr, pg_start, curr->type);
-
+
if (ret_val != 0) {
return ret_val;
}
curr->is_bound = TRUE;
curr->pg_start = pg_start;
+
return 0;
}
@@ -377,6 +455,7 @@
if (ret_val != 0) {
return ret_val;
}
+
curr->is_bound = FALSE;
curr->pg_start = 0;
return 0;
@@ -387,9 +466,9 @@
/*
* Driver routines - start
* Currently this module supports the following chipsets:
- * i810, i815, 440lx, 440bx, 440gx, i840, i850, via vp3, via mvp3,
- * via kx133, via kt133, amd irongate, amd 761, amd 762, ALi M1541,
- * and generic support for the SiS chipsets.
+ * i810, 440lx, 440bx, 440gx, 460gx, i840, i850, via vp3, via mvp3, via kx133,
+ * via kt133, amd irongate, ALi M1541, and generic support for the SiS
+ * chipsets.
*/
/* Generic Agp routines - Start */
@@ -614,7 +693,7 @@
for (page = virt_to_page(table); page <= virt_to_page(table_end); page++)
set_bit(PG_reserved, &page->flags);
- agp_bridge.gatt_table_real = (unsigned long *) table;
+ agp_bridge.gatt_table_real = (u32 *) table;
CACHE_FLUSH();
agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
(PAGE_SIZE * (1 << page_order)));
@@ -832,6 +911,11 @@
agp_bridge.agp_enable(mode);
}
+static unsigned long agp_generic_unmask_memory(unsigned long addr)
+{
+ return addr & ~(0x00000fff);
+}
+
/* End - Generic Agp routines */
#ifdef CONFIG_AGP_I810
@@ -1096,6 +1180,7 @@
agp_bridge.cleanup = intel_i810_cleanup;
agp_bridge.tlb_flush = intel_i810_tlbflush;
agp_bridge.mask_memory = intel_i810_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = intel_i810_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1399,6 +1484,633 @@
#endif /* CONFIG_AGP_I810 */
+#ifdef CONFIG_AGP_I460
+
+/* BIOS configures the chipset so that one of two apbase registers is used */
+static u8 intel_i460_dynamic_apbase = 0x10;
+
+/* 460 supports multiple GART page sizes, so GART pageshift is dynamic */
+static u8 intel_i460_pageshift = 12;
+
+/* Keep track of which is larger, chipset or kernel page size. */
+static u32 intel_i460_cpk = 1;
+
+/* Structure for tracking partial use of 4MB GART pages */
+static u32 **i460_pg_detail = NULL;
+static u32 *i460_pg_count = NULL;
+
+#define I460_CPAGES_PER_KPAGE (PAGE_SIZE >> intel_i460_pageshift)
+#define I460_KPAGES_PER_CPAGE ((1 << intel_i460_pageshift) >> PAGE_SHIFT)
+
+#define I460_SRAM_IO_DISABLE (1 << 4)
+#define I460_BAPBASE_ENABLE (1 << 3)
+#define I460_AGPSIZ_MASK 0x7
+#define I460_4M_PS (1 << 1)
+
+#define log2(x) ffz(~(x))
+
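As a sanity check on the two ratio macros above, here is a minimal standalone sketch (not part of the patch). It assumes an ia64 kernel with 16KB pages (PAGE_SHIFT == 14); the helper names are hypothetical. With the chipset's default 4KB GART pages, four GART entries cover one kernel page; with 4MB GART pages, one chipset page spans 256 kernel pages, which is why the partial-allocation tracking below exists.

```c
#include <stdint.h>

#define KPAGE_SHIFT 14			/* assumed ia64 kernel page shift */
#define KPAGE_SIZE  (1ULL << KPAGE_SHIFT)

/* Mirrors I460_CPAGES_PER_KPAGE: chipset (GART) pages per kernel page.
 * Evaluates to 0 when the GART page is larger than the kernel page. */
static uint64_t cpages_per_kpage(int gart_shift)
{
	return KPAGE_SIZE >> gart_shift;
}

/* Mirrors I460_KPAGES_PER_CPAGE: kernel pages per chipset (GART) page.
 * Evaluates to 0 when the GART page is smaller than the kernel page. */
static uint64_t kpages_per_cpage(int gart_shift)
{
	return (1ULL << gart_shift) >> KPAGE_SHIFT;
}
```

Exactly one of the two ratios is nonzero for a given GART page size, which is what the intel_i460_cpk flag below records.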
+static int intel_i460_fetch_size(void)
+{
+ int i;
+ u8 temp;
+ aper_size_info_8 *values;
+
+ /* Determine the GART page size */
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &temp);
+ intel_i460_pageshift = (temp & I460_4M_PS) ? 22 : 12;
+
+ values = A_SIZE_8(agp_bridge.aperture_sizes);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+
+ /* Exit now if the IO drivers for the GART SRAMS are turned off */
+ if(temp & I460_SRAM_IO_DISABLE) {
+ printk("[agpgart] GART SRAMS disabled on 460GX chipset\n");
+ printk("[agpgart] AGPGART operation not possible\n");
+ return 0;
+ }
+
+ /* Make sure we don't try to create a 2^23 entry GATT */
+ if((intel_i460_pageshift == 0) && ((temp & I460_AGPSIZ_MASK) == 4)) {
+ printk("[agpgart] We can't have a 32GB aperture with 4KB"
+ " GART pages\n");
+ return 0;
+ }
+
+ /* Determine the proper APBASE register */
+ if(temp & I460_BAPBASE_ENABLE)
+ intel_i460_dynamic_apbase = INTEL_I460_BAPBASE;
+ else intel_i460_dynamic_apbase = INTEL_I460_APBASE;
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+
+ /*
+ * Dynamically calculate the proper num_entries and page_order
+ * values for the define aperture sizes. Take care not to
+ * shift off the end of values[i].size.
+ */
+ values[i].num_entries = (values[i].size << 8) >>
+ (intel_i460_pageshift - 12);
+ values[i].page_order = log2((sizeof(u32)*values[i].num_entries)
+ >> PAGE_SHIFT);
+ }
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+ /* Neglect control bits when matching up size_value */
+ if ((temp & I460_AGPSIZ_MASK) == values[i].size_value) {
+ agp_bridge.previous_size =
+ agp_bridge.current_size = (void *) (values + i);
+ agp_bridge.aperture_size_idx = i;
+ return values[i].size;
+ }
+ }
+
+ return 0;
+}
+
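The aperture arithmetic in intel_i460_fetch_size() can be checked in isolation. The sketch below (illustrative names, not part of the driver) computes num_entries as one GATT entry per GART page, i.e. (size_in_MB << 8) >> (gart_pageshift - 12), and uses a plain loop in place of the kernel's ffz(~x) log2 trick.

```c
#include <stdint.h>

/* Plain integer log2; the patch uses #define log2(x) ffz(~(x)) instead. */
static int ilog2_sketch(uint64_t x)
{
	int n = 0;
	while (x >>= 1)
		n++;
	return n;
}

/* One GATT entry per GART page: (MB << 8) gives 4KB pages, and the
 * (gart_shift - 12) correction rescales for larger GART pages. */
static uint64_t gatt_entries(uint64_t size_mb, int gart_shift)
{
	return (size_mb << 8) >> (gart_shift - 12);
}
```

For example, a 256MB aperture with 4KB GART pages needs 65536 entries, while the 32GB aperture is only reachable with 4MB GART pages (8192 entries), consistent with the intel_i460_sizes table and the rejected 32GB/4KB combination above.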
+/* There isn't anything to do here since 460 has no GART TLB. */
+static void intel_i460_tlb_flush(agp_memory * mem)
+{
+ return;
+}
+
+/*
+ * This utility function is needed to prevent corruption of the control bits
+ * which are stored along with the aperture size in 460's AGPSIZ register
+ */
+static void intel_i460_write_agpsiz(u8 size_value)
+{
+ u8 temp;
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ,
+ ((temp & ~I460_AGPSIZ_MASK) | size_value));
+}
+
+static void intel_i460_cleanup(void)
+{
+ aper_size_info_8 *previous_size;
+
+ previous_size = A_SIZE_8(agp_bridge.previous_size);
+ intel_i460_write_agpsiz(previous_size->size_value);
+
+ if(intel_i460_cpk == 0)
+ {
+ vfree(i460_pg_detail);
+ vfree(i460_pg_count);
+ }
+}
+
+
+/* Control bits for Out-Of-GART coherency and Burst Write Combining */
+#define I460_GXBCTL_OOG (1UL << 0)
+#define I460_GXBCTL_BWC (1UL << 2)
+
+static int intel_i460_configure(void)
+{
+ union {
+ u32 small[2];
+ u64 large;
+ } temp;
+ u8 scratch;
+ int i;
+
+ aper_size_info_8 *current_size;
+
+ temp.large = 0;
+
+ current_size = A_SIZE_8(agp_bridge.current_size);
+ intel_i460_write_agpsiz(current_size->size_value);
+
+ /*
+ * Do the necessary rigmarole to read all eight bytes of APBASE.
+ * This has to be done since the AGP aperture can be above 4GB on
+ * 460 based systems.
+ */
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase,
+ &(temp.small[0]));
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase + 4,
+ &(temp.small[1]));
+
+ /* Clear BAR control bits */
+ agp_bridge.gart_bus_addr = temp.large & ~((1UL << 3) - 1);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &scratch);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL,
+ (scratch & 0x02) | I460_GXBCTL_OOG | I460_GXBCTL_BWC);
+
+ /*
+ * Initialize partial allocation trackers if a GART page is bigger than
+ * a kernel page.
+ */
+ if(I460_CPAGES_PER_KPAGE >= 1) {
+ intel_i460_cpk = 1;
+ } else {
+ intel_i460_cpk = 0;
+
+ i460_pg_detail = (void *) vmalloc(sizeof(*i460_pg_detail) *
+ current_size->num_entries);
+ i460_pg_count = (void *) vmalloc(sizeof(*i460_pg_count) *
+ current_size->num_entries);
+
+ for (i = 0; i < current_size->num_entries; i++) {
+ i460_pg_count[i] = 0;
+ i460_pg_detail[i] = NULL;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_create_gatt_table(void) {
+
+ char *table;
+ int i;
+ int page_order;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ /*
+ * Load up the fixed address of the GART SRAMS which hold our
+ * GATT table.
+ */
+ table = (char *) __va(INTEL_I460_ATTBASE);
+
+ temp = agp_bridge.current_size;
+ page_order = A_SIZE_8(temp)->page_order;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ agp_bridge.gatt_table_real = (u32 *) table;
+ agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
+ (PAGE_SIZE * (1 << page_order)));
+ agp_bridge.gatt_bus_addr = virt_to_phys(agp_bridge.gatt_table_real);
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ return 0;
+}
+
+static int intel_i460_free_gatt_table(void)
+{
+ int num_entries;
+ int i;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ iounmap(agp_bridge.gatt_table);
+
+ return 0;
+}
+
+/* These functions are called when PAGE_SIZE exceeds the GART page size */
+
+static int intel_i460_insert_memory_cpk(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, j, k, num_entries;
+ void *temp;
+ unsigned int hold;
+ unsigned int read_back;
+
+ /*
+ * The rest of the kernel will compute page offsets in terms of
+ * PAGE_SIZE.
+ */
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ if((pg_start + I460_CPAGES_PER_KPAGE * mem->page_count) > num_entries) {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ j = pg_start;
+ while (j < (pg_start + I460_CPAGES_PER_KPAGE * mem->page_count)) {
+ if (!PGE_EMPTY(agp_bridge.gatt_table[j])) {
+ return -EBUSY;
+ }
+ j++;
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for (i = 0, j = pg_start; i < mem->page_count; i++) {
+
+ hold = (unsigned int) (mem->memory[i]);
+
+ for (k = 0; k < I460_CPAGES_PER_KPAGE; k++, j++, hold++)
+ agp_bridge.gatt_table[j] = hold;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[j - 1];
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_cpk(agp_memory * mem, off_t pg_start,
+ int type)
+{
+ int i;
+ unsigned int read_back;
+
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ for (i = pg_start; i < (pg_start + I460_CPAGES_PER_KPAGE *
+ mem->page_count); i++)
+ agp_bridge.gatt_table[i] = 0;
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ return 0;
+}
+
+/*
+ * These functions are called when the GART page size exceeds PAGE_SIZE.
+ *
+ * This situation is interesting since AGP memory allocations that are
+ * smaller than a single GART page are possible. The structures i460_pg_count
+ * and i460_pg_detail track partial allocation of the large GART pages to
+ * work around this issue.
+ *
+ * i460_pg_count[pg_num] tracks the number of kernel pages in use within
+ * GART page pg_num. i460_pg_detail[pg_num] is an array containing a
+ * pseudo-GART entry for each of the aforementioned kernel pages. The whole
+ * of i460_pg_detail is equivalent to a giant GATT with page size equal to
+ * that of the kernel.
+ */
+
+static void *intel_i460_alloc_large_page(int pg_num)
+{
+ int i;
+ void *bp, *bp_end;
+ struct page *page;
+
+ i460_pg_detail[pg_num] = (void *) vmalloc(sizeof(u32) *
+ I460_KPAGES_PER_CPAGE);
+ if(i460_pg_detail[pg_num] == NULL) {
+ printk("[agpgart] Out of memory, we're in trouble...\n");
+ return NULL;
+ }
+
+ for(i = 0; i < I460_KPAGES_PER_CPAGE; i++)
+ i460_pg_detail[pg_num][i] = 0;
+
+ bp = (void *) __get_free_pages(GFP_KERNEL,
+ intel_i460_pageshift - PAGE_SHIFT);
+ if(bp == NULL) {
+ printk("[agpgart] Couldn't alloc 4M GART page...\n");
+ return NULL;
+ }
+
+ bp_end = bp + ((PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT))) - 1);
+
+ for (page = virt_to_page(bp); page <= virt_to_page(bp_end); page++)
+ {
+ atomic_inc(&page->count);
+ set_bit(PG_locked, &page->flags);
+ atomic_inc(&agp_bridge.current_memory_agp);
+ }
+
+ return bp;
+}
+
+static void intel_i460_free_large_page(int pg_num, unsigned long addr)
+{
+ struct page *page;
+ void *bp, *bp_end;
+
+ bp = (void *) __va(addr);
+ bp_end = bp + (PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT)));
+
+ vfree(i460_pg_detail[pg_num]);
+ i460_pg_detail[pg_num] = NULL;
+
+ for (page = virt_to_page(bp); page < virt_to_page(bp_end); page++)
+ {
+ atomic_dec(&page->count);
+ clear_bit(PG_locked, &page->flags);
+ wake_up(&page->wait);
+ atomic_dec(&agp_bridge.current_memory_agp);
+ }
+
+ free_pages((unsigned long) bp, intel_i460_pageshift - PAGE_SHIFT);
+}
+
+static int intel_i460_insert_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ if(end_pg > num_entries)
+ {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ /* Check if the requested region of the aperture is free */
+ for(pg = start_pg; pg <= end_pg; pg++)
+ {
+ /* Allocate new GART pages if necessary */
+ if(i460_pg_detail[pg] == NULL) {
+ temp = intel_i460_alloc_large_page(pg);
+ if(temp == NULL)
+ return -ENOMEM;
+ agp_bridge.gatt_table[pg] = agp_bridge.mask_memory(
+ (unsigned long) temp, 0);
+ read_back = agp_bridge.gatt_table[pg];
+ }
+
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++)
+ {
+ if(i460_pg_detail[pg][idx] != 0)
+ return -EBUSY;
+ }
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for(pg = start_pg, i = 0; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ i460_pg_detail[pg][idx] = agp_bridge.gatt_table[pg] +
+ ((idx * PAGE_SIZE) >> 12);
+ i460_pg_count[pg]++;
+
+ /* Finally we fill in mem->memory... */
+ mem->memory[i] = ((unsigned long) (0xffffff &
+ i460_pg_detail[pg][idx])) << 12;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+ unsigned long addr;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ for(i = 0, pg = start_pg; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ mem->memory[i] = 0;
+ i460_pg_detail[pg][idx] = 0;
+ i460_pg_count[pg]--;
+ }
+
+ /* Free GART pages if they are unused */
+ if(i460_pg_count[pg] == 0) {
+ addr = (0xffffffUL & (unsigned long)
+ (agp_bridge.gatt_table[pg])) << 12;
+
+ agp_bridge.gatt_table[pg] = 0;
+ read_back = agp_bridge.gatt_table[pg];
+
+ intel_i460_free_large_page(pg, addr);
+ }
+ }
+
+ return 0;
+}
+
+/* Dummy routines to call the appropriate {cpk,kpc} function */
+
+static int intel_i460_insert_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_insert_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_insert_memory_kpc(mem, pg_start, type);
+}
+
+static int intel_i460_remove_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_remove_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_remove_memory_kpc(mem, pg_start, type);
+}
+
+/*
+ * If the kernel page size is smaller than the chipset page size, we don't
+ * want to allocate memory until we know where it is to be bound in the
+ * aperture (a multi-kernel-page alloc might fit inside of an already
+ * allocated GART page). Consequently, don't allocate or free anything
+ * if i460_cpk (meaning chipset pages per kernel page) isn't set.
+ *
+ * Let's just hope nobody counts on the allocated AGP memory being there
+ * before bind time (I don't think current drivers do)...
+ */
+static unsigned long intel_i460_alloc_page(void)
+{
+ if(intel_i460_cpk)
+ return agp_generic_alloc_page();
+
+ /* Returning NULL would cause problems */
+ return ((unsigned long) ~0UL);
+}
+
+static void intel_i460_destroy_page(unsigned long page)
+{
+ if(intel_i460_cpk)
+ agp_generic_destroy_page(page);
+}
+
+static gatt_mask intel_i460_masks[] =
+{
+ {
+ INTEL_I460_GATT_VALID,
+ 0
+ }
+};
+
+static unsigned long intel_i460_mask_memory(unsigned long addr, int type)
+{
+ /* Make sure the returned address is a valid GATT entry */
+ return (agp_bridge.masks[0].mask | (((addr &
+ ~((1 << intel_i460_pageshift) - 1)) & 0xffffff000) >> 12));
+}
+
+static unsigned long intel_i460_unmask_memory(unsigned long addr)
+{
+ /* Turn a GATT entry into a physical address */
+ return ((addr & 0xffffff) << 12);
+}
+
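The mask/unmask pair above should be exact inverses for page-aligned addresses. Here is a minimal standalone sketch of that encoding (constants mirror the patch, helper names are illustrative, and 4KB GART pages plus 64-bit arithmetic are assumed): the physical frame number lands in the low 24 bits of the GATT entry, shifted down by 12, with the valid bit OR'd in.

```c
#include <stdint.h>

#define I460_GATT_VALID (1ULL << 24)	/* INTEL_I460_GATT_VALID in the patch */
#define I460_PAGESHIFT  12		/* 4KB GART pages assumed */

/* Physical address -> GATT entry, as in intel_i460_mask_memory(). */
static uint64_t i460_mask(uint64_t paddr)
{
	/* Drop the page offset, keep 24 bits of frame number, set valid. */
	return I460_GATT_VALID |
	       (((paddr & ~((1ULL << I460_PAGESHIFT) - 1)) & 0xffffff000ULL) >> 12);
}

/* GATT entry -> physical address, as in intel_i460_unmask_memory(). */
static uint64_t i460_unmask(uint64_t entry)
{
	return (entry & 0xffffff) << 12;
}
```

Round-tripping a page-aligned address through mask then unmask gives back the address, which is what agp_free_memory() relies on when it calls unmask_memory() before destroying pages.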
+static aper_size_info_8 intel_i460_sizes[3] =
+{
+ /*
+ * The 32GB aperture is only available with a 4M GART page size.
+ * Due to the dynamic GART page size, we can't figure out page_order
+ * or num_entries until runtime.
+ */
+ {32768, 0, 0, 4},
+ {1024, 0, 0, 2},
+ {256, 0, 0, 1}
+};
+
+static int __init intel_i460_setup (struct pci_dev *pdev)
+{
+
+ agp_bridge.masks = intel_i460_masks;
+ agp_bridge.num_of_masks = 1;
+ agp_bridge.aperture_sizes = (void *) intel_i460_sizes;
+ agp_bridge.size_type = U8_APER_SIZE;
+ agp_bridge.num_aperture_sizes = 3;
+ agp_bridge.dev_private_data = NULL;
+ agp_bridge.needs_scratch_page = FALSE;
+ agp_bridge.configure = intel_i460_configure;
+ agp_bridge.fetch_size = intel_i460_fetch_size;
+ agp_bridge.cleanup = intel_i460_cleanup;
+ agp_bridge.tlb_flush = intel_i460_tlb_flush;
+ agp_bridge.mask_memory = intel_i460_mask_memory;
+ agp_bridge.unmask_memory = intel_i460_unmask_memory;
+ agp_bridge.agp_enable = agp_generic_agp_enable;
+ agp_bridge.cache_flush = global_cache_flush;
+ agp_bridge.create_gatt_table = intel_i460_create_gatt_table;
+ agp_bridge.free_gatt_table = intel_i460_free_gatt_table;
+ agp_bridge.insert_memory = intel_i460_insert_memory;
+ agp_bridge.remove_memory = intel_i460_remove_memory;
+ agp_bridge.alloc_by_type = agp_generic_alloc_by_type;
+ agp_bridge.free_by_type = agp_generic_free_by_type;
+ agp_bridge.agp_alloc_page = intel_i460_alloc_page;
+ agp_bridge.agp_destroy_page = intel_i460_destroy_page;
+#if 0
+ agp_bridge.suspend = ??;
+ agp_bridge.resume = ??;
+#endif
+ agp_bridge.cant_use_aperture = 1;
+
+ return 0;
+
+ (void) pdev; /* unused */
+}
+
+#endif /* CONFIG_AGP_I460 */
+
#ifdef CONFIG_AGP_INTEL
static int intel_fetch_size(void)
@@ -1579,6 +2291,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1612,6 +2325,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1645,6 +2359,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1765,6 +2480,7 @@
agp_bridge.cleanup = via_cleanup;
agp_bridge.tlb_flush = via_tlbflush;
agp_bridge.mask_memory = via_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1879,6 +2595,7 @@
agp_bridge.cleanup = sis_cleanup;
agp_bridge.tlb_flush = sis_tlbflush;
agp_bridge.mask_memory = sis_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1901,8 +2618,8 @@
#ifdef CONFIG_AGP_AMD
typedef struct _amd_page_map {
- unsigned long *real;
- unsigned long *remapped;
+ u32 *real;
+ u32 *remapped;
} amd_page_map;
static struct _amd_irongate_private {
@@ -1915,7 +2632,7 @@
{
int i;
- page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL);
+ page_map->real = (u32 *) __get_free_page(GFP_KERNEL);
if (page_map->real == NULL) {
return -ENOMEM;
}
@@ -2170,7 +2887,7 @@
off_t pg_start, int type)
{
int i, j, num_entries;
- unsigned long *cur_gatt;
+ u32 *cur_gatt;
unsigned long addr;
num_entries = A_SIZE_LVL2(agp_bridge.current_size)->num_entries;
@@ -2210,7 +2927,7 @@
int type)
{
int i;
- unsigned long *cur_gatt;
+ u32 *cur_gatt;
unsigned long addr;
if (type != 0 || mem->type != 0) {
@@ -2257,6 +2974,7 @@
agp_bridge.cleanup = amd_irongate_cleanup;
agp_bridge.tlb_flush = amd_irongate_tlbflush;
agp_bridge.mask_memory = amd_irongate_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = amd_create_gatt_table;
@@ -2505,6 +3223,7 @@
agp_bridge.cleanup = ali_cleanup;
agp_bridge.tlb_flush = ali_tlbflush;
agp_bridge.mask_memory = ali_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = ali_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -3287,6 +4006,15 @@
#endif /* CONFIG_AGP_INTEL */
+#ifdef CONFIG_AGP_I460
+ { PCI_DEVICE_ID_INTEL_460GX,
+ PCI_VENDOR_ID_INTEL,
+ INTEL_460GX,
+ "Intel",
+ "460GX",
+ intel_i460_setup },
+#endif
+
#ifdef CONFIG_AGP_SIS
{ PCI_DEVICE_ID_SI_630,
PCI_VENDOR_ID_SI,
@@ -3455,6 +4183,18 @@
return -ENODEV;
}
+static int agp_check_supported_device(struct pci_dev *dev) {
+
+ int i;
+
+ for(i = 0; i < ARRAY_SIZE (agp_bridge_info); i++) {
+ if(dev->vendor == agp_bridge_info[i].vendor_id &&
+ dev->device == agp_bridge_info[i].device_id)
+ return 1;
+ }
+
+ return 0;
+}
/* Supported Device Scanning routine */
@@ -3464,8 +4204,14 @@
u8 cap_ptr = 0x00;
u32 cap_id, scratch;
- if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, NULL)) == NULL)
- return -ENODEV;
+ /*
+ * Some systems have multiple host bridges (i.e. BigSur), so
+ * we can't just use the first one we find.
+ */
+ do {
+ if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, dev)) == NULL)
+ return -ENODEV;
+ } while(!agp_check_supported_device(dev));
agp_bridge.dev = dev;
diff -urN linux-2.4.13/drivers/char/drm/Config.in linux-2.4.13-lia/drivers/char/drm/Config.in
--- linux-2.4.13/drivers/char/drm/Config.in Wed Aug 8 09:42:10 2001
+++ linux-2.4.13-lia/drivers/char/drm/Config.in Thu Oct 4 00:21:40 2001
@@ -5,12 +5,9 @@
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
#
-bool 'Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)' CONFIG_DRM
-if [ "$CONFIG_DRM" != "n" ]; then
- tristate ' 3dfx Banshee/Voodoo3+' CONFIG_DRM_TDFX
- tristate ' 3dlabs GMX 2000' CONFIG_DRM_GAMMA
- tristate ' ATI Rage 128' CONFIG_DRM_R128
- dep_tristate ' ATI Radeon' CONFIG_DRM_RADEON $CONFIG_AGP
- dep_tristate ' Intel I810' CONFIG_DRM_I810 $CONFIG_AGP
- dep_tristate ' Matrox g200/g400' CONFIG_DRM_MGA $CONFIG_AGP
-fi
+tristate ' 3dfx Banshee/Voodoo3+' CONFIG_DRM_TDFX
+tristate ' 3dlabs GMX 2000' CONFIG_DRM_GAMMA
+tristate ' ATI Rage 128' CONFIG_DRM_R128
+dep_tristate ' ATI Radeon' CONFIG_DRM_RADEON $CONFIG_AGP
+dep_tristate ' Intel I810' CONFIG_DRM_I810 $CONFIG_AGP
+dep_tristate ' Matrox g200/g400' CONFIG_DRM_MGA $CONFIG_AGP
diff -urN linux-2.4.13/drivers/char/drm/ati_pcigart.h linux-2.4.13-lia/drivers/char/drm/ati_pcigart.h
--- linux-2.4.13/drivers/char/drm/ati_pcigart.h Mon Sep 24 15:06:57 2001
+++ linux-2.4.13-lia/drivers/char/drm/ati_pcigart.h Thu Oct 4 00:21:40 2001
@@ -30,7 +30,10 @@
#define __NO_VERSION__
#include "drmP.h"
-#if PAGE_SIZE == 8192
+#if PAGE_SIZE == 16384
+# define ATI_PCIGART_TABLE_ORDER 1
+# define ATI_PCIGART_TABLE_PAGES (1 << 1)
+#elif PAGE_SIZE == 8192
# define ATI_PCIGART_TABLE_ORDER 2
# define ATI_PCIGART_TABLE_PAGES (1 << 2)
#elif PAGE_SIZE == 4096
@@ -103,6 +106,7 @@
goto done;
}
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
if ( !dev->pdev ) {
DRM_ERROR( "PCI device unknown!\n" );
goto done;
@@ -117,6 +121,9 @@
address = 0;
goto done;
}
+#else
+ bus_address = virt_to_bus( (void *)address );
+#endif
pci_gart = (u32 *)address;
@@ -126,6 +133,7 @@
memset( pci_gart, 0, ATI_MAX_PCIGART_PAGES * sizeof(u32) );
for ( i = 0 ; i < pages ; i++ ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
/* we need to support large memory configurations */
entry->busaddr[i] = pci_map_single(dev->pdev,
page_address( entry->pagelist[i] ),
@@ -139,7 +147,9 @@
goto done;
}
page_base = (u32) entry->busaddr[i];
-
+#else
+ page_base = page_to_bus( entry->pagelist[i] );
+#endif
for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) {
*pci_gart++ = cpu_to_le32( page_base );
page_base += ATI_PCIGART_PAGE_SIZE;
@@ -164,6 +174,7 @@
unsigned long addr,
dma_addr_t bus_addr)
{
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
drm_sg_mem_t *entry = dev->sg;
unsigned long pages;
int i;
@@ -188,6 +199,8 @@
PAGE_SIZE, PCI_DMA_TODEVICE);
}
}
+
+#endif
if ( addr ) {
DRM(ati_free_pcigart_table)( addr );
diff -urN linux-2.4.13/drivers/char/drm/drmP.h linux-2.4.13-lia/drivers/char/drm/drmP.h
--- linux-2.4.13/drivers/char/drm/drmP.h Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drmP.h Thu Oct 4 00:21:52 2001
@@ -366,13 +366,13 @@
if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
/* Mapping helper macros */
-#define DRM_IOREMAP(map) \
- (map)->handle = DRM(ioremap)( (map)->offset, (map)->size )
+#define DRM_IOREMAP(map, dev) \
+ (map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
-#define DRM_IOREMAPFREE(map) \
+#define DRM_IOREMAPFREE(map, dev) \
do { \
if ( (map)->handle && (map)->size ) \
- DRM(ioremapfree)( (map)->handle, (map)->size ); \
+ DRM(ioremapfree)( (map)->handle, (map)->size, (dev) ); \
} while (0)
#define DRM_FIND_MAP(_map, _o) \
@@ -826,8 +826,8 @@
extern unsigned long DRM(alloc_pages)(int order, int area);
extern void DRM(free_pages)(unsigned long address, int order,
int area);
-extern void *DRM(ioremap)(unsigned long offset, unsigned long size);
-extern void DRM(ioremapfree)(void *pt, unsigned long size);
+extern void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev);
+extern void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev);
#if __REALLY_HAVE_AGP
extern agp_memory *DRM(alloc_agp)(int pages, u32 type);
diff -urN linux-2.4.13/drivers/char/drm/drm_agpsupport.h linux-2.4.13-lia/drivers/char/drm/drm_agpsupport.h
--- linux-2.4.13/drivers/char/drm/drm_agpsupport.h Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_agpsupport.h Thu Oct 4 00:21:40 2001
@@ -275,6 +275,7 @@
case INTEL_I815: head->chipset = "Intel i815"; break;
case INTEL_I840: head->chipset = "Intel i840"; break;
case INTEL_I850: head->chipset = "Intel i850"; break;
+ case INTEL_460GX: head->chipset = "Intel 460GX"; break;
#endif
case VIA_GENERIC: head->chipset = "VIA"; break;
diff -urN linux-2.4.13/drivers/char/drm/drm_bufs.h linux-2.4.13-lia/drivers/char/drm/drm_bufs.h
--- linux-2.4.13/drivers/char/drm/drm_bufs.h Fri Aug 10 18:14:41 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_bufs.h Thu Oct 4 00:21:40 2001
@@ -107,7 +107,7 @@
switch ( map->type ) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
-#if !defined(__sparc__) && !defined(__alpha__)
+#if !defined(__sparc__) && !defined(__alpha__) && !defined(__ia64__)
if ( map->offset + map->size < map->offset ||
map->offset < virt_to_phys(high_memory) ) {
DRM(free)( map, sizeof(*map), DRM_MEM_MAPS );
@@ -124,7 +124,7 @@
MTRR_TYPE_WRCOMB, 1 );
}
#endif
- map->handle = DRM(ioremap)( map->offset, map->size );
+ map->handle = DRM(ioremap)( map->offset, map->size, dev );
break;
case _DRM_SHM:
@@ -249,7 +249,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
diff -urN linux-2.4.13/drivers/char/drm/drm_drv.h linux-2.4.13-lia/drivers/char/drm/drm_drv.h
--- linux-2.4.13/drivers/char/drm/drm_drv.h Wed Oct 24 10:17:46 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_drv.h Wed Oct 24 10:21:09 2001
@@ -439,7 +439,7 @@
DRM_DEBUG( "mtrr_del=%d\n", retcode );
}
#endif
- DRM(ioremapfree)( map->handle, map->size );
+ DRM(ioremapfree)( map->handle, map->size, dev );
break;
case _DRM_SHM:
vfree(map->handle);
diff -urN linux-2.4.13/drivers/char/drm/drm_memory.h linux-2.4.13-lia/drivers/char/drm/drm_memory.h
--- linux-2.4.13/drivers/char/drm/drm_memory.h Fri Aug 10 18:14:41 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_memory.h Thu Oct 4 00:21:40 2001
@@ -306,9 +306,14 @@
}
}
-void *DRM(ioremap)(unsigned long offset, unsigned long size)
+void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
{
void *pt;
+#if __REALLY_HAVE_AGP
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+#endif
if (!size) {
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
@@ -316,12 +321,51 @@
return NULL;
}
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 0)
+ goto standard_ioremap;
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *)list;
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
+ break;
+ }
+
+ if(agpmem == NULL)
+ goto ioremap_failure;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
+ }
+
+standard_ioremap:
+#endif
if (!(pt = ioremap(offset, size))) {
+#if __REALLY_HAVE_AGP
+ioremap_failure:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
return NULL;
}
+#if __REALLY_HAVE_AGP
+ioremap_success:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count;
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -329,7 +373,7 @@
return pt;
}
-void DRM(ioremapfree)(void *pt, unsigned long size)
+void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev)
{
int alloc_count;
int free_count;
@@ -337,7 +381,11 @@
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
+#if __REALLY_HAVE_AGP
+ else if(dev->agp->cant_use_aperture == 0)
+#else
else
+#endif
iounmap(pt);
spin_lock(&DRM(mem_lock));
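[Editor's aside: the reworked DRM(ioremap) above walks dev->maplist looking for a map whose [offset, offset + size) fully contains the requested range before falling back to a real ioremap. A simplified, self-contained model of that containment test, using an array instead of the kernel list; names are illustrative:]

```c
#include <stddef.h>

/* Stand-in for drm_map_t: a region of the bus address space. */
struct region { unsigned long offset, size; };

/* Return the first region that fully contains [offset, offset + size),
 * or NULL if none does (the caller would then fall back to ioremap). */
static const struct region *
find_containing(const struct region *r, int n,
                unsigned long offset, unsigned long size)
{
    int i;
    for (i = 0; i < n; i++) {
        if (r[i].offset <= offset &&
            r[i].offset + r[i].size >= offset + size)
            return &r[i];
    }
    return NULL;
}
```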
diff -urN linux-2.4.13/drivers/char/drm/drm_scatter.h linux-2.4.13-lia/drivers/char/drm/drm_scatter.h
--- linux-2.4.13/drivers/char/drm/drm_scatter.h Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_scatter.h Thu Oct 4 00:21:40 2001
@@ -47,9 +47,11 @@
vfree( entry->virtual );
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
DRM(free)( entry->busaddr,
entry->pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
+#endif
DRM(free)( entry->pagelist,
entry->pages * sizeof(*entry->pagelist),
DRM_MEM_PAGES );
@@ -97,6 +99,7 @@
return -ENOMEM;
}
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
entry->busaddr = DRM(alloc)( pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
if ( !entry->busaddr ) {
@@ -109,12 +112,15 @@
return -ENOMEM;
}
memset( (void *)entry->busaddr, 0, pages * sizeof(*entry->busaddr) );
+#endif
entry->virtual = vmalloc_32( pages << PAGE_SHIFT );
if ( !entry->virtual ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
DRM(free)( entry->busaddr,
entry->pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
+#endif
DRM(free)( entry->pagelist,
entry->pages * sizeof(*entry->pagelist),
DRM_MEM_PAGES );
diff -urN linux-2.4.13/drivers/char/drm/drm_vm.h linux-2.4.13-lia/drivers/char/drm/drm_vm.h
--- linux-2.4.13/drivers/char/drm/drm_vm.h Wed Oct 24 10:17:48 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_vm.h Wed Oct 24 10:21:09 2001
@@ -89,7 +89,7 @@
if (map && map->type == _DRM_AGP) {
unsigned long offset = address - vma->vm_start;
- unsigned long baddr = VM_OFFSET(vma) + offset;
+ unsigned long baddr = VM_OFFSET(vma) + offset, paddr;
struct drm_agp_mem *agpmem;
struct page *page;
@@ -115,8 +115,19 @@
* Get the page, inc the use count, and return it
*/
offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
- agpmem->memory->memory[offset] &= dev->agp->page_mask;
- page = virt_to_page(__va(agpmem->memory->memory[offset]));
+
+ /*
+ * This is bad. What we really want to do here is unmask
+ * the GART table entry held in the agp_memory structure.
+ * There isn't a convenient way to call agp_bridge.unmask_
+ * memory from here, so hard code it for now.
+ */
+#if defined(__ia64__)
+ paddr = (agpmem->memory->memory[offset] & 0xffffff) << 12;
+#else
+ paddr = agpmem->memory->memory[offset] & dev->agp->page_mask;
+#endif
+ page = virt_to_page(__va(paddr));
get_page(page);
DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx\n",
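[Editor's aside: the hard-coded ia64 unmask above recovers a physical address from a GART table entry by masking the low 24 bits and shifting by the page size. The same arithmetic in isolation; the mask and shift are taken from the hunk above, so treat them as patch-specific rather than documented 460GX behavior:]

```c
/* Decode a GATT entry as the ia64 branch above does: the low 24 bits
 * hold a page frame number, and shifting by 12 (4KB pages) yields the
 * physical address. */
static unsigned long gatt_entry_to_paddr(unsigned long entry)
{
    return (entry & 0xffffffUL) << 12;
}
```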
@@ -255,7 +266,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
@@ -502,15 +513,21 @@
switch (map->type) {
case _DRM_AGP:
-#if defined(__alpha__)
- /*
- * On Alpha we can't talk to bus dma address from the
- * CPU, so for memory of type DRM_AGP, we'll deal with
- * sorting out the real physical pages and mappings
- * in nopage()
- */
- vma->vm_ops = &DRM(vm_ops);
- break;
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 1) {
+ /*
+ * On some systems we can't talk to bus dma address from
+ * the CPU, so for memory of type DRM_AGP, we'll deal
+ * with sorting out the real physical pages and mappings
+ * in nopage()
+ */
+ vma->vm_ops = &DRM(vm_ops);
+#if defined(__ia64__)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+#endif
+ goto mapswitch_out;
+ }
#endif
/* fall through to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
@@ -522,8 +539,7 @@
pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
}
#elif defined(__ia64__)
- if (map->type != _DRM_AGP)
- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
#elif defined(__powerpc__)
pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED;
@@ -572,6 +588,9 @@
default:
return -EINVAL; /* This should never happen. */
}
+#if __REALLY_HAVE_AGP
+mapswitch_out:
+#endif
vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
#if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */
diff -urN linux-2.4.13/drivers/char/drm/i810_dma.c linux-2.4.13-lia/drivers/char/drm/i810_dma.c
--- linux-2.4.13/drivers/char/drm/i810_dma.c Wed Aug 8 09:42:15 2001
+++ linux-2.4.13-lia/drivers/char/drm/i810_dma.c Thu Oct 4 00:21:40 2001
@@ -315,7 +315,7 @@
if(dev_priv->ring.virtual_start) {
DRM(ioremapfree)((void *) dev_priv->ring.virtual_start,
- dev_priv->ring.Size);
+ dev_priv->ring.Size, dev);
}
if(dev_priv->hw_status_page != 0UL) {
i810_free_page(dev, dev_priv->hw_status_page);
@@ -329,7 +329,8 @@
for (i = 0; i < dma->buf_count; i++) {
drm_buf_t *buf = dma->buflist[ i ];
drm_i810_buf_priv_t *buf_priv = buf->dev_private;
- DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total);
+ DRM(ioremapfree)(buf_priv->kernel_virtual,
+ buf->total, dev);
}
}
return 0;
@@ -402,7 +403,7 @@
*buf_priv->in_use = I810_BUF_FREE;
buf_priv->kernel_virtual = DRM(ioremap)(buf->bus_address,
- buf->total);
+ buf->total, dev);
}
return 0;
}
@@ -458,7 +459,7 @@
dev_priv->ring.virtual_start = DRM(ioremap)(dev->agp->base +
init->ring_start,
- init->ring_size);
+ init->ring_size, dev);
if (dev_priv->ring.virtual_start == NULL) {
dev->dev_private = (void *) dev_priv;
diff -urN linux-2.4.13/drivers/char/drm/mga_dma.c linux-2.4.13-lia/drivers/char/drm/mga_dma.c
--- linux-2.4.13/drivers/char/drm/mga_dma.c Wed Aug 8 09:42:15 2001
+++ linux-2.4.13-lia/drivers/char/drm/mga_dma.c Thu Oct 4 00:21:40 2001
@@ -557,9 +557,9 @@
(drm_mga_sarea_t *)((u8 *)dev_priv->sarea->handle +
init->sarea_priv_offset);
- DRM_IOREMAP( dev_priv->warp );
- DRM_IOREMAP( dev_priv->primary );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->warp, dev );
+ DRM_IOREMAP( dev_priv->primary, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->warp->handle ||
!dev_priv->primary->handle ||
@@ -647,9 +647,9 @@
if ( dev->dev_private ) {
drm_mga_private_t *dev_priv = dev->dev_private;
- DRM_IOREMAPFREE( dev_priv->warp );
- DRM_IOREMAPFREE( dev_priv->primary );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->warp, dev );
+ DRM_IOREMAPFREE( dev_priv->primary, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
if ( dev_priv->head != NULL ) {
mga_freelist_cleanup( dev );
diff -urN linux-2.4.13/drivers/char/drm/r128_cce.c linux-2.4.13-lia/drivers/char/drm/r128_cce.c
--- linux-2.4.13/drivers/char/drm/r128_cce.c Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/r128_cce.c Thu Oct 4 00:21:52 2001
@@ -216,7 +216,22 @@
int i;
for ( i = 0 ; i < dev_priv->usec_timeout ; i++ ) {
+#ifndef CONFIG_AGP_I460
if ( GET_RING_HEAD( &dev_priv->ring ) == dev_priv->ring.tail ) {
+#else
+ /*
+ * XXX - this is (I think) a 460GX specific hack
+ *
+ * When doing texturing, ring.tail sometimes gets ahead of
+ * PM4_BUFFER_DL_WPTR by 2; consequently, the card processes
+ * its whole quota of instructions and *ring.head is still 2
+ * short of ring.tail. Work around this for now in lieu of
+ * a better solution.
+ */
+ if ( GET_RING_HEAD( &dev_priv->ring ) == dev_priv->ring.tail ||
+ ( dev_priv->ring.tail -
+ GET_RING_HEAD( &dev_priv->ring ) ) == 2 ) {
+#endif
int pm4stat = R128_READ( R128_PM4_STAT );
if ( ( (pm4stat & R128_PM4_FIFOCNT_MASK) > dev_priv->cce_fifo_size ) &&
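[Editor's aside: the 460GX workaround above widens the idle test, treating the engine as caught up when the head equals the tail or when it stalls exactly two entries short. A toy model of just that predicate, using unsigned arithmetic as the ring code does; this mirrors the patched condition only:]

```c
/* Idle predicate modeled on the patched r128 check: head has caught up
 * with tail, or shows the 460GX-specific two-entry shortfall. */
static int ring_idle(unsigned int head, unsigned int tail)
{
    return head == tail || tail - head == 2u;
}
```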
@@ -341,8 +356,27 @@
SET_RING_HEAD( &dev_priv->ring, 0 );
if ( !dev_priv->is_pci ) {
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * XXX - This is a 460GX specific hack
+ *
+ * We have to hack this right now. 460GX isn't claiming PCI
+ * writes from the card into the AGP aperture. Because of this,
+ * we have to get space outside of the aperture for RPTR_ADDR.
+ */
+ if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off;
+
+ alt_rh_off = __get_free_page(GFP_KERNEL | GFP_DMA);
+ atomic_inc(&virt_to_page(alt_rh_off)->count);
+ set_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+
+ dev_priv->ring.head = (__volatile__ u32 *) alt_rh_off;
+ SET_RING_HEAD( &dev_priv->ring, 0 );
+ }
+#endif
R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
- dev_priv->ring_rptr->offset );
+ __pa( dev_priv->ring.head ) );
} else {
drm_sg_mem_t *entry = dev->sg;
unsigned long tmp_ofs, page_ofs;
@@ -350,11 +384,20 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
entry->busaddr[page_ofs]);
DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
+#else
+ R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
+ page_to_bus(entry->pagelist[page_ofs]));
+
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ page_to_bus(entry->pagelist[page_ofs]),
+ entry->handle + tmp_ofs );
+#endif
}
/* Set watermark control */
@@ -550,9 +593,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cce_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cce_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cce_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -624,9 +667,9 @@
drm_r128_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cce_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cce_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
@@ -634,6 +677,21 @@
DRM_ERROR( "failed to cleanup PCI GART!\n" );
}
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ if( !dev_priv->is_pci && dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off = (unsigned long) dev_priv->ring.head;
+
+ atomic_dec(&virt_to_page(alt_rh_off)->count);
+ clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+ wake_up(&virt_to_page(alt_rh_off)->wait);
+ free_page(alt_rh_off);
+ }
+#endif
+
DRM(free)( dev->dev_private, sizeof(drm_r128_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN linux-2.4.13/drivers/char/drm/radeon_cp.c linux-2.4.13-lia/drivers/char/drm/radeon_cp.c
--- linux-2.4.13/drivers/char/drm/radeon_cp.c Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/radeon_cp.c Thu Oct 4 00:21:52 2001
@@ -612,8 +612,27 @@
dev_priv->ring.tail = cur_read_ptr;
if ( !dev_priv->is_pci ) {
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * XXX - This is a 460GX specific hack
+ *
+ * We have to hack this right now. 460GX isn't claiming PCI
+ * writes from the card into the AGP aperture. Because of this,
+ * we have to get space outside of the aperture for RPTR_ADDR.
+ */
+ if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off;
+
+ alt_rh_off = __get_free_page(GFP_KERNEL | GFP_DMA);
+ atomic_inc(&virt_to_page(alt_rh_off)->count);
+ set_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+
+ dev_priv->ring.head = (__volatile__ u32 *) alt_rh_off;
+ *dev_priv->ring.head = cur_read_ptr;
+ }
+#endif
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
- dev_priv->ring_rptr->offset );
+ __pa( dev_priv->ring.head ) );
} else {
drm_sg_mem_t *entry = dev->sg;
unsigned long tmp_ofs, page_ofs;
@@ -621,11 +640,20 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
entry->busaddr[page_ofs]);
DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
+#else
+ RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
+ page_to_bus(entry->pagelist[page_ofs]));
+
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ page_to_bus(entry->pagelist[page_ofs]),
+ entry->handle + tmp_ofs );
+#endif
}
/* Set ring buffer size */
@@ -836,9 +863,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cp_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cp_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cp_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -983,9 +1010,9 @@
drm_radeon_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cp_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cp_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
@@ -993,6 +1020,21 @@
DRM_ERROR( "failed to cleanup PCI GART!\n" );
}
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ if( !dev_priv->is_pci && dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off = (unsigned long) dev_priv->ring.head;
+
+ atomic_dec(&virt_to_page(alt_rh_off)->count);
+ clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+ wake_up(&virt_to_page(alt_rh_off)->wait);
+ free_page(alt_rh_off);
+ }
+#endif
+
DRM(free)( dev->dev_private, sizeof(drm_radeon_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN linux-2.4.13/drivers/char/drm-4.0/Config.in linux-2.4.13-lia/drivers/char/drm-4.0/Config.in
--- linux-2.4.13/drivers/char/drm-4.0/Config.in Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/Config.in Thu Oct 4 00:21:40 2001
@@ -0,0 +1,13 @@
+#
+# drm device configuration
+#
+# This driver provides support for the
+# Direct Rendering Infrastructure (DRI) in XFree86 4.x.
+#
+
+tristate ' 3dfx Banshee/Voodoo3+' CONFIG_DRM40_TDFX
+tristate ' 3dlabs GMX 2000' CONFIG_DRM40_GAMMA
+dep_tristate ' ATI Rage 128' CONFIG_DRM40_R128 $CONFIG_AGP
+dep_tristate ' ATI Radeon' CONFIG_DRM40_RADEON $CONFIG_AGP
+dep_tristate ' Intel I810' CONFIG_DRM40_I810 $CONFIG_AGP
+dep_tristate ' Matrox g200/g400' CONFIG_DRM40_MGA $CONFIG_AGP
diff -urN linux-2.4.13/drivers/char/drm-4.0/Makefile linux-2.4.13-lia/drivers/char/drm-4.0/Makefile
--- linux-2.4.13/drivers/char/drm-4.0/Makefile Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/Makefile Thu Oct 4 00:21:40 2001
@@ -0,0 +1,104 @@
+#
+# Makefile for the drm device driver. This driver provides support for
+# the Direct Rendering Infrastructure (DRI) in XFree86 4.x.
+#
+
+O_TARGET := drm.o
+
+export-objs := gamma_drv.o tdfx_drv.o r128_drv.o ffb_drv.o mga_drv.o \
+ i810_drv.o
+
+# lib-objs are included in every module so that radical changes to the
+# architecture of the DRM support library can be made at a later time.
+#
+# The downside is that each module is larger, and a system that uses
+# more than one module (i.e., a dual-head system) will use more memory
+# (but a system that uses exactly one module will use the same amount of
+# memory).
+#
+# The upside is that if the DRM support library ever becomes insufficient
+# for new families of cards, a new library can be implemented for those new
+# cards without impacting the drivers for the old cards. This is significant,
+# because testing architectural changes to old cards may be impossible, and
+# may delay the implementation of a better architecture. We've traded slight
+# memory waste (in the dual-head case) for greatly improved long-term
+# maintainability.
+#
+# NOTE: lib-objs will be eliminated in future versions, thereby
+# eliminating the need to compile the .o files into every module, but
+# for now we still need them.
+#
+
+lib-objs := init.o memory.o proc.o auth.o context.o drawable.o bufs.o
+lib-objs += lists.o lock.o ioctl.o fops.o vm.o dma.o ctxbitmap.o
+
+ifeq ($(CONFIG_AGP),y)
+ lib-objs += agpsupport.o
+else
+ ifeq ($(CONFIG_AGP),m)
+ lib-objs += agpsupport.o
+ endif
+endif
+
+list-multi := gamma.o tdfx.o r128.o ffb.o mga.o i810.o
+gamma-objs := gamma_drv.o gamma_dma.o
+tdfx-objs := tdfx_drv.o tdfx_context.o
+r128-objs := r128_drv.o r128_cce.o r128_context.o r128_bufs.o r128_state.o
+ffb-objs := ffb_drv.o ffb_context.o
+mga-objs := mga_drv.o mga_dma.o mga_context.o mga_bufs.o mga_state.o
+i810-objs := i810_drv.o i810_dma.o i810_context.o i810_bufs.o
+radeon-objs := radeon_drv.o radeon_cp.o radeon_context.o radeon_bufs.o radeon_state.o
+
+obj-$(CONFIG_DRM40_GAMMA) += gamma.o
+obj-$(CONFIG_DRM40_TDFX) += tdfx.o
+obj-$(CONFIG_DRM40_R128) += r128.o
+obj-$(CONFIG_DRM40_RADEON)+= radeon.o
+obj-$(CONFIG_DRM40_FFB) += ffb.o
+obj-$(CONFIG_DRM40_MGA) += mga.o
+obj-$(CONFIG_DRM40_I810) += i810.o
+
+
+# When linking into the kernel, link the library just once.
+# If making modules, we include the library into each module
+
+lib-objs-mod := $(patsubst %.o,%-mod.o,$(lib-objs))
+
+ifdef MAKING_MODULES
+ lib = drmlib-mod.a
+else
+ obj-y += drmlib.a
+endif
+
+include $(TOPDIR)/Rules.make
+
+$(patsubst %.o,%.c,$(lib-objs-mod)):
+ @ln -sf $(subst -mod,,$@) $@
+
+drmlib-mod.a: $(lib-objs-mod)
+ rm -f $@
+ $(AR) $(EXTRA_ARFLAGS) rcs $@ $(lib-objs-mod)
+
+drmlib.a: $(lib-objs)
+ rm -f $@
+ $(AR) $(EXTRA_ARFLAGS) rcs $@ $(lib-objs)
+
+gamma.o: $(gamma-objs) $(lib)
+ $(LD) -r -o $@ $(gamma-objs) $(lib)
+
+tdfx.o: $(tdfx-objs) $(lib)
+ $(LD) -r -o $@ $(tdfx-objs) $(lib)
+
+mga.o: $(mga-objs) $(lib)
+ $(LD) -r -o $@ $(mga-objs) $(lib)
+
+i810.o: $(i810-objs) $(lib)
+ $(LD) -r -o $@ $(i810-objs) $(lib)
+
+r128.o: $(r128-objs) $(lib)
+ $(LD) -r -o $@ $(r128-objs) $(lib)
+
+radeon.o: $(radeon-objs) $(lib)
+ $(LD) -r -o $@ $(radeon-objs) $(lib)
+
+ffb.o: $(ffb-objs) $(lib)
+ $(LD) -r -o $@ $(ffb-objs) $(lib)
diff -urN linux-2.4.13/drivers/char/drm-4.0/README.drm linux-2.4.13-lia/drivers/char/drm-4.0/README.drm
--- linux-2.4.13/drivers/char/drm-4.0/README.drm Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/README.drm Thu Oct 4 00:21:40 2001
@@ -0,0 +1,46 @@
+************************************************************
+* For the very latest on DRI development, please see: *
+* http://dri.sourceforge.net/ *
+************************************************************
+
+The Direct Rendering Manager (drm) is a device-independent kernel-level
+device driver that provides support for the XFree86 Direct Rendering
+Infrastructure (DRI).
+
+The DRM supports the Direct Rendering Infrastructure (DRI) in four major
+ways:
+
+ 1. The DRM provides synchronized access to the graphics hardware via
+ the use of an optimized two-tiered lock.
+
+ 2. The DRM enforces the DRI security policy for access to the graphics
+ hardware by only allowing authenticated X11 clients access to
+ restricted regions of memory.
+
+ 3. The DRM provides a generic DMA engine, complete with multiple
+ queues and the ability to detect the need for an OpenGL context
+ switch.
+
+ 4. The DRM is extensible via the use of small device-specific modules
+ that rely extensively on the API exported by the DRM module.
+
+
+Documentation on the DRI is available from:
+ http://precisioninsight.com/piinsights.html
+
+For specific information about kernel-level support, see:
+
+ The Direct Rendering Manager, Kernel Support for the Direct Rendering
+ Infrastructure
+ http://precisioninsight.com/dr/drm.html
+
+ Hardware Locking for the Direct Rendering Infrastructure
+ http://precisioninsight.com/dr/locking.html
+
+ A Security Analysis of the Direct Rendering Infrastructure
+ http://precisioninsight.com/dr/security.html
+
+************************************************************
+* For the very latest on DRI development, please see: *
+* http://dri.sourceforge.net/ *
+************************************************************
diff -urN linux-2.4.13/drivers/char/drm-4.0/agpsupport.c linux-2.4.13-lia/drivers/char/drm-4.0/agpsupport.c
--- linux-2.4.13/drivers/char/drm-4.0/agpsupport.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/agpsupport.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,349 @@
+/* agpsupport.c -- DRM support for AGP/GART backend -*- linux-c -*-
+ * Created: Mon Dec 13 09:56:45 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include <linux/config.h>
+#include <linux/module.h>
+#if LINUX_VERSION_CODE < 0x020400
+#include "agpsupport-pre24.h"
+#else
+#define DRM_AGP_GET (drm_agp_t *)inter_module_get("drm_agp")
+#define DRM_AGP_PUT inter_module_put("drm_agp")
+#endif
+
+static const drm_agp_t *drm_agp = NULL;
+
+int drm_agp_info(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ agp_kern_info *kern;
+ drm_agp_info_t info;
+
+ if (!dev->agp->acquired || !drm_agp->copy_info) return -EINVAL;
+
+ kern = &dev->agp->agp_info;
+ info.agp_version_major = kern->version.major;
+ info.agp_version_minor = kern->version.minor;
+ info.mode = kern->mode;
+ info.aperture_base = kern->aper_base;
+ info.aperture_size = kern->aper_size * 1024 * 1024;
+ info.memory_allowed = kern->max_memory << PAGE_SHIFT;
+ info.memory_used = kern->current_memory << PAGE_SHIFT;
+ info.id_vendor = kern->device->vendor;
+ info.id_device = kern->device->device;
+
+ if (copy_to_user((drm_agp_info_t *)arg, &info, sizeof(info)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_agp_acquire(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode;
+
+ if (dev->agp->acquired || !drm_agp->acquire) return -EINVAL;
+ if ((retcode = drm_agp->acquire())) return retcode;
+ dev->agp->acquired = 1;
+ return 0;
+}
+
+int drm_agp_release(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ if (!dev->agp->acquired || !drm_agp->release) return -EINVAL;
+ drm_agp->release();
+ dev->agp->acquired = 0;
+ return 0;
+
+}
+
+void _drm_agp_release(void)
+{
+ if (drm_agp->release) drm_agp->release();
+}
+
+int drm_agp_enable(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_mode_t mode;
+
+ if (!dev->agp->acquired || !drm_agp->enable) return -EINVAL;
+
+ if (copy_from_user(&mode, (drm_agp_mode_t *)arg, sizeof(mode)))
+ return -EFAULT;
+
+ dev->agp->mode = mode.mode;
+ drm_agp->enable(mode.mode);
+ dev->agp->base = dev->agp->agp_info.aper_base;
+ dev->agp->enabled = 1;
+ return 0;
+}
+
+int drm_agp_alloc(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_buffer_t request;
+ drm_agp_mem_t *entry;
+ agp_memory *memory;
+ unsigned long pages;
+ u32 type;
+ if (!dev->agp->acquired) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_buffer_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_alloc(sizeof(*entry), DRM_MEM_AGPLISTS)))
+ return -ENOMEM;
+
+ memset(entry, 0, sizeof(*entry));
+
+ pages = (request.size + PAGE_SIZE - 1) / PAGE_SIZE;
+ type = (u32) request.type;
+
+ if (!(memory = drm_alloc_agp(pages, type))) {
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return -ENOMEM;
+ }
+
+ entry->handle = (unsigned long)memory->memory;
+ entry->memory = memory;
+ entry->bound = 0;
+ entry->pages = pages;
+ entry->prev = NULL;
+ entry->next = dev->agp->memory;
+ if (dev->agp->memory) dev->agp->memory->prev = entry;
+ dev->agp->memory = entry;
+
+ request.handle = entry->handle;
+ request.physical = memory->physical;
+
+ if (copy_to_user((drm_agp_buffer_t *)arg, &request, sizeof(request))) {
+ dev->agp->memory = entry->next;
+ if (dev->agp->memory) dev->agp->memory->prev = NULL;
+ drm_free_agp(memory, pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return -EFAULT;
+ }
+ return 0;
+}
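[Editor's aside: drm_agp_alloc above pushes each new entry at the head of a doubly linked list and patches the old head's prev pointer. A minimal userspace model of that insertion, with simplified stand-in types rather than the DRM structures:]

```c
#include <stddef.h>

/* Simplified stand-in for drm_agp_mem_t. */
struct entry { struct entry *prev, *next; int id; };

/* Insert e at the head of the list and return the new head,
 * mirroring the pointer fix-ups in drm_agp_alloc. */
static struct entry *push_head(struct entry *head, struct entry *e)
{
    e->prev = NULL;
    e->next = head;
    if (head)
        head->prev = e;
    return e;
}
```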
+
+static drm_agp_mem_t *drm_agp_lookup_entry(drm_device_t *dev,
+ unsigned long handle)
+{
+ drm_agp_mem_t *entry;
+
+ for (entry = dev->agp->memory; entry; entry = entry->next) {
+ if (entry->handle == handle) return entry;
+ }
+ return NULL;
+}
+
+int drm_agp_unbind(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_binding_t request;
+ drm_agp_mem_t *entry;
+
+ if (!dev->agp->acquired) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_binding_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (!entry->bound) return -EINVAL;
+ return drm_unbind_agp(entry->memory);
+}
+
+int drm_agp_bind(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_binding_t request;
+ drm_agp_mem_t *entry;
+ int retcode;
+ int page;
+
+ if (!dev->agp->acquired || !drm_agp->bind_memory) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_binding_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (entry->bound) return -EINVAL;
+ page = (request.offset + PAGE_SIZE - 1) / PAGE_SIZE;
+ if ((retcode = drm_bind_agp(entry->memory, page))) return retcode;
+ entry->bound = dev->agp->base + (page << PAGE_SHIFT);
+ DRM_DEBUG("base = 0x%lx entry->bound = 0x%lx\n",
+ dev->agp->base, entry->bound);
+ return 0;
+}
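[Editor's aside: drm_agp_bind above converts a byte offset to whole pages with the usual round-up division, (offset + PAGE_SIZE - 1) / PAGE_SIZE. The same arithmetic in isolation, assuming 4KB pages for the example:]

```c
/* Round a byte offset up to a whole number of 4KB pages,
 * as the bind path does with PAGE_SIZE. */
#define EX_PAGE_SIZE 4096UL

static unsigned long bytes_to_pages(unsigned long offset)
{
    return (offset + EX_PAGE_SIZE - 1) / EX_PAGE_SIZE;
}
```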
+
+int drm_agp_free(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_buffer_t request;
+ drm_agp_mem_t *entry;
+
+ if (!dev->agp->acquired) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_buffer_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (entry->bound) drm_unbind_agp(entry->memory);
+
+ if (entry->prev) entry->prev->next = entry->next;
+ else dev->agp->memory = entry->next;
+ if (entry->next) entry->next->prev = entry->prev;
+ drm_free_agp(entry->memory, entry->pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return 0;
+}
+
+drm_agp_head_t *drm_agp_init(void)
+{
+ drm_agp_head_t *head = NULL;
+
+ drm_agp = DRM_AGP_GET;
+ if (drm_agp) {
+ if (!(head = drm_alloc(sizeof(*head), DRM_MEM_AGPLISTS)))
+ return NULL;
+ memset((void *)head, 0, sizeof(*head));
+ drm_agp->copy_info(&head->agp_info);
+ if (head->agp_info.chipset == NOT_SUPPORTED) {
+ drm_free(head, sizeof(*head), DRM_MEM_AGPLISTS);
+ return NULL;
+ }
+ head->memory = NULL;
+ switch (head->agp_info.chipset) {
+ case INTEL_GENERIC: head->chipset = "Intel"; break;
+ case INTEL_LX: head->chipset = "Intel 440LX"; break;
+ case INTEL_BX: head->chipset = "Intel 440BX"; break;
+ case INTEL_GX: head->chipset = "Intel 440GX"; break;
+ case INTEL_I810: head->chipset = "Intel i810"; break;
+
+#if LINUX_VERSION_CODE >= 0x020400
+ case INTEL_I840: head->chipset = "Intel i840"; break;
+#endif
+ case INTEL_460GX: head->chipset = "Intel 460GX"; break;
+
+ case VIA_GENERIC: head->chipset = "VIA"; break;
+ case VIA_VP3: head->chipset = "VIA VP3"; break;
+ case VIA_MVP3: head->chipset = "VIA MVP3"; break;
+
+#if LINUX_VERSION_CODE >= 0x020400
+ case VIA_MVP4: head->chipset = "VIA MVP4"; break;
+ case VIA_APOLLO_KX133: head->chipset = "VIA Apollo KX133";
+ break;
+ case VIA_APOLLO_KT133: head->chipset = "VIA Apollo KT133";
+ break;
+#endif
+
+ case VIA_APOLLO_PRO: head->chipset = "VIA Apollo Pro";
+ break;
+ case SIS_GENERIC: head->chipset = "SiS"; break;
+ case AMD_GENERIC: head->chipset = "AMD"; break;
+ case AMD_IRONGATE: head->chipset = "AMD Irongate"; break;
+ case ALI_GENERIC: head->chipset = "ALi"; break;
+ case ALI_M1541: head->chipset = "ALi M1541"; break;
+ case ALI_M1621: head->chipset = "ALi M1621"; break;
+ case ALI_M1631: head->chipset = "ALi M1631"; break;
+ case ALI_M1632: head->chipset = "ALi M1632"; break;
+ case ALI_M1641: head->chipset = "ALi M1641"; break;
+ case ALI_M1647: head->chipset = "ALi M1647"; break;
+ case ALI_M1651: head->chipset = "ALi M1651"; break;
+ case SVWRKS_GENERIC: head->chipset = "Serverworks Generic";
+ break;
+ case SVWRKS_HE: head->chipset = "Serverworks HE"; break;
+ case SVWRKS_LE: head->chipset = "Serverworks LE"; break;
+
+ default: head->chipset = "Unknown"; break;
+ }
+#if LINUX_VERSION_CODE <= 0x020408
+ head->cant_use_aperture = 0;
+ head->page_mask = ~(0xfff);
+#else
+ head->cant_use_aperture = head->agp_info.cant_use_aperture;
+ head->page_mask = head->agp_info.page_mask;
+#endif
+
+ DRM_INFO("AGP %d.%d on %s @ 0x%08lx %ZuMB\n",
+ head->agp_info.version.major,
+ head->agp_info.version.minor,
+ head->chipset,
+ head->agp_info.aper_base,
+ head->agp_info.aper_size);
+ }
+ return head;
+}
+
+void drm_agp_uninit(void)
+{
+ DRM_AGP_PUT;
+ drm_agp = NULL;
+}
+
+agp_memory *drm_agp_allocate_memory(size_t pages, u32 type)
+{
+ if (!drm_agp->allocate_memory) return NULL;
+ return drm_agp->allocate_memory(pages, type);
+}
+
+int drm_agp_free_memory(agp_memory *handle)
+{
+ if (!handle || !drm_agp->free_memory) return 0;
+ drm_agp->free_memory(handle);
+ return 1;
+}
+
+int drm_agp_bind_memory(agp_memory *handle, off_t start)
+{
+ if (!handle || !drm_agp->bind_memory) return -EINVAL;
+ return drm_agp->bind_memory(handle, start);
+}
+
+int drm_agp_unbind_memory(agp_memory *handle)
+{
+ if (!handle || !drm_agp->unbind_memory) return -EINVAL;
+ return drm_agp->unbind_memory(handle);
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/auth.c linux-2.4.13-lia/drivers/char/drm-4.0/auth.c
--- linux-2.4.13/drivers/char/drm-4.0/auth.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/auth.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,162 @@
+/* auth.c -- IOCTLs for authentication -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+static int drm_hash_magic(drm_magic_t magic)
+{
+ return magic & (DRM_HASH_SIZE-1);
+}
+
+static drm_file_t *drm_find_file(drm_device_t *dev, drm_magic_t magic)
+{
+ drm_file_t *retval = NULL;
+ drm_magic_entry_t *pt;
+ int hash = drm_hash_magic(magic);
+
+ down(&dev->struct_sem);
+ for (pt = dev->magiclist[hash].head; pt; pt = pt->next) {
+ if (pt->magic == magic) {
+ retval = pt->priv;
+ break;
+ }
+ }
+ up(&dev->struct_sem);
+ return retval;
+}
+
+int drm_add_magic(drm_device_t *dev, drm_file_t *priv, drm_magic_t magic)
+{
+ int hash;
+ drm_magic_entry_t *entry;
+
+ DRM_DEBUG("%d\n", magic);
+
+ hash = drm_hash_magic(magic);
+ entry = drm_alloc(sizeof(*entry), DRM_MEM_MAGIC);
+ if (!entry) return -ENOMEM;
+ entry->magic = magic;
+ entry->priv = priv;
+ entry->next = NULL;
+
+ down(&dev->struct_sem);
+ if (dev->magiclist[hash].tail) {
+ dev->magiclist[hash].tail->next = entry;
+ dev->magiclist[hash].tail = entry;
+ } else {
+ dev->magiclist[hash].head = entry;
+ dev->magiclist[hash].tail = entry;
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+int drm_remove_magic(drm_device_t *dev, drm_magic_t magic)
+{
+ drm_magic_entry_t *prev = NULL;
+ drm_magic_entry_t *pt;
+ int hash;
+
+ DRM_DEBUG("%d\n", magic);
+ hash = drm_hash_magic(magic);
+
+ down(&dev->struct_sem);
+ for (pt = dev->magiclist[hash].head; pt; prev = pt, pt = pt->next) {
+ if (pt->magic == magic) {
+ if (dev->magiclist[hash].head == pt) {
+ dev->magiclist[hash].head = pt->next;
+ }
+ if (dev->magiclist[hash].tail == pt) {
+ dev->magiclist[hash].tail = prev;
+ }
+ if (prev) {
+ prev->next = pt->next;
+ }
+ up(&dev->struct_sem);
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ return 0;
+ }
+ }
+ up(&dev->struct_sem);
+
+ /* Magic number not found. */
+ return -EINVAL;
+}
+
+int drm_getmagic(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ static drm_magic_t sequence = 0;
+ static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_auth_t auth;
+
+ /* Find unique magic */
+ if (priv->magic) {
+ auth.magic = priv->magic;
+ } else {
+ do {
+ spin_lock(&lock);
+ if (!sequence) ++sequence; /* reserve 0 */
+ auth.magic = sequence++;
+ spin_unlock(&lock);
+ } while (drm_find_file(dev, auth.magic));
+ priv->magic = auth.magic;
+ drm_add_magic(dev, priv, auth.magic);
+ }
+
+ DRM_DEBUG("%u\n", auth.magic);
+ if (copy_to_user((drm_auth_t *)arg, &auth, sizeof(auth)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_authmagic(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_auth_t auth;
+ drm_file_t *file;
+
+ if (copy_from_user(&auth, (drm_auth_t *)arg, sizeof(auth)))
+ return -EFAULT;
+ DRM_DEBUG("%u\n", auth.magic);
+ if ((file = drm_find_file(dev, auth.magic))) {
+ file->authenticated = 1;
+ drm_remove_magic(dev, auth.magic);
+ return 0;
+ }
+ return -EINVAL;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/bufs.c linux-2.4.13-lia/drivers/char/drm-4.0/bufs.c
--- linux-2.4.13/drivers/char/drm-4.0/bufs.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/bufs.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,543 @@
+/* bufs.c -- IOCTLs to manage buffers -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include <linux/config.h>
+#include "drmP.h"
+#include "linux/un.h"
+
+ /* Compute order. Can be made faster. */
+int drm_order(unsigned long size)
+{
+ int order;
+ unsigned long tmp;
+
+ for (order = 0, tmp = size; tmp >>= 1; ++order);
+ if (size & ~(1 << order)) ++order;
+ return order;
+}
+
+int drm_addmap(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map;
+
+ if (!(filp->f_mode & 3)) return -EACCES; /* Require read/write */
+
+ map = drm_alloc(sizeof(*map), DRM_MEM_MAPS);
+ if (!map) return -ENOMEM;
+ if (copy_from_user(map, (drm_map_t *)arg, sizeof(*map))) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EFAULT;
+ }
+
+ DRM_DEBUG("offset = 0x%08lx, size = 0x%08lx, type = %d\n",
+ map->offset, map->size, map->type);
+ if ((map->offset & (~PAGE_MASK)) || (map->size & (~PAGE_MASK))) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+ map->mtrr = -1;
+ map->handle = 0;
+
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#if !defined(__sparc__) && !defined(__ia64__)
+ if (map->offset + map->size < map->offset
+ || map->offset < virt_to_phys(high_memory)) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+#endif
+#ifdef CONFIG_MTRR
+ if (map->type == _DRM_FRAME_BUFFER
+ || (map->flags & _DRM_WRITE_COMBINING)) {
+ map->mtrr = mtrr_add(map->offset, map->size,
+ MTRR_TYPE_WRCOMB, 1);
+ }
+#endif
+ map->handle = drm_ioremap(map->offset, map->size, dev);
+ break;
+
+
+ case _DRM_SHM:
+ map->handle = (void *)drm_alloc_pages(drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ DRM_DEBUG("%ld %d %p\n", map->size, drm_order(map->size),
+ map->handle);
+ if (!map->handle) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -ENOMEM;
+ }
+ map->offset = (unsigned long)map->handle;
+ if (map->flags & _DRM_CONTAINS_LOCK) {
+ dev->lock.hw_lock = map->handle; /* Pointer to lock */
+ }
+ break;
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+ case _DRM_AGP:
+ map->offset = map->offset + dev->agp->base;
+ break;
+#endif
+ default:
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+
+ down(&dev->struct_sem);
+ if (dev->maplist) {
+ ++dev->map_count;
+ dev->maplist = drm_realloc(dev->maplist,
+ (dev->map_count-1)
+ * sizeof(*dev->maplist),
+ dev->map_count
+ * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ } else {
+ dev->map_count = 1;
+ dev->maplist = drm_alloc(dev->map_count*sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ }
+ dev->maplist[dev->map_count-1] = map;
+ up(&dev->struct_sem);
+
+ if (copy_to_user((drm_map_t *)arg, map, sizeof(*map)))
+ return -EFAULT;
+ if (map->type != _DRM_SHM) {
+ if (copy_to_user(&((drm_map_t *)arg)->handle,
+ &map->offset,
+ sizeof(map->offset)))
+ return -EFAULT;
+ }
+ return 0;
+}
+
+int drm_addbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int count;
+ int order;
+ int size;
+ int total;
+ int page_order;
+ drm_buf_entry_t *entry;
+ unsigned long page;
+ drm_buf_t *buf;
+ int alignment;
+ unsigned long offset;
+ int i;
+ int byte_count;
+ int page_count;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+
+ DRM_DEBUG("count = %d, size = %d (%d), order = %d, queue_count = %d\n",
+ request.count, request.size, size, order, dev->queue_count);
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if (count < 0 || count > 4096) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->seglist = drm_alloc(count * sizeof(*entry->seglist),
+ DRM_MEM_SEGS);
+ if (!entry->seglist) {
+ drm_free(entry->buflist,
+ count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->seglist, 0, count * sizeof(*entry->seglist));
+
+ dma->pagelist = drm_realloc(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ DRM_DEBUG("pagelist: %d entries\n",
+ dma->page_count + (count << page_order));
+
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ byte_count = 0;
+ page_count = 0;
+ while (entry->buf_count < count) {
+ if (!(page = drm_alloc_pages(page_order, DRM_MEM_DMA))) break;
+ entry->seglist[entry->seg_count++] = page;
+ for (i = 0; i < (1 << page_order); i++) {
+ DRM_DEBUG("page %d @ 0x%08lx\n",
+ dma->page_count + page_count,
+ page + PAGE_SIZE * i);
+ dma->pagelist[dma->page_count + page_count++]
+ = page + PAGE_SIZE * i;
+ }
+ for (offset = 0;
+ offset + size <= total && entry->buf_count < count;
+ offset += alignment, ++entry->buf_count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = (dma->byte_count + byte_count + offset);
+ buf->address = (void *)(page + offset);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ DRM_DEBUG("buffer %d @ %p\n",
+ entry->buf_count, buf->address);
+ }
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+ dma->seg_count += entry->seg_count;
+ dma->page_count += entry->seg_count << page_order;
+ dma->byte_count += PAGE_SIZE * (entry->seg_count << page_order);
+
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+ return 0;
+}
+
+int drm_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ int i;
+ int count;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_info_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) ++count;
+ }
+
+ DRM_DEBUG("count = %d\n", count);
+
+ if (request.count >= count) {
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) {
+ if (copy_to_user(&request.list[count].count,
+ &dma->bufs[i].buf_count,
+ sizeof(dma->bufs[0]
+ .buf_count)) ||
+ copy_to_user(&request.list[count].size,
+ &dma->bufs[i].buf_size,
+ sizeof(dma->bufs[0].buf_size)) ||
+ copy_to_user(&request.list[count].low_mark,
+ &dma->bufs[i]
+ .freelist.low_mark,
+ sizeof(dma->bufs[0]
+ .freelist.low_mark)) ||
+ copy_to_user(&request.list[count]
+ .high_mark,
+ &dma->bufs[i]
+ .freelist.high_mark,
+ sizeof(dma->bufs[0]
+ .freelist.high_mark)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d %d %d %d %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].buf_size,
+ dma->bufs[i].freelist.low_mark,
+ dma->bufs[i].freelist.high_mark);
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if (copy_to_user((drm_buf_info_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int drm_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d, %d, %d\n",
+ request.size, request.low_mark, request.high_mark);
+ order = drm_order(request.size);
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+ return -EINVAL;
+ if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+int drm_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_free_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", request.count);
+ for (i = 0; i < request.count; i++) {
+ if (copy_from_user(&idx,
+ &request.list[i],
+ sizeof(idx)))
+ return -EFAULT;
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ idx, dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d freeing buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ drm_free_buffer(dev, buf);
+ }
+
+ return 0;
+}
+
+int drm_mapbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ int retcode = 0;
+ const int zero = 0;
+ unsigned long virtual;
+ unsigned long address;
+ drm_buf_map_t request;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ DRM_DEBUG("\n");
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_map_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if (request.count >= dma->buf_count) {
+ down_write(&current->mm->mmap_sem);
+ virtual = do_mmap(filp, 0, dma->byte_count,
+ PROT_READ|PROT_WRITE, MAP_SHARED, 0);
+ up_write(&current->mm->mmap_sem);
+ if (virtual > -1024UL) {
+ /* Real error */
+ retcode = (signed long)virtual;
+ goto done;
+ }
+ request.virtual = (void *)virtual;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ if (copy_to_user(&request.list[i].idx,
+ &dma->buflist[i]->idx,
+ sizeof(request.list[0].idx))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].total,
+ &dma->buflist[i]->total,
+ sizeof(request.list[0].total))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].used,
+ &zero,
+ sizeof(zero))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ address = virtual + dma->buflist[i]->offset;
+ if (copy_to_user(&request.list[i].address,
+ &address,
+ sizeof(address))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ }
+ }
+done:
+ request.count = dma->buf_count;
+ DRM_DEBUG("%d buffers, retcode = %d\n", request.count, retcode);
+
+ if (copy_to_user((drm_buf_map_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return retcode;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/context.c linux-2.4.13-lia/drivers/char/drm-4.0/context.c
--- linux-2.4.13/drivers/char/drm-4.0/context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,321 @@
+/* context.c -- IOCTLs for contexts and DMA queues -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+static int drm_init_queue(drm_device_t *dev, drm_queue_t *q, drm_ctx_t *ctx)
+{
+ DRM_DEBUG("\n");
+
+ if (atomic_read(&q->use_count) != 1
+ || atomic_read(&q->finalization)
+ || atomic_read(&q->block_count)) {
+ DRM_ERROR("New queue is already in use: u%d f%d b%d\n",
+ atomic_read(&q->use_count),
+ atomic_read(&q->finalization),
+ atomic_read(&q->block_count));
+ }
+
+ atomic_set(&q->finalization, 0);
+ atomic_set(&q->block_count, 0);
+ atomic_set(&q->block_read, 0);
+ atomic_set(&q->block_write, 0);
+ atomic_set(&q->total_queued, 0);
+ atomic_set(&q->total_flushed, 0);
+ atomic_set(&q->total_locks, 0);
+
+ init_waitqueue_head(&q->write_queue);
+ init_waitqueue_head(&q->read_queue);
+ init_waitqueue_head(&q->flush_queue);
+
+ q->flags = ctx->flags;
+
+ drm_waitlist_create(&q->waitlist, dev->dma->buf_count);
+
+ return 0;
+}
+
+
+/* drm_alloc_queue:
+PRE: 1) dev->queuelist[0..dev->queue_count] is allocated and will not
+ disappear (so all deallocation must be done after IOCTLs are off)
+ 2) dev->queue_count < dev->queue_slots
+ 3) dev->queuelist[i].use_count = 0 and
+ dev->queuelist[i].finalization = 0 if i not in use
+POST: 1) dev->queuelist[i].use_count = 1
+ 2) dev->queue_count < dev->queue_slots */
+
+static int drm_alloc_queue(drm_device_t *dev)
+{
+ int i;
+ drm_queue_t *queue;
+ int oldslots;
+ int newslots;
+ /* Check for a free queue */
+ for (i = 0; i < dev->queue_count; i++) {
+ atomic_inc(&dev->queuelist[i]->use_count);
+ if (atomic_read(&dev->queuelist[i]->use_count) == 1
+ && !atomic_read(&dev->queuelist[i]->finalization)) {
+ DRM_DEBUG("%d (free)\n", i);
+ return i;
+ }
+ atomic_dec(&dev->queuelist[i]->use_count);
+ }
+ /* Allocate a new queue */
+
+ queue = drm_alloc(sizeof(*queue), DRM_MEM_QUEUES);
+ if (queue == NULL)
+ return -ENOMEM;
+
+ memset(queue, 0, sizeof(*queue));
+ down(&dev->struct_sem);
+ atomic_set(&queue->use_count, 1);
+
+ ++dev->queue_count;
+ if (dev->queue_count >= dev->queue_slots) {
+ oldslots = dev->queue_slots * sizeof(*dev->queuelist);
+ if (!dev->queue_slots) dev->queue_slots = 1;
+ dev->queue_slots *= 2;
+ newslots = dev->queue_slots * sizeof(*dev->queuelist);
+
+ dev->queuelist = drm_realloc(dev->queuelist,
+ oldslots,
+ newslots,
+ DRM_MEM_QUEUES);
+ if (!dev->queuelist) {
+ up(&dev->struct_sem);
+ DRM_DEBUG("out of memory\n");
+ return -ENOMEM;
+ }
+ }
+ dev->queuelist[dev->queue_count-1] = queue;
+
+ up(&dev->struct_sem);
+ DRM_DEBUG("%d (new)\n", dev->queue_count - 1);
+ return dev->queue_count - 1;
+}
+
+int drm_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+
+int drm_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ if ((ctx.handle = drm_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+ /* Init kernel's context and get a new one. */
+ drm_init_queue(dev, dev->queuelist[ctx.handle], &ctx);
+ ctx.handle = drm_alloc_queue(dev);
+ }
+ drm_init_queue(dev, dev->queuelist[ctx.handle], &ctx);
+ DRM_DEBUG("%d\n", ctx.handle);
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ drm_queue_t *q;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ if (ctx.handle < 0 || ctx.handle >= dev->queue_count) return -EINVAL;
+ q = dev->queuelist[ctx.handle];
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ /* No longer in use */
+ atomic_dec(&q->use_count);
+ return -EINVAL;
+ }
+
+ if (DRM_BUFCOUNT(&q->waitlist)) {
+ atomic_dec(&q->use_count);
+ return -EBUSY;
+ }
+
+ q->flags = ctx.flags;
+
+ atomic_dec(&q->use_count);
+ return 0;
+}
+
+int drm_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ drm_queue_t *q;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ if (ctx.handle >= dev->queue_count) return -EINVAL;
+ q = dev->queuelist[ctx.handle];
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ /* No longer in use */
+ atomic_dec(&q->use_count);
+ return -EINVAL;
+ }
+
+ ctx.flags = q->flags;
+ atomic_dec(&q->use_count);
+
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int drm_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return drm_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int drm_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ drm_context_switch_complete(dev, ctx.handle);
+
+ return 0;
+}
+
+int drm_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ drm_queue_t *q;
+ drm_buf_t *buf;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ if (ctx.handle >= dev->queue_count) return -EINVAL;
+ q = dev->queuelist[ctx.handle];
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ /* No longer in use */
+ atomic_dec(&q->use_count);
+ return -EINVAL;
+ }
+
+ atomic_inc(&q->finalization); /* Mark queue in finalization state */
+ atomic_sub(2, &q->use_count); /* Mark queue as unused (pending
+ finalization) */
+
+ while (test_and_set_bit(0, &dev->interrupt_flag)) {
+ schedule();
+ if (signal_pending(current)) {
+ clear_bit(0, &dev->interrupt_flag);
+ return -EINTR;
+ }
+ }
+ /* Remove queued buffers */
+ while ((buf = drm_waitlist_get(&q->waitlist))) {
+ drm_free_buffer(dev, buf);
+ }
+ clear_bit(0, &dev->interrupt_flag);
+
+ /* Wakeup blocked processes */
+ wake_up_interruptible(&q->read_queue);
+ wake_up_interruptible(&q->write_queue);
+ wake_up_interruptible(&q->flush_queue);
+
+ /* Finalization over. Queue is made
+ available when both use_count and
+ finalization become 0, which won't
+ happen until all the waiting processes
+ stop waiting. */
+ atomic_dec(&q->finalization);
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ctxbitmap.c linux-2.4.13-lia/drivers/char/drm-4.0/ctxbitmap.c
--- linux-2.4.13/drivers/char/drm-4.0/ctxbitmap.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ctxbitmap.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,85 @@
+/* ctxbitmap.c -- Context bitmap management -*- linux-c -*-
+ * Created: Thu Jan 6 03:56:42 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+void drm_ctxbitmap_free(drm_device_t *dev, int ctx_handle)
+{
+ if (ctx_handle < 0) goto failed;
+
+ if (ctx_handle < DRM_MAX_CTXBITMAP) {
+ clear_bit(ctx_handle, dev->ctx_bitmap);
+ return;
+ }
+failed:
+ DRM_ERROR("Attempt to free invalid context handle: %d\n",
+ ctx_handle);
+ return;
+}
+
+int drm_ctxbitmap_next(drm_device_t *dev)
+{
+ int bit;
+
+ bit = find_first_zero_bit(dev->ctx_bitmap, DRM_MAX_CTXBITMAP);
+ if (bit < DRM_MAX_CTXBITMAP) {
+ set_bit(bit, dev->ctx_bitmap);
+ DRM_DEBUG("drm_ctxbitmap_next bit : %d\n", bit);
+ return bit;
+ }
+ return -1;
+}
+
+int drm_ctxbitmap_init(drm_device_t *dev)
+{
+ int i;
+ int temp;
+
+ dev->ctx_bitmap = (unsigned long *) drm_alloc(PAGE_SIZE,
+ DRM_MEM_CTXBITMAP);
+ if(dev->ctx_bitmap == NULL) {
+ return -ENOMEM;
+ }
+ memset((void *) dev->ctx_bitmap, 0, PAGE_SIZE);
+ for(i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ temp = drm_ctxbitmap_next(dev);
+ DRM_DEBUG("drm_ctxbitmap_init : %d\n", temp);
+ }
+
+ return 0;
+}
+
+void drm_ctxbitmap_cleanup(drm_device_t *dev)
+{
+ drm_free((void *)dev->ctx_bitmap, PAGE_SIZE,
+ DRM_MEM_CTXBITMAP);
+}
+
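The ctxbitmap.c hunk above hands out context handles by finding the first zero bit in a page-sized bitmap, setting it, and clearing it again on free. A minimal user-space sketch of that scheme (plain C loops standing in for the kernel's `find_first_zero_bit`/`set_bit`/`clear_bit`; `CTX_MAX` and the function names are invented for illustration):

```c
#include <limits.h>
#include <assert.h>

#define CTX_MAX 64 /* stand-in for DRM_MAX_CTXBITMAP */
#define BITS_PER_WORD (CHAR_BIT * sizeof(unsigned long))

static unsigned long ctx_bitmap[CTX_MAX / BITS_PER_WORD];

static int ctx_next(void)
{
	int bit;

	for (bit = 0; bit < CTX_MAX; bit++) {
		unsigned long *w = &ctx_bitmap[bit / BITS_PER_WORD];
		unsigned long mask = 1UL << (bit % BITS_PER_WORD);

		if (!(*w & mask)) {	/* first zero bit = lowest free handle */
			*w |= mask;
			return bit;
		}
	}
	return -1;			/* bitmap exhausted, like drm_ctxbitmap_next */
}

static void ctx_free(int handle)
{
	if (handle < 0 || handle >= CTX_MAX)
		return;			/* invalid handle, like the DRM_ERROR path */
	ctx_bitmap[handle / BITS_PER_WORD] &=
		~(1UL << (handle % BITS_PER_WORD));
}
```

Because allocation always takes the lowest free bit, freed handles are reused before new ones are issued, which is why drm_ctxbitmap_init can reserve DRM_RESERVED_CONTEXTS simply by calling the allocator that many times.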
diff -urN linux-2.4.13/drivers/char/drm-4.0/dma.c linux-2.4.13-lia/drivers/char/drm-4.0/dma.c
--- linux-2.4.13/drivers/char/drm-4.0/dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,546 @@
+/* dma.c -- DMA IOCTL and function support -*- linux-c -*-
+ * Created: Fri Mar 19 14:30:16 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+void drm_dma_setup(drm_device_t *dev)
+{
+ int i;
+
+ if (!(dev->dma = drm_alloc(sizeof(*dev->dma), DRM_MEM_DRIVER))) {
+ printk(KERN_ERR "drm_dma_setup: can't drm_alloc dev->dma");
+ return;
+ }
+ memset(dev->dma, 0, sizeof(*dev->dma));
+ for (i = 0; i <= DRM_MAX_ORDER; i++)
+ memset(&dev->dma->bufs[i], 0, sizeof(dev->dma->bufs[0]));
+}
+
+void drm_dma_takedown(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i, j;
+
+ if (!dma) return;
+
+ /* Clear dma buffers */
+ for (i = 0; i <= DRM_MAX_ORDER; i++) {
+ if (dma->bufs[i].seg_count) {
+ DRM_DEBUG("order %d: buf_count = %d,"
+ " seg_count = %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].seg_count);
+ for (j = 0; j < dma->bufs[i].seg_count; j++) {
+ drm_free_pages(dma->bufs[i].seglist[j],
+ dma->bufs[i].page_order,
+ DRM_MEM_DMA);
+ }
+ drm_free(dma->bufs[i].seglist,
+ dma->bufs[i].seg_count
+ * sizeof(*dma->bufs[0].seglist),
+ DRM_MEM_SEGS);
+ }
+ if(dma->bufs[i].buf_count) {
+ for(j = 0; j < dma->bufs[i].buf_count; j++) {
+ if(dma->bufs[i].buflist[j].dev_private) {
+ drm_free(dma->bufs[i].buflist[j].dev_private,
+ dma->bufs[i].buflist[j].dev_priv_size,
+ DRM_MEM_BUFS);
+ }
+ }
+ drm_free(dma->bufs[i].buflist,
+ dma->bufs[i].buf_count *
+ sizeof(*dma->bufs[0].buflist),
+ DRM_MEM_BUFS);
+ drm_freelist_destroy(&dma->bufs[i].freelist);
+ }
+ }
+
+ if (dma->buflist) {
+ drm_free(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ }
+
+ if (dma->pagelist) {
+ drm_free(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ }
+ drm_free(dev->dma, sizeof(*dev->dma), DRM_MEM_DRIVER);
+ dev->dma = NULL;
+}
+
+#if DRM_DMA_HISTOGRAM
+/* This is slow, but is useful for debugging. */
+int drm_histogram_slot(unsigned long count)
+{
+ int value = DRM_DMA_HISTOGRAM_INITIAL;
+ int slot;
+
+ for (slot = 0;
+ slot < DRM_DMA_HISTOGRAM_SLOTS;
+ ++slot, value = DRM_DMA_HISTOGRAM_NEXT(value)) {
+ if (count < value) return slot;
+ }
+ return DRM_DMA_HISTOGRAM_SLOTS - 1;
+}
+
+void drm_histogram_compute(drm_device_t *dev, drm_buf_t *buf)
+{
+ cycles_t queued_to_dispatched;
+ cycles_t dispatched_to_completed;
+ cycles_t completed_to_freed;
+ int q2d, d2c, c2f, q2c, q2f;
+
+ if (buf->time_queued) {
+ queued_to_dispatched = (buf->time_dispatched
+ - buf->time_queued);
+ dispatched_to_completed = (buf->time_completed
+ - buf->time_dispatched);
+ completed_to_freed = (buf->time_freed
+ - buf->time_completed);
+
+ q2d = drm_histogram_slot(queued_to_dispatched);
+ d2c = drm_histogram_slot(dispatched_to_completed);
+ c2f = drm_histogram_slot(completed_to_freed);
+
+ q2c = drm_histogram_slot(queued_to_dispatched
+ + dispatched_to_completed);
+ q2f = drm_histogram_slot(queued_to_dispatched
+ + dispatched_to_completed
+ + completed_to_freed);
+
+ atomic_inc(&dev->histo.total);
+ atomic_inc(&dev->histo.queued_to_dispatched[q2d]);
+ atomic_inc(&dev->histo.dispatched_to_completed[d2c]);
+ atomic_inc(&dev->histo.completed_to_freed[c2f]);
+
+ atomic_inc(&dev->histo.queued_to_completed[q2c]);
+ atomic_inc(&dev->histo.queued_to_freed[q2f]);
+
+ }
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+}
+#endif
+
+void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ if (!buf) return;
+
+ buf->waiting = 0;
+ buf->pending = 0;
+ buf->pid = 0;
+ buf->used = 0;
+#if DRM_DMA_HISTOGRAM
+ buf->time_completed = get_cycles();
+#endif
+ if (waitqueue_active(&buf->dma_wait)) {
+ wake_up_interruptible(&buf->dma_wait);
+ } else {
+ /* If processes are waiting, the last one
+ to wake will put the buffer on the free
+ list. If no processes are waiting, we
+ put the buffer on the freelist here. */
+ drm_freelist_put(dev, &dma->bufs[buf->order].freelist, buf);
+ }
+}
+
+void drm_reclaim_buffers(drm_device_t *dev, pid_t pid)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+
+ if (!dma) return;
+ for (i = 0; i < dma->buf_count; i++) {
+ if (dma->buflist[i]->pid == pid) {
+ switch (dma->buflist[i]->list) {
+ case DRM_LIST_NONE:
+ drm_free_buffer(dev, dma->buflist[i]);
+ break;
+ case DRM_LIST_WAIT:
+ dma->buflist[i]->list = DRM_LIST_RECLAIM;
+ break;
+ default:
+ /* Buffer already on hardware. */
+ break;
+ }
+ }
+ }
+}
+
+int drm_context_switch(drm_device_t *dev, int old, int new)
+{
+ char buf[64];
+ drm_queue_t *q;
+
+ atomic_inc(&dev->total_ctx);
+
+ if (test_and_set_bit(0, &dev->context_flag)) {
+ DRM_ERROR("Reentering -- FIXME\n");
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new >= dev->queue_count) {
+ clear_bit(0, &dev->context_flag);
+ return -EINVAL;
+ }
+
+ if (new == dev->last_context) {
+ clear_bit(0, &dev->context_flag);
+ return 0;
+ }
+
+ q = dev->queuelist[new];
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ atomic_dec(&q->use_count);
+ clear_bit(0, &dev->context_flag);
+ return -EINVAL;
+ }
+
+ if (drm_flags & DRM_FLAG_NOCTX) {
+ drm_context_switch_complete(dev, new);
+ } else {
+ sprintf(buf, "C %d %d\n", old, new);
+ drm_write_string(dev, buf);
+ }
+
+ atomic_dec(&q->use_count);
+
+ return 0;
+}
+
+int drm_context_switch_complete(drm_device_t *dev, int new)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
+ dev->last_switch = jiffies;
+
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Lock isn't held after context switch\n");
+ }
+
+ if (!dma || !(dma->next_buffer && dma->next_buffer->while_locked)) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("Cannot free lock\n");
+ }
+ }
+
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+ - dev->ctx_start)]);
+
+#endif
+ clear_bit(0, &dev->context_flag);
+ wake_up_interruptible(&dev->context_wait);
+
+ return 0;
+}
+
+void drm_clear_next_buffer(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ dma->next_buffer = NULL;
+ if (dma->next_queue && !DRM_BUFCOUNT(&dma->next_queue->waitlist)) {
+ wake_up_interruptible(&dma->next_queue->flush_queue);
+ }
+ dma->next_queue = NULL;
+}
+
+
+int drm_select_queue(drm_device_t *dev, void (*wrapper)(unsigned long))
+{
+ int i;
+ int candidate = -1;
+ int j = jiffies;
+
+ if (!dev) {
+ DRM_ERROR("No device\n");
+ return -1;
+ }
+ if (!dev->queuelist || !dev->queuelist[DRM_KERNEL_CONTEXT]) {
+ /* This only happens between the time the
+ interrupt is initialized and the time
+ the queues are initialized. */
+ return -1;
+ }
+
+ /* Doing "while locked" DMA? */
+ if (DRM_WAITCOUNT(dev, DRM_KERNEL_CONTEXT)) {
+ return DRM_KERNEL_CONTEXT;
+ }
+
+ /* If there are buffers on the last_context
+ queue, and we have not been executing
+ this context very long, continue to
+ execute this context. */
+ if (dev->last_switch <= j
+ && dev->last_switch + DRM_TIME_SLICE > j
+ && DRM_WAITCOUNT(dev, dev->last_context)) {
+ return dev->last_context;
+ }
+
+ /* Otherwise, find a candidate */
+ for (i = dev->last_checked + 1; i < dev->queue_count; i++) {
+ if (DRM_WAITCOUNT(dev, i)) {
+ candidate = dev->last_checked = i;
+ break;
+ }
+ }
+
+ if (candidate < 0) {
+ for (i = 0; i < dev->queue_count; i++) {
+ if (DRM_WAITCOUNT(dev, i)) {
+ candidate = dev->last_checked = i;
+ break;
+ }
+ }
+ }
+
+ if (wrapper
+ && candidate >= 0
+ && candidate != dev->last_context
+ && dev->last_switch <= j
+ && dev->last_switch + DRM_TIME_SLICE > j) {
+ if (dev->timer.expires != dev->last_switch + DRM_TIME_SLICE) {
+ del_timer(&dev->timer);
+ dev->timer.function = wrapper;
+ dev->timer.data = (unsigned long)dev;
+ dev->timer.expires = dev->last_switch+DRM_TIME_SLICE;
+ add_timer(&dev->timer);
+ }
+ return -1;
+ }
+
+ return candidate;
+}
+
+
+int drm_dma_enqueue(drm_device_t *dev, drm_dma_t *d)
+{
+ int i;
+ drm_queue_t *q;
+ drm_buf_t *buf;
+ int idx;
+ int while_locked = 0;
+ drm_device_dma_t *dma = dev->dma;
+ DECLARE_WAITQUEUE(entry, current);
+
+ DRM_DEBUG("%d\n", d->send_count);
+
+ if (d->flags & _DRM_DMA_WHILE_LOCKED) {
+ int context = dev->lock.hw_lock->lock;
+
+ if (!_DRM_LOCK_IS_HELD(context)) {
+ DRM_ERROR("No lock held during \"while locked\""
+ " request\n");
+ return -EINVAL;
+ }
+ if (d->context != _DRM_LOCKING_CONTEXT(context)
+ && _DRM_LOCKING_CONTEXT(context) != DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Lock held by %d while %d makes"
+ " \"while locked\" request\n",
+ _DRM_LOCKING_CONTEXT(context),
+ d->context);
+ return -EINVAL;
+ }
+ q = dev->queuelist[DRM_KERNEL_CONTEXT];
+ while_locked = 1;
+ } else {
+ q = dev->queuelist[d->context];
+ }
+
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->block_write)) {
+ add_wait_queue(&q->write_queue, &entry);
+ atomic_inc(&q->block_count);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!atomic_read(&q->block_write)) break;
+ schedule();
+ if (signal_pending(current)) {
+ atomic_dec(&q->use_count);
+ remove_wait_queue(&q->write_queue, &entry);
+ return -EINTR;
+ }
+ }
+ atomic_dec(&q->block_count);
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&q->write_queue, &entry);
+ }
+
+ for (i = 0; i < d->send_count; i++) {
+ idx = d->send_indices[i];
+ if (idx < 0 || idx >= dma->buf_count) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Index %d (of %d max)\n",
+ d->send_indices[i], dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[ idx ];
+ if (buf->pid != current->pid) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Process %d using buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ if (buf->list != DRM_LIST_NONE) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Process %d using buffer %d on list %d\n",
+ current->pid, buf->idx, buf->list);
+ }
+ buf->used = d->send_sizes[i];
+ buf->while_locked = while_locked;
+ buf->context = d->context;
+ if (!buf->used) {
+ DRM_ERROR("Queueing 0 length buffer\n");
+ }
+ if (buf->pending) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Queueing pending buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ return -EINVAL;
+ }
+ if (buf->waiting) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Queueing waiting buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ return -EINVAL;
+ }
+ buf->waiting = 1;
+ if (atomic_read(&q->use_count) == 1
+ || atomic_read(&q->finalization)) {
+ drm_free_buffer(dev, buf);
+ } else {
+ drm_waitlist_put(&q->waitlist, buf);
+ atomic_inc(&q->total_queued);
+ }
+ }
+ atomic_dec(&q->use_count);
+
+ return 0;
+}
+
+static int drm_dma_get_buffers_of_order(drm_device_t *dev, drm_dma_t *d,
+ int order)
+{
+ int i;
+ drm_buf_t *buf;
+ drm_device_dma_t *dma = dev->dma;
+
+ for (i = d->granted_count; i < d->request_count; i++) {
+ buf = drm_freelist_get(&dma->bufs[order].freelist,
+ d->flags & _DRM_DMA_WAIT);
+ if (!buf) break;
+ if (buf->pending || buf->waiting) {
+ DRM_ERROR("Free buffer %d in use by %d (w%d, p%d)\n",
+ buf->idx,
+ buf->pid,
+ buf->waiting,
+ buf->pending);
+ }
+ buf->pid = current->pid;
+ if (copy_to_user(&d->request_indices[i],
+ &buf->idx,
+ sizeof(buf->idx)))
+ return -EFAULT;
+
+ if (copy_to_user(&d->request_sizes[i],
+ &buf->total,
+ sizeof(buf->total)))
+ return -EFAULT;
+
+ ++d->granted_count;
+ }
+ return 0;
+}
+
+
+int drm_dma_get_buffers(drm_device_t *dev, drm_dma_t *dma)
+{
+ int order;
+ int retcode = 0;
+ int tmp_order;
+
+ order = drm_order(dma->request_size);
+
+ dma->granted_count = 0;
+ retcode = drm_dma_get_buffers_of_order(dev, dma, order);
+
+ if (dma->granted_count < dma->request_count
+ && (dma->flags & _DRM_DMA_SMALLER_OK)) {
+ for (tmp_order = order - 1;
+ !retcode
+ && dma->granted_count < dma->request_count
+ && tmp_order >= DRM_MIN_ORDER;
+ --tmp_order) {
+
+ retcode = drm_dma_get_buffers_of_order(dev, dma,
+ tmp_order);
+ }
+ }
+
+ if (dma->granted_count < dma->request_count
+ && (dma->flags & _DRM_DMA_LARGER_OK)) {
+ for (tmp_order = order + 1;
+ !retcode
+ && dma->granted_count < dma->request_count
+ && tmp_order <= DRM_MAX_ORDER;
+ ++tmp_order) {
+
+ retcode = drm_dma_get_buffers_of_order(dev, dma,
+ tmp_order);
+ }
+ }
+ return 0;
+}
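drm_dma_get_buffers above rounds the requested size up to a power-of-two "order" (drm_order), tries the freelist for that order first, then falls back to smaller orders under _DRM_DMA_SMALLER_OK and larger ones under _DRM_DMA_LARGER_OK. A self-contained sketch of that fallback, with an `avail[]` array of per-order free counts standing in for the kernel freelists (all names here are invented, not the kernel API):

```c
#include <assert.h>

#define MIN_ORDER 5	/* 2^5  = 32 bytes, like DRM_MIN_ORDER */
#define MAX_ORDER 22	/* 2^22 = 4 MB,     like DRM_MAX_ORDER */

/* Smallest order whose power-of-two size covers the request. */
static int order_for(unsigned long size)
{
	int order = MIN_ORDER;

	while (order < MAX_ORDER && (1UL << order) < size)
		order++;
	return order;
}

/* Grant up to `want` buffers: exact order first, then smaller orders,
   then larger ones, mirroring the two fallback loops in the patch. */
static int grant(int avail[], unsigned long size, int want,
		 int smaller_ok, int larger_ok)
{
	int order = order_for(size), granted = 0, o;

	for (; granted < want && avail[order]; granted++)
		avail[order]--;
	if (smaller_ok)
		for (o = order - 1; o >= MIN_ORDER && granted < want; o--)
			for (; granted < want && avail[o]; granted++)
				avail[o]--;
	if (larger_ok)
		for (o = order + 1; o <= MAX_ORDER && granted < want; o++)
			for (; granted < want && avail[o]; granted++)
				avail[o]--;
	return granted;
}
```

Like the real code, a partial grant is not an error: the caller sees how many buffers it actually received in granted_count and decides whether to retry.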
diff -urN linux-2.4.13/drivers/char/drm-4.0/drawable.c linux-2.4.13-lia/drivers/char/drm-4.0/drawable.c
--- linux-2.4.13/drivers/char/drm-4.0/drawable.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drawable.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,51 @@
+/* drawable.c -- IOCTLs for drawables -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_adddraw(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_draw_t draw;
+
+ draw.handle = 0; /* NOOP */
+ DRM_DEBUG("%d\n", draw.handle);
+ if (copy_to_user((drm_draw_t *)arg, &draw, sizeof(draw)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_rmdraw(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ return 0; /* NOOP */
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/drm.h linux-2.4.13-lia/drivers/char/drm-4.0/drm.h
--- linux-2.4.13/drivers/char/drm-4.0/drm.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drm.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,414 @@
+/* drm.h -- Header for Direct Rendering Manager -*- linux-c -*-
+ * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ * Acknowledgements:
+ * Dec 1999, Richard Henderson <rth@twiddle.net>, move to generic cmpxchg.
+ *
+ */
+
+#ifndef _DRM_H_
+#define _DRM_H_
+
+#include <linux/config.h>
+#if defined(__linux__)
+#include <asm/ioctl.h> /* For _IO* macros */
+#define DRM_IOCTL_NR(n) _IOC_NR(n)
+#elif defined(__FreeBSD__)
+#include <sys/ioccom.h>
+#define DRM_IOCTL_NR(n) ((n) & 0xff)
+#endif
+
+#define DRM_PROC_DEVICES "/proc/devices"
+#define DRM_PROC_MISC "/proc/misc"
+#define DRM_PROC_DRM "/proc/drm"
+#define DRM_DEV_DRM "/dev/drm"
+#define DRM_DEV_MODE (S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP)
+#define DRM_DEV_UID 0
+#define DRM_DEV_GID 0
+
+
+#define DRM_NAME "drm" /* Name in kernel, /dev, and /proc */
+#define DRM_MIN_ORDER 5 /* At least 2^5 bytes = 32 bytes */
+#define DRM_MAX_ORDER 22 /* Up to 2^22 bytes = 4MB */
+#define DRM_RAM_PERCENT 10 /* How much system ram can we lock? */
+
+#define _DRM_LOCK_HELD 0x80000000 /* Hardware lock is held */
+#define _DRM_LOCK_CONT 0x40000000 /* Hardware lock is contended */
+#define _DRM_LOCK_IS_HELD(lock) ((lock) & _DRM_LOCK_HELD)
+#define _DRM_LOCK_IS_CONT(lock) ((lock) & _DRM_LOCK_CONT)
+#define _DRM_LOCKING_CONTEXT(lock) ((lock) & ~(_DRM_LOCK_HELD|_DRM_LOCK_CONT))
+
+typedef unsigned long drm_handle_t;
+typedef unsigned int drm_context_t;
+typedef unsigned int drm_drawable_t;
+typedef unsigned int drm_magic_t;
+
+/* Warning: If you change this structure, make sure you change
+ * XF86DRIClipRectRec in the server as well */
+
+typedef struct drm_clip_rect {
+ unsigned short x1;
+ unsigned short y1;
+ unsigned short x2;
+ unsigned short y2;
+} drm_clip_rect_t;
+
+/* Separate include files for the i810/mga/r128 specific structures */
+#include "mga_drm.h"
+#include "i810_drm.h"
+#include "r128_drm.h"
+#include "radeon_drm.h"
+#ifdef CONFIG_DRM40_SIS
+#include "sis_drm.h"
+#endif
+
+typedef struct drm_version {
+ int version_major; /* Major version */
+ int version_minor; /* Minor version */
+ int version_patchlevel;/* Patch level */
+ size_t name_len; /* Length of name buffer */
+ char *name; /* Name of driver */
+ size_t date_len; /* Length of date buffer */
+ char *date; /* User-space buffer to hold date */
+ size_t desc_len; /* Length of desc buffer */
+ char *desc; /* User-space buffer to hold desc */
+} drm_version_t;
+
+typedef struct drm_unique {
+ size_t unique_len; /* Length of unique */
+ char *unique; /* Unique name for driver instantiation */
+} drm_unique_t;
+
+typedef struct drm_list {
+ int count; /* Length of user-space structures */
+ drm_version_t *version;
+} drm_list_t;
+
+typedef struct drm_block {
+ int unused;
+} drm_block_t;
+
+typedef struct drm_control {
+ enum {
+ DRM_ADD_COMMAND,
+ DRM_RM_COMMAND,
+ DRM_INST_HANDLER,
+ DRM_UNINST_HANDLER
+ } func;
+ int irq;
+} drm_control_t;
+
+typedef enum drm_map_type {
+ _DRM_FRAME_BUFFER = 0, /* WC (no caching), no core dump */
+ _DRM_REGISTERS = 1, /* no caching, no core dump */
+ _DRM_SHM = 2, /* shared, cached */
+ _DRM_AGP = 3 /* AGP/GART */
+} drm_map_type_t;
+
+typedef enum drm_map_flags {
+ _DRM_RESTRICTED = 0x01, /* Cannot be mapped to user-virtual */
+ _DRM_READ_ONLY = 0x02,
+ _DRM_LOCKED = 0x04, /* shared, cached, locked */
+ _DRM_KERNEL = 0x08, /* kernel requires access */
+ _DRM_WRITE_COMBINING = 0x10, /* use write-combining if available */
+ _DRM_CONTAINS_LOCK = 0x20 /* SHM page that contains lock */
+} drm_map_flags_t;
+
+typedef struct drm_map {
+ unsigned long offset; /* Requested physical address (0 for SAREA)*/
+ unsigned long size; /* Requested physical size (bytes) */
+ drm_map_type_t type; /* Type of memory to map */
+ drm_map_flags_t flags; /* Flags */
+ void *handle; /* User-space: "Handle" to pass to mmap */
+ /* Kernel-space: kernel-virtual address */
+ int mtrr; /* MTRR slot used */
+ /* Private data */
+} drm_map_t;
+
+typedef enum drm_lock_flags {
+ _DRM_LOCK_READY = 0x01, /* Wait until hardware is ready for DMA */
+ _DRM_LOCK_QUIESCENT = 0x02, /* Wait until hardware quiescent */
+ _DRM_LOCK_FLUSH = 0x04, /* Flush this context's DMA queue first */
+ _DRM_LOCK_FLUSH_ALL = 0x08, /* Flush all DMA queues first */
+ /* These *HALT* flags aren't supported yet
+ -- they will be used to support the
+ full-screen DGA-like mode. */
+ _DRM_HALT_ALL_QUEUES = 0x10, /* Halt all current and future queues */
+ _DRM_HALT_CUR_QUEUES = 0x20 /* Halt all current queues */
+} drm_lock_flags_t;
+
+typedef struct drm_lock {
+ int context;
+ drm_lock_flags_t flags;
+} drm_lock_t;
+
+typedef enum drm_dma_flags { /* These values *MUST* match xf86drm.h */
+ /* Flags for DMA buffer dispatch */
+ _DRM_DMA_BLOCK = 0x01, /* Block until buffer dispatched.
+ Note, the buffer may not yet have
+ been processed by the hardware --
+ getting a hardware lock with the
+ hardware quiescent will ensure
+ that the buffer has been
+ processed. */
+ _DRM_DMA_WHILE_LOCKED = 0x02, /* Dispatch while lock held */
+ _DRM_DMA_PRIORITY = 0x04, /* High priority dispatch */
+
+ /* Flags for DMA buffer request */
+ _DRM_DMA_WAIT = 0x10, /* Wait for free buffers */
+ _DRM_DMA_SMALLER_OK = 0x20, /* Smaller-than-requested buffers ok */
+ _DRM_DMA_LARGER_OK = 0x40 /* Larger-than-requested buffers ok */
+} drm_dma_flags_t;
+
+typedef struct drm_buf_desc {
+ int count; /* Number of buffers of this size */
+ int size; /* Size in bytes */
+ int low_mark; /* Low water mark */
+ int high_mark; /* High water mark */
+ enum {
+ _DRM_PAGE_ALIGN = 0x01, /* Align on page boundaries for DMA */
+ _DRM_AGP_BUFFER = 0x02 /* Buffer is in agp space */
+ } flags;
+ unsigned long agp_start; /* Start address of where the agp buffers
+ * are in the agp aperture */
+} drm_buf_desc_t;
+
+typedef struct drm_buf_info {
+ int count; /* Entries in list */
+ drm_buf_desc_t *list;
+} drm_buf_info_t;
+
+typedef struct drm_buf_free {
+ int count;
+ int *list;
+} drm_buf_free_t;
+
+typedef struct drm_buf_pub {
+ int idx; /* Index into master buflist */
+ int total; /* Buffer size */
+ int used; /* Amount of buffer in use (for DMA) */
+ void *address; /* Address of buffer */
+} drm_buf_pub_t;
+
+typedef struct drm_buf_map {
+ int count; /* Length of buflist */
+ void *virtual; /* Mmaped area in user-virtual */
+ drm_buf_pub_t *list; /* Buffer information */
+} drm_buf_map_t;
+
+typedef struct drm_dma {
+ /* Indices here refer to the offset into
+ buflist in drm_buf_get_t. */
+ int context; /* Context handle */
+ int send_count; /* Number of buffers to send */
+ int *send_indices; /* List of handles to buffers */
+ int *send_sizes; /* Lengths of data to send */
+ drm_dma_flags_t flags; /* Flags */
+ int request_count; /* Number of buffers requested */
+ int request_size; /* Desired size for buffers */
+ int *request_indices; /* Buffer information */
+ int *request_sizes;
+ int granted_count; /* Number of buffers granted */
+} drm_dma_t;
+
+typedef enum {
+ _DRM_CONTEXT_PRESERVED = 0x01,
+ _DRM_CONTEXT_2DONLY = 0x02
+} drm_ctx_flags_t;
+
+typedef struct drm_ctx {
+ drm_context_t handle;
+ drm_ctx_flags_t flags;
+} drm_ctx_t;
+
+typedef struct drm_ctx_res {
+ int count;
+ drm_ctx_t *contexts;
+} drm_ctx_res_t;
+
+typedef struct drm_draw {
+ drm_drawable_t handle;
+} drm_draw_t;
+
+typedef struct drm_auth {
+ drm_magic_t magic;
+} drm_auth_t;
+
+typedef struct drm_irq_busid {
+ int irq;
+ int busnum;
+ int devnum;
+ int funcnum;
+} drm_irq_busid_t;
+
+typedef struct drm_agp_mode {
+ unsigned long mode;
+} drm_agp_mode_t;
+
+ /* For drm_agp_alloc -- allocated a buffer */
+typedef struct drm_agp_buffer {
+ unsigned long size; /* In bytes -- will round to page boundary */
+ unsigned long handle; /* Used for BIND/UNBIND ioctls */
+ unsigned long type; /* Type of memory to allocate */
+ unsigned long physical; /* Physical used by i810 */
+} drm_agp_buffer_t;
+
+ /* For drm_agp_bind */
+typedef struct drm_agp_binding {
+ unsigned long handle; /* From drm_agp_buffer */
+ unsigned long offset; /* In bytes -- will round to page boundary */
+} drm_agp_binding_t;
+
+typedef struct drm_agp_info {
+ int agp_version_major;
+ int agp_version_minor;
+ unsigned long mode;
+ unsigned long aperture_base; /* physical address */
+ unsigned long aperture_size; /* bytes */
+ unsigned long memory_allowed; /* bytes */
+ unsigned long memory_used;
+
+ /* PCI information */
+ unsigned short id_vendor;
+ unsigned short id_device;
+} drm_agp_info_t;
+
+#define DRM_IOCTL_BASE 'd'
+#define DRM_IO(nr) _IO(DRM_IOCTL_BASE,nr)
+#define DRM_IOR(nr,size) _IOR(DRM_IOCTL_BASE,nr,size)
+#define DRM_IOW(nr,size) _IOW(DRM_IOCTL_BASE,nr,size)
+#define DRM_IOWR(nr,size) _IOWR(DRM_IOCTL_BASE,nr,size)
+
+
+#define DRM_IOCTL_VERSION DRM_IOWR(0x00, drm_version_t)
+#define DRM_IOCTL_GET_UNIQUE DRM_IOWR(0x01, drm_unique_t)
+#define DRM_IOCTL_GET_MAGIC DRM_IOR( 0x02, drm_auth_t)
+#define DRM_IOCTL_IRQ_BUSID DRM_IOWR(0x03, drm_irq_busid_t)
+
+#define DRM_IOCTL_SET_UNIQUE DRM_IOW( 0x10, drm_unique_t)
+#define DRM_IOCTL_AUTH_MAGIC DRM_IOW( 0x11, drm_auth_t)
+#define DRM_IOCTL_BLOCK DRM_IOWR(0x12, drm_block_t)
+#define DRM_IOCTL_UNBLOCK DRM_IOWR(0x13, drm_block_t)
+#define DRM_IOCTL_CONTROL DRM_IOW( 0x14, drm_control_t)
+#define DRM_IOCTL_ADD_MAP DRM_IOWR(0x15, drm_map_t)
+#define DRM_IOCTL_ADD_BUFS DRM_IOWR(0x16, drm_buf_desc_t)
+#define DRM_IOCTL_MARK_BUFS DRM_IOW( 0x17, drm_buf_desc_t)
+#define DRM_IOCTL_INFO_BUFS DRM_IOWR(0x18, drm_buf_info_t)
+#define DRM_IOCTL_MAP_BUFS DRM_IOWR(0x19, drm_buf_map_t)
+#define DRM_IOCTL_FREE_BUFS DRM_IOW( 0x1a, drm_buf_free_t)
+
+#define DRM_IOCTL_ADD_CTX DRM_IOWR(0x20, drm_ctx_t)
+#define DRM_IOCTL_RM_CTX DRM_IOWR(0x21, drm_ctx_t)
+#define DRM_IOCTL_MOD_CTX DRM_IOW( 0x22, drm_ctx_t)
+#define DRM_IOCTL_GET_CTX DRM_IOWR(0x23, drm_ctx_t)
+#define DRM_IOCTL_SWITCH_CTX DRM_IOW( 0x24, drm_ctx_t)
+#define DRM_IOCTL_NEW_CTX DRM_IOW( 0x25, drm_ctx_t)
+#define DRM_IOCTL_RES_CTX DRM_IOWR(0x26, drm_ctx_res_t)
+#define DRM_IOCTL_ADD_DRAW DRM_IOWR(0x27, drm_draw_t)
+#define DRM_IOCTL_RM_DRAW DRM_IOWR(0x28, drm_draw_t)
+#define DRM_IOCTL_DMA DRM_IOWR(0x29, drm_dma_t)
+#define DRM_IOCTL_LOCK DRM_IOW( 0x2a, drm_lock_t)
+#define DRM_IOCTL_UNLOCK DRM_IOW( 0x2b, drm_lock_t)
+#define DRM_IOCTL_FINISH DRM_IOW( 0x2c, drm_lock_t)
+
+#define DRM_IOCTL_AGP_ACQUIRE DRM_IO( 0x30)
+#define DRM_IOCTL_AGP_RELEASE DRM_IO( 0x31)
+#define DRM_IOCTL_AGP_ENABLE DRM_IOW( 0x32, drm_agp_mode_t)
+#define DRM_IOCTL_AGP_INFO DRM_IOR( 0x33, drm_agp_info_t)
+#define DRM_IOCTL_AGP_ALLOC DRM_IOWR(0x34, drm_agp_buffer_t)
+#define DRM_IOCTL_AGP_FREE DRM_IOW( 0x35, drm_agp_buffer_t)
+#define DRM_IOCTL_AGP_BIND DRM_IOW( 0x36, drm_agp_binding_t)
+#define DRM_IOCTL_AGP_UNBIND DRM_IOW( 0x37, drm_agp_binding_t)
+
+/* Mga specific ioctls */
+#define DRM_IOCTL_MGA_INIT DRM_IOW( 0x40, drm_mga_init_t)
+#define DRM_IOCTL_MGA_SWAP DRM_IOW( 0x41, drm_mga_swap_t)
+#define DRM_IOCTL_MGA_CLEAR DRM_IOW( 0x42, drm_mga_clear_t)
+#define DRM_IOCTL_MGA_ILOAD DRM_IOW( 0x43, drm_mga_iload_t)
+#define DRM_IOCTL_MGA_VERTEX DRM_IOW( 0x44, drm_mga_vertex_t)
+#define DRM_IOCTL_MGA_FLUSH DRM_IOW( 0x45, drm_lock_t )
+#define DRM_IOCTL_MGA_INDICES DRM_IOW( 0x46, drm_mga_indices_t)
+#define DRM_IOCTL_MGA_BLIT DRM_IOW( 0x47, drm_mga_blit_t)
+
+/* I810 specific ioctls */
+#define DRM_IOCTL_I810_INIT DRM_IOW( 0x40, drm_i810_init_t)
+#define DRM_IOCTL_I810_VERTEX DRM_IOW( 0x41, drm_i810_vertex_t)
+#define DRM_IOCTL_I810_CLEAR DRM_IOW( 0x42, drm_i810_clear_t)
+#define DRM_IOCTL_I810_FLUSH DRM_IO( 0x43)
+#define DRM_IOCTL_I810_GETAGE DRM_IO( 0x44)
+#define DRM_IOCTL_I810_GETBUF DRM_IOWR(0x45, drm_i810_dma_t)
+#define DRM_IOCTL_I810_SWAP DRM_IO( 0x46)
+#define DRM_IOCTL_I810_COPY DRM_IOW( 0x47, drm_i810_copy_t)
+#define DRM_IOCTL_I810_DOCOPY DRM_IO( 0x48)
+
+/* Rage 128 specific ioctls */
+#define DRM_IOCTL_R128_INIT DRM_IOW( 0x40, drm_r128_init_t)
+#define DRM_IOCTL_R128_CCE_START DRM_IO( 0x41)
+#define DRM_IOCTL_R128_CCE_STOP DRM_IOW( 0x42, drm_r128_cce_stop_t)
+#define DRM_IOCTL_R128_CCE_RESET DRM_IO( 0x43)
+#define DRM_IOCTL_R128_CCE_IDLE DRM_IO( 0x44)
+#define DRM_IOCTL_R128_RESET DRM_IO( 0x46)
+#define DRM_IOCTL_R128_SWAP DRM_IO( 0x47)
+#define DRM_IOCTL_R128_CLEAR DRM_IOW( 0x48, drm_r128_clear_t)
+#define DRM_IOCTL_R128_VERTEX DRM_IOW( 0x49, drm_r128_vertex_t)
+#define DRM_IOCTL_R128_INDICES DRM_IOW( 0x4a, drm_r128_indices_t)
+#define DRM_IOCTL_R128_BLIT DRM_IOW( 0x4b, drm_r128_blit_t)
+#define DRM_IOCTL_R128_DEPTH DRM_IOW( 0x4c, drm_r128_depth_t)
+#define DRM_IOCTL_R128_STIPPLE DRM_IOW( 0x4d, drm_r128_stipple_t)
+#define DRM_IOCTL_R128_PACKET DRM_IOWR(0x4e, drm_r128_packet_t)
+
+/* Radeon specific ioctls */
+#define DRM_IOCTL_RADEON_CP_INIT DRM_IOW( 0x40, drm_radeon_init_t)
+#define DRM_IOCTL_RADEON_CP_START DRM_IO( 0x41)
+#define DRM_IOCTL_RADEON_CP_STOP DRM_IOW( 0x42, drm_radeon_cp_stop_t)
+#define DRM_IOCTL_RADEON_CP_RESET DRM_IO( 0x43)
+#define DRM_IOCTL_RADEON_CP_IDLE DRM_IO( 0x44)
+#define DRM_IOCTL_RADEON_RESET DRM_IO( 0x45)
+#define DRM_IOCTL_RADEON_FULLSCREEN DRM_IOW( 0x46, drm_radeon_fullscreen_t)
+#define DRM_IOCTL_RADEON_SWAP DRM_IO( 0x47)
+#define DRM_IOCTL_RADEON_CLEAR DRM_IOW( 0x48, drm_radeon_clear_t)
+#define DRM_IOCTL_RADEON_VERTEX DRM_IOW( 0x49, drm_radeon_vertex_t)
+#define DRM_IOCTL_RADEON_INDICES DRM_IOW( 0x4a, drm_radeon_indices_t)
+#define DRM_IOCTL_RADEON_BLIT DRM_IOW( 0x4b, drm_radeon_blit_t)
+#define DRM_IOCTL_RADEON_STIPPLE DRM_IOW( 0x4c, drm_radeon_stipple_t)
+#define DRM_IOCTL_RADEON_INDIRECT DRM_IOWR(0x4d, drm_radeon_indirect_t)
+
+#ifdef CONFIG_DRM40_SIS
+/* SiS specific ioctls */
+#define SIS_IOCTL_FB_ALLOC DRM_IOWR(0x44, drm_sis_mem_t)
+#define SIS_IOCTL_FB_FREE DRM_IOW( 0x45, drm_sis_mem_t)
+#define SIS_IOCTL_AGP_INIT DRM_IOWR(0x53, drm_sis_agp_t)
+#define SIS_IOCTL_AGP_ALLOC DRM_IOWR(0x54, drm_sis_mem_t)
+#define SIS_IOCTL_AGP_FREE DRM_IOW( 0x55, drm_sis_mem_t)
+#define SIS_IOCTL_FLIP DRM_IOW( 0x48, drm_sis_flip_t)
+#define SIS_IOCTL_FLIP_INIT DRM_IO( 0x49)
+#define SIS_IOCTL_FLIP_FINAL DRM_IO( 0x50)
+#endif
+
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/drmP.h linux-2.4.13-lia/drivers/char/drm-4.0/drmP.h
--- linux-2.4.13/drivers/char/drm-4.0/drmP.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drmP.h Wed Oct 24 18:34:24 2001
@@ -0,0 +1,839 @@
+/* drmP.h -- Private header for Direct Rendering Manager -*- linux-c -*-
+ * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#ifndef _DRM_P_H_
+#define _DRM_P_H_
+
+#ifdef __KERNEL__
+#ifdef __alpha__
+/* add include of current.h so that "current" is defined
+ * before static inline funcs in wait.h. Doing this so we
+ * can build the DRM (part of PI DRI). 4/21/2000 S + B */
+#include <asm/current.h>
+#endif /* __alpha__ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/miscdevice.h>
+#include <linux/major.h>
+#include <linux/fs.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <linux/file.h>
+#include <linux/pci.h>
+#include <linux/wrapper.h>
+#include <linux/version.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h> /* For (un)lock_kernel */
+#include <linux/mm.h>
+#ifdef __alpha__
+#include <asm/pgtable.h> /* For pte_wrprotect */
+#endif
+#include <asm/io.h>
+#include <asm/mman.h>
+#include <asm/uaccess.h>
+#ifdef CONFIG_MTRR
+#include <asm/mtrr.h>
+#endif
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+#include <linux/types.h>
+#include <linux/agp_backend.h>
+#endif
+#if LINUX_VERSION_CODE >= 0x020100 /* KERNEL_VERSION(2,1,0) */
+#include <linux/tqueue.h>
+#include <linux/poll.h>
+#endif
+#if LINUX_VERSION_CODE < 0x020400
+#include "compat-pre24.h"
+#endif
+#include "drm.h"
+
+#define DRM_DEBUG_CODE 2 /* Include debugging code (if > 1, then
+ also include looping detection). */
+#define DRM_DMA_HISTOGRAM 1 /* Make histogram of DMA latency. */
+
+#define DRM_HASH_SIZE 16 /* Size of key hash table */
+#define DRM_KERNEL_CONTEXT 0 /* Change drm_resctx if changed */
+#define DRM_RESERVED_CONTEXTS 1 /* Change drm_resctx if changed */
+#define DRM_LOOPING_LIMIT 5000000
+#define DRM_BSZ 1024 /* Buffer size for /dev/drm? output */
+#define DRM_TIME_SLICE (HZ/20) /* Time slice for GLXContexts */
+#define DRM_LOCK_SLICE 1 /* Time slice for lock, in jiffies */
+
+#define DRM_FLAG_DEBUG 0x01
+#define DRM_FLAG_NOCTX 0x02
+
+#define DRM_MEM_DMA 0
+#define DRM_MEM_SAREA 1
+#define DRM_MEM_DRIVER 2
+#define DRM_MEM_MAGIC 3
+#define DRM_MEM_IOCTLS 4
+#define DRM_MEM_MAPS 5
+#define DRM_MEM_VMAS 6
+#define DRM_MEM_BUFS 7
+#define DRM_MEM_SEGS 8
+#define DRM_MEM_PAGES 9
+#define DRM_MEM_FILES 10
+#define DRM_MEM_QUEUES 11
+#define DRM_MEM_CMDS 12
+#define DRM_MEM_MAPPINGS 13
+#define DRM_MEM_BUFLISTS 14
+#define DRM_MEM_AGPLISTS 15
+#define DRM_MEM_TOTALAGP 16
+#define DRM_MEM_BOUNDAGP 17
+#define DRM_MEM_CTXBITMAP 18
+
+#define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8)
+
+ /* Backward compatibility section */
+ /* _PAGE_WT changed to _PAGE_PWT in 2.2.6 */
+#ifndef _PAGE_PWT
+#define _PAGE_PWT _PAGE_WT
+#endif
+ /* Wait queue declarations changed in 2.3.1 */
+#ifndef DECLARE_WAITQUEUE
+#define DECLARE_WAITQUEUE(w,c) struct wait_queue w = { c, NULL }
+typedef struct wait_queue *wait_queue_head_t;
+#define init_waitqueue_head(q) *q = NULL;
+#endif
+
+ /* _PAGE_4M changed to _PAGE_PSE in 2.3.23 */
+#ifndef _PAGE_PSE
+#define _PAGE_PSE _PAGE_4M
+#endif
+
+ /* vm_offset changed to vm_pgoff in 2.3.25 */
+#if LINUX_VERSION_CODE < 0x020319
+#define VM_OFFSET(vma) ((vma)->vm_offset)
+#else
+#define VM_OFFSET(vma) ((vma)->vm_pgoff << PAGE_SHIFT)
+#endif
+
+ /* *_nopage return values defined in 2.3.26 */
+#ifndef NOPAGE_SIGBUS
+#define NOPAGE_SIGBUS 0
+#endif
+#ifndef NOPAGE_OOM
+#define NOPAGE_OOM 0
+#endif
+
+ /* module_init/module_exit added in 2.3.13 */
+#ifndef module_init
+#define module_init(x) int init_module(void) { return x(); }
+#endif
+#ifndef module_exit
+#define module_exit(x) void cleanup_module(void) { x(); }
+#endif
+
+ /* Generic cmpxchg added in 2.3.x */
+#ifndef __HAVE_ARCH_CMPXCHG
+ /* Include this here so that driver can be
+ used with older kernels. */
+#if defined(__alpha__)
+static __inline__ unsigned long
+__cmpxchg_u32(volatile int *m, int old, int new)
+{
+ unsigned long prev, cmp;
+
+ __asm__ __volatile__(
+ "1: ldl_l %0,%2\n"
+ " cmpeq %0,%3,%1\n"
+ " beq %1,2f\n"
+ " mov %4,%1\n"
+ " stl_c %1,%2\n"
+ " beq %1,3f\n"
+ "2: mb\n"
+ ".subsection 2\n"
+ "3: br 1b\n"
+ ".previous"
+ : "=&r"(prev), "=&r"(cmp), "=m"(*m)
+ : "r"((long) old), "r"(new), "m"(*m));
+
+ return prev;
+}
+
+static __inline__ unsigned long
+__cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new)
+{
+ unsigned long prev, cmp;
+
+ __asm__ __volatile__(
+ "1: ldq_l %0,%2\n"
+ " cmpeq %0,%3,%1\n"
+ " beq %1,2f\n"
+ " mov %4,%1\n"
+ " stq_c %1,%2\n"
+ " beq %1,3f\n"
+ "2: mb\n"
+ ".subsection 2\n"
+ "3: br 1b\n"
+ ".previous"
+ : "=&r"(prev), "=&r"(cmp), "=m"(*m)
+ : "r"((long) old), "r"(new), "m"(*m));
+
+ return prev;
+}
+
+static __inline__ unsigned long
+__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
+{
+ switch (size) {
+ case 4:
+ return __cmpxchg_u32(ptr, old, new);
+ case 8:
+ return __cmpxchg_u64(ptr, old, new);
+ }
+ return old;
+}
+#define cmpxchg(ptr,o,n) \
+ ({ \
+ __typeof__(*(ptr)) _o_ = (o); \
+ __typeof__(*(ptr)) _n_ = (n); \
+ (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
+ (unsigned long)_n_, sizeof(*(ptr))); \
+ })
+
+#elif __i386__
+static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+ unsigned long new, int size)
+{
+ unsigned long prev;
+ switch (size) {
+ case 1:
+ __asm__ __volatile__(LOCK_PREFIX "cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
+ : "memory");
+ return prev;
+ case 2:
+ __asm__ __volatile__(LOCK_PREFIX "cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
+ : "memory");
+ return prev;
+ case 4:
+ __asm__ __volatile__(LOCK_PREFIX "cmpxchgl %1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
+ : "memory");
+ return prev;
+ }
+ return old;
+}
+
+#define cmpxchg(ptr,o,n) \
+ ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o), \
+ (unsigned long)(n),sizeof(*(ptr))))
+#endif /* i386 & alpha */
+#endif
+
+ /* Macros to make printk easier */
+#define DRM_ERROR(fmt, arg...) \
+ printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ "] *ERROR* " fmt , ##arg)
+#define DRM_MEM_ERROR(area, fmt, arg...) \
+ printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ ":%s] *ERROR* " fmt , \
+ drm_mem_stats[area].name , ##arg)
+#define DRM_INFO(fmt, arg...) printk(KERN_INFO "[" DRM_NAME "] " fmt , ##arg)
+
+#if DRM_DEBUG_CODE
+#define DRM_DEBUG(fmt, arg...) \
+ do { \
+ if (drm_flags&DRM_FLAG_DEBUG) \
+ printk(KERN_DEBUG \
+ "[" DRM_NAME ":" __FUNCTION__ "] " fmt , \
+ ##arg); \
+ } while (0)
+#else
+#define DRM_DEBUG(fmt, arg...) do { } while (0)
+#endif
+
+#define DRM_PROC_LIMIT (PAGE_SIZE-80)
+
+#define DRM_PROC_PRINT(fmt, arg...) \
+ len += sprintf(&buf[len], fmt , ##arg); \
+ if (len > DRM_PROC_LIMIT) return len;
+
+#define DRM_PROC_PRINT_RET(ret, fmt, arg...) \
+ len += sprintf(&buf[len], fmt , ##arg); \
+ if (len > DRM_PROC_LIMIT) { ret; return len; }
+
+ /* Internal types and structures */
+#define DRM_ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))
+#define DRM_MIN(a,b) ((a)<(b)?(a):(b))
+#define DRM_MAX(a,b) ((a)>(b)?(a):(b))
+
+#define DRM_LEFTCOUNT(x) (((x)->rp + (x)->count - (x)->wp) % ((x)->count + 1))
+#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
+#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
+
+typedef int drm_ioctl_t(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+typedef struct drm_ioctl_desc {
+ drm_ioctl_t *func;
+ int auth_needed;
+ int root_only;
+} drm_ioctl_desc_t;
+
+typedef struct drm_devstate {
+ pid_t owner; /* X server pid holding x_lock */
+
+} drm_devstate_t;
+
+typedef struct drm_magic_entry {
+ drm_magic_t magic;
+ struct drm_file *priv;
+ struct drm_magic_entry *next;
+} drm_magic_entry_t;
+
+typedef struct drm_magic_head {
+ struct drm_magic_entry *head;
+ struct drm_magic_entry *tail;
+} drm_magic_head_t;
+
+typedef struct drm_vma_entry {
+ struct vm_area_struct *vma;
+ struct drm_vma_entry *next;
+ pid_t pid;
+} drm_vma_entry_t;
+
+typedef struct drm_buf {
+ int idx; /* Index into master buflist */
+ int total; /* Buffer size */
+ int order; /* log-base-2(total) */
+ int used; /* Amount of buffer in use (for DMA) */
+ unsigned long offset; /* Byte offset (used internally) */
+ void *address; /* Address of buffer */
+ unsigned long bus_address; /* Bus address of buffer */
+ struct drm_buf *next; /* Kernel-only: used for free list */
+ __volatile__ int waiting; /* On kernel DMA queue */
+ __volatile__ int pending; /* On hardware DMA queue */
+ wait_queue_head_t dma_wait; /* Processes waiting */
+ pid_t pid; /* PID of holding process */
+ int context; /* Kernel queue for this buffer */
+ int while_locked;/* Dispatch this buffer while locked */
+ enum {
+ DRM_LIST_NONE = 0,
+ DRM_LIST_FREE = 1,
+ DRM_LIST_WAIT = 2,
+ DRM_LIST_PEND = 3,
+ DRM_LIST_PRIO = 4,
+ DRM_LIST_RECLAIM = 5
+ } list; /* Which list we're on */
+
+#if DRM_DMA_HISTOGRAM
+ cycles_t time_queued; /* Queued to kernel DMA queue */
+ cycles_t time_dispatched; /* Dispatched to hardware */
+ cycles_t time_completed; /* Completed by hardware */
+ cycles_t time_freed; /* Back on freelist */
+#endif
+
+ int dev_priv_size; /* Size of buffer private storage */
+ void *dev_private; /* Per-buffer private storage */
+} drm_buf_t;
+
+#if DRM_DMA_HISTOGRAM
+#define DRM_DMA_HISTOGRAM_SLOTS 9
+#define DRM_DMA_HISTOGRAM_INITIAL 10
+#define DRM_DMA_HISTOGRAM_NEXT(current) ((current)*10)
+typedef struct drm_histogram {
+ atomic_t total;
+
+ atomic_t queued_to_dispatched[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t dispatched_to_completed[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t completed_to_freed[DRM_DMA_HISTOGRAM_SLOTS];
+
+ atomic_t queued_to_completed[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t queued_to_freed[DRM_DMA_HISTOGRAM_SLOTS];
+
+ atomic_t dma[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t schedule[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t ctx[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t lacq[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t lhld[DRM_DMA_HISTOGRAM_SLOTS];
+} drm_histogram_t;
+#endif
+
+ /* bufs is one longer than it has to be */
+typedef struct drm_waitlist {
+ int count; /* Number of possible buffers */
+ drm_buf_t **bufs; /* List of pointers to buffers */
+ drm_buf_t **rp; /* Read pointer */
+ drm_buf_t **wp; /* Write pointer */
+ drm_buf_t **end; /* End pointer */
+ spinlock_t read_lock;
+ spinlock_t write_lock;
+} drm_waitlist_t;
+
+typedef struct drm_freelist {
+ int initialized; /* Freelist in use */
+ atomic_t count; /* Number of free buffers */
+ drm_buf_t *next; /* End pointer */
+
+ wait_queue_head_t waiting; /* Processes waiting on free bufs */
+ int low_mark; /* Low water mark */
+ int high_mark; /* High water mark */
+ atomic_t wfh; /* If waiting for high mark */
+ spinlock_t lock;
+} drm_freelist_t;
+
+typedef struct drm_buf_entry {
+ int buf_size;
+ int buf_count;
+ drm_buf_t *buflist;
+ int seg_count;
+ int page_order;
+ unsigned long *seglist;
+
+ drm_freelist_t freelist;
+} drm_buf_entry_t;
+
+typedef struct drm_hw_lock {
+ __volatile__ unsigned int lock;
+ char padding[60]; /* Pad to cache line */
+} drm_hw_lock_t;
+
+typedef struct drm_file {
+ int authenticated;
+ int minor;
+ pid_t pid;
+ uid_t uid;
+ drm_magic_t magic;
+ unsigned long ioctl_count;
+ struct drm_file *next;
+ struct drm_file *prev;
+ struct drm_device *dev;
+ int remove_auth_on_close;
+} drm_file_t;
+
+
+typedef struct drm_queue {
+ atomic_t use_count; /* Outstanding uses (+1) */
+ atomic_t finalization; /* Finalization in progress */
+ atomic_t block_count; /* Count of processes waiting */
+ atomic_t block_read; /* Queue blocked for reads */
+ wait_queue_head_t read_queue; /* Processes waiting on block_read */
+ atomic_t block_write; /* Queue blocked for writes */
+ wait_queue_head_t write_queue; /* Processes waiting on block_write */
+ atomic_t total_queued; /* Total queued statistic */
+ atomic_t total_flushed;/* Total flushes statistic */
+ atomic_t total_locks; /* Total locks statistics */
+ drm_ctx_flags_t flags; /* Context preserving and 2D-only */
+ drm_waitlist_t waitlist; /* Pending buffers */
+ wait_queue_head_t flush_queue; /* Processes waiting until flush */
+} drm_queue_t;
+
+typedef struct drm_lock_data {
+ drm_hw_lock_t *hw_lock; /* Hardware lock */
+ pid_t pid; /* PID of lock holder (0=kernel) */
+ wait_queue_head_t lock_queue; /* Queue of blocked processes */
+ unsigned long lock_time; /* Time of last lock in jiffies */
+} drm_lock_data_t;
+
+typedef struct drm_device_dma {
+ /* Performance Counters */
+ atomic_t total_prio; /* Total DRM_DMA_PRIORITY */
+ atomic_t total_bytes; /* Total bytes DMA'd */
+ atomic_t total_dmas; /* Total DMA buffers dispatched */
+
+ atomic_t total_missed_dma; /* Missed drm_do_dma */
+ atomic_t total_missed_lock; /* Missed lock in drm_do_dma */
+ atomic_t total_missed_free; /* Missed drm_free_this_buffer */
+ atomic_t total_missed_sched;/* Missed drm_dma_schedule */
+
+ atomic_t total_tried; /* Tried next_buffer */
+ atomic_t total_hit; /* Sent next_buffer */
+ atomic_t total_lost; /* Lost interrupt */
+
+ drm_buf_entry_t bufs[DRM_MAX_ORDER+1];
+ int buf_count;
+ drm_buf_t **buflist; /* Vector of pointers into bufs */
+ int seg_count;
+ int page_count;
+ unsigned long *pagelist;
+ unsigned long byte_count;
+ enum {
+ _DRM_DMA_USE_AGP = 0x01
+ } flags;
+
+ /* DMA support */
+ drm_buf_t *this_buffer; /* Buffer being sent */
+ drm_buf_t *next_buffer; /* Selected buffer to send */
+ drm_queue_t *next_queue; /* Queue from which buffer selected*/
+ wait_queue_head_t waiting; /* Processes waiting on free bufs */
+} drm_device_dma_t;
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+typedef struct drm_agp_mem {
+ unsigned long handle;
+ agp_memory *memory;
+ unsigned long bound; /* address */
+ int pages;
+ struct drm_agp_mem *prev;
+ struct drm_agp_mem *next;
+} drm_agp_mem_t;
+
+typedef struct drm_agp_head {
+ agp_kern_info agp_info;
+ const char *chipset;
+ drm_agp_mem_t *memory;
+ unsigned long mode;
+ int enabled;
+ int acquired;
+ unsigned long base;
+ int agp_mtrr;
+ int cant_use_aperture;
+ unsigned long page_mask;
+} drm_agp_head_t;
+#endif
+
+typedef struct drm_sigdata {
+ int context;
+ drm_hw_lock_t *lock;
+} drm_sigdata_t;
+
+typedef struct drm_device {
+ const char *name; /* Simple driver name */
+ char *unique; /* Unique identifier: e.g., busid */
+ int unique_len; /* Length of unique field */
+ dev_t device; /* Device number for mknod */
+ char *devname; /* For /proc/interrupts */
+
+ int blocked; /* Blocked due to VC switch? */
+ struct proc_dir_entry *root; /* Root for this device's entries */
+
+ /* Locks */
+ spinlock_t count_lock; /* For inuse, open_count, buf_use */
+ struct semaphore struct_sem; /* For others */
+
+ /* Usage Counters */
+ int open_count; /* Outstanding files open */
+ atomic_t ioctl_count; /* Outstanding IOCTLs pending */
+ atomic_t vma_count; /* Outstanding vma areas open */
+ int buf_use; /* Buffers in use -- cannot alloc */
+ atomic_t buf_alloc; /* Buffer allocation in progress */
+
+ /* Performance Counters */
+ atomic_t total_open;
+ atomic_t total_close;
+ atomic_t total_ioctl;
+ atomic_t total_irq; /* Total interruptions */
+ atomic_t total_ctx; /* Total context switches */
+
+ atomic_t total_locks;
+ atomic_t total_unlocks;
+ atomic_t total_contends;
+ atomic_t total_sleeps;
+
+ /* Authentication */
+ drm_file_t *file_first;
+ drm_file_t *file_last;
+ drm_magic_head_t magiclist[DRM_HASH_SIZE];
+
+ /* Memory management */
+ drm_map_t **maplist; /* Vector of pointers to regions */
+ int map_count; /* Number of mappable regions */
+
+ drm_vma_entry_t *vmalist; /* List of vmas (for debugging) */
+ drm_lock_data_t lock; /* Information on hardware lock */
+
+ /* DMA queues (contexts) */
+ int queue_count; /* Number of active DMA queues */
+ int queue_reserved; /* Number of reserved DMA queues */
+ int queue_slots; /* Actual length of queuelist */
+ drm_queue_t **queuelist; /* Vector of pointers to DMA queues */
+ drm_device_dma_t *dma; /* Optional pointer for DMA support */
+
+ /* Context support */
+ int irq; /* Interrupt used by board */
+ __volatile__ long context_flag; /* Context swapping flag */
+ __volatile__ long interrupt_flag; /* Interruption handler flag */
+ __volatile__ long dma_flag; /* DMA dispatch flag */
+ struct timer_list timer; /* Timer for delaying ctx switch */
+ wait_queue_head_t context_wait; /* Processes waiting on ctx switch */
+ int last_checked; /* Last context checked for DMA */
+ int last_context; /* Last current context */
+ unsigned long last_switch; /* jiffies at last context switch */
+ struct tq_struct tq;
+ cycles_t ctx_start;
+ cycles_t lck_start;
+#if DRM_DMA_HISTOGRAM
+ drm_histogram_t histo;
+#endif
+
+ /* Callback to X server for context switch
+ and for heavy-handed reset. */
+ char buf[DRM_BSZ]; /* Output buffer */
+ char *buf_rp; /* Read pointer */
+ char *buf_wp; /* Write pointer */
+ char *buf_end; /* End pointer */
+ struct fasync_struct *buf_async;/* Processes waiting for SIGIO */
+ wait_queue_head_t buf_readers; /* Processes waiting to read */
+ wait_queue_head_t buf_writers; /* Processes waiting to ctx switch */
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+ drm_agp_head_t *agp;
+#endif
+ unsigned long *ctx_bitmap;
+ void *dev_private;
+ drm_sigdata_t sigdata; /* For block_all_signals */
+ sigset_t sigmask;
+} drm_device_t;
+
+ /* Internal function definitions */
+
+ /* Misc. support (init.c) */
+extern int drm_flags;
+extern void drm_parse_options(char *s);
+extern int drm_cpu_valid(void);
+
+
+ /* Device support (fops.c) */
+extern int drm_open_helper(struct inode *inode, struct file *filp,
+ drm_device_t *dev);
+extern int drm_flush(struct file *filp);
+extern int drm_release(struct inode *inode, struct file *filp);
+extern int drm_fasync(int fd, struct file *filp, int on);
+extern ssize_t drm_read(struct file *filp, char *buf, size_t count,
+ loff_t *off);
+extern int drm_write_string(drm_device_t *dev, const char *s);
+extern unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait);
+
+ /* Mapping support (vm.c) */
+#if LINUX_VERSION_CODE < 0x020317
+extern unsigned long drm_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern unsigned long drm_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern unsigned long drm_vm_shm_nopage_lock(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern unsigned long drm_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+#else
+ /* Return type changed in 2.3.23 */
+extern struct page *drm_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern struct page *drm_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern struct page *drm_vm_shm_nopage_lock(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern struct page *drm_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+#endif
+extern void drm_vm_open(struct vm_area_struct *vma);
+extern void drm_vm_close(struct vm_area_struct *vma);
+extern int drm_mmap_dma(struct file *filp,
+ struct vm_area_struct *vma);
+extern int drm_mmap(struct file *filp, struct vm_area_struct *vma);
+
+
+ /* Proc support (proc.c) */
+extern int drm_proc_init(drm_device_t *dev);
+extern int drm_proc_cleanup(void);
+
+ /* Memory management support (memory.c) */
+extern void drm_mem_init(void);
+extern int drm_mem_info(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data);
+extern void *drm_alloc(size_t size, int area);
+extern void *drm_realloc(void *oldpt, size_t oldsize, size_t size,
+ int area);
+extern char *drm_strdup(const char *s, int area);
+extern void drm_strfree(const char *s, int area);
+extern void drm_free(void *pt, size_t size, int area);
+extern unsigned long drm_alloc_pages(int order, int area);
+extern void drm_free_pages(unsigned long address, int order,
+ int area);
+extern void *drm_ioremap(unsigned long offset, unsigned long size,
+ drm_device_t *dev);
+extern void drm_ioremapfree(void *pt, unsigned long size,
+ drm_device_t *dev);
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+extern agp_memory *drm_alloc_agp(int pages, u32 type);
+extern int drm_free_agp(agp_memory *handle, int pages);
+extern int drm_bind_agp(agp_memory *handle, unsigned int start);
+extern int drm_unbind_agp(agp_memory *handle);
+#endif
+
+
+ /* Buffer management support (bufs.c) */
+extern int drm_order(unsigned long size);
+extern int drm_addmap(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_addbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_infobufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_markbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_freebufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_mapbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Buffer list management support (lists.c) */
+extern int drm_waitlist_create(drm_waitlist_t *bl, int count);
+extern int drm_waitlist_destroy(drm_waitlist_t *bl);
+extern int drm_waitlist_put(drm_waitlist_t *bl, drm_buf_t *buf);
+extern drm_buf_t *drm_waitlist_get(drm_waitlist_t *bl);
+
+extern int drm_freelist_create(drm_freelist_t *bl, int count);
+extern int drm_freelist_destroy(drm_freelist_t *bl);
+extern int drm_freelist_put(drm_device_t *dev, drm_freelist_t *bl,
+ drm_buf_t *buf);
+extern drm_buf_t *drm_freelist_get(drm_freelist_t *bl, int block);
+
+ /* DMA support (gen_dma.c) */
+extern void drm_dma_setup(drm_device_t *dev);
+extern void drm_dma_takedown(drm_device_t *dev);
+extern void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf);
+extern void drm_reclaim_buffers(drm_device_t *dev, pid_t pid);
+extern int drm_context_switch(drm_device_t *dev, int old, int new);
+extern int drm_context_switch_complete(drm_device_t *dev, int new);
+extern void drm_clear_next_buffer(drm_device_t *dev);
+extern int drm_select_queue(drm_device_t *dev,
+ void (*wrapper)(unsigned long));
+extern int drm_dma_enqueue(drm_device_t *dev, drm_dma_t *dma);
+extern int drm_dma_get_buffers(drm_device_t *dev, drm_dma_t *dma);
+#if DRM_DMA_HISTOGRAM
+extern int drm_histogram_slot(unsigned long count);
+extern void drm_histogram_compute(drm_device_t *dev, drm_buf_t *buf);
+#endif
+
+
+ /* Misc. IOCTL support (ioctl.c) */
+extern int drm_irq_busid(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_getunique(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_setunique(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Context IOCTL support (context.c) */
+extern int drm_resctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_addctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_modctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_getctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_switchctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_newctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_rmctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Drawable IOCTL support (drawable.c) */
+extern int drm_adddraw(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_rmdraw(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Authentication IOCTL support (auth.c) */
+extern int drm_add_magic(drm_device_t *dev, drm_file_t *priv,
+ drm_magic_t magic);
+extern int drm_remove_magic(drm_device_t *dev, drm_magic_t magic);
+extern int drm_getmagic(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_authmagic(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Locking IOCTL support (lock.c) */
+extern int drm_block(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_unblock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_lock_take(__volatile__ unsigned int *lock,
+ unsigned int context);
+extern int drm_lock_transfer(drm_device_t *dev,
+ __volatile__ unsigned int *lock,
+ unsigned int context);
+extern int drm_lock_free(drm_device_t *dev,
+ __volatile__ unsigned int *lock,
+ unsigned int context);
+extern int drm_finish(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_flush_unblock(drm_device_t *dev, int context,
+ drm_lock_flags_t flags);
+extern int drm_flush_block_and_flush(drm_device_t *dev, int context,
+ drm_lock_flags_t flags);
+extern int drm_notifier(void *priv);
+
+ /* Context Bitmap support (ctxbitmap.c) */
+extern int drm_ctxbitmap_init(drm_device_t *dev);
+extern void drm_ctxbitmap_cleanup(drm_device_t *dev);
+extern int drm_ctxbitmap_next(drm_device_t *dev);
+extern void drm_ctxbitmap_free(drm_device_t *dev, int ctx_handle);
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+ /* AGP/GART support (agpsupport.c) */
+extern drm_agp_head_t *drm_agp_init(void);
+extern void drm_agp_uninit(void);
+extern int drm_agp_acquire(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern void _drm_agp_release(void);
+extern int drm_agp_release(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_enable(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_info(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_alloc(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_free(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_unbind(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_bind(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern agp_memory *drm_agp_allocate_memory(size_t pages, u32 type);
+extern int drm_agp_free_memory(agp_memory *handle);
+extern int drm_agp_bind_memory(agp_memory *handle, off_t start);
+extern int drm_agp_unbind_memory(agp_memory *handle);
+#endif
+#endif
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_context.c linux-2.4.13-lia/drivers/char/drm-4.0/ffb_context.c
--- linux-2.4.13/drivers/char/drm-4.0/ffb_context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,540 @@
+/* $Id: ffb_context.c,v 1.4 2000/08/29 07:01:55 davem Exp $
+ * ffb_context.c: Creator/Creator3D DRI/DRM context switching.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ *
+ * Almost entirely stolen from tdfx_context.c, see there
+ * for authors.
+ */
+
+#include <linux/sched.h>
+#include <asm/upa.h>
+
+#include "drmP.h"
+
+#include "ffb_drv.h"
+
+static int ffb_alloc_queue(drm_device_t *dev, int is_2d_only)
+{
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ int i;
+
+ for (i = 0; i < FFB_MAX_CTXS; i++) {
+ if (fpriv->hw_state[i] == NULL)
+ break;
+ }
+ if (i == FFB_MAX_CTXS)
+ return -1;
+
+ fpriv->hw_state[i] = kmalloc(sizeof(struct ffb_hw_context), GFP_KERNEL);
+ if (fpriv->hw_state[i] == NULL)
+ return -1;
+
+ fpriv->hw_state[i]->is_2d_only = is_2d_only;
+
+ /* Plus one because 0 is the special DRM_KERNEL_CONTEXT. */
+ return i + 1;
+}
+
+static void ffb_save_context(ffb_dev_priv_t *fpriv, int idx)
+{
+ ffb_fbcPtr ffb = fpriv->regs;
+ struct ffb_hw_context *ctx;
+ int i;
+
+ ctx = fpriv->hw_state[idx - 1];
+ if (idx == 0 || ctx == NULL)
+ return;
+
+ if (ctx->is_2d_only) {
+ /* 2D applications only care about certain pieces
+ * of state.
+ */
+ ctx->drawop = upa_readl(&ffb->drawop);
+ ctx->ppc = upa_readl(&ffb->ppc);
+ ctx->wid = upa_readl(&ffb->wid);
+ ctx->fg = upa_readl(&ffb->fg);
+ ctx->bg = upa_readl(&ffb->bg);
+ ctx->xclip = upa_readl(&ffb->xclip);
+ ctx->fbc = upa_readl(&ffb->fbc);
+ ctx->rop = upa_readl(&ffb->rop);
+ ctx->cmp = upa_readl(&ffb->cmp);
+ ctx->matchab = upa_readl(&ffb->matchab);
+ ctx->magnab = upa_readl(&ffb->magnab);
+ ctx->pmask = upa_readl(&ffb->pmask);
+ ctx->xpmask = upa_readl(&ffb->xpmask);
+ ctx->lpat = upa_readl(&ffb->lpat);
+ ctx->fontxy = upa_readl(&ffb->fontxy);
+ ctx->fontw = upa_readl(&ffb->fontw);
+ ctx->fontinc = upa_readl(&ffb->fontinc);
+
+ /* stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ ctx->stencil = upa_readl(&ffb->stencil);
+ ctx->stencilctl = upa_readl(&ffb->stencilctl);
+ }
+
+ for (i = 0; i < 32; i++)
+ ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
+ ctx->ucsr = upa_readl(&ffb->ucsr);
+ return;
+ }
+
+ /* Fetch drawop. */
+ ctx->drawop = upa_readl(&ffb->drawop);
+
+ /* If we were saving the vertex registers, this is where
+ * we would do it. We would save 32 32-bit words starting
+ * at ffb->suvtx.
+ */
+
+ /* Capture rendering attributes. */
+
+ ctx->ppc = upa_readl(&ffb->ppc); /* Pixel Processor Control */
+ ctx->wid = upa_readl(&ffb->wid); /* Current WID */
+ ctx->fg = upa_readl(&ffb->fg); /* Constant FG color */
+ ctx->bg = upa_readl(&ffb->bg); /* Constant BG color */
+ ctx->consty = upa_readl(&ffb->consty); /* Constant Y */
+ ctx->constz = upa_readl(&ffb->constz); /* Constant Z */
+ ctx->xclip = upa_readl(&ffb->xclip); /* X plane clip */
+ ctx->dcss = upa_readl(&ffb->dcss); /* Depth Cue Scale Slope */
+ ctx->vclipmin = upa_readl(&ffb->vclipmin); /* Primary XY clip, minimum */
+ ctx->vclipmax = upa_readl(&ffb->vclipmax); /* Primary XY clip, maximum */
+ ctx->vclipzmin = upa_readl(&ffb->vclipzmin); /* Primary Z clip, minimum */
+ ctx->vclipzmax = upa_readl(&ffb->vclipzmax); /* Primary Z clip, maximum */
+ ctx->dcsf = upa_readl(&ffb->dcsf); /* Depth Cue Scale Front Bound */
+ ctx->dcsb = upa_readl(&ffb->dcsb); /* Depth Cue Scale Back Bound */
+ ctx->dczf = upa_readl(&ffb->dczf); /* Depth Cue Scale Z Front */
+ ctx->dczb = upa_readl(&ffb->dczb); /* Depth Cue Scale Z Back */
+ ctx->blendc = upa_readl(&ffb->blendc); /* Alpha Blend Control */
+ ctx->blendc1 = upa_readl(&ffb->blendc1); /* Alpha Blend Color 1 */
+ ctx->blendc2 = upa_readl(&ffb->blendc2); /* Alpha Blend Color 2 */
+ ctx->fbc = upa_readl(&ffb->fbc); /* Frame Buffer Control */
+ ctx->rop = upa_readl(&ffb->rop); /* Raster Operation */
+ ctx->cmp = upa_readl(&ffb->cmp); /* Compare Controls */
+ ctx->matchab = upa_readl(&ffb->matchab); /* Buffer A/B Match Ops */
+ ctx->matchc = upa_readl(&ffb->matchc); /* Buffer C Match Ops */
+ ctx->magnab = upa_readl(&ffb->magnab); /* Buffer A/B Magnitude Ops */
+ ctx->magnc = upa_readl(&ffb->magnc); /* Buffer C Magnitude Ops */
+ ctx->pmask = upa_readl(&ffb->pmask); /* RGB Plane Mask */
+ ctx->xpmask = upa_readl(&ffb->xpmask); /* X Plane Mask */
+ ctx->ypmask = upa_readl(&ffb->ypmask); /* Y Plane Mask */
+ ctx->zpmask = upa_readl(&ffb->zpmask); /* Z Plane Mask */
+
+ /* Auxiliary Clips. */
+ ctx->auxclip0min = upa_readl(&ffb->auxclip[0].min);
+ ctx->auxclip0max = upa_readl(&ffb->auxclip[0].max);
+ ctx->auxclip1min = upa_readl(&ffb->auxclip[1].min);
+ ctx->auxclip1max = upa_readl(&ffb->auxclip[1].max);
+ ctx->auxclip2min = upa_readl(&ffb->auxclip[2].min);
+ ctx->auxclip2max = upa_readl(&ffb->auxclip[2].max);
+ ctx->auxclip3min = upa_readl(&ffb->auxclip[3].min);
+ ctx->auxclip3max = upa_readl(&ffb->auxclip[3].max);
+
+ ctx->lpat = upa_readl(&ffb->lpat); /* Line Pattern */
+ ctx->fontxy = upa_readl(&ffb->fontxy); /* XY Font Coordinate */
+ ctx->fontw = upa_readl(&ffb->fontw); /* Font Width */
+ ctx->fontinc = upa_readl(&ffb->fontinc); /* Font X/Y Increment */
+
+ /* These registers/features only exist on FFB2 and later chips. */
+ if (fpriv->ffb_type >= ffb2_prototype) {
+ ctx->dcss1 = upa_readl(&ffb->dcss1); /* Depth Cue Scale Slope 1 */
+ ctx->dcss2 = upa_readl(&ffb->dcss2); /* Depth Cue Scale Slope 2 */
+ ctx->dcss3 = upa_readl(&ffb->dcss3); /* Depth Cue Scale Slope 3 */
+ ctx->dcs2 = upa_readl(&ffb->dcs2); /* Depth Cue Scale 2 */
+ ctx->dcs3 = upa_readl(&ffb->dcs3); /* Depth Cue Scale 3 */
+ ctx->dcs4 = upa_readl(&ffb->dcs4); /* Depth Cue Scale 4 */
+ ctx->dcd2 = upa_readl(&ffb->dcd2); /* Depth Cue Depth 2 */
+ ctx->dcd3 = upa_readl(&ffb->dcd3); /* Depth Cue Depth 3 */
+ ctx->dcd4 = upa_readl(&ffb->dcd4); /* Depth Cue Depth 4 */
+
+ /* And stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ ctx->stencil = upa_readl(&ffb->stencil);
+ ctx->stencilctl = upa_readl(&ffb->stencilctl);
+ }
+ }
+
+ /* Save the 32x32 area pattern. */
+ for (i = 0; i < 32; i++)
+ ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
+
+ /* Finally, stash away the User Control/Status Register. */
+ ctx->ucsr = upa_readl(&ffb->ucsr);
+}
+
+static void ffb_restore_context(ffb_dev_priv_t *fpriv, int old, int idx)
+{
+ ffb_fbcPtr ffb = fpriv->regs;
+ struct ffb_hw_context *ctx;
+ int i;
+
+ ctx = fpriv->hw_state[idx - 1];
+ if (idx == 0 || ctx == NULL)
+ return;
+
+ if (ctx->is_2d_only) {
+ /* 2D applications only care about certain pieces
+ * of state.
+ */
+ upa_writel(ctx->drawop, &ffb->drawop);
+
+ /* If we were restoring the vertex registers, this is where
+ * we would do it. We would restore 32 32-bit words starting
+ * at ffb->suvtx.
+ */
+
+ upa_writel(ctx->ppc, &ffb->ppc);
+ upa_writel(ctx->wid, &ffb->wid);
+ upa_writel(ctx->fg, &ffb->fg);
+ upa_writel(ctx->bg, &ffb->bg);
+ upa_writel(ctx->xclip, &ffb->xclip);
+ upa_writel(ctx->fbc, &ffb->fbc);
+ upa_writel(ctx->rop, &ffb->rop);
+ upa_writel(ctx->cmp, &ffb->cmp);
+ upa_writel(ctx->matchab, &ffb->matchab);
+ upa_writel(ctx->magnab, &ffb->magnab);
+ upa_writel(ctx->pmask, &ffb->pmask);
+ upa_writel(ctx->xpmask, &ffb->xpmask);
+ upa_writel(ctx->lpat, &ffb->lpat);
+ upa_writel(ctx->fontxy, &ffb->fontxy);
+ upa_writel(ctx->fontw, &ffb->fontw);
+ upa_writel(ctx->fontinc, &ffb->fontinc);
+
+ /* stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ upa_writel(ctx->stencil, &ffb->stencil);
+ upa_writel(ctx->stencilctl, &ffb->stencilctl);
+ upa_writel(0x80000000, &ffb->fbc);
+ upa_writel((ctx->stencilctl | 0x80000),
+ &ffb->rawstencilctl);
+ upa_writel(ctx->fbc, &ffb->fbc);
+ }
+
+ for (i = 0; i < 32; i++)
+ upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
+ upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
+ return;
+ }
+
+ /* Restore drawop. */
+ upa_writel(ctx->drawop, &ffb->drawop);
+
+ /* If we were restoring the vertex registers, this is where
+ * we would do it. We would restore 32 32-bit words starting
+ * at ffb->suvtx.
+ */
+
+ /* Restore rendering attributes. */
+
+ upa_writel(ctx->ppc, &ffb->ppc); /* Pixel Processor Control */
+ upa_writel(ctx->wid, &ffb->wid); /* Current WID */
+ upa_writel(ctx->fg, &ffb->fg); /* Constant FG color */
+ upa_writel(ctx->bg, &ffb->bg); /* Constant BG color */
+ upa_writel(ctx->consty, &ffb->consty); /* Constant Y */
+ upa_writel(ctx->constz, &ffb->constz); /* Constant Z */
+ upa_writel(ctx->xclip, &ffb->xclip); /* X plane clip */
+ upa_writel(ctx->dcss, &ffb->dcss); /* Depth Cue Scale Slope */
+ upa_writel(ctx->vclipmin, &ffb->vclipmin); /* Primary XY clip, minimum */
+ upa_writel(ctx->vclipmax, &ffb->vclipmax); /* Primary XY clip, maximum */
+ upa_writel(ctx->vclipzmin, &ffb->vclipzmin); /* Primary Z clip, minimum */
+ upa_writel(ctx->vclipzmax, &ffb->vclipzmax); /* Primary Z clip, maximum */
+ upa_writel(ctx->dcsf, &ffb->dcsf); /* Depth Cue Scale Front Bound */
+ upa_writel(ctx->dcsb, &ffb->dcsb); /* Depth Cue Scale Back Bound */
+ upa_writel(ctx->dczf, &ffb->dczf); /* Depth Cue Scale Z Front */
+ upa_writel(ctx->dczb, &ffb->dczb); /* Depth Cue Scale Z Back */
+ upa_writel(ctx->blendc, &ffb->blendc); /* Alpha Blend Control */
+ upa_writel(ctx->blendc1, &ffb->blendc1); /* Alpha Blend Color 1 */
+ upa_writel(ctx->blendc2, &ffb->blendc2); /* Alpha Blend Color 2 */
+ upa_writel(ctx->fbc, &ffb->fbc); /* Frame Buffer Control */
+ upa_writel(ctx->rop, &ffb->rop); /* Raster Operation */
+ upa_writel(ctx->cmp, &ffb->cmp); /* Compare Controls */
+ upa_writel(ctx->matchab, &ffb->matchab); /* Buffer A/B Match Ops */
+ upa_writel(ctx->matchc, &ffb->matchc); /* Buffer C Match Ops */
+ upa_writel(ctx->magnab, &ffb->magnab); /* Buffer A/B Magnitude Ops */
+ upa_writel(ctx->magnc, &ffb->magnc); /* Buffer C Magnitude Ops */
+ upa_writel(ctx->pmask, &ffb->pmask); /* RGB Plane Mask */
+ upa_writel(ctx->xpmask, &ffb->xpmask); /* X Plane Mask */
+ upa_writel(ctx->ypmask, &ffb->ypmask); /* Y Plane Mask */
+ upa_writel(ctx->zpmask, &ffb->zpmask); /* Z Plane Mask */
+
+ /* Auxiliary Clips. */
+ upa_writel(ctx->auxclip0min, &ffb->auxclip[0].min);
+ upa_writel(ctx->auxclip0max, &ffb->auxclip[0].max);
+ upa_writel(ctx->auxclip1min, &ffb->auxclip[1].min);
+ upa_writel(ctx->auxclip1max, &ffb->auxclip[1].max);
+ upa_writel(ctx->auxclip2min, &ffb->auxclip[2].min);
+ upa_writel(ctx->auxclip2max, &ffb->auxclip[2].max);
+ upa_writel(ctx->auxclip3min, &ffb->auxclip[3].min);
+ upa_writel(ctx->auxclip3max, &ffb->auxclip[3].max);
+
+ upa_writel(ctx->lpat, &ffb->lpat); /* Line Pattern */
+ upa_writel(ctx->fontxy, &ffb->fontxy); /* XY Font Coordinate */
+ upa_writel(ctx->fontw, &ffb->fontw); /* Font Width */
+ upa_writel(ctx->fontinc, &ffb->fontinc); /* Font X/Y Increment */
+
+ /* These registers/features only exist on FFB2 and later chips. */
+ if (fpriv->ffb_type >= ffb2_prototype) {
+ upa_writel(ctx->dcss1, &ffb->dcss1); /* Depth Cue Scale Slope 1 */
+ upa_writel(ctx->dcss2, &ffb->dcss2); /* Depth Cue Scale Slope 2 */
+ upa_writel(ctx->dcss3, &ffb->dcss3); /* Depth Cue Scale Slope 3 */
+ upa_writel(ctx->dcs2, &ffb->dcs2); /* Depth Cue Scale 2 */
+ upa_writel(ctx->dcs3, &ffb->dcs3); /* Depth Cue Scale 3 */
+ upa_writel(ctx->dcs4, &ffb->dcs4); /* Depth Cue Scale 4 */
+ upa_writel(ctx->dcd2, &ffb->dcd2); /* Depth Cue Depth 2 */
+ upa_writel(ctx->dcd3, &ffb->dcd3); /* Depth Cue Depth 3 */
+ upa_writel(ctx->dcd4, &ffb->dcd4); /* Depth Cue Depth 4 */
+
+ /* And stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ /* Unfortunately, there is a hardware bug on
+ * the FFB2+ chips which prevents a normal write
+ * to the stencil control register from working
+ * as it should.
+ *
+ * The state controlled by the FFB stencilctl register
+ * really gets transferred to the per-buffer instances
+ * of the stencilctl register in the 3DRAM chips.
+ *
+ * The bug is that FFB does not update buffer C correctly,
+ * so we have to do it by hand for them.
+ */
+
+ /* This will update buffers A and B. */
+ upa_writel(ctx->stencil, &ffb->stencil);
+ upa_writel(ctx->stencilctl, &ffb->stencilctl);
+
+ /* Force FFB to use buffer C 3dram regs. */
+ upa_writel(0x80000000, &ffb->fbc);
+ upa_writel((ctx->stencilctl | 0x80000),
+ &ffb->rawstencilctl);
+
+ /* Now restore the correct FBC controls. */
+ upa_writel(ctx->fbc, &ffb->fbc);
+ }
+ }
+
+ /* Restore the 32x32 area pattern. */
+ for (i = 0; i < 32; i++)
+ upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
+
+ /* Finally, restore the User Control/Status Register.
+ * The only state we really preserve here is the picking
+ * control.
+ */
+ upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
+}
+
+#define FFB_UCSR_FB_BUSY 0x01000000
+#define FFB_UCSR_RP_BUSY 0x02000000
+#define FFB_UCSR_ALL_BUSY (FFB_UCSR_RP_BUSY|FFB_UCSR_FB_BUSY)
+
+static void FFBWait(ffb_fbcPtr ffb)
+{
+ int limit = 100000;
+
+ do {
+ u32 regval = upa_readl(&ffb->ucsr);
+
+ if ((regval & FFB_UCSR_ALL_BUSY) == 0)
+ break;
+ } while (--limit);
+}
+
+int ffb_context_switch(drm_device_t *dev, int old, int new)
+{
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+
+ atomic_inc(&dev->total_ctx);
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new == dev->last_context ||
+ dev->last_context == 0) {
+ dev->last_context = new;
+ return 0;
+ }
+
+ FFBWait(fpriv->regs);
+ ffb_save_context(fpriv, old);
+ ffb_restore_context(fpriv, old, new);
+ FFBWait(fpriv->regs);
+
+ dev->last_context = new;
+
+ return 0;
+}
+
+int ffb_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+
+int ffb_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ idx = ffb_alloc_queue(dev, (ctx.flags & _DRM_CONTEXT_2DONLY));
+ if (idx < 0)
+ return -ENFILE;
+
+ DRM_DEBUG("%d\n", ctx.handle);
+ ctx.handle = idx;
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int ffb_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ struct ffb_hw_context *hwctx;
+ drm_ctx_t ctx;
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ idx = ctx.handle;
+ if (idx <= 0 || idx >= FFB_MAX_CTXS)
+ return -EINVAL;
+
+ hwctx = fpriv->hw_state[idx - 1];
+ if (hwctx == NULL)
+ return -EINVAL;
+
+ if ((ctx.flags & _DRM_CONTEXT_2DONLY) == 0)
+ hwctx->is_2d_only = 0;
+ else
+ hwctx->is_2d_only = 1;
+
+ return 0;
+}
+
+int ffb_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ struct ffb_hw_context *hwctx;
+ drm_ctx_t ctx;
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ idx = ctx.handle;
+ if (idx <= 0 || idx >= FFB_MAX_CTXS)
+ return -EINVAL;
+
+ hwctx = fpriv->hw_state[idx - 1];
+ if (hwctx == NULL)
+ return -EINVAL;
+
+ if (hwctx->is_2d_only != 0)
+ ctx.flags = _DRM_CONTEXT_2DONLY;
+ else
+ ctx.flags = 0;
+
+ if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int ffb_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return ffb_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int ffb_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ return 0;
+}
+
+int ffb_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ idx = ctx.handle - 1;
+ if (idx < 0 || idx >= FFB_MAX_CTXS)
+ return -EINVAL;
+
+ if (fpriv->hw_state[idx] != NULL) {
+ kfree(fpriv->hw_state[idx]);
+ fpriv->hw_state[idx] = NULL;
+ }
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/ffb_drv.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,951 @@
+/* $Id: ffb_drv.c,v 1.14 2001/05/24 12:01:47 davem Exp $
+ * ffb_drv.c: Creator/Creator3D direct rendering driver.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ */
+
+#include "drmP.h"
+
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <asm/shmparam.h>
+#include <asm/oplib.h>
+#include <asm/upa.h>
+
+#include "ffb_drv.h"
+
+#define FFB_NAME "ffb"
+#define FFB_DESC "Creator/Creator3D"
+#define FFB_DATE "20000517"
+#define FFB_MAJOR 0
+#define FFB_MINOR 0
+#define FFB_PATCHLEVEL 1
+
+/* Forward declarations. */
+int ffb_init(void);
+void ffb_cleanup(void);
+static int ffb_version(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_open(struct inode *inode, struct file *filp);
+static int ffb_release(struct inode *inode, struct file *filp);
+static int ffb_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_lock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_unlock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_mmap(struct file *filp, struct vm_area_struct *vma);
+static unsigned long ffb_get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+
+/* From ffb_context.c */
+extern int ffb_resctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_addctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_modctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_getctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_switchctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_newctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_rmctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_context_switch(drm_device_t *, int, int);
+
+static struct file_operations ffb_fops = {
+ owner: THIS_MODULE,
+ open: ffb_open,
+ flush: drm_flush,
+ release: ffb_release,
+ ioctl: ffb_ioctl,
+ mmap: ffb_mmap,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+ get_unmapped_area: ffb_get_unmapped_area,
+};
+
+/* This is just a template, we make a new copy for each FFB
+ * we discover at init time so that each one gets a unique
+ * misc device minor number.
+ */
+static struct miscdevice ffb_misc = {
+ minor: MISC_DYNAMIC_MINOR,
+ name: FFB_NAME,
+ fops: &ffb_fops,
+};
+
+static drm_ioctl_desc_t ffb_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { ffb_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 }, /* XXX */
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+
+ /* The implementation is currently a nop just like on tdfx.
+ * Later we can do something more clever. -DaveM
+ */
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { ffb_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { ffb_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { ffb_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { ffb_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { ffb_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { ffb_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { ffb_resctx, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { ffb_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { ffb_unlock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 },
+};
+#define FFB_IOCTL_COUNT DRM_ARRAY_SIZE(ffb_ioctls)
+
+#ifdef MODULE
+static char *ffb = NULL;
+#endif
+
+MODULE_AUTHOR("David S. Miller (davem@redhat.com)");
+MODULE_DESCRIPTION("Sun Creator/Creator3D DRI");
+
+static int ffb_takedown(drm_device_t *dev)
+{
+ int i;
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_vma_entry_t *vma, *vma_next;
+
+ DRM_DEBUG("\n");
+
+ down(&dev->struct_sem);
+ del_timer(&dev->timer);
+
+ if (dev->devname) {
+ drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+ dev->devname = NULL;
+ }
+
+ if (dev->unique) {
+ drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+
+ /* Clear pid list */
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ for (pt = dev->magiclist[i].head; pt; pt = next) {
+ next = pt->next;
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+
+ /* Clear vma list (only built for debugging) */
+ if (dev->vmalist) {
+ for (vma = dev->vmalist; vma; vma = vma_next) {
+ vma_next = vma->next;
+ drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+ }
+ dev->vmalist = NULL;
+ }
+
+ /* Clear map area information */
+ if (dev->maplist) {
+ for (i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+
+ case _DRM_SHM:
+ drm_free_pages((unsigned long)map->handle,
+ drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ break;
+
+ default:
+ break;
+ };
+
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+
+ drm_free(dev->maplist,
+ dev->map_count * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ }
+
+ if (dev->lock.hw_lock) {
+ dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.pid = 0;
+ wake_up_interruptible(&dev->lock.lock_queue);
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+drm_device_t **ffb_dev_table;
+static int ffb_dev_table_size;
+
+static void get_ffb_type(ffb_dev_priv_t *ffb_priv, int instance)
+{
+ volatile unsigned char *strap_bits;
+ unsigned char val;
+
+ strap_bits = (volatile unsigned char *)
+ (ffb_priv->card_phys_base + 0x00200000UL);
+
+ /* Don't ask, you have to read the value twice for whatever
+ * reason to get correct contents.
+ */
+ val = upa_readb(strap_bits);
+ val = upa_readb(strap_bits);
+ switch (val & 0x78) {
+ case (0x0 << 5) | (0x0 << 3):
+ ffb_priv->ffb_type = ffb1_prototype;
+ printk("ffb%d: Detected FFB1 pre-FCS prototype\n", instance);
+ break;
+ case (0x0 << 5) | (0x1 << 3):
+ ffb_priv->ffb_type = ffb1_standard;
+ printk("ffb%d: Detected FFB1\n", instance);
+ break;
+ case (0x0 << 5) | (0x3 << 3):
+ ffb_priv->ffb_type = ffb1_speedsort;
+ printk("ffb%d: Detected FFB1-SpeedSort\n", instance);
+ break;
+ case (0x1 << 5) | (0x0 << 3):
+ ffb_priv->ffb_type = ffb2_prototype;
+ printk("ffb%d: Detected FFB2/vertical pre-FCS prototype\n", instance);
+ break;
+ case (0x1 << 5) | (0x1 << 3):
+ ffb_priv->ffb_type = ffb2_vertical;
+ printk("ffb%d: Detected FFB2/vertical\n", instance);
+ break;
+ case (0x1 << 5) | (0x2 << 3):
+ ffb_priv->ffb_type = ffb2_vertical_plus;
+ printk("ffb%d: Detected FFB2+/vertical\n", instance);
+ break;
+ case (0x2 << 5) | (0x0 << 3):
+ ffb_priv->ffb_type = ffb2_horizontal;
+ printk("ffb%d: Detected FFB2/horizontal\n", instance);
+ break;
+ case (0x2 << 5) | (0x2 << 3):
+ ffb_priv->ffb_type = ffb2_horizontal_plus;
+ printk("ffb%d: Detected FFB2+/horizontal\n", instance);
+ break;
+ default:
+ ffb_priv->ffb_type = ffb2_vertical;
+ printk("ffb%d: Unknown boardID[%08x], assuming FFB2\n", instance, val);
+ break;
+ };
+}
+
+static void __init ffb_apply_upa_parent_ranges(int parent, struct linux_prom64_registers *regs)
+{
+ struct linux_prom64_ranges ranges[PROMREG_MAX];
+ char name[128];
+ int len, i;
+
+ prom_getproperty(parent, "name", name, sizeof(name));
+ if (strcmp(name, "upa") != 0)
+ return;
+
+ len = prom_getproperty(parent, "ranges", (void *) ranges, sizeof(ranges));
+ if (len <= 0)
+ return;
+
+ len /= sizeof(struct linux_prom64_ranges);
+ for (i = 0; i < len; i++) {
+ struct linux_prom64_ranges *rng = &ranges[i];
+ u64 phys_addr = regs->phys_addr;
+
+ if (phys_addr >= rng->ot_child_base &&
+ phys_addr < (rng->ot_child_base + rng->or_size)) {
+ regs->phys_addr -= rng->ot_child_base;
+ regs->phys_addr += rng->ot_parent_base;
+ return;
+ }
+ }
+
+ return;
+}
+
+static int __init ffb_init_one(int prom_node, int parent_node, int instance)
+{
+ struct linux_prom64_registers regs[2*PROMREG_MAX];
+ drm_device_t *dev;
+ ffb_dev_priv_t *ffb_priv;
+ int ret, i;
+
+ dev = kmalloc(sizeof(drm_device_t) + sizeof(ffb_dev_priv_t), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ memset(dev, 0, sizeof(*dev));
+ spin_lock_init(&dev->count_lock);
+ sema_init(&dev->struct_sem, 1);
+
+ ffb_priv = (ffb_dev_priv_t *) (dev + 1);
+ ffb_priv->prom_node = prom_node;
+ if (prom_getproperty(ffb_priv->prom_node, "reg",
+ (void *)regs, sizeof(regs)) <= 0) {
+ kfree(dev);
+ return -EINVAL;
+ }
+ ffb_apply_upa_parent_ranges(parent_node, &regs[0]);
+ ffb_priv->card_phys_base = regs[0].phys_addr;
+ ffb_priv->regs = (ffb_fbcPtr)
+ (regs[0].phys_addr + 0x00600000UL);
+ get_ffb_type(ffb_priv, instance);
+ for (i = 0; i < FFB_MAX_CTXS; i++)
+ ffb_priv->hw_state[i] = NULL;
+
+ ffb_dev_table[instance] = dev;
+
+#ifdef MODULE
+ drm_parse_options(ffb);
+#endif
+
+ memcpy(&ffb_priv->miscdev, &ffb_misc, sizeof(ffb_misc));
+ ret = misc_register(&ffb_priv->miscdev);
+ if (ret) {
+ ffb_dev_table[instance] = NULL;
+ kfree(dev);
+ return ret;
+ }
+
+ dev->device = MKDEV(MISC_MAJOR, ffb_priv->miscdev.minor);
+ dev->name = FFB_NAME;
+
+ drm_mem_init();
+ drm_proc_init(dev);
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d at %016lx\n",
+ FFB_NAME,
+ FFB_MAJOR,
+ FFB_MINOR,
+ FFB_PATCHLEVEL,
+ FFB_DATE,
+ ffb_priv->miscdev.minor,
+ ffb_priv->card_phys_base);
+
+ return 0;
+}
+
+static int __init ffb_count_siblings(int root)
+{
+ int node, child, count = 0;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb"))
+ count++;
+
+ return count;
+}
+
+static int __init ffb_init_dev_table(void)
+{
+ int root, total;
+
+ total = ffb_count_siblings(prom_root_node);
+ root = prom_getchild(prom_root_node);
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ total += ffb_count_siblings(root);
+
+ if (!total)
+ return -ENODEV;
+
+ ffb_dev_table = kmalloc(sizeof(drm_device_t *) * total, GFP_KERNEL);
+ if (!ffb_dev_table)
+ return -ENOMEM;
+
+ ffb_dev_table_size = total;
+
+ return 0;
+}
+
+static int __init ffb_scan_siblings(int root, int instance)
+{
+ int node, child;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) {
+ ffb_init_one(node, root, instance);
+ instance++;
+ }
+
+ return instance;
+}
+
+int __init ffb_init(void)
+{
+ int root, instance, ret;
+
+ ret = ffb_init_dev_table();
+ if (ret)
+ return ret;
+
+ instance = ffb_scan_siblings(prom_root_node, 0);
+
+ root = prom_getchild(prom_root_node);
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ instance = ffb_scan_siblings(root, instance);
+
+ return 0;
+}
+
+void __exit ffb_cleanup(void)
+{
+ int instance;
+
+ DRM_DEBUG("\n");
+
+ drm_proc_cleanup();
+ for (instance = 0; instance < ffb_dev_table_size; instance++) {
+ drm_device_t *dev = ffb_dev_table[instance];
+ ffb_dev_priv_t *ffb_priv;
+
+ if (!dev)
+ continue;
+
+ ffb_priv = (ffb_dev_priv_t *) (dev + 1);
+ if (misc_deregister(&ffb_priv->miscdev)) {
+ DRM_ERROR("Cannot unload module\n");
+ } else {
+ DRM_INFO("Module unloaded\n");
+ }
+ ffb_takedown(dev);
+ kfree(dev);
+ ffb_dev_table[instance] = NULL;
+ }
+ kfree(ffb_dev_table);
+ ffb_dev_table = NULL;
+ ffb_dev_table_size = 0;
+}
+
+static int ffb_version(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ drm_version_t version;
+ int len, ret;
+
+ ret = copy_from_user(&version, (drm_version_t *)arg, sizeof(version));
+ if (ret)
+ return -EFAULT;
+
+ version.version_major = FFB_MAJOR;
+ version.version_minor = FFB_MINOR;
+ version.version_patchlevel = FFB_PATCHLEVEL;
+
+ len = strlen(FFB_NAME);
+ if (len > version.name_len)
+ len = version.name_len;
+ version.name_len = len;
+ if (len && version.name) {
+ ret = copy_to_user(version.name, FFB_NAME, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ len = strlen(FFB_DATE);
+ if (len > version.date_len)
+ len = version.date_len;
+ version.date_len = len;
+ if (len && version.date) {
+ ret = copy_to_user(version.date, FFB_DATE, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ len = strlen(FFB_DESC);
+ if (len > version.desc_len)
+ len = version.desc_len;
+ version.desc_len = len;
+ if (len && version.desc) {
+ ret = copy_to_user(version.desc, FFB_DESC, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ ret = copy_to_user((drm_version_t *) arg, &version, sizeof(version));
+ if (ret)
+ ret = -EFAULT;
+
+ return ret;
+}
+
+static int ffb_setup(drm_device_t *dev)
+{
+ int i;
+
+ atomic_set(&dev->ioctl_count, 0);
+ atomic_set(&dev->vma_count, 0);
+ dev->buf_use = 0;
+ atomic_set(&dev->buf_alloc, 0);
+
+ atomic_set(&dev->total_open, 0);
+ atomic_set(&dev->total_close, 0);
+ atomic_set(&dev->total_ioctl, 0);
+ atomic_set(&dev->total_irq, 0);
+ atomic_set(&dev->total_ctx, 0);
+ atomic_set(&dev->total_locks, 0);
+ atomic_set(&dev->total_unlocks, 0);
+ atomic_set(&dev->total_contends, 0);
+ atomic_set(&dev->total_sleeps, 0);
+
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ dev->vmalist = NULL;
+ dev->lock.hw_lock = NULL;
+ init_waitqueue_head(&dev->lock.lock_queue);
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_timer(&dev->timer);
+ init_waitqueue_head(&dev->context_wait);
+
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head(&dev->buf_readers);
+ init_waitqueue_head(&dev->buf_writers);
+
+ return 0;
+}
+
+static int ffb_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev;
+ int minor, i;
+ int ret = 0;
+
+ minor = MINOR(inode->i_rdev);
+ for (i = 0; i < ffb_dev_table_size; i++) {
+ ffb_dev_priv_t *ffb_priv;
+
+ ffb_priv = (ffb_dev_priv_t *) (ffb_dev_table[i] + 1);
+
+ if (ffb_priv->miscdev.minor == minor)
+ break;
+ }
+
+ if (i >= ffb_dev_table_size)
+ return -EINVAL;
+
+ dev = ffb_dev_table[i];
+ if (!dev)
+ return -EINVAL;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ ret = drm_open_helper(inode, filp, dev);
+ if (!ret) {
+ atomic_inc(&dev->total_open);
+ spin_lock(&dev->count_lock);
+ if (!dev->open_count++) {
+ spin_unlock(&dev->count_lock);
+ return ffb_setup(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+
+ return ret;
+}
+
+static int ffb_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int ret = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (dev->lock.hw_lock != NULL
+ && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+ && dev->lock.pid == current->pid) {
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ int context = _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock);
+ int idx;
+
+ /* We have to free up the rogue hw context state
+ * holding error or else we will leak it.
+ */
+ idx = context - 1;
+ if (fpriv->hw_state[idx] != NULL) {
+ kfree(fpriv->hw_state[idx]);
+ fpriv->hw_state[idx] = NULL;
+ }
+ }
+
+ ret = drm_release(inode, filp);
+
+ if (!ret) {
+ atomic_inc(&dev->total_close);
+ spin_lock(&dev->count_lock);
+ if (!--dev->open_count) {
+ if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+ DRM_ERROR("Device busy: %d %d\n",
+ atomic_read(&dev->ioctl_count),
+ dev->blocked);
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock(&dev->count_lock);
+ ret = ffb_takedown(dev);
+ unlock_kernel();
+ return ret;
+ }
+ spin_unlock(&dev->count_lock);
+ }
+
+ unlock_kernel();
+ return ret;
+}
+
+static int ffb_ioctl(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ int nr = DRM_IOCTL_NR(cmd);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+ int ret;
+
+ atomic_inc(&dev->ioctl_count);
+ atomic_inc(&dev->total_ioctl);
+ ++priv->ioctl_count;
+
+ DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+ current->pid, cmd, nr, dev->device, priv->authenticated);
+
+ if (nr >= FFB_IOCTL_COUNT) {
+ ret = -EINVAL;
+ } else {
+ ioctl = &ffb_ioctls[nr];
+ func = ioctl->func;
+
+ if (!func) {
+ DRM_DEBUG("no function\n");
+ ret = -EINVAL;
+ } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+ || (ioctl->auth_needed && !priv->authenticated)) {
+ ret = -EACCES;
+ } else {
+ ret = (func)(inode, filp, cmd, arg);
+ }
+ }
+
+ atomic_dec(&dev->ioctl_count);
+
+ return ret;
+}
+
+static int ffb_lock(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_lock_t lock;
+
+ ret = copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock));
+ if (ret)
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid, dev->lock.hw_lock->lock,
+ lock.flags);
+
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ lock.context)) {
+ dev->lock.pid = current->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ current->state = TASK_INTERRUPTIBLE;
+ current->policy |= SCHED_YIELD;
+ schedule();
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+
+ if (!ret) {
+ sigemptyset(&dev->sigmask);
+ sigaddset(&dev->sigmask, SIGSTOP);
+ sigaddset(&dev->sigmask, SIGTSTP);
+ sigaddset(&dev->sigmask, SIGTTIN);
+ sigaddset(&dev->sigmask, SIGTTOU);
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+ if (dev->last_context != lock.context)
+ ffb_context_switch(dev, dev->last_context, lock.context);
+ }
+
+ DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+
+ return ret;
+}
+
+int ffb_unlock(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+ unsigned int old, new, prev, ctx;
+ int ret;
+
+ ret = copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock));
+ if (ret)
+ return -EFAULT;
+
+ if ((ctx = lock.context) == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d frees lock (%d holds)\n",
+ lock.context,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ atomic_inc(&dev->total_unlocks);
+ if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+ atomic_inc(&dev->total_contends);
+
+ /* We no longer really hold it, but if we are the next
+ * agent to request it then we should just be able to
+ * take it immediately and not eat the ioctl.
+ */
+ dev->lock.pid = 0;
+ {
+ __volatile__ unsigned int *plock = &dev->lock.hw_lock->lock;
+
+ do {
+ old = *plock;
+ new = ctx;
+ prev = cmpxchg(plock, old, new);
+ } while (prev != old);
+ }
+
+ wake_up_interruptible(&dev->lock.lock_queue);
+
+ unblock_all_signals();
+ return 0;
+}
+
+extern struct vm_operations_struct drm_vm_ops;
+extern struct vm_operations_struct drm_vm_shm_ops;
+extern struct vm_operations_struct drm_vm_shm_lock_ops;
+
+static int ffb_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map = NULL;
+ ffb_dev_priv_t *ffb_priv;
+ int i, minor;
+
+ DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n",
+ vma->vm_start, vma->vm_end, VM_OFFSET(vma));
+
+ minor = MINOR(filp->f_dentry->d_inode->i_rdev);
+ ffb_priv = NULL;
+ for (i = 0; i < ffb_dev_table_size; i++) {
+ ffb_priv = (ffb_dev_priv_t *) (ffb_dev_table[i] + 1);
+ if (ffb_priv->miscdev.minor == minor)
+ break;
+ }
+ if (i >= ffb_dev_table_size)
+ return -EINVAL;
+
+ /* We don't support/need dma mappings, so... */
+ if (!VM_OFFSET(vma))
+ return -EINVAL;
+
+ for (i = 0; i < dev->map_count; i++) {
+ unsigned long off;
+
+ map = dev->maplist[i];
+
+ /* Ok, a little hack to make 32-bit apps work. */
+ off = (map->offset & 0xffffffff);
+ if (off == VM_OFFSET(vma))
+ break;
+ }
+
+ if (i >= dev->map_count)
+ return -EINVAL;
+
+ if (!map ||
+ ((map->flags & _DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN)))
+ return -EPERM;
+
+ if (map->size != (vma->vm_end - vma->vm_start))
+ return -EINVAL;
+
+ /* Set read-only attribute before mappings are created
+ * so it works for fb/reg maps too.
+ */
+ if (map->flags & _DRM_READ_ONLY)
+ vma->vm_page_prot = __pgprot(pte_val(pte_wrprotect(
+ __pte(pgprot_val(vma->vm_page_prot)))));
+
+ switch (map->type) {
+ case _DRM_FRAME_BUFFER:
+ /* FALLTHROUGH */
+
+ case _DRM_REGISTERS:
+ /* In order to handle 32-bit drm apps/xserver we
+ * play a trick. The mappings only really specify
+ * the 32-bit offset from the cards 64-bit base
+ * address, and we just add in the base here.
+ */
+ vma->vm_flags |= VM_IO;
+ if (io_remap_page_range(vma->vm_start,
+ ffb_priv->card_phys_base + VM_OFFSET(vma),
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot, 0))
+ return -EAGAIN;
+
+ vma->vm_ops = &drm_vm_ops;
+ break;
+ case _DRM_SHM:
+ if (map->flags & _DRM_CONTAINS_LOCK)
+ vma->vm_ops = &drm_vm_shm_lock_ops;
+ else {
+ vma->vm_ops = &drm_vm_shm_ops;
+ vma->vm_private_data = (void *) map;
+ }
+
+ /* Don't let this area swap. Change when
+ * DRM_KERNEL advisory is supported.
+ */
+ vma->vm_flags |= VM_LOCKED;
+ break;
+ default:
+ return -EINVAL; /* This should never happen. */
+ };
+
+ vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
+
+ vma->vm_file = filp; /* Needed for drm_vm_open() */
+ drm_vm_open(vma);
+ return 0;
+}
+
+static drm_map_t *ffb_find_map(struct file *filp, unsigned long off)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ drm_map_t *map;
+ int i;
+
+ if (!priv || (dev = priv->dev) == NULL)
+ return NULL;
+
+ for (i = 0; i < dev->map_count; i++) {
+ unsigned long uoff;
+
+ map = dev->maplist[i];
+
+ /* Ok, a little hack to make 32-bit apps work. */
+ uoff = (map->offset & 0xffffffff);
+ if (uoff == off)
+ return map;
+ }
+ return NULL;
+}
+
+static unsigned long ffb_get_unmapped_area(struct file *filp, unsigned long hint, unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+ drm_map_t *map = ffb_find_map(filp, pgoff << PAGE_SHIFT);
+ unsigned long addr = -ENOMEM;
+
+ if (!map)
+ return get_unmapped_area(NULL, hint, len, pgoff, flags);
+
+ if (map->type == _DRM_FRAME_BUFFER ||
+ map->type == _DRM_REGISTERS) {
+#ifdef HAVE_ARCH_FB_UNMAPPED_AREA
+ addr = get_fb_unmapped_area(filp, hint, len, pgoff, flags);
+#else
+ addr = get_unmapped_area(NULL, hint, len, pgoff, flags);
+#endif
+ } else if (map->type == _DRM_SHM && SHMLBA > PAGE_SIZE) {
+ unsigned long slack = SHMLBA - PAGE_SIZE;
+
+ addr = get_unmapped_area(NULL, hint, len + slack, pgoff, flags);
+ if (!(addr & ~PAGE_MASK)) {
+ unsigned long kvirt = (unsigned long) map->handle;
+
+ if ((kvirt & (SHMLBA - 1)) != (addr & (SHMLBA - 1))) {
+ unsigned long koff, aoff;
+
+ koff = kvirt & (SHMLBA - 1);
+ aoff = addr & (SHMLBA - 1);
+ if (koff < aoff)
+ koff += SHMLBA;
+
+ addr += (koff - aoff);
+ }
+ }
+ } else {
+ addr = get_unmapped_area(NULL, hint, len, pgoff, flags);
+ }
+
+ return addr;
+}
+
+module_init(ffb_init);
+module_exit(ffb_cleanup);
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.h
--- linux-2.4.13/drivers/char/drm-4.0/ffb_drv.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,276 @@
+/* $Id: ffb_drv.h,v 1.1 2000/06/01 04:24:39 davem Exp $
+ * ffb_drv.h: Creator/Creator3D direct rendering driver.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ */
+
+/* Auxiliary clips. */
+typedef struct {
+ volatile unsigned int min;
+ volatile unsigned int max;
+} ffb_auxclip, *ffb_auxclipPtr;
+
+/* FFB register set. */
+typedef struct _ffb_fbc {
+ /* Next vertex registers, on the right we list which drawops
+ * use said register and the logical name the register has in
+ * that context.
+ */ /* DESCRIPTION DRAWOP(NAME) */
+/*0x00*/unsigned int pad1[3]; /* Reserved */
+/*0x0c*/volatile unsigned int alpha; /* ALPHA Transparency */
+/*0x10*/volatile unsigned int red; /* RED */
+/*0x14*/volatile unsigned int green; /* GREEN */
+/*0x18*/volatile unsigned int blue; /* BLUE */
+/*0x1c*/volatile unsigned int z; /* DEPTH */
+/*0x20*/volatile unsigned int y; /* Y triangle(DOYF) */
+ /* aadot(DYF) */
+ /* ddline(DYF) */
+ /* aaline(DYF) */
+/*0x24*/volatile unsigned int x; /* X triangle(DOXF) */
+ /* aadot(DXF) */
+ /* ddline(DXF) */
+ /* aaline(DXF) */
+/*0x28*/unsigned int pad2[2]; /* Reserved */
+/*0x30*/volatile unsigned int ryf; /* Y (alias to DOYF) ddline(RYF) */
+ /* aaline(RYF) */
+ /* triangle(RYF) */
+/*0x34*/volatile unsigned int rxf; /* X ddline(RXF) */
+ /* aaline(RXF) */
+ /* triangle(RXF) */
+/*0x38*/unsigned int pad3[2]; /* Reserved */
+/*0x40*/volatile unsigned int dmyf; /* Y (alias to DOYF) triangle(DMYF) */
+/*0x44*/volatile unsigned int dmxf; /* X triangle(DMXF) */
+/*0x48*/unsigned int pad4[2]; /* Reserved */
+/*0x50*/volatile unsigned int ebyi; /* Y (alias to RYI) polygon(EBYI) */
+/*0x54*/volatile unsigned int ebxi; /* X polygon(EBXI) */
+/*0x58*/unsigned int pad5[2]; /* Reserved */
+/*0x60*/volatile unsigned int by; /* Y brline(RYI) */
+ /* fastfill(OP) */
+ /* polygon(YI) */
+ /* rectangle(YI) */
+ /* bcopy(SRCY) */
+ /* vscroll(SRCY) */
+/*0x64*/volatile unsigned int bx; /* X brline(RXI) */
+ /* polygon(XI) */
+ /* rectangle(XI) */
+ /* bcopy(SRCX) */
+ /* vscroll(SRCX) */
+ /* fastfill(GO) */
+/*0x68*/volatile unsigned int dy; /* destination Y fastfill(DSTY) */
+ /* bcopy(DSRY) */
+ /* vscroll(DSRY) */
+/*0x6c*/volatile unsigned int dx; /* destination X fastfill(DSTX) */
+ /* bcopy(DSTX) */
+ /* vscroll(DSTX) */
+/*0x70*/volatile unsigned int bh; /* Y (alias to RYI) brline(DYI) */
+ /* dot(DYI) */
+ /* polygon(ETYI) */
+ /* Height fastfill(H) */
+ /* bcopy(H) */
+ /* vscroll(H) */
+ /* Y count fastfill(NY) */
+/*0x74*/volatile unsigned int bw; /* X dot(DXI) */
+ /* brline(DXI) */
+ /* polygon(ETXI) */
+ /* fastfill(W) */
+ /* bcopy(W) */
+ /* vscroll(W) */
+ /* fastfill(NX) */
+/*0x78*/unsigned int pad6[2]; /* Reserved */
+/*0x80*/unsigned int pad7[32]; /* Reserved */
+
+ /* Setup Unit's vertex state register */
+/*100*/ volatile unsigned int suvtx;
+/*104*/ unsigned int pad8[63]; /* Reserved */
+
+ /* Frame Buffer Control Registers */
+/*200*/ volatile unsigned int ppc; /* Pixel Processor Control */
+/*204*/ volatile unsigned int wid; /* Current WID */
+/*208*/ volatile unsigned int fg; /* FG data */
+/*20c*/ volatile unsigned int bg; /* BG data */
+/*210*/ volatile unsigned int consty; /* Constant Y */
+/*214*/ volatile unsigned int constz; /* Constant Z */
+/*218*/ volatile unsigned int xclip; /* X Clip */
+/*21c*/ volatile unsigned int dcss; /* Depth Cue Scale Slope */
+/*220*/ volatile unsigned int vclipmin; /* Viewclip XY Min Bounds */
+/*224*/ volatile unsigned int vclipmax; /* Viewclip XY Max Bounds */
+/*228*/ volatile unsigned int vclipzmin; /* Viewclip Z Min Bounds */
+/*22c*/ volatile unsigned int vclipzmax; /* Viewclip Z Max Bounds */
+/*230*/ volatile unsigned int dcsf; /* Depth Cue Scale Front Bound */
+/*234*/ volatile unsigned int dcsb; /* Depth Cue Scale Back Bound */
+/*238*/ volatile unsigned int dczf; /* Depth Cue Z Front */
+/*23c*/ volatile unsigned int dczb; /* Depth Cue Z Back */
+/*240*/ unsigned int pad9; /* Reserved */
+/*244*/ volatile unsigned int blendc; /* Alpha Blend Control */
+/*248*/ volatile unsigned int blendc1; /* Alpha Blend Color 1 */
+/*24c*/ volatile unsigned int blendc2; /* Alpha Blend Color 2 */
+/*250*/ volatile unsigned int fbramitc; /* FB RAM Interleave Test Control */
+/*254*/ volatile unsigned int fbc; /* Frame Buffer Control */
+/*258*/ volatile unsigned int rop; /* Raster OPeration */
+/*25c*/ volatile unsigned int cmp; /* Frame Buffer Compare */
+/*260*/ volatile unsigned int matchab; /* Buffer AB Match Mask */
+/*264*/ volatile unsigned int matchc; /* Buffer C(YZ) Match Mask */
+/*268*/ volatile unsigned int magnab; /* Buffer AB Magnitude Mask */
+/*26c*/ volatile unsigned int magnc; /* Buffer C(YZ) Magnitude Mask */
+/*270*/ volatile unsigned int fbcfg0; /* Frame Buffer Config 0 */
+/*274*/ volatile unsigned int fbcfg1; /* Frame Buffer Config 1 */
+/*278*/ volatile unsigned int fbcfg2; /* Frame Buffer Config 2 */
+/*27c*/ volatile unsigned int fbcfg3; /* Frame Buffer Config 3 */
+/*280*/ volatile unsigned int ppcfg; /* Pixel Processor Config */
+/*284*/ volatile unsigned int pick; /* Picking Control */
+/*288*/ volatile unsigned int fillmode; /* FillMode */
+/*28c*/ volatile unsigned int fbramwac; /* FB RAM Write Address Control */
+/*290*/ volatile unsigned int pmask; /* RGB PlaneMask */
+/*294*/ volatile unsigned int xpmask; /* X PlaneMask */
+/*298*/ volatile unsigned int ypmask; /* Y PlaneMask */
+/*29c*/ volatile unsigned int zpmask; /* Z PlaneMask */
+/*2a0*/ ffb_auxclip auxclip[4]; /* Auxiliary Viewport Clip */
+
+ /* New 3dRAM III support regs */
+/*2c0*/ volatile unsigned int rawblend2;
+/*2c4*/ volatile unsigned int rawpreblend;
+/*2c8*/ volatile unsigned int rawstencil;
+/*2cc*/ volatile unsigned int rawstencilctl;
+/*2d0*/ volatile unsigned int threedram1;
+/*2d4*/ volatile unsigned int threedram2;
+/*2d8*/ volatile unsigned int passin;
+/*2dc*/ volatile unsigned int rawclrdepth;
+/*2e0*/ volatile unsigned int rawpmask;
+/*2e4*/ volatile unsigned int rawcsrc;
+/*2e8*/ volatile unsigned int rawmatch;
+/*2ec*/ volatile unsigned int rawmagn;
+/*2f0*/ volatile unsigned int rawropblend;
+/*2f4*/ volatile unsigned int rawcmp;
+/*2f8*/ volatile unsigned int rawwac;
+/*2fc*/ volatile unsigned int fbramid;
+
+/*300*/ volatile unsigned int drawop; /* Draw OPeration */
+/*304*/ unsigned int pad10[2]; /* Reserved */
+/*30c*/ volatile unsigned int lpat; /* Line Pattern control */
+/*310*/ unsigned int pad11; /* Reserved */
+/*314*/ volatile unsigned int fontxy; /* XY Font coordinate */
+/*318*/ volatile unsigned int fontw; /* Font Width */
+/*31c*/ volatile unsigned int fontinc; /* Font Increment */
+/*320*/ volatile unsigned int font; /* Font bits */
+/*324*/ unsigned int pad12[3]; /* Reserved */
+/*330*/ volatile unsigned int blend2;
+/*334*/ volatile unsigned int preblend;
+/*338*/ volatile unsigned int stencil;
+/*33c*/ volatile unsigned int stencilctl;
+
+/*340*/ unsigned int pad13[4]; /* Reserved */
+/*350*/ volatile unsigned int dcss1; /* Depth Cue Scale Slope 1 */
+/*354*/ volatile unsigned int dcss2; /* Depth Cue Scale Slope 2 */
+/*358*/ volatile unsigned int dcss3; /* Depth Cue Scale Slope 3 */
+/*35c*/ volatile unsigned int widpmask;
+/*360*/ volatile unsigned int dcs2;
+/*364*/ volatile unsigned int dcs3;
+/*368*/ volatile unsigned int dcs4;
+/*36c*/ unsigned int pad14; /* Reserved */
+/*370*/ volatile unsigned int dcd2;
+/*374*/ volatile unsigned int dcd3;
+/*378*/ volatile unsigned int dcd4;
+/*37c*/ unsigned int pad15; /* Reserved */
+/*380*/ volatile unsigned int pattern[32]; /* area Pattern */
+/*400*/ unsigned int pad16[8]; /* Reserved */
+/*420*/ volatile unsigned int reset; /* chip RESET */
+/*424*/ unsigned int pad17[247]; /* Reserved */
+/*800*/ volatile unsigned int devid; /* Device ID */
+/*804*/ unsigned int pad18[63]; /* Reserved */
+/*900*/ volatile unsigned int ucsr; /* User Control & Status Register */
+/*904*/ unsigned int pad19[31]; /* Reserved */
+/*980*/ volatile unsigned int mer; /* Mode Enable Register */
+/*984*/ unsigned int pad20[1439]; /* Reserved */
+} ffb_fbc, *ffb_fbcPtr;
+
+struct ffb_hw_context {
+ int is_2d_only;
+
+ unsigned int ppc;
+ unsigned int wid;
+ unsigned int fg;
+ unsigned int bg;
+ unsigned int consty;
+ unsigned int constz;
+ unsigned int xclip;
+ unsigned int dcss;
+ unsigned int vclipmin;
+ unsigned int vclipmax;
+ unsigned int vclipzmin;
+ unsigned int vclipzmax;
+ unsigned int dcsf;
+ unsigned int dcsb;
+ unsigned int dczf;
+ unsigned int dczb;
+ unsigned int blendc;
+ unsigned int blendc1;
+ unsigned int blendc2;
+ unsigned int fbc;
+ unsigned int rop;
+ unsigned int cmp;
+ unsigned int matchab;
+ unsigned int matchc;
+ unsigned int magnab;
+ unsigned int magnc;
+ unsigned int pmask;
+ unsigned int xpmask;
+ unsigned int ypmask;
+ unsigned int zpmask;
+ unsigned int auxclip0min;
+ unsigned int auxclip0max;
+ unsigned int auxclip1min;
+ unsigned int auxclip1max;
+ unsigned int auxclip2min;
+ unsigned int auxclip2max;
+ unsigned int auxclip3min;
+ unsigned int auxclip3max;
+ unsigned int drawop;
+ unsigned int lpat;
+ unsigned int fontxy;
+ unsigned int fontw;
+ unsigned int fontinc;
+ unsigned int area_pattern[32];
+ unsigned int ucsr;
+ unsigned int stencil;
+ unsigned int stencilctl;
+ unsigned int dcss1;
+ unsigned int dcss2;
+ unsigned int dcss3;
+ unsigned int dcs2;
+ unsigned int dcs3;
+ unsigned int dcs4;
+ unsigned int dcd2;
+ unsigned int dcd3;
+ unsigned int dcd4;
+ unsigned int mer;
+};
+
+#define FFB_MAX_CTXS 32
+
+enum ffb_chip_type {
+ ffb1_prototype = 0, /* Early pre-FCS FFB */
ffb1_standard, /* First FCS FFB, 100MHz UPA, 66MHz gclk */
ffb1_speedsort, /* Second FCS FFB, 100MHz UPA, 75MHz gclk */
ffb2_prototype, /* Early pre-FCS vertical FFB2 */
ffb2_vertical, /* First FCS FFB2/vertical, 100MHz UPA, 100MHz gclk,
+ 75(SingleBuffer)/83(DoubleBuffer) MHz fclk */
+ ffb2_vertical_plus, /* Second FCS FFB2/vertical, same timings */
+ ffb2_horizontal, /* First FCS FFB2/horizontal, same timings as FFB2/vert */
+ ffb2_horizontal_plus, /* Second FCS FFB2/horizontal, same timings */
+ afb_m3, /* FCS Elite3D, 3 float chips */
+ afb_m6 /* FCS Elite3D, 6 float chips */
+};
+
+typedef struct ffb_dev_priv {
+ /* Misc software state. */
+ int prom_node;
+ enum ffb_chip_type ffb_type;
+ u64 card_phys_base;
+ struct miscdevice miscdev;
+
+ /* Controller registers. */
+ ffb_fbcPtr regs;
+
+ /* Context table. */
+ struct ffb_hw_context *hw_state[FFB_MAX_CTXS];
+} ffb_dev_priv_t;
diff -urN linux-2.4.13/drivers/char/drm-4.0/fops.c linux-2.4.13-lia/drivers/char/drm-4.0/fops.c
--- linux-2.4.13/drivers/char/drm-4.0/fops.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/fops.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,253 @@
+/* fops.c -- File operations for DRM -*- linux-c -*-
+ * Created: Mon Jan 4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ * Daryll Strauss <daryll@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include <linux/poll.h>
+
+/* drm_open is called whenever a process opens /dev/drm. */
+
+int drm_open_helper(struct inode *inode, struct file *filp, drm_device_t *dev)
+{
+ kdev_t minor = MINOR(inode->i_rdev);
+ drm_file_t *priv;
+
+ if (filp->f_flags & O_EXCL) return -EBUSY; /* No exclusive opens */
+ if (!drm_cpu_valid()) return -EINVAL;
+
+ DRM_DEBUG("pid = %d, minor = %d\n", current->pid, minor);
+
+ priv = drm_alloc(sizeof(*priv), DRM_MEM_FILES);
+ if (priv == NULL)
+ return -ENOMEM;
+ memset(priv, 0, sizeof(*priv));
+
+ filp->private_data = priv;
+ priv->uid = current->euid;
+ priv->pid = current->pid;
+ priv->minor = minor;
+ priv->dev = dev;
+ priv->ioctl_count = 0;
+ priv->authenticated = capable(CAP_SYS_ADMIN);
+
+ down(&dev->struct_sem);
+ if (!dev->file_last) {
+ priv->next = NULL;
+ priv->prev = NULL;
+ dev->file_first = priv;
+ dev->file_last = priv;
+ } else {
+ priv->next = NULL;
+ priv->prev = dev->file_last;
+ dev->file_last->next = priv;
+ dev->file_last = priv;
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+int drm_flush(struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+ current->pid, dev->device, dev->open_count);
+ return 0;
+}
+
+/* drm_release is called whenever a process closes /dev/drm*. Linux calls
+ this only if any mappings have been closed. */
+
+int drm_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+ current->pid, dev->device, dev->open_count);
+
+ if (dev->lock.hw_lock
+ && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+ && dev->lock.pid == current->pid) {
+ DRM_ERROR("Process %d dead, freeing lock for context %d\n",
+ current->pid,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ drm_lock_free(dev,
+ &dev->lock.hw_lock->lock,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+
+ /* FIXME: may require heavy-handed reset of
+ hardware at this point, possibly
+ processed via a callback to the X
+ server. */
+ }
+ drm_reclaim_buffers(dev, priv->pid);
+
+ drm_fasync(-1, filp, 0);
+
+ down(&dev->struct_sem);
+ if (priv->prev) priv->prev->next = priv->next;
+ else dev->file_first = priv->next;
+ if (priv->next) priv->next->prev = priv->prev;
+ else dev->file_last = priv->prev;
+ up(&dev->struct_sem);
+
+ drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+
+ return 0;
+}
+
+int drm_fasync(int fd, struct file *filp, int on)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode;
+
+ DRM_DEBUG("fd = %d, device = 0x%x\n", fd, dev->device);
+ retcode = fasync_helper(fd, filp, on, &dev->buf_async);
+ if (retcode < 0) return retcode;
+ return 0;
+}
+
+
+/* The drm_read and drm_write_string code (especially that which manages
+ the circular buffer), is based on Alessandro Rubini's LINUX DEVICE
+ DRIVERS (Cambridge: O'Reilly, 1998), pages 111-113. */
+
+ssize_t drm_read(struct file *filp, char *buf, size_t count, loff_t *off)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int left;
+ int avail;
+ int send;
+ int cur;
+
+ DRM_DEBUG("%p, %p\n", dev->buf_rp, dev->buf_wp);
+
+ while (dev->buf_rp == dev->buf_wp) {
+ DRM_DEBUG(" sleeping\n");
+ if (filp->f_flags & O_NONBLOCK) {
+ return -EAGAIN;
+ }
+ interruptible_sleep_on(&dev->buf_readers);
+ if (signal_pending(current)) {
+ DRM_DEBUG(" interrupted\n");
+ return -ERESTARTSYS;
+ }
+ DRM_DEBUG(" awake\n");
+ }
+
+ left = (dev->buf_rp + DRM_BSZ - dev->buf_wp) % DRM_BSZ;
+ avail = DRM_BSZ - left;
+ send = DRM_MIN(avail, count);
+
+ while (send) {
+ if (dev->buf_wp > dev->buf_rp) {
+ cur = DRM_MIN(send, dev->buf_wp - dev->buf_rp);
+ } else {
+ cur = DRM_MIN(send, dev->buf_end - dev->buf_rp);
+ }
+ if (copy_to_user(buf, dev->buf_rp, cur))
+ return -EFAULT;
+ dev->buf_rp += cur;
+ if (dev->buf_rp == dev->buf_end) dev->buf_rp = dev->buf;
+ send -= cur;
+ }
+
+ wake_up_interruptible(&dev->buf_writers);
+ return DRM_MIN(avail, count);
+}
+
+int drm_write_string(drm_device_t *dev, const char *s)
+{
+ int left = (dev->buf_rp + DRM_BSZ - dev->buf_wp) % DRM_BSZ;
+ int send = strlen(s);
+ int count;
+
+ DRM_DEBUG("%d left, %d to send (%p, %p)\n",
+ left, send, dev->buf_rp, dev->buf_wp);
+
+ if (left == 1 || dev->buf_wp != dev->buf_rp) {
+ DRM_ERROR("Buffer not empty (%d left, wp = %p, rp = %p)\n",
+ left,
+ dev->buf_wp,
+ dev->buf_rp);
+ }
+
+ while (send) {
+ if (dev->buf_wp >= dev->buf_rp) {
+ count = DRM_MIN(send, dev->buf_end - dev->buf_wp);
+ if (count == left) --count; /* Leave a hole */
+ } else {
+ count = DRM_MIN(send, dev->buf_rp - dev->buf_wp - 1);
+ }
+ strncpy(dev->buf_wp, s, count);
+ dev->buf_wp += count;
+ if (dev->buf_wp == dev->buf_end) dev->buf_wp = dev->buf;
+ send -= count;
+ }
+
+#if LINUX_VERSION_CODE < 0x020315 && !defined(KILLFASYNCHASTHREEPARAMETERS)
+ /* The extra parameter to kill_fasync was added in 2.3.21, and is
+ _not_ present in _stock_ 2.2.14 and 2.2.15. However, some
+ distributions patch 2.2.x kernels to add this parameter. The
+ Makefile.linux attempts to detect this addition and defines
+ KILLFASYNCHASTHREEPARAMETERS if three parameters are found. */
+ if (dev->buf_async) kill_fasync(dev->buf_async, SIGIO);
+#else
+
+ /* Parameter added in 2.3.21. */
+#if LINUX_VERSION_CODE < 0x020400
+ if (dev->buf_async) kill_fasync(dev->buf_async, SIGIO, POLL_IN);
+#else
+ /* Type of first parameter changed in
+ Linux 2.4.0-test2... */
+ if (dev->buf_async) kill_fasync(&dev->buf_async, SIGIO, POLL_IN);
+#endif
+#endif
+ DRM_DEBUG("waking\n");
+ wake_up_interruptible(&dev->buf_readers);
+ return 0;
+}
+
+unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ poll_wait(filp, &dev->buf_readers, wait);
+ if (dev->buf_wp != dev->buf_rp) return POLLIN | POLLRDNORM;
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/gamma_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/gamma_dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,836 @@
+/* gamma_dma.c -- DMA support for GMX 2000 -*- linux-c -*-
+ * Created: Fri Mar 19 14:30:16 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "gamma_drv.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+
+/* WARNING!!! MAGIC NUMBER!!! The number of regions already added to the
+ kernel must be specified here. Currently, the number is 2. This must
+ match the order the X server uses for instantiating register regions ,
+ or must be passed in a new ioctl. */
+#define GAMMA_REG(reg) \
+ (2 \
+ + ((reg < 0x1000) \
+ ? 0 \
+ : ((reg < 0x10000) ? 1 : ((reg < 0x11000) ? 2 : 3))))
+
+#define GAMMA_OFF(reg) \
+ ((reg < 0x1000) \
+ ? reg \
+ : ((reg < 0x10000) \
+ ? (reg - 0x1000) \
+ : ((reg < 0x11000) \
+ ? (reg - 0x10000) \
+ : (reg - 0x11000))))
+
+#define GAMMA_BASE(reg) ((unsigned long)dev->maplist[GAMMA_REG(reg)]->handle)
+#define GAMMA_ADDR(reg) (GAMMA_BASE(reg) + GAMMA_OFF(reg))
+#define GAMMA_DEREF(reg) *(__volatile__ int *)GAMMA_ADDR(reg)
+#define GAMMA_READ(reg) GAMMA_DEREF(reg)
+#define GAMMA_WRITE(reg,val) do { GAMMA_DEREF(reg) = val; } while (0)
+
+#define GAMMA_BROADCASTMASK 0x9378
+#define GAMMA_COMMANDINTENABLE 0x0c48
+#define GAMMA_DMAADDRESS 0x0028
+#define GAMMA_DMACOUNT 0x0030
+#define GAMMA_FILTERMODE 0x8c00
+#define GAMMA_GCOMMANDINTFLAGS 0x0c50
+#define GAMMA_GCOMMANDMODE 0x0c40
+#define GAMMA_GCOMMANDSTATUS 0x0c60
+#define GAMMA_GDELAYTIMER 0x0c38
+#define GAMMA_GDMACONTROL 0x0060
+#define GAMMA_GINTENABLE 0x0808
+#define GAMMA_GINTFLAGS 0x0810
+#define GAMMA_INFIFOSPACE 0x0018
+#define GAMMA_OUTFIFOWORDS 0x0020
+#define GAMMA_OUTPUTFIFO 0x2000
+#define GAMMA_SYNC 0x8c40
+#define GAMMA_SYNC_TAG 0x0188
+
+static inline void gamma_dma_dispatch(drm_device_t *dev, unsigned long address,
+ unsigned long length)
+{
+ GAMMA_WRITE(GAMMA_DMAADDRESS, virt_to_phys((void *)address));
+ while (GAMMA_READ(GAMMA_GCOMMANDSTATUS) != 4)
+ ;
+ GAMMA_WRITE(GAMMA_DMACOUNT, length / 4);
+}
+
+static inline void gamma_dma_quiescent_single(drm_device_t *dev)
+{
+ while (GAMMA_READ(GAMMA_DMACOUNT))
+ ;
+ while (GAMMA_READ(GAMMA_INFIFOSPACE) < 3)
+ ;
+
+ GAMMA_WRITE(GAMMA_FILTERMODE, 1 << 10);
+ GAMMA_WRITE(GAMMA_SYNC, 0);
+
+ do {
+ while (!GAMMA_READ(GAMMA_OUTFIFOWORDS))
+ ;
+ } while (GAMMA_READ(GAMMA_OUTPUTFIFO) != GAMMA_SYNC_TAG);
+}
+
+static inline void gamma_dma_quiescent_dual(drm_device_t *dev)
+{
+ while (GAMMA_READ(GAMMA_DMACOUNT))
+ ;
+ while (GAMMA_READ(GAMMA_INFIFOSPACE) < 3)
+ ;
+
+ GAMMA_WRITE(GAMMA_BROADCASTMASK, 3);
+
+ GAMMA_WRITE(GAMMA_FILTERMODE, 1 << 10);
+ GAMMA_WRITE(GAMMA_SYNC, 0);
+
+ /* Read from first MX */
+ do {
+ while (!GAMMA_READ(GAMMA_OUTFIFOWORDS))
+ ;
+ } while (GAMMA_READ(GAMMA_OUTPUTFIFO) != GAMMA_SYNC_TAG);
+
+ /* Read from second MX */
+ do {
+ while (!GAMMA_READ(GAMMA_OUTFIFOWORDS + 0x10000))
+ ;
+ } while (GAMMA_READ(GAMMA_OUTPUTFIFO + 0x10000) != GAMMA_SYNC_TAG);
+}
+
+static inline void gamma_dma_ready(drm_device_t *dev)
+{
+ while (GAMMA_READ(GAMMA_DMACOUNT))
+ ;
+}
+
+static inline int gamma_dma_is_ready(drm_device_t *dev)
+{
+ return !GAMMA_READ(GAMMA_DMACOUNT);
+}
+
+static void gamma_dma_service(int irq, void *device, struct pt_regs *regs)
+{
+ drm_device_t *dev = (drm_device_t *)device;
+ drm_device_dma_t *dma = dev->dma;
+
+ atomic_inc(&dev->total_irq);
+ GAMMA_WRITE(GAMMA_GDELAYTIMER, 0xc350/2); /* 0x05S */
+ GAMMA_WRITE(GAMMA_GCOMMANDINTFLAGS, 8);
+ GAMMA_WRITE(GAMMA_GINTFLAGS, 0x2001);
+ if (gamma_dma_is_ready(dev)) {
+ /* Free previous buffer */
+ if (test_and_set_bit(0, &dev->dma_flag)) {
+ atomic_inc(&dma->total_missed_free);
+ return;
+ }
+ if (dma->this_buffer) {
+ drm_free_buffer(dev, dma->this_buffer);
+ dma->this_buffer = NULL;
+ }
+ clear_bit(0, &dev->dma_flag);
+
+ /* Dispatch new buffer */
+ queue_task(&dev->tq, &tq_immediate);
+ mark_bh(IMMEDIATE_BH);
+ }
+}
+
+/* Only called by gamma_dma_schedule. */
+static int gamma_do_dma(drm_device_t *dev, int locked)
+{
+ unsigned long address;
+ unsigned long length;
+ drm_buf_t *buf;
+ int retcode = 0;
+ drm_device_dma_t *dma = dev->dma;
+#if DRM_DMA_HISTOGRAM
+ cycles_t dma_start, dma_stop;
+#endif
+
+ if (test_and_set_bit(0, &dev->dma_flag)) {
+ atomic_inc(&dma->total_missed_dma);
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dma_start = get_cycles();
+#endif
+
+ if (!dma->next_buffer) {
+ DRM_ERROR("No next_buffer\n");
+ clear_bit(0, &dev->dma_flag);
+ return -EINVAL;
+ }
+
+ buf = dma->next_buffer;
+ address = (unsigned long)buf->address;
+ length = buf->used;
+
+ DRM_DEBUG("context %d, buffer %d (%ld bytes)\n",
+ buf->context, buf->idx, length);
+
+ if (buf->list == DRM_LIST_RECLAIM) {
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ clear_bit(0, &dev->dma_flag);
+ return -EINVAL;
+ }
+
+ if (!length) {
+ DRM_ERROR("0 length buffer\n");
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ clear_bit(0, &dev->dma_flag);
+ return 0;
+ }
+
+ if (!gamma_dma_is_ready(dev)) {
+ clear_bit(0, &dev->dma_flag);
+ return -EBUSY;
+ }
+
+ if (buf->while_locked) {
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Dispatching buffer %d from pid %d"
+ " \"while locked\", but no lock held\n",
+ buf->idx, buf->pid);
+ }
+ } else {
+ if (!locked && !drm_lock_take(&dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ atomic_inc(&dma->total_missed_lock);
+ clear_bit(0, &dev->dma_flag);
+ return -EBUSY;
+ }
+ }
+
+ if (dev->last_context != buf->context
+ && !(dev->queuelist[buf->context]->flags
+ & _DRM_CONTEXT_PRESERVED)) {
+ /* PRE: dev->last_context != buf->context */
+ if (drm_context_switch(dev, dev->last_context, buf->context)) {
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ }
+ retcode = -EBUSY;
+ goto cleanup;
+
+ /* POST: we will wait for the context
+ switch and will dispatch on a later call
+ when dev->last_context == buf->context.
+ NOTE WE HOLD THE LOCK THROUGHOUT THIS
+ TIME! */
+ }
+
+ drm_clear_next_buffer(dev);
+ buf->pending = 1;
+ buf->waiting = 0;
+ buf->list = DRM_LIST_PEND;
+#if DRM_DMA_HISTOGRAM
+ buf->time_dispatched = get_cycles();
+#endif
+
+ gamma_dma_dispatch(dev, address, length);
+ drm_free_buffer(dev, dma->this_buffer);
+ dma->this_buffer = buf;
+
+ atomic_add(length, &dma->total_bytes);
+ atomic_inc(&dma->total_dmas);
+
+ if (!buf->while_locked && !dev->context_flag && !locked) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+cleanup:
+
+ clear_bit(0, &dev->dma_flag);
+
+#if DRM_DMA_HISTOGRAM
+ dma_stop = get_cycles();
+ atomic_inc(&dev->histo.dma[drm_histogram_slot(dma_stop - dma_start)]);
+#endif
+
+ return retcode;
+}
+
+static void gamma_dma_schedule_timer_wrapper(unsigned long dev)
+{
+ gamma_dma_schedule((drm_device_t *)dev, 0);
+}
+
+static void gamma_dma_schedule_tq_wrapper(void *dev)
+{
+ gamma_dma_schedule(dev, 0);
+}
+
+int gamma_dma_schedule(drm_device_t *dev, int locked)
+{
+ int next;
+ drm_queue_t *q;
+ drm_buf_t *buf;
+ int retcode = 0;
+ int processed = 0;
+ int missed;
+ int expire = 20;
+ drm_device_dma_t *dma = dev->dma;
+#if DRM_DMA_HISTOGRAM
+ cycles_t schedule_start;
+#endif
+
+ if (test_and_set_bit(0, &dev->interrupt_flag)) {
+ /* Not reentrant */
+ atomic_inc(&dma->total_missed_sched);
+ return -EBUSY;
+ }
+ missed = atomic_read(&dma->total_missed_sched);
+
+#if DRM_DMA_HISTOGRAM
+ schedule_start = get_cycles();
+#endif
+
+again:
+ if (dev->context_flag) {
+ clear_bit(0, &dev->interrupt_flag);
+ return -EBUSY;
+ }
+ if (dma->next_buffer) {
+ /* Unsent buffer that was previously
+ selected, but that couldn't be sent
+ because the lock could not be obtained
+ or the DMA engine wasn't ready. Try
+ again. */
+ atomic_inc(&dma->total_tried);
+ if (!(retcode = gamma_do_dma(dev, locked))) {
+ atomic_inc(&dma->total_hit);
+ ++processed;
+ }
+ } else {
+ do {
+ next = drm_select_queue(dev,
+ gamma_dma_schedule_timer_wrapper);
+ if (next >= 0) {
+ q = dev->queuelist[next];
+ buf = drm_waitlist_get(&q->waitlist);
+ dma->next_buffer = buf;
+ dma->next_queue = q;
+ if (buf && buf->list == DRM_LIST_RECLAIM) {
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ }
+ }
+ } while (next >= 0 && !dma->next_buffer);
+ if (dma->next_buffer) {
+ if (!(retcode = gamma_do_dma(dev, locked))) {
+ ++processed;
+ }
+ }
+ }
+
+ if (--expire) {
+ if (missed != atomic_read(&dma->total_missed_sched)) {
+ atomic_inc(&dma->total_lost);
+ if (gamma_dma_is_ready(dev)) goto again;
+ }
+ if (processed && gamma_dma_is_ready(dev)) {
+ atomic_inc(&dma->total_lost);
+ processed = 0;
+ goto again;
+ }
+ }
+
+ clear_bit(0, &dev->interrupt_flag);
+
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.schedule[drm_histogram_slot(get_cycles()
+ - schedule_start)]);
+#endif
+ return retcode;
+}
+
+static int gamma_dma_priority(drm_device_t *dev, drm_dma_t *d)
+{
+ unsigned long address;
+ unsigned long length;
+ int must_free = 0;
+ int retcode = 0;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+ drm_buf_t *last_buf = NULL;
+ drm_device_dma_t *dma = dev->dma;
+ DECLARE_WAITQUEUE(entry, current);
+
+ /* Turn off interrupt handling */
+ while (test_and_set_bit(0, &dev->interrupt_flag)) {
+ schedule();
+ if (signal_pending(current)) return -EINTR;
+ }
+ if (!(d->flags & _DRM_DMA_WHILE_LOCKED)) {
+ while (!drm_lock_take(&dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ schedule();
+ if (signal_pending(current)) {
+ clear_bit(0, &dev->interrupt_flag);
+ return -EINTR;
+ }
+ }
+ ++must_free;
+ }
+ atomic_inc(&dma->total_prio);
+
+ for (i = 0; i < d->send_count; i++) {
+ idx = d->send_indices[i];
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ d->send_indices[i], dma->buf_count - 1);
+ continue;
+ }
+ buf = dma->buflist[ idx ];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d using buffer owned by %d\n",
+ current->pid, buf->pid);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ if (buf->list != DRM_LIST_NONE) {
+ DRM_ERROR("Process %d using %d's buffer on list %d\n",
+ current->pid, buf->pid, buf->list);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ /* This isn't a race condition on
+ buf->list, since our concern is the
+ buffer reclaim during the time the
+ process closes the /dev/drm? handle, so
+ it can't also be doing DMA. */
+ buf->list = DRM_LIST_PRIO;
+ buf->used = d->send_sizes[i];
+ buf->context = d->context;
+ buf->while_locked = d->flags & _DRM_DMA_WHILE_LOCKED;
+ address = (unsigned long)buf->address;
+ length = buf->used;
+ if (!length) {
+ DRM_ERROR("0 length buffer\n");
+ }
+ if (buf->pending) {
+ DRM_ERROR("Sending pending buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ if (buf->waiting) {
+ DRM_ERROR("Sending waiting buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ buf->pending = 1;
+
+ if (dev->last_context != buf->context
+ && !(dev->queuelist[buf->context]->flags
+ & _DRM_CONTEXT_PRESERVED)) {
+ add_wait_queue(&dev->context_wait, &entry);
+ current->state = TASK_INTERRUPTIBLE;
+ /* PRE: dev->last_context != buf->context */
+ drm_context_switch(dev, dev->last_context,
+ buf->context);
+ /* POST: we will wait for the context
+ switch and will dispatch on a later call
+ when dev->last_context == buf->context.
+ NOTE WE HOLD THE LOCK THROUGHOUT THIS
+ TIME! */
+ schedule();
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->context_wait, &entry);
+ if (signal_pending(current)) {
+ retcode = -EINTR;
+ goto cleanup;
+ }
+ if (dev->last_context != buf->context) {
+ DRM_ERROR("Context mismatch: %d %d\n",
+ dev->last_context,
+ buf->context);
+ }
+ }
+
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = get_cycles();
+ buf->time_dispatched = buf->time_queued;
+#endif
+ gamma_dma_dispatch(dev, address, length);
+ atomic_add(length, &dma->total_bytes);
+ atomic_inc(&dma->total_dmas);
+
+ if (last_buf) {
+ drm_free_buffer(dev, last_buf);
+ }
+ last_buf = buf;
+ }
+
+
+cleanup:
+ if (last_buf) {
+ gamma_dma_ready(dev);
+ drm_free_buffer(dev, last_buf);
+ }
+
+ if (must_free && !dev->context_flag) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+ clear_bit(0, &dev->interrupt_flag);
+ return retcode;
+}
+
+static int gamma_dma_send_buffers(drm_device_t *dev, drm_dma_t *d)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ drm_buf_t *last_buf = NULL;
+ int retcode = 0;
+ drm_device_dma_t *dma = dev->dma;
+
+ if (d->flags & _DRM_DMA_BLOCK) {
+ last_buf = dma->buflist[d->send_indices[d->send_count-1]];
+ add_wait_queue(&last_buf->dma_wait, &entry);
+ }
+
+ if ((retcode = drm_dma_enqueue(dev, d))) {
+ if (d->flags & _DRM_DMA_BLOCK)
+ remove_wait_queue(&last_buf->dma_wait, &entry);
+ return retcode;
+ }
+
+ gamma_dma_schedule(dev, 0);
+
+ if (d->flags & _DRM_DMA_BLOCK) {
+ DRM_DEBUG("%d waiting\n", current->pid);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!last_buf->waiting && !last_buf->pending)
+ break; /* finished */
+ schedule();
+ if (signal_pending(current)) {
+ retcode = -EINTR; /* Can't restart */
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ DRM_DEBUG("%d running\n", current->pid);
+ remove_wait_queue(&last_buf->dma_wait, &entry);
+ if (!retcode
+ || (last_buf->list == DRM_LIST_PEND && !last_buf->pending)) {
+ if (!waitqueue_active(&last_buf->dma_wait)) {
+ drm_free_buffer(dev, last_buf);
+ }
+ }
+ if (retcode) {
+ DRM_ERROR("ctx%d w%d p%d c%d i%d l%d %d/%d\n",
+ d->context,
+ last_buf->waiting,
+ last_buf->pending,
+ DRM_WAITCOUNT(dev, d->context),
+ last_buf->idx,
+ last_buf->list,
+ last_buf->pid,
+ current->pid);
+ }
+ }
+ return retcode;
+}
+
+int gamma_dma(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ int retcode = 0;
+ drm_dma_t d;
+
+ if (copy_from_user(&d, (drm_dma_t *)arg, sizeof(d)))
+ return -EFAULT;
+ DRM_DEBUG("%d %d: %d send, %d req\n",
+ current->pid, d.context, d.send_count, d.request_count);
+
+ if (d.context == DRM_KERNEL_CONTEXT || d.context >= dev->queue_slots) {
+ DRM_ERROR("Process %d using context %d\n",
+ current->pid, d.context);
+ return -EINVAL;
+ }
+ if (d.send_count < 0 || d.send_count > dma->buf_count) {
+ DRM_ERROR("Process %d trying to send %d buffers (of %d max)\n",
+ current->pid, d.send_count, dma->buf_count);
+ return -EINVAL;
+ }
+ if (d.request_count < 0 || d.request_count > dma->buf_count) {
+ DRM_ERROR("Process %d trying to get %d buffers (of %d max)\n",
+ current->pid, d.request_count, dma->buf_count);
+ return -EINVAL;
+ }
+
+ if (d.send_count) {
+ if (d.flags & _DRM_DMA_PRIORITY)
+ retcode = gamma_dma_priority(dev, &d);
+ else
+ retcode = gamma_dma_send_buffers(dev, &d);
+ }
+
+ d.granted_count = 0;
+
+ if (!retcode && d.request_count) {
+ retcode = drm_dma_get_buffers(dev, &d);
+ }
+
+ DRM_DEBUG("%d returning, granted = %d\n",
+ current->pid, d.granted_count);
+ if (copy_to_user((drm_dma_t *)arg, &d, sizeof(d)))
+ return -EFAULT;
+
+ return retcode;
+}
+
+int gamma_irq_install(drm_device_t *dev, int irq)
+{
+ int retcode;
+
+ if (!irq) return -EINVAL;
+
+ down(&dev->struct_sem);
+ if (dev->irq) {
+ up(&dev->struct_sem);
+ return -EBUSY;
+ }
+ dev->irq = irq;
+ up(&dev->struct_sem);
+
+ DRM_DEBUG("%d\n", irq);
+
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+
+ dev->dma->next_buffer = NULL;
+ dev->dma->next_queue = NULL;
+ dev->dma->this_buffer = NULL;
+
+ INIT_LIST_HEAD(&dev->tq.list);
+ dev->tq.sync = 0;
+ dev->tq.routine = gamma_dma_schedule_tq_wrapper;
+ dev->tq.data = dev;
+
+
+ /* Before installing handler */
+ GAMMA_WRITE(GAMMA_GCOMMANDMODE, 0);
+ GAMMA_WRITE(GAMMA_GDMACONTROL, 0);
+
+ /* Install handler */
+ if ((retcode = request_irq(dev->irq,
+ gamma_dma_service,
+ 0,
+ dev->devname,
+ dev))) {
+ down(&dev->struct_sem);
+ dev->irq = 0;
+ up(&dev->struct_sem);
+ return retcode;
+ }
+
+ /* After installing handler */
+ GAMMA_WRITE(GAMMA_GINTENABLE, 0x2001);
+ GAMMA_WRITE(GAMMA_COMMANDINTENABLE, 0x0008);
+ GAMMA_WRITE(GAMMA_GDELAYTIMER, 0x39090);
+
+ return 0;
+}
+
+int gamma_irq_uninstall(drm_device_t *dev)
+{
+ int irq;
+
+ down(&dev->struct_sem);
+ irq = dev->irq;
+ dev->irq = 0;
+ up(&dev->struct_sem);
+
+ if (!irq) return -EINVAL;
+
+ DRM_DEBUG("%d\n", irq);
+
+ GAMMA_WRITE(GAMMA_GDELAYTIMER, 0);
+ GAMMA_WRITE(GAMMA_COMMANDINTENABLE, 0);
+ GAMMA_WRITE(GAMMA_GINTENABLE, 0);
+ free_irq(irq, dev);
+
+ return 0;
+}
+
+
+int gamma_control(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+ int retcode;
+
+ if (copy_from_user(&ctl, (drm_control_t *)arg, sizeof(ctl)))
+ return -EFAULT;
+
+ switch (ctl.func) {
+ case DRM_INST_HANDLER:
+ if ((retcode = gamma_irq_install(dev, ctl.irq)))
+ return retcode;
+ break;
+ case DRM_UNINST_HANDLER:
+ if ((retcode = gamma_irq_uninstall(dev)))
+ return retcode;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+int gamma_lock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_lock_t lock;
+ drm_queue_t *q;
+#if DRM_DMA_HISTOGRAM
+ cycles_t start;
+
+ dev->lck_start = start = get_cycles();
+#endif
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid, dev->lock.hw_lock->lock,
+ lock.flags);
+
+ if (lock.context < 0 || lock.context >= dev->queue_count)
+ return -EINVAL;
+ q = dev->queuelist[lock.context];
+
+ ret = drm_flush_block_and_flush(dev, lock.context, lock.flags);
+
+ if (!ret) {
+ if (_DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock)
+ != lock.context) {
+ long j = jiffies - dev->lock.lock_time;
+
+ if (j > 0 && j <= DRM_LOCK_SLICE) {
+ /* Can't take lock if we just had it and
+ there is contention. */
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(j);
+ }
+ }
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ lock.context)) {
+ dev->lock.pid = current->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ atomic_inc(&q->total_locks);
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ schedule();
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+ }
+
+ drm_flush_unblock(dev, lock.context, lock.flags); /* cleanup phase */
+
+ if (!ret) {
+ sigemptyset(&dev->sigmask);
+ sigaddset(&dev->sigmask, SIGSTOP);
+ sigaddset(&dev->sigmask, SIGTSTP);
+ sigaddset(&dev->sigmask, SIGTTIN);
+ sigaddset(&dev->sigmask, SIGTTOU);
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+ if (lock.flags & _DRM_LOCK_READY)
+ gamma_dma_ready(dev);
+ if (lock.flags & _DRM_LOCK_QUIESCENT) {
+ if (gamma_found() == 1) {
+ gamma_dma_quiescent_single(dev);
+ } else {
+ gamma_dma_quiescent_dual(dev);
+ }
+ }
+ }
+ DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.lacq[drm_histogram_slot(get_cycles() - start)]);
+#endif
+
+ return ret;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/gamma_drv.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,571 @@
+/* gamma.c -- 3dlabs GMX 2000 driver -*- linux-c -*-
+ * Created: Mon Jan 4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#include <linux/config.h>
+#include "drmP.h"
+#include "gamma_drv.h"
+
+#ifndef PCI_DEVICE_ID_3DLABS_GAMMA
+#define PCI_DEVICE_ID_3DLABS_GAMMA 0x0008
+#endif
+#ifndef PCI_DEVICE_ID_3DLABS_MX
+#define PCI_DEVICE_ID_3DLABS_MX 0x0006
+#endif
+
+#define GAMMA_NAME "gamma"
+#define GAMMA_DESC "3dlabs GMX 2000"
+#define GAMMA_DATE "20000910"
+#define GAMMA_MAJOR 1
+#define GAMMA_MINOR 0
+#define GAMMA_PATCHLEVEL 0
+
+static drm_device_t gamma_device;
+
+static struct file_operations gamma_fops = {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* This started being used during 2.4.0-test */
+ owner: THIS_MODULE,
+#endif
+ open: gamma_open,
+ flush: drm_flush,
+ release: gamma_release,
+ ioctl: gamma_ioctl,
+ mmap: drm_mmap,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+};
+
+static struct miscdevice gamma_misc = {
+ minor: MISC_DYNAMIC_MINOR,
+ name: GAMMA_NAME,
+ fops: &gamma_fops,
+};
+
+static drm_ioctl_desc_t gamma_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { gamma_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_CONTROL)] = { gamma_control, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)] = { drm_addbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)] = { drm_markbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)] = { drm_infobufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MAP_BUFS)] = { drm_mapbufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)] = { drm_freebufs, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { drm_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { drm_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { drm_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { drm_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { drm_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { drm_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { drm_resctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_DMA)] = { gamma_dma, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { gamma_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { gamma_unlock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 },
+};
+#define GAMMA_IOCTL_COUNT DRM_ARRAY_SIZE(gamma_ioctls)
+
+#ifdef MODULE
+static char *gamma = NULL;
+#endif
+static int devices = 0;
+
+MODULE_AUTHOR("VA Linux Systems, Inc.");
+MODULE_DESCRIPTION("3dlabs GMX 2000");
+MODULE_PARM(gamma, "s");
+MODULE_PARM(devices, "i");
+MODULE_PARM_DESC(devices,
+ "devices=x, where x is the number of MX chips on card\n");
+#ifndef MODULE
+/* gamma_options is called by the kernel to parse command-line options
+ * passed via the boot-loader (e.g., LILO). It calls the insmod option
+ * routine, drm_parse_options.
+ */
+
+
+static int __init gamma_options(char *str)
+{
+ drm_parse_options(str);
+ return 1;
+}
+
+__setup("gamma=", gamma_options);
+#endif
+
+static int gamma_setup(drm_device_t *dev)
+{
+ int i;
+
+ atomic_set(&dev->ioctl_count, 0);
+ atomic_set(&dev->vma_count, 0);
+ dev->buf_use = 0;
+ atomic_set(&dev->buf_alloc, 0);
+
+ drm_dma_setup(dev);
+
+ atomic_set(&dev->total_open, 0);
+ atomic_set(&dev->total_close, 0);
+ atomic_set(&dev->total_ioctl, 0);
+ atomic_set(&dev->total_irq, 0);
+ atomic_set(&dev->total_ctx, 0);
+ atomic_set(&dev->total_locks, 0);
+ atomic_set(&dev->total_unlocks, 0);
+ atomic_set(&dev->total_contends, 0);
+ atomic_set(&dev->total_sleeps, 0);
+
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ dev->vmalist = NULL;
+ dev->lock.hw_lock = NULL;
+ init_waitqueue_head(&dev->lock.lock_queue);
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_timer(&dev->timer);
+ init_waitqueue_head(&dev->context_wait);
+#if DRM_DMA_HISTOGRAM
+ memset(&dev->histo, 0, sizeof(dev->histo));
+#endif
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head(&dev->buf_readers);
+ init_waitqueue_head(&dev->buf_writers);
+
+ DRM_DEBUG("\n");
+
+ /* The kernel's context could be created here, but is now created
+ in drm_dma_enqueue. This is more resource-efficient for
+ hardware that does not do DMA, but may mean that
+ drm_select_queue fails between the time the interrupt is
+ initialized and the time the queues are initialized. */
+
+ return 0;
+}
+
+
+static int gamma_takedown(drm_device_t *dev)
+{
+ int i;
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_vma_entry_t *vma, *vma_next;
+
+ DRM_DEBUG("\n");
+
+ if (dev->irq) gamma_irq_uninstall(dev);
+
+ down(&dev->struct_sem);
+ del_timer(&dev->timer);
+
+ if (dev->devname) {
+ drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+ dev->devname = NULL;
+ }
+
+ if (dev->unique) {
+ drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+ /* Clear pid list */
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ for (pt = dev->magiclist[i].head; pt; pt = next) {
+ next = pt->next;
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+
+ /* Clear vma list (only built for debugging) */
+ if (dev->vmalist) {
+ for (vma = dev->vmalist; vma; vma = vma_next) {
+ vma_next = vma->next;
+ drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+ }
+ dev->vmalist = NULL;
+ }
+
+ /* Clear map area and mtrr information */
+ if (dev->maplist) {
+ for (i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#ifdef CONFIG_MTRR
+ if (map->mtrr >= 0) {
+ int retcode;
+ retcode = mtrr_del(map->mtrr,
+ map->offset,
+ map->size);
+ DRM_DEBUG("mtrr_del = %d\n", retcode);
+ }
+#endif
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+ case _DRM_SHM:
+ drm_free_pages((unsigned long)map->handle,
+ drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ break;
+ case _DRM_AGP:
+ /* Do nothing here, because this is all
+ handled in the AGP/GART driver. */
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ drm_free(dev->maplist,
+ dev->map_count * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ }
+
+ if (dev->queuelist) {
+ for (i = 0; i < dev->queue_count; i++) {
+ drm_waitlist_destroy(&dev->queuelist[i]->waitlist);
+ if (dev->queuelist[i]) {
+ drm_free(dev->queuelist[i],
+ sizeof(*dev->queuelist[0]),
+ DRM_MEM_QUEUES);
+ dev->queuelist[i] = NULL;
+ }
+ }
+ drm_free(dev->queuelist,
+ dev->queue_slots * sizeof(*dev->queuelist),
+ DRM_MEM_QUEUES);
+ dev->queuelist = NULL;
+ }
+
+ drm_dma_takedown(dev);
+
+ dev->queue_count = 0;
+ if (dev->lock.hw_lock) {
+ dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.pid = 0;
+ wake_up_interruptible(&dev->lock.lock_queue);
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+int gamma_found(void)
+{
+ return devices;
+}
+
+int gamma_find_devices(void)
+{
+ struct pci_dev *d = NULL, *one = NULL, *two = NULL;
+
+ d = pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_GAMMA,d);
+ if (!d) return 0;
+
+ one = pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_MX,d);
+ if (!one) return 0;
+
+ /* Make sure it's on the same card, if not - no MX's found */
+ if (PCI_SLOT(d->devfn) != PCI_SLOT(one->devfn)) return 0;
+
+ two = pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_MX,one);
+ if (!two) return 1;
+
+ /* Make sure it's on the same card, if not - only 1 MX found */
+ if (PCI_SLOT(d->devfn) != PCI_SLOT(two->devfn)) return 1;
+
+ /* Two MX's found - we don't currently support more than 2 */
+ return 2;
+}
+
+/* gamma_init is called via init_module at module load time, or via
+ * linux/init/main.c (this is not currently supported). */
+
+static int __init gamma_init(void)
+{
+ int retcode;
+ drm_device_t *dev = &gamma_device;
+
+ DRM_DEBUG("\n");
+
+ memset((void *)dev, 0, sizeof(*dev));
+ dev->count_lock = SPIN_LOCK_UNLOCKED;
+ sema_init(&dev->struct_sem, 1);
+
+#ifdef MODULE
+ drm_parse_options(gamma);
+#endif
+ devices = gamma_find_devices();
+ if (devices == 0) return -1;
+
+ if ((retcode = misc_register(&gamma_misc))) {
+ DRM_ERROR("Cannot register \"%s\"\n", GAMMA_NAME);
+ return retcode;
+ }
+ dev->device = MKDEV(MISC_MAJOR, gamma_misc.minor);
+ dev->name = GAMMA_NAME;
+
+ drm_mem_init();
+ drm_proc_init(dev);
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d with %d MX devices\n",
+ GAMMA_NAME,
+ GAMMA_MAJOR,
+ GAMMA_MINOR,
+ GAMMA_PATCHLEVEL,
+ GAMMA_DATE,
+ gamma_misc.minor,
+ devices);
+
+ return 0;
+}
+
+/* gamma_cleanup is called via cleanup_module at module unload time. */
+
+static void __exit gamma_cleanup(void)
+{
+ drm_device_t *dev = &gamma_device;
+
+ DRM_DEBUG("\n");
+
+ drm_proc_cleanup();
+ if (misc_deregister(&gamma_misc)) {
+ DRM_ERROR("Cannot unload module\n");
+ } else {
+ DRM_INFO("Module unloaded\n");
+ }
+ gamma_takedown(dev);
+}
+
+module_init(gamma_init);
+module_exit(gamma_cleanup);
+
+
+int gamma_version(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_version_t version;
+ int len;
+
+ if (copy_from_user(&version,
+ (drm_version_t *)arg,
+ sizeof(version)))
+ return -EFAULT;
+
+#define DRM_COPY(name,value) \
+ len = strlen(value); \
+ if (len > name##_len) len = name##_len; \
+ name##_len = strlen(value); \
+ if (len && name) { \
+ if (copy_to_user(name, value, len)) \
+ return -EFAULT; \
+ }
+
+ version.version_major = GAMMA_MAJOR;
+ version.version_minor = GAMMA_MINOR;
+ version.version_patchlevel = GAMMA_PATCHLEVEL;
+
+ DRM_COPY(version.name, GAMMA_NAME);
+ DRM_COPY(version.date, GAMMA_DATE);
+ DRM_COPY(version.desc, GAMMA_DESC);
+
+ if (copy_to_user((drm_version_t *)arg,
+ &version,
+ sizeof(version)))
+ return -EFAULT;
+ return 0;
+}
+
+int gamma_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev = &gamma_device;
+ int retcode = 0;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (!(retcode = drm_open_helper(inode, filp, dev))) {
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_open);
+ spin_lock(&dev->count_lock);
+ if (!dev->open_count++) {
+ spin_unlock(&dev->count_lock);
+ return gamma_setup(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+ return retcode;
+}
+
+int gamma_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int retcode = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (!(retcode = drm_release(inode, filp))) {
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_close);
+ spin_lock(&dev->count_lock);
+ if (!--dev->open_count) {
+ if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+ DRM_ERROR("Device busy: %d %d\n",
+ atomic_read(&dev->ioctl_count),
+ dev->blocked);
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return gamma_takedown(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+ unlock_kernel();
+ return retcode;
+}
+
+/* drm_ioctl is called whenever a process performs an ioctl on /dev/drm. */
+
+int gamma_ioctl(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ int nr = DRM_IOCTL_NR(cmd);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode = 0;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+
+ atomic_inc(&dev->ioctl_count);
+ atomic_inc(&dev->total_ioctl);
+ ++priv->ioctl_count;
+
+ DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+ current->pid, cmd, nr, dev->device, priv->authenticated);
+
+ if (nr >= GAMMA_IOCTL_COUNT) {
+ retcode = -EINVAL;
+ } else {
+ ioctl = &gamma_ioctls[nr];
+ func = ioctl->func;
+
+ if (!func) {
+ DRM_DEBUG("no function\n");
+ retcode = -EINVAL;
+ } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+ || (ioctl->auth_needed && !priv->authenticated)) {
+ retcode = -EACCES;
+ } else {
+ retcode = (func)(inode, filp, cmd, arg);
+ }
+ }
+
+ atomic_dec(&dev->ioctl_count);
+ return retcode;
+}
+
+
+int gamma_unlock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d frees lock (%d holds)\n",
+ lock.context,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ atomic_inc(&dev->total_unlocks);
+ if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+ atomic_inc(&dev->total_contends);
+ drm_lock_transfer(dev, &dev->lock.hw_lock->lock, DRM_KERNEL_CONTEXT);
+ gamma_dma_schedule(dev, 1);
+ if (!dev->context_flag) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.lhld[drm_histogram_slot(get_cycles()
+ - dev->lck_start)]);
+#endif
+
+ unblock_all_signals();
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.h
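The dispatch scheme in gamma_ioctl() above — a table indexed by ioctl number, each entry carrying a handler plus auth_needed/root_only flags checked before the call — can be sketched standalone. All names and handlers here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*handler_t)(void);

struct ioctl_desc {
	handler_t func;
	int       auth_needed;
	int       root_only;
};

static int do_version(void) { return 0; }
static int do_lock(void)    { return 0; }

/* Designated initializers keyed by ioctl number, as in gamma_ioctls[]. */
static const struct ioctl_desc table[] = {
	[0] = { do_version, 0, 0 },	/* no authentication required */
	[1] = { do_lock,    1, 0 },	/* authenticated callers only */
};

/* Mirrors the checks in gamma_ioctl(): unknown or empty slot -> -EINVAL
 * (-22), insufficient privilege -> -EACCES (-13), else call the handler. */
static int dispatch(unsigned nr, int authenticated, int is_admin)
{
	if (nr >= sizeof(table) / sizeof(table[0]) || !table[nr].func)
		return -22;
	if ((table[nr].root_only && !is_admin)
	    || (table[nr].auth_needed && !authenticated))
		return -13;
	return table[nr].func();
}
```

The real driver additionally bumps per-device and per-file ioctl counters around the call, which is what gamma_release() consults before tearing the device down.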
--- linux-2.4.13/drivers/char/drm-4.0/gamma_drv.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,58 @@
+/* gamma_drv.h -- Private header for 3dlabs GMX 2000 driver -*- linux-c -*-
+ * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#ifndef _GAMMA_DRV_H_
+#define _GAMMA_DRV_H_
+
+ /* gamma_drv.c */
+extern int gamma_version(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_open(struct inode *inode, struct file *filp);
+extern int gamma_release(struct inode *inode, struct file *filp);
+extern int gamma_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_lock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_unlock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* gamma_dma.c */
+extern int gamma_dma_schedule(drm_device_t *dev, int locked);
+extern int gamma_dma(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_irq_install(drm_device_t *dev, int irq);
+extern int gamma_irq_uninstall(drm_device_t *dev);
+extern int gamma_control(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_find_devices(void);
+extern int gamma_found(void);
+
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_bufs.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_bufs.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_bufs.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_bufs.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,339 @@
+/* i810_bufs.c -- IOCTLs to manage buffers -*- linux-c -*-
+ * Created: Thu Jan 6 01:47:26 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "i810_drv.h"
+#include "linux/un.h"
+
+int i810_addbufs_agp(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ drm_buf_entry_t *entry;
+ drm_buf_t *buf;
+ unsigned long offset;
+ unsigned long agp_offset;
+ int count;
+ int order;
+ int size;
+ int alignment;
+ int page_order;
+ int total;
+ int byte_count;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+ agp_offset = request.agp_start;
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size) :size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+ byte_count = 0;
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if(count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ offset = 0;
+
+ while(entry->buf_count < count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = offset;
+ buf->bus_address = dev->agp->base + agp_offset + offset;
+ buf->address = (void *)(agp_offset + offset + dev->agp->base);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+
+ buf->dev_private = drm_alloc(sizeof(drm_i810_buf_priv_t),
+ DRM_MEM_BUFS);
+ buf->dev_priv_size = sizeof(drm_i810_buf_priv_t);
+ memset(buf->dev_private, 0, sizeof(drm_i810_buf_priv_t));
+
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ offset = offset + alignment;
+ entry->buf_count++;
+ byte_count += PAGE_SIZE << page_order;
+
+ DRM_DEBUG("buffer %d @ %p\n",
+ entry->buf_count, buf->address);
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+ dma->byte_count += byte_count;
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+ dma->flags = _DRM_DMA_USE_AGP;
+ return 0;
+}
+
+int i810_addbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_buf_desc_t request;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if(request.flags & _DRM_AGP_BUFFER)
+ return i810_addbufs_agp(inode, filp, cmd, arg);
+ else
+ return -EINVAL;
+}
+
+int i810_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ int i;
+ int count;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_info_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) ++count;
+ }
+
+ DRM_DEBUG("count = %d\n", count);
+
+ if (request.count >= count) {
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) {
+ if (copy_to_user(&request.list[count].count,
+ &dma->bufs[i].buf_count,
+ sizeof(dma->bufs[0]
+ .buf_count)) ||
+ copy_to_user(&request.list[count].size,
+ &dma->bufs[i].buf_size,
+ sizeof(dma->bufs[0].buf_size)) ||
+ copy_to_user(&request.list[count].low_mark,
+ &dma->bufs[i]
+ .freelist.low_mark,
+ sizeof(dma->bufs[0]
+ .freelist.low_mark)) ||
+ copy_to_user(&request.list[count]
+ .high_mark,
+ &dma->bufs[i]
+ .freelist.high_mark,
+ sizeof(dma->bufs[0]
+ .freelist.high_mark)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d %d %d %d %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].buf_size,
+ dma->bufs[i].freelist.low_mark,
+ dma->bufs[i].freelist.high_mark);
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if (copy_to_user((drm_buf_info_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int i810_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d, %d, %d\n",
+ request.size, request.low_mark, request.high_mark);
+ order = drm_order(request.size);
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+ return -EINVAL;
+ if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+int i810_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_free_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", request.count);
+ for (i = 0; i < request.count; i++) {
+ if (copy_from_user(&idx,
+ &request.list[i],
+ sizeof(idx)))
+ return -EFAULT;
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ idx, dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d freeing buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ drm_free_buffer(dev, buf);
+ }
+
+ return 0;
+}
+
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_context.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_context.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,212 @@
+/* i810_context.c -- IOCTLs for i810 contexts -*- linux-c -*-
+ * Created: Mon Dec 13 09:51:35 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "i810_drv.h"
+
+static int i810_alloc_queue(drm_device_t *dev)
+{
+ int temp = drm_ctxbitmap_next(dev);
+ DRM_DEBUG("i810_alloc_queue: %d\n", temp);
+ return temp;
+}
+
+int i810_context_switch(drm_device_t *dev, int old, int new)
+{
+ char buf[64];
+
+ atomic_inc(&dev->total_ctx);
+
+ if (test_and_set_bit(0, &dev->context_flag)) {
+ DRM_ERROR("Reentering -- FIXME\n");
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new == dev->last_context) {
+ clear_bit(0, &dev->context_flag);
+ return 0;
+ }
+
+ if (drm_flags & DRM_FLAG_NOCTX) {
+ i810_context_switch_complete(dev, new);
+ } else {
+ sprintf(buf, "C %d %d\n", old, new);
+ drm_write_string(dev, buf);
+ }
+
+ return 0;
+}
+
+int i810_context_switch_complete(drm_device_t *dev, int new)
+{
+ dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
+ dev->last_switch = jiffies;
+
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Lock isn't held after context switch\n");
+ }
+
+ /* If a context switch is ever initiated
+ when the kernel holds the lock, release
+ that lock here. */
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+ - dev->ctx_start)]);
+
+#endif
+ clear_bit(0, &dev->context_flag);
+ wake_up(&dev->context_wait);
+
+ return 0;
+}
+
+int i810_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ if ((ctx.handle = i810_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+ /* Skip kernel's context and get a new one. */
+ ctx.handle = i810_alloc_queue(dev);
+ }
+ if (ctx.handle == -1) {
+ DRM_DEBUG("Not enough free contexts.\n");
+ /* Should this return -EBUSY instead? */
+ return -ENOMEM;
+ }
+ DRM_DEBUG("%d\n", ctx.handle);
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ /* This does nothing for the i810 */
+ return 0;
+}
+
+int i810_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+ /* This is 0 because we don't handle any context flags */
+ ctx.flags = 0;
+ if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return i810_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int i810_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ i810_context_switch_complete(dev, ctx.handle);
+
+ return 0;
+}
+
+int i810_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ if(ctx.handle != DRM_KERNEL_CONTEXT) {
+ drm_ctxbitmap_free(dev, ctx.handle);
+ }
+
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,1438 @@
+/* i810_dma.c -- DMA support for the i810 -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ * Keith Whitwell <keithw@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "i810_drv.h"
+#include <linux/interrupt.h> /* For task queue support */
+
+/* in case we don't have a 2.3.99-pre6 kernel or later: */
+#ifndef VM_DONTCOPY
+#define VM_DONTCOPY 0
+#endif
+
+#define I810_BUF_FREE 2
+#define I810_BUF_CLIENT 1
+#define I810_BUF_HARDWARE 0
+
+#define I810_BUF_UNMAPPED 0
+#define I810_BUF_MAPPED 1
+
+#define I810_REG(reg) 2
+#define I810_BASE(reg) ((unsigned long) \
+ dev->maplist[I810_REG(reg)]->handle)
+#define I810_ADDR(reg) (I810_BASE(reg) + reg)
+#define I810_DEREF(reg) *(__volatile__ int *)I810_ADDR(reg)
+#define I810_READ(reg) I810_DEREF(reg)
+#define I810_WRITE(reg,val) do { I810_DEREF(reg) = val; } while (0)
+#define I810_DEREF16(reg) *(__volatile__ u16 *)I810_ADDR(reg)
+#define I810_READ16(reg) I810_DEREF16(reg)
+#define I810_WRITE16(reg,val) do { I810_DEREF16(reg) = val; } while (0)
+
+#define RING_LOCALS unsigned int outring, ringmask; volatile char *virt;
+
+#define BEGIN_LP_RING(n) do { \
+ if (I810_VERBOSE) \
+ DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", \
+ n, __FUNCTION__); \
+ if (dev_priv->ring.space < n*4) \
+ i810_wait_ring(dev, n*4); \
+ dev_priv->ring.space -= n*4; \
+ outring = dev_priv->ring.tail; \
+ ringmask = dev_priv->ring.tail_mask; \
+ virt = dev_priv->ring.virtual_start; \
+} while (0)
+
+#define ADVANCE_LP_RING() do { \
+ if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \
+ dev_priv->ring.tail = outring; \
+ I810_WRITE(LP_RING + RING_TAIL, outring); \
+} while(0)
+
+#define OUT_RING(n) do { \
+ if (I810_VERBOSE) DRM_DEBUG(" OUT_RING %x\n", (int)(n)); \
+ *(volatile unsigned int *)(virt + outring) = n; \
+ outring += 4; \
+ outring &= ringmask; \
+} while (0);
+
+static inline void i810_print_status_page(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ u32 *temp = (u32 *)dev_priv->hw_status_page;
+ int i;
+
+ DRM_DEBUG( "hw_status: Interrupt Status : %x\n", temp[0]);
+ DRM_DEBUG( "hw_status: LpRing Head ptr : %x\n", temp[1]);
+ DRM_DEBUG( "hw_status: IRing Head ptr : %x\n", temp[2]);
+ DRM_DEBUG( "hw_status: Reserved : %x\n", temp[3]);
+ DRM_DEBUG( "hw_status: Driver Counter : %d\n", temp[5]);
+ for(i = 6; i < dma->buf_count + 6; i++) {
+ DRM_DEBUG( "buffer status idx : %d used: %d\n", i - 6, temp[i]);
+ }
+}
+
+static drm_buf_t *i810_freelist_get(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+ int used;
+
+ /* Linear search might not be the best solution */
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ /* In use is already a pointer */
+ used = cmpxchg(buf_priv->in_use, I810_BUF_FREE,
+ I810_BUF_CLIENT);
+ if(used == I810_BUF_FREE) {
+ return buf;
+ }
+ }
+ return NULL;
+}
+
+/* This should only be called if the buffer is not sent to the hardware
+ * yet; the hardware updates in_use for us once it's on the ring buffer.
+ */
+
+static int i810_freelist_put(drm_device_t *dev, drm_buf_t *buf)
+{
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ int used;
+
+ /* In use is already a pointer */
+ used = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT, I810_BUF_FREE);
+ if(used != I810_BUF_CLIENT) {
+ DRM_ERROR("Freeing buffer that's not in use: %d\n", buf->idx);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static struct file_operations i810_buffer_fops = {
+ open: i810_open,
+ flush: drm_flush,
+ release: i810_release,
+ ioctl: i810_ioctl,
+ mmap: i810_mmap_buffers,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+};
+
+int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ drm_i810_private_t *dev_priv;
+ drm_buf_t *buf;
+ drm_i810_buf_priv_t *buf_priv;
+
+ lock_kernel();
+ dev = priv->dev;
+ dev_priv = dev->dev_private;
+ buf = dev_priv->mmap_buffer;
+ buf_priv = buf->dev_private;
+
+ vma->vm_flags |= (VM_IO | VM_DONTCOPY);
+ vma->vm_file = filp;
+
+ buf_priv->currently_mapped = I810_BUF_MAPPED;
+ unlock_kernel();
+
+ if (remap_page_range(vma->vm_start,
+ VM_OFFSET(vma),
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot)) return -EAGAIN;
+ return 0;
+}
+
+static int i810_map_buffer(drm_buf_t *buf, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ struct file_operations *old_fops;
+ int retcode = 0;
+
+ if(buf_priv->currently_mapped == I810_BUF_MAPPED) return -EINVAL;
+
+ if(VM_DONTCOPY != 0) {
+ down_write(&current->mm->mmap_sem);
+ old_fops = filp->f_op;
+ filp->f_op = &i810_buffer_fops;
+ dev_priv->mmap_buffer = buf;
+ buf_priv->virtual = (void *)do_mmap(filp, 0, buf->total,
+ PROT_READ|PROT_WRITE,
+ MAP_SHARED,
+ buf->bus_address);
+ dev_priv->mmap_buffer = NULL;
+ filp->f_op = old_fops;
+ if ((unsigned long)buf_priv->virtual > -1024UL) {
+ /* Real error */
+ DRM_DEBUG("mmap error\n");
+ retcode = (signed int)buf_priv->virtual;
+ buf_priv->virtual = 0;
+ }
+ up_write(&current->mm->mmap_sem);
+ } else {
+ buf_priv->virtual = buf_priv->kernel_virtual;
+ buf_priv->currently_mapped = I810_BUF_MAPPED;
+ }
+ return retcode;
+}
+
+static int i810_unmap_buffer(drm_buf_t *buf)
+{
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ int retcode = 0;
+
+ if(VM_DONTCOPY != 0) {
+ if(buf_priv->currently_mapped != I810_BUF_MAPPED)
+ return -EINVAL;
+ down_write(&current->mm->mmap_sem);
+#if LINUX_VERSION_CODE < 0x020399
+ retcode = do_munmap((unsigned long)buf_priv->virtual,
+ (size_t) buf->total);
+#else
+ retcode = do_munmap(current->mm,
+ (unsigned long)buf_priv->virtual,
+ (size_t) buf->total);
+#endif
+ up_write(&current->mm->mmap_sem);
+ }
+ buf_priv->currently_mapped = I810_BUF_UNMAPPED;
+ buf_priv->virtual = 0;
+
+ return retcode;
+}
+
+static int i810_dma_get_buffer(drm_device_t *dev, drm_i810_dma_t *d,
+ struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_buf_t *buf;
+ drm_i810_buf_priv_t *buf_priv;
+ int retcode = 0;
+
+ buf = i810_freelist_get(dev);
+ if (!buf) {
+ retcode = -ENOMEM;
+ DRM_DEBUG("retcode=%d\n", retcode);
+ return retcode;
+ }
+
+ retcode = i810_map_buffer(buf, filp);
+ if(retcode) {
+ i810_freelist_put(dev, buf);
+ DRM_DEBUG("mapbuf failed, retcode %d\n", retcode);
+ return retcode;
+ }
+ buf->pid = priv->pid;
+ buf_priv = buf->dev_private;
+ d->granted = 1;
+ d->request_idx = buf->idx;
+ d->request_size = buf->total;
+ d->virtual = buf_priv->virtual;
+
+ return retcode;
+}
+
+static unsigned long i810_alloc_page(drm_device_t *dev)
+{
+ unsigned long address;
+
+ address = __get_free_page(GFP_KERNEL);
+ if(address == 0UL)
+ return 0;
+
+ atomic_inc(&virt_to_page(address)->count);
+ set_bit(PG_locked, &virt_to_page(address)->flags);
+
+ return address;
+}
+
+static void i810_free_page(drm_device_t *dev, unsigned long page)
+{
+ if(page == 0UL)
+ return;
+
+ atomic_dec(&virt_to_page(page)->count);
+ clear_bit(PG_locked, &virt_to_page(page)->flags);
+ wake_up(&virt_to_page(page)->wait);
+ free_page(page);
+ return;
+}
+
+static int i810_dma_cleanup(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ if(dev->dev_private) {
+ int i;
+ drm_i810_private_t *dev_priv =
+ (drm_i810_private_t *) dev->dev_private;
+
+ if(dev_priv->ring.virtual_start) {
+ drm_ioremapfree((void *) dev_priv->ring.virtual_start,
+ dev_priv->ring.Size, dev);
+ }
+ if(dev_priv->hw_status_page != 0UL) {
+ i810_free_page(dev, dev_priv->hw_status_page);
+ /* Need to rewrite hardware status page */
+ I810_WRITE(0x02080, 0x1ffff000);
+ }
+ drm_free(dev->dev_private, sizeof(drm_i810_private_t),
+ DRM_MEM_DRIVER);
+ dev->dev_private = NULL;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ drm_ioremapfree(buf_priv->kernel_virtual,
+ buf->total, dev);
+ }
+ }
+ return 0;
+}
+
+static int i810_wait_ring(drm_device_t *dev, int n)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_ring_buffer_t *ring = &(dev_priv->ring);
+ int iters = 0;
+ unsigned long end;
+ unsigned int last_head = I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
+
+ end = jiffies + (HZ*3);
+ while (ring->space < n) {
+ int i;
+
+ ring->head = I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
+ ring->space = ring->head - (ring->tail+8);
+ if (ring->space < 0) ring->space += ring->Size;
+
+ if (ring->head != last_head)
+ end = jiffies + (HZ*3);
+
+ iters++;
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("space: %d wanted %d\n", ring->space, n);
+ DRM_ERROR("lockup\n");
+ goto out_wait_ring;
+ }
+
+ for (i = 0 ; i < 2000 ; i++) ;
+ }
+
+out_wait_ring:
+ return iters;
+}
+
+static void i810_kernel_lost_context(drm_device_t *dev)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_ring_buffer_t *ring = &(dev_priv->ring);
+
+ ring->head = I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
+ ring->tail = I810_READ(LP_RING + RING_TAIL);
+ ring->space = ring->head - (ring->tail+8);
+ if (ring->space < 0) ring->space += ring->Size;
+}
+
+static int i810_freelist_init(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ int my_idx = 24;
+ u32 *hw_status = (u32 *)(dev_priv->hw_status_page + my_idx);
+ int i;
+
+ if(dma->buf_count > 1019) {
+ /* Not enough space in the status page for the freelist */
+ return -EINVAL;
+ }
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+ buf_priv->in_use = hw_status++;
+ buf_priv->my_use_idx = my_idx;
+ my_idx += 4;
+
+ *buf_priv->in_use = I810_BUF_FREE;
+
+ buf_priv->kernel_virtual = drm_ioremap(buf->bus_address,
+ buf->total, dev);
+ }
+ return 0;
+}
+
+static int i810_dma_initialize(drm_device_t *dev,
+ drm_i810_private_t *dev_priv,
+ drm_i810_init_t *init)
+{
+ drm_map_t *sarea_map;
+
+ dev->dev_private = (void *) dev_priv;
+ memset(dev_priv, 0, sizeof(drm_i810_private_t));
+
+ if (init->ring_map_idx >= dev->map_count ||
+ init->buffer_map_idx >= dev->map_count) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("ring_map or buffer_map are invalid\n");
+ return -EINVAL;
+ }
+
+ dev_priv->ring_map_idx = init->ring_map_idx;
+ dev_priv->buffer_map_idx = init->buffer_map_idx;
+ sarea_map = dev->maplist[0];
+ dev_priv->sarea_priv = (drm_i810_sarea_t *)
+ ((u8 *)sarea_map->handle +
+ init->sarea_priv_offset);
+
+ atomic_set(&dev_priv->flush_done, 0);
+ init_waitqueue_head(&dev_priv->flush_queue);
+
+ dev_priv->ring.Start = init->ring_start;
+ dev_priv->ring.End = init->ring_end;
+ dev_priv->ring.Size = init->ring_size;
+
+ dev_priv->ring.virtual_start = drm_ioremap(dev->agp->base +
+ init->ring_start,
+ init->ring_size, dev);
+
+ dev_priv->ring.tail_mask = dev_priv->ring.Size - 1;
+
+ if (dev_priv->ring.virtual_start == NULL) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("can not ioremap virtual address for"
+ " ring buffer\n");
+ return -ENOMEM;
+ }
+
+ dev_priv->w = init->w;
+ dev_priv->h = init->h;
+ dev_priv->pitch = init->pitch;
+ dev_priv->back_offset = init->back_offset;
+ dev_priv->depth_offset = init->depth_offset;
+
+ dev_priv->front_di1 = init->front_offset | init->pitch_bits;
+ dev_priv->back_di1 = init->back_offset | init->pitch_bits;
+ dev_priv->zi1 = init->depth_offset | init->pitch_bits;
+
+
+ /* Program Hardware Status Page */
+ dev_priv->hw_status_page = i810_alloc_page(dev);
+ if(dev_priv->hw_status_page == 0UL) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("Can not allocate hardware status page\n");
+ return -ENOMEM;
+ }
+ memset((void *) dev_priv->hw_status_page, 0, PAGE_SIZE);
+ DRM_DEBUG("hw status page @ %lx\n", dev_priv->hw_status_page);
+
+ I810_WRITE(0x02080, virt_to_bus((void *)dev_priv->hw_status_page));
+ DRM_DEBUG("Enabled hardware status page\n");
+
+ /* Now we need to init our freelist */
+ if(i810_freelist_init(dev) != 0) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("Not enough space in the status page for"
+ " the freelist\n");
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+int i810_dma_init(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_private_t *dev_priv;
+ drm_i810_init_t init;
+ int retcode = 0;
+
+ if (copy_from_user(&init, (drm_i810_init_t *)arg, sizeof(init)))
+ return -EFAULT;
+
+ switch(init.func) {
+ case I810_INIT_DMA:
+ dev_priv = drm_alloc(sizeof(drm_i810_private_t),
+ DRM_MEM_DRIVER);
+ if(dev_priv == NULL) return -ENOMEM;
+ retcode = i810_dma_initialize(dev, dev_priv, &init);
+ break;
+ case I810_CLEANUP_DMA:
+ retcode = i810_dma_cleanup(dev);
+ break;
+ default:
+ retcode = -EINVAL;
+ break;
+ }
+
+ return retcode;
+}
+
+
+
+/* Most efficient way to verify state for the i810 is as it is
+ * emitted. Non-conformant state is silently dropped.
+ *
+ * Use 'volatile' & local var tmp to force the emitted values to be
+ * identical to the verified ones.
+ */
+static void i810EmitContextVerified( drm_device_t *dev,
+ volatile unsigned int *code )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ int i, j = 0;
+ unsigned int tmp;
+ RING_LOCALS;
+
+ BEGIN_LP_RING( I810_CTX_SETUP_SIZE );
+
+ OUT_RING( GFX_OP_COLOR_FACTOR );
+ OUT_RING( code[I810_CTXREG_CF1] );
+
+ OUT_RING( GFX_OP_STIPPLE );
+ OUT_RING( code[I810_CTXREG_ST1] );
+
+ for ( i = 4 ; i < I810_CTX_SETUP_SIZE ; i++ ) {
+ tmp = code[i];
+
+ if ((tmp & (7<<29)) == (3<<29) &&
+ (tmp & (0x1f<<24)) < (0x1d<<24))
+ {
+ OUT_RING( tmp );
+ j++;
+ }
+ }
+
+ if (j & 1)
+ OUT_RING( 0 );
+
+ ADVANCE_LP_RING();
+}
+
+static void i810EmitTexVerified( drm_device_t *dev,
+ volatile unsigned int *code )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ int i, j = 0;
+ unsigned int tmp;
+ RING_LOCALS;
+
+ BEGIN_LP_RING( I810_TEX_SETUP_SIZE );
+
+ OUT_RING( GFX_OP_MAP_INFO );
+ OUT_RING( code[I810_TEXREG_MI1] );
+ OUT_RING( code[I810_TEXREG_MI2] );
+ OUT_RING( code[I810_TEXREG_MI3] );
+
+ for ( i = 4 ; i < I810_TEX_SETUP_SIZE ; i++ ) {
+ tmp = code[i];
+
+ if ((tmp & (7<<29)) == (3<<29) &&
+ (tmp & (0x1f<<24)) < (0x1d<<24))
+ {
+ OUT_RING( tmp );
+ j++;
+ }
+ }
+
+ if (j & 1)
+ OUT_RING( 0 );
+
+ ADVANCE_LP_RING();
+}
+
+
+/* Need to do some additional checking when setting the dest buffer.
+ */
+static void i810EmitDestVerified( drm_device_t *dev,
+ volatile unsigned int *code )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ unsigned int tmp;
+ RING_LOCALS;
+
+ BEGIN_LP_RING( I810_DEST_SETUP_SIZE + 2 );
+
+ tmp = code[I810_DESTREG_DI1];
+ if (tmp == dev_priv->front_di1 || tmp == dev_priv->back_di1) {
+ OUT_RING( CMD_OP_DESTBUFFER_INFO );
+ OUT_RING( tmp );
+ } else
+ DRM_DEBUG("bad di1 %x (allow %x or %x)\n",
+ tmp, dev_priv->front_di1, dev_priv->back_di1);
+
+ /* invarient:
+ */
+ OUT_RING( CMD_OP_Z_BUFFER_INFO );
+ OUT_RING( dev_priv->zi1 );
+
+ OUT_RING( GFX_OP_DESTBUFFER_VARS );
+ OUT_RING( code[I810_DESTREG_DV1] );
+
+ OUT_RING( GFX_OP_DRAWRECT_INFO );
+ OUT_RING( code[I810_DESTREG_DR1] );
+ OUT_RING( code[I810_DESTREG_DR2] );
+ OUT_RING( code[I810_DESTREG_DR3] );
+ OUT_RING( code[I810_DESTREG_DR4] );
+ OUT_RING( 0 );
+
+ ADVANCE_LP_RING();
+}
+
+
+
+static void i810EmitState( drm_device_t *dev )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ unsigned int dirty = sarea_priv->dirty;
+
+ if (dirty & I810_UPLOAD_BUFFERS) {
+ i810EmitDestVerified( dev, sarea_priv->BufferState );
+ sarea_priv->dirty &= ~I810_UPLOAD_BUFFERS;
+ }
+
+ if (dirty & I810_UPLOAD_CTX) {
+ i810EmitContextVerified( dev, sarea_priv->ContextState );
+ sarea_priv->dirty &= ~I810_UPLOAD_CTX;
+ }
+
+ if (dirty & I810_UPLOAD_TEX0) {
+ i810EmitTexVerified( dev, sarea_priv->TexState[0] );
+ sarea_priv->dirty &= ~I810_UPLOAD_TEX0;
+ }
+
+ if (dirty & I810_UPLOAD_TEX1) {
+ i810EmitTexVerified( dev, sarea_priv->TexState[1] );
+ sarea_priv->dirty &= ~I810_UPLOAD_TEX1;
+ }
+}
+
+
+
+/* need to verify
+ */
+static void i810_dma_dispatch_clear( drm_device_t *dev, int flags,
+ unsigned int clear_color,
+ unsigned int clear_zval )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ int nbox = sarea_priv->nbox;
+ drm_clip_rect_t *pbox = sarea_priv->boxes;
+ int pitch = dev_priv->pitch;
+ int cpp = 2;
+ int i;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+ for (i = 0 ; i < nbox ; i++, pbox++) {
+ unsigned int x = pbox->x1;
+ unsigned int y = pbox->y1;
+ unsigned int width = (pbox->x2 - x) * cpp;
+ unsigned int height = pbox->y2 - y;
+ unsigned int start = y * pitch + x * cpp;
+
+ if (pbox->x1 > pbox->x2 ||
+ pbox->y1 > pbox->y2 ||
+ pbox->x2 > dev_priv->w ||
+ pbox->y2 > dev_priv->h)
+ continue;
+
+ if ( flags & I810_FRONT ) {
+ DRM_DEBUG("clear front\n");
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT |
+ BR00_OP_COLOR_BLT | 0x3 );
+ OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch );
+ OUT_RING( (height << 16) | width );
+ OUT_RING( start );
+ OUT_RING( clear_color );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+ }
+
+ if ( flags & I810_BACK ) {
+ DRM_DEBUG("clear back\n");
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT |
+ BR00_OP_COLOR_BLT | 0x3 );
+ OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch );
+ OUT_RING( (height << 16) | width );
+ OUT_RING( dev_priv->back_offset + start );
+ OUT_RING( clear_color );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+ }
+
+ if ( flags & I810_DEPTH ) {
+ DRM_DEBUG("clear depth\n");
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT |
+ BR00_OP_COLOR_BLT | 0x3 );
+ OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch );
+ OUT_RING( (height << 16) | width );
+ OUT_RING( dev_priv->depth_offset + start );
+ OUT_RING( clear_zval );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+ }
+ }
+}
+
+static void i810_dma_dispatch_swap( drm_device_t *dev )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ int nbox = sarea_priv->nbox;
+ drm_clip_rect_t *pbox = sarea_priv->boxes;
+ int pitch = dev_priv->pitch;
+ int cpp = 2;
+ int ofs = dev_priv->back_offset;
+ int i;
+ RING_LOCALS;
+
+ DRM_DEBUG("swapbuffers\n");
+
+ i810_kernel_lost_context(dev);
+
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+ for (i = 0 ; i < nbox; i++, pbox++)
+ {
+ unsigned int w = pbox->x2 - pbox->x1;
+ unsigned int h = pbox->y2 - pbox->y1;
+ unsigned int dst = pbox->x1*cpp + pbox->y1*pitch;
+ unsigned int start = ofs + dst;
+
+ if (pbox->x1 > pbox->x2 ||
+ pbox->y1 > pbox->y2 ||
+ pbox->x2 > dev_priv->w ||
+ pbox->y2 > dev_priv->h)
+ continue;
+
+ DRM_DEBUG("dispatch swap %d,%d-%d,%d!\n",
+ pbox[i].x1, pbox[i].y1,
+ pbox[i].x2, pbox[i].y2);
+
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT | BR00_OP_SRC_COPY_BLT | 0x4 );
+ OUT_RING( pitch | (0xCC << 16));
+ OUT_RING( (h << 16) | (w * cpp));
+ OUT_RING( dst );
+ OUT_RING( pitch );
+ OUT_RING( start );
+ ADVANCE_LP_RING();
+ }
+}
+
+
+static void i810_dma_dispatch_vertex(drm_device_t *dev,
+ drm_buf_t *buf,
+ int discard,
+ int used)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ drm_clip_rect_t *box = sarea_priv->boxes;
+ int nbox = sarea_priv->nbox;
+ unsigned long address = (unsigned long)buf->bus_address;
+ unsigned long start = address - dev->agp->base;
+ int i = 0, u;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+ if (discard) {
+ u = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT,
+ I810_BUF_HARDWARE);
+ if(u != I810_BUF_CLIENT) {
+ DRM_DEBUG("xxxx 2\n");
+ }
+ }
+
+ if (used > 4*1024)
+ used = 0;
+
+ if (sarea_priv->dirty)
+ i810EmitState( dev );
+
+ DRM_DEBUG("dispatch vertex addr 0x%lx, used 0x%x nbox %d\n",
+ address, used, nbox);
+
+ dev_priv->counter++;
+ DRM_DEBUG( "dispatch counter : %ld\n", dev_priv->counter);
+ DRM_DEBUG( "i810_dma_dispatch\n");
+ DRM_DEBUG( "start : %lx\n", start);
+ DRM_DEBUG( "used : %d\n", used);
+ DRM_DEBUG( "start + used - 4 : %ld\n", start + used - 4);
+
+ if (buf_priv->currently_mapped == I810_BUF_MAPPED) {
+ *(u32 *)buf_priv->virtual = (GFX_OP_PRIMITIVE |
+ sarea_priv->vertex_prim |
+ ((used/4)-2));
+
+ if (used & 4) {
+ *(u32 *)((u32)buf_priv->virtual + used) = 0;
+ used += 4;
+ }
+
+ i810_unmap_buffer(buf);
+ }
+
+ if (used) {
+ do {
+ if (i < nbox) {
+ BEGIN_LP_RING(4);
+ OUT_RING( GFX_OP_SCISSOR | SC_UPDATE_SCISSOR |
+ SC_ENABLE );
+ OUT_RING( GFX_OP_SCISSOR_INFO );
+ OUT_RING( box[i].x1 | (box[i].y1<<16) );
+ OUT_RING( (box[i].x2-1) | ((box[i].y2-1)<<16) );
+ ADVANCE_LP_RING();
+ }
+
+ BEGIN_LP_RING(4);
+ OUT_RING( CMD_OP_BATCH_BUFFER );
+ OUT_RING( start | BB1_PROTECTED );
+ OUT_RING( start + used - 4 );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+
+ } while (++i < nbox);
+ }
+
+ BEGIN_LP_RING(10);
+ OUT_RING( CMD_STORE_DWORD_IDX );
+ OUT_RING( 20 );
+ OUT_RING( dev_priv->counter );
+ OUT_RING( 0 );
+
+ if (discard) {
+ OUT_RING( CMD_STORE_DWORD_IDX );
+ OUT_RING( buf_priv->my_use_idx );
+ OUT_RING( I810_BUF_FREE );
+ OUT_RING( 0 );
+ }
+
+ OUT_RING( CMD_REPORT_HEAD );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+}
+
+
+/* Interrupts are only for flushing */
+static void i810_dma_service(int irq, void *device, struct pt_regs *regs)
+{
+ drm_device_t *dev = (drm_device_t *)device;
+ u16 temp;
+
+ atomic_inc(&dev->total_irq);
+ temp = I810_READ16(I810REG_INT_IDENTITY_R);
+ temp = temp & ~(0x6000);
+ if(temp != 0) I810_WRITE16(I810REG_INT_IDENTITY_R,
+ temp); /* Clear all interrupts */
+ else
+ return;
+
+ queue_task(&dev->tq, &tq_immediate);
+ mark_bh(IMMEDIATE_BH);
+}
+
+static void i810_dma_task_queue(void *device)
+{
+ drm_device_t *dev = (drm_device_t *) device;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+
+ atomic_set(&dev_priv->flush_done, 1);
+ wake_up_interruptible(&dev_priv->flush_queue);
+}
+
+int i810_irq_install(drm_device_t *dev, int irq)
+{
+ int retcode;
+ u16 temp;
+
+ if (!irq) return -EINVAL;
+
+ down(&dev->struct_sem);
+ if (dev->irq) {
+ up(&dev->struct_sem);
+ return -EBUSY;
+ }
+ dev->irq = irq;
+ up(&dev->struct_sem);
+
+ DRM_DEBUG( "Interrupt Install : %d\n", irq);
+ DRM_DEBUG("%d\n", irq);
+
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+
+ dev->dma->next_buffer = NULL;
+ dev->dma->next_queue = NULL;
+ dev->dma->this_buffer = NULL;
+
+ INIT_LIST_HEAD(&dev->tq.list);
+ dev->tq.sync = 0;
+ dev->tq.routine = i810_dma_task_queue;
+ dev->tq.data = dev;
+
+ /* Before installing handler */
+ temp = I810_READ16(I810REG_HWSTAM);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_HWSTAM, temp);
+
+ temp = I810_READ16(I810REG_INT_MASK_R);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_INT_MASK_R, temp); /* Unmask interrupts */
+ temp = I810_READ16(I810REG_INT_ENABLE_R);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_INT_ENABLE_R, temp); /* Disable all interrupts */
+
+ /* Install handler */
+ if ((retcode = request_irq(dev->irq,
+ i810_dma_service,
+ SA_SHIRQ,
+ dev->devname,
+ dev))) {
+ down(&dev->struct_sem);
+ dev->irq = 0;
+ up(&dev->struct_sem);
+ return retcode;
+ }
+ temp = I810_READ16(I810REG_INT_ENABLE_R);
+ temp = temp & 0x6000;
+ temp = temp | 0x0003;
+ I810_WRITE16(I810REG_INT_ENABLE_R,
+ temp); /* Enable bp & user interrupts */
+ return 0;
+}
+
+int i810_irq_uninstall(drm_device_t *dev)
+{
+ int irq;
+ u16 temp;
+
+
+/* return 0; */
+
+ down(&dev->struct_sem);
+ irq = dev->irq;
+ dev->irq = 0;
+ up(&dev->struct_sem);
+
+ if (!irq) return -EINVAL;
+
+ DRM_DEBUG( "Interrupt UnInstall: %d\n", irq);
+ DRM_DEBUG("%d\n", irq);
+
+ temp = I810_READ16(I810REG_INT_IDENTITY_R);
+ temp = temp & ~(0x6000);
+ if(temp != 0) I810_WRITE16(I810REG_INT_IDENTITY_R,
+ temp); /* Clear all interrupts */
+
+ temp = I810_READ16(I810REG_INT_ENABLE_R);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_INT_ENABLE_R,
+ temp); /* Disable all interrupts */
+
+ free_irq(irq, dev);
+
+ return 0;
+}
+
+int i810_control(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+ int retcode;
+
+ DRM_DEBUG( "i810_control\n");
+
+ if (copy_from_user(&ctl, (drm_control_t *)arg, sizeof(ctl)))
+ return -EFAULT;
+
+ switch (ctl.func) {
+ case DRM_INST_HANDLER:
+ if ((retcode = i810_irq_install(dev, ctl.irq)))
+ return retcode;
+ break;
+ case DRM_UNINST_HANDLER:
+ if ((retcode = i810_irq_uninstall(dev)))
+ return retcode;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static inline void i810_dma_emit_flush(drm_device_t *dev)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ BEGIN_LP_RING(2);
+ OUT_RING( CMD_REPORT_HEAD );
+ OUT_RING( GFX_OP_USER_INTERRUPT );
+ ADVANCE_LP_RING();
+
+/* i810_wait_ring( dev, dev_priv->ring.Size - 8 ); */
+/* atomic_set(&dev_priv->flush_done, 1); */
+/* wake_up_interruptible(&dev_priv->flush_queue); */
+}
+
+static inline void i810_dma_quiescent_emit(drm_device_t *dev)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ BEGIN_LP_RING(4);
+ OUT_RING( INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE );
+ OUT_RING( CMD_REPORT_HEAD );
+ OUT_RING( 0 );
+ OUT_RING( GFX_OP_USER_INTERRUPT );
+ ADVANCE_LP_RING();
+
+/* i810_wait_ring( dev, dev_priv->ring.Size - 8 ); */
+/* atomic_set(&dev_priv->flush_done, 1); */
+/* wake_up_interruptible(&dev_priv->flush_queue); */
+}
+
+static void i810_dma_quiescent(drm_device_t *dev)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ unsigned long end;
+
+ if(dev_priv == NULL) {
+ return;
+ }
+ atomic_set(&dev_priv->flush_done, 0);
+ add_wait_queue(&dev_priv->flush_queue, &entry);
+ end = jiffies + (HZ*3);
+
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ i810_dma_quiescent_emit(dev);
+ if (atomic_read(&dev_priv->flush_done) == 1) break;
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("lockup\n");
+ break;
+ }
+ schedule_timeout(HZ*3);
+ if (signal_pending(current)) {
+ break;
+ }
+ }
+
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev_priv->flush_queue, &entry);
+
+ return;
+}
+
+static int i810_flush_queue(drm_device_t *dev)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ drm_device_dma_t *dma = dev->dma;
+ unsigned long end;
+ int i, ret = 0;
+
+ if(dev_priv == NULL) {
+ return 0;
+ }
+ atomic_set(&dev_priv->flush_done, 0);
+ add_wait_queue(&dev_priv->flush_queue, &entry);
+ end = jiffies + (HZ*3);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ i810_dma_emit_flush(dev);
+ if (atomic_read(&dev_priv->flush_done) == 1) break;
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("lockup\n");
+ break;
+ }
+ schedule_timeout(HZ*3);
+ if (signal_pending(current)) {
+ ret = -EINTR; /* Can't restart */
+ break;
+ }
+ }
+
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev_priv->flush_queue, &entry);
+
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+ int used = cmpxchg(buf_priv->in_use, I810_BUF_HARDWARE,
+ I810_BUF_FREE);
+
+ if (used == I810_BUF_HARDWARE)
+ DRM_DEBUG("reclaimed from HARDWARE\n");
+ if (used == I810_BUF_CLIENT)
+ DRM_DEBUG("still on client HARDWARE\n");
+ }
+
+ return ret;
+}
+
+/* Must be called with the lock held */
+void i810_reclaim_buffers(drm_device_t *dev, pid_t pid)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+
+ if (!dma) return;
+ if (!dev->dev_private) return;
+ if (!dma->buflist) return;
+
+ i810_flush_queue(dev);
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+ if (buf->pid == pid && buf_priv) {
+ int used = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT,
+ I810_BUF_FREE);
+
+ if (used == I810_BUF_CLIENT)
+ DRM_DEBUG("reclaimed from client\n");
+ if(buf_priv->currently_mapped == I810_BUF_MAPPED)
+ buf_priv->currently_mapped = I810_BUF_UNMAPPED;
+ }
+ }
+}
+
+int i810_lock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_lock_t lock;
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid, dev->lock.hw_lock->lock,
+ lock.flags);
+
+ if (lock.context < 0) {
+ return -EINVAL;
+ }
+ /* Only one queue:
+ */
+
+ if (!ret) {
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ lock.context)) {
+ dev->lock.pid = current->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ DRM_DEBUG("Calling lock schedule\n");
+ schedule();
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+ }
+
+ if (!ret) {
+ sigemptyset(&dev->sigmask);
+ sigaddset(&dev->sigmask, SIGSTOP);
+ sigaddset(&dev->sigmask, SIGTSTP);
+ sigaddset(&dev->sigmask, SIGTTIN);
+ sigaddset(&dev->sigmask, SIGTTOU);
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+ if (lock.flags & _DRM_LOCK_QUIESCENT) {
+ DRM_DEBUG("_DRM_LOCK_QUIESCENT\n");
+ DRM_DEBUG("fred\n");
+ i810_dma_quiescent(dev);
+ }
+ }
+ DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+ return ret;
+}
+
+int i810_flush_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("i810_flush_ioctl\n");
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_flush_ioctl called without lock held\n");
+ return -EINVAL;
+ }
+
+ i810_flush_queue(dev);
+ return 0;
+}
+
+
+int i810_dma_vertex(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+ drm_i810_vertex_t vertex;
+
+ if (copy_from_user(&vertex, (drm_i810_vertex_t *)arg, sizeof(vertex)))
+ return -EFAULT;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_dma_vertex called without lock held\n");
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("i810 dma vertex, idx %d used %d discard %d\n",
+ vertex.idx, vertex.used, vertex.discard);
+
+ if(vertex.idx < 0 || vertex.idx > dma->buf_count) return -EINVAL;
+
+ i810_dma_dispatch_vertex( dev,
+ dma->buflist[ vertex.idx ],
+ vertex.discard, vertex.used );
+
+ atomic_add(vertex.used, &dma->total_bytes);
+ atomic_inc(&dma->total_dmas);
+ sarea_priv->last_enqueue = dev_priv->counter-1;
+ sarea_priv->last_dispatch = (int) hw_status[5];
+
+ return 0;
+}
+
+
+
+int i810_clear_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_clear_t clear;
+
+ if (copy_from_user(&clear, (drm_i810_clear_t *)arg, sizeof(clear)))
+ return -EFAULT;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_clear_bufs called without lock held\n");
+ return -EINVAL;
+ }
+
+ i810_dma_dispatch_clear( dev, clear.flags,
+ clear.clear_color,
+ clear.clear_depth );
+ return 0;
+}
+
+int i810_swap_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("i810_swap_bufs\n");
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_swap_buf called without lock held\n");
+ return -EINVAL;
+ }
+
+ i810_dma_dispatch_swap( dev );
+ return 0;
+}
+
+int i810_getage(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+
+ sarea_priv->last_dispatch = (int) hw_status[5];
+ return 0;
+}
+
+int i810_getbuf(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode = 0;
+ drm_i810_dma_t d;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+
+ DRM_DEBUG("getbuf\n");
+ if (copy_from_user(&d, (drm_i810_dma_t *)arg, sizeof(d)))
+ return -EFAULT;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_dma called without lock held\n");
+ return -EINVAL;
+ }
+
+ d.granted = 0;
+
+ retcode = i810_dma_get_buffer(dev, &d, filp);
+
+ DRM_DEBUG("i810_dma: %d returning %d, granted = %d\n",
+ current->pid, retcode, d.granted);
+
+ if (copy_to_user((drm_dma_t *)arg, &d, sizeof(d)))
+ return -EFAULT;
+ sarea_priv->last_dispatch = (int) hw_status[5];
+
+ return retcode;
+}
+
+int i810_copybuf(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_copy_t d;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+ drm_buf_t *buf;
+ drm_i810_buf_priv_t *buf_priv;
+ drm_device_dma_t *dma = dev->dma;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_dma called without lock held\n");
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&d, (drm_i810_copy_t *)arg, sizeof(d)))
+ return -EFAULT;
+
+ if(d.idx < 0 || d.idx > dma->buf_count) return -EINVAL;
+ buf = dma->buflist[ d.idx ];
+ buf_priv = buf->dev_private;
+ if (buf_priv->currently_mapped != I810_BUF_MAPPED) return -EPERM;
+
+ /* Stopping end users copying their data to the entire kernel
+ is good.. */
+ if (d.used < 0 || d.used > buf->total)
+ return -EINVAL;
+
+ if (copy_from_user(buf_priv->virtual, d.address, d.used))
+ return -EFAULT;
+
+ sarea_priv->last_dispatch = (int) hw_status[5];
+
+ return 0;
+}
+
+int i810_docopy(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ if(VM_DONTCOPY == 0) return 1;
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drm.h linux-2.4.13-lia/drivers/char/drm-4.0/i810_drm.h
--- linux-2.4.13/drivers/char/drm-4.0/i810_drm.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drm.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,194 @@
+#ifndef _I810_DRM_H_
+#define _I810_DRM_H_
+
+/* WARNING: These defines must be the same as what the Xserver uses.
+ * if you change them, you must change the defines in the Xserver.
+ */
+
+#ifndef _I810_DEFINES_
+#define _I810_DEFINES_
+
+#define I810_DMA_BUF_ORDER 12
+#define I810_DMA_BUF_SZ (1<<I810_DMA_BUF_ORDER)
+#define I810_DMA_BUF_NR 256
+#define I810_NR_SAREA_CLIPRECTS 8
+
+/* Each region is a minimum of 64k, and there are at most 64 of them.
+ */
+#define I810_NR_TEX_REGIONS 64
+#define I810_LOG_MIN_TEX_REGION_SIZE 16
+#endif
+
+#define I810_UPLOAD_TEX0IMAGE 0x1 /* handled clientside */
+#define I810_UPLOAD_TEX1IMAGE 0x2 /* handled clientside */
+#define I810_UPLOAD_CTX 0x4
+#define I810_UPLOAD_BUFFERS 0x8
+#define I810_UPLOAD_TEX0 0x10
+#define I810_UPLOAD_TEX1 0x20
+#define I810_UPLOAD_CLIPRECTS 0x40
+
+
+/* Indices into buf.Setup where various bits of state are mirrored per
+ * context and per buffer. These can be fired at the card as a unit,
+ * or in a piecewise fashion as required.
+ */
+
+/* Destbuffer state
+ * - backbuffer linear offset and pitch -- invarient in the current dri
+ * - zbuffer linear offset and pitch -- also invarient
+ * - drawing origin in back and depth buffers.
+ *
+ * Keep the depth/back buffer state here to acommodate private buffers
+ * in the future.
+ */
+#define I810_DESTREG_DI0 0 /* CMD_OP_DESTBUFFER_INFO (2 dwords) */
+#define I810_DESTREG_DI1 1
+#define I810_DESTREG_DV0 2 /* GFX_OP_DESTBUFFER_VARS (2 dwords) */
+#define I810_DESTREG_DV1 3
+#define I810_DESTREG_DR0 4 /* GFX_OP_DRAWRECT_INFO (4 dwords) */
+#define I810_DESTREG_DR1 5
+#define I810_DESTREG_DR2 6
+#define I810_DESTREG_DR3 7
+#define I810_DESTREG_DR4 8
+#define I810_DEST_SETUP_SIZE 10
+
+/* Context state
+ */
+#define I810_CTXREG_CF0 0 /* GFX_OP_COLOR_FACTOR */
+#define I810_CTXREG_CF1 1
+#define I810_CTXREG_ST0 2 /* GFX_OP_STIPPLE */
+#define I810_CTXREG_ST1 3
+#define I810_CTXREG_VF 4 /* GFX_OP_VERTEX_FMT */
+#define I810_CTXREG_MT 5 /* GFX_OP_MAP_TEXELS */
+#define I810_CTXREG_MC0 6 /* GFX_OP_MAP_COLOR_STAGES - stage 0 */
+#define I810_CTXREG_MC1 7 /* GFX_OP_MAP_COLOR_STAGES - stage 1 */
+#define I810_CTXREG_MC2 8 /* GFX_OP_MAP_COLOR_STAGES - stage 2 */
+#define I810_CTXREG_MA0 9 /* GFX_OP_MAP_ALPHA_STAGES - stage 0 */
+#define I810_CTXREG_MA1 10 /* GFX_OP_MAP_ALPHA_STAGES - stage 1 */
+#define I810_CTXREG_MA2 11 /* GFX_OP_MAP_ALPHA_STAGES - stage 2 */
+#define I810_CTXREG_SDM 12 /* GFX_OP_SRC_DEST_MONO */
+#define I810_CTXREG_FOG 13 /* GFX_OP_FOG_COLOR */
+#define I810_CTXREG_B1 14 /* GFX_OP_BOOL_1 */
+#define I810_CTXREG_B2 15 /* GFX_OP_BOOL_2 */
+#define I810_CTXREG_LCS 16 /* GFX_OP_LINEWIDTH_CULL_SHADE_MODE */
+#define I810_CTXREG_PV 17 /* GFX_OP_PV_RULE -- Invarient! */
+#define I810_CTXREG_ZA 18 /* GFX_OP_ZBIAS_ALPHAFUNC */
+#define I810_CTXREG_AA 19 /* GFX_OP_ANTIALIAS */
+#define I810_CTX_SETUP_SIZE 20
+
+/* Texture state (per tex unit)
+ */
+#define I810_TEXREG_MI0 0 /* GFX_OP_MAP_INFO (4 dwords) */
+#define I810_TEXREG_MI1 1
+#define I810_TEXREG_MI2 2
+#define I810_TEXREG_MI3 3
+#define I810_TEXREG_MF 4 /* GFX_OP_MAP_FILTER */
+#define I810_TEXREG_MLC 5 /* GFX_OP_MAP_LOD_CTL */
+#define I810_TEXREG_MLL 6 /* GFX_OP_MAP_LOD_LIMITS */
+#define I810_TEXREG_MCS 7 /* GFX_OP_MAP_COORD_SETS ??? */
+#define I810_TEX_SETUP_SIZE 8
+
+#define I810_FRONT 0x1
+#define I810_BACK 0x2
+#define I810_DEPTH 0x4
+
+
+typedef struct _drm_i810_init {
+ enum {
+ I810_INIT_DMA = 0x01,
+ I810_CLEANUP_DMA = 0x02
+ } func;
+ int ring_map_idx;
+ int buffer_map_idx;
+ int sarea_priv_offset;
+ unsigned int ring_start;
+ unsigned int ring_end;
+ unsigned int ring_size;
+ unsigned int front_offset;
+ unsigned int back_offset;
+ unsigned int depth_offset;
+ unsigned int w;
+ unsigned int h;
+ unsigned int pitch;
+ unsigned int pitch_bits;
+} drm_i810_init_t;
+
+/* Warning: If you change the SAREA structure you must change the Xserver
+ * structure as well */
+
+typedef struct _drm_i810_tex_region {
+ unsigned char next, prev; /* indices to form a circular LRU */
+ unsigned char in_use; /* owned by a client, or free? */
+ int age; /* tracked by clients to update local LRU's */
+} drm_i810_tex_region_t;
+
+typedef struct _drm_i810_sarea {
+ unsigned int ContextState[I810_CTX_SETUP_SIZE];
+ unsigned int BufferState[I810_DEST_SETUP_SIZE];
+ unsigned int TexState[2][I810_TEX_SETUP_SIZE];
+ unsigned int dirty;
+
+ unsigned int nbox;
+ drm_clip_rect_t boxes[I810_NR_SAREA_CLIPRECTS];
+
+ /* Maintain an LRU of contiguous regions of texture space. If
+ * you think you own a region of texture memory, and it has an
+ * age different to the one you set, then you are mistaken and
+ * it has been stolen by another client. If global texAge
+ * hasn't changed, there is no need to walk the list.
+ *
+ * These regions can be used as a proxy for the fine-grained
+ * texture information of other clients - by maintaining them
+ * in the same lru which is used to age their own textures,
+ * clients have an approximate lru for the whole of global
+ * texture space, and can make informed decisions as to which
+ * areas to kick out. There is no need to choose whether to
+ * kick out your own texture or someone else's - simply eject
+ * them all in LRU order.
+ */
+
+ drm_i810_tex_region_t texList[I810_NR_TEX_REGIONS+1];
+ /* Last elt is sentinal */
+ int texAge; /* last time texture was uploaded */
+ int last_enqueue; /* last time a buffer was enqueued */
+ int last_dispatch; /* age of the most recently dispatched buffer */
+ int last_quiescent; /* */
+ int ctxOwner; /* last context to upload state */
+
+ int vertex_prim;
+
+} drm_i810_sarea_t;
+
+typedef struct _drm_i810_clear {
+ int clear_color;
+ int clear_depth;
+ int flags;
+} drm_i810_clear_t;
+
+
+
+/* These may be placeholders if we have more cliprects than
+ * I810_NR_SAREA_CLIPRECTS. In that case, the client sets discard to
+ * false, indicating that the buffer will be dispatched again with a
+ * new set of cliprects.
+ */
+typedef struct _drm_i810_vertex {
+ int idx; /* buffer index */
+ int used; /* nr bytes in use */
+ int discard; /* client is finished with the buffer? */
+} drm_i810_vertex_t;
+
+typedef struct _drm_i810_copy_t {
+ int idx; /* buffer index */
+ int used; /* nr bytes in use */
+ void *address; /* Address to copy from */
+} drm_i810_copy_t;
+
+typedef struct drm_i810_dma {
+ void *virtual;
+ int request_idx;
+ int request_size;
+ int granted;
+} drm_i810_dma_t;
+
+#endif /* _I810_DRM_H_ */
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_drv.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,648 @@
+/* i810_drv.c -- I810 driver -*- linux-c -*-
+ * Created: Mon Dec 13 01:56:22 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#include <linux/config.h>
+#include "drmP.h"
+#include "i810_drv.h"
+
+#define I810_NAME "i810"
+#define I810_DESC "Intel I810"
+#define I810_DATE "20000928"
+#define I810_MAJOR 1
+#define I810_MINOR 1
+#define I810_PATCHLEVEL 0
+
+static drm_device_t i810_device;
+drm_ctx_t i810_res_ctx;
+
+static struct file_operations i810_fops = {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* This started being used during 2.4.0-test */
+ owner: THIS_MODULE,
+#endif
+ open: i810_open,
+ flush: drm_flush,
+ release: i810_release,
+ ioctl: i810_ioctl,
+ mmap: drm_mmap,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+};
+
+static struct miscdevice i810_misc = {
+ minor: MISC_DYNAMIC_MINOR,
+ name: I810_NAME,
+ fops: &i810_fops,
+};
+
+static drm_ioctl_desc_t i810_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { i810_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_CONTROL)] = { i810_control, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)] = { i810_addbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)] = { i810_markbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)] = { i810_infobufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)] = { i810_freebufs, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { i810_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { i810_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { i810_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { i810_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { i810_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { i810_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { i810_resctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { i810_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { i810_unlock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE)] = { drm_agp_acquire, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_RELEASE)] = { drm_agp_release, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ENABLE)] = { drm_agp_enable, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_INFO)] = { drm_agp_info, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ALLOC)] = { drm_agp_alloc, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_FREE)] = { drm_agp_free, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_BIND)] = { drm_agp_bind, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_UNBIND)] = { drm_agp_unbind, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_INIT)] = { i810_dma_init, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_VERTEX)] = { i810_dma_vertex, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_CLEAR)] = { i810_clear_bufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_FLUSH)] = { i810_flush_ioctl, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_GETAGE)] = { i810_getage, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_GETBUF)] = { i810_getbuf, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_SWAP)] = { i810_swap_bufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_COPY)] = { i810_copybuf, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_DOCOPY)] = { i810_docopy, 1, 0 },
+};
+
+#define I810_IOCTL_COUNT DRM_ARRAY_SIZE(i810_ioctls)
+
+#ifdef MODULE
+static char *i810 = NULL;
+#endif
+
+MODULE_AUTHOR("VA Linux Systems, Inc.");
+MODULE_DESCRIPTION("Intel I810");
+MODULE_PARM(i810, "s");
+
+#ifndef MODULE
+/* i810_options is called by the kernel to parse command-line options
+ * passed via the boot-loader (e.g., LILO). It calls the insmod option
+ * routine, drm_parse_drm.
+ */
+
+static int __init i810_options(char *str)
+{
+ drm_parse_options(str);
+ return 1;
+}
+
+__setup("i810=", i810_options);
+#endif
+
+static int i810_setup(drm_device_t *dev)
+{
+ int i;
+
+ atomic_set(&dev->ioctl_count, 0);
+ atomic_set(&dev->vma_count, 0);
+ dev->buf_use = 0;
+ atomic_set(&dev->buf_alloc, 0);
+
+ drm_dma_setup(dev);
+
+ atomic_set(&dev->total_open, 0);
+ atomic_set(&dev->total_close, 0);
+ atomic_set(&dev->total_ioctl, 0);
+ atomic_set(&dev->total_irq, 0);
+ atomic_set(&dev->total_ctx, 0);
+ atomic_set(&dev->total_locks, 0);
+ atomic_set(&dev->total_unlocks, 0);
+ atomic_set(&dev->total_contends, 0);
+ atomic_set(&dev->total_sleeps, 0);
+
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ dev->vmalist = NULL;
+ dev->lock.hw_lock = NULL;
+ init_waitqueue_head(&dev->lock.lock_queue);
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_timer(&dev->timer);
+ init_waitqueue_head(&dev->context_wait);
+#if DRM_DMA_HISTOGRAM
+ memset(&dev->histo, 0, sizeof(dev->histo));
+#endif
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head(&dev->buf_readers);
+ init_waitqueue_head(&dev->buf_writers);
+
+ DRM_DEBUG("\n");
+
+ /* The kernel's context could be created here, but is now created
+ in drm_dma_enqueue. This is more resource-efficient for
+ hardware that does not do DMA, but may mean that
+ drm_select_queue fails between the time the interrupt is
+ initialized and the time the queues are initialized. */
+
+ return 0;
+}
+
+
+static int i810_takedown(drm_device_t *dev)
+{
+ int i;
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_vma_entry_t *vma, *vma_next;
+
+ DRM_DEBUG("\n");
+
+ if (dev->irq) i810_irq_uninstall(dev);
+
+ down(&dev->struct_sem);
+ del_timer(&dev->timer);
+
+ if (dev->devname) {
+ drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+ dev->devname = NULL;
+ }
+
+ if (dev->unique) {
+ drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+ /* Clear pid list */
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ for (pt = dev->magiclist[i].head; pt; pt = next) {
+ next = pt->next;
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+ /* Clear AGP information */
+ if (dev->agp) {
+ drm_agp_mem_t *entry;
+ drm_agp_mem_t *nexte;
+
+ /* Remove AGP resources, but leave dev->agp
+ intact until i810_cleanup is called. */
+ for (entry = dev->agp->memory; entry; entry = nexte) {
+ nexte = entry->next;
+ if (entry->bound) drm_unbind_agp(entry->memory);
+ drm_free_agp(entry->memory, entry->pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ }
+ dev->agp->memory = NULL;
+
+ if (dev->agp->acquired) _drm_agp_release();
+
+ dev->agp->acquired = 0;
+ dev->agp->enabled = 0;
+ }
+ /* Clear vma list (only built for debugging) */
+ if (dev->vmalist) {
+ for (vma = dev->vmalist; vma; vma = vma_next) {
+ vma_next = vma->next;
+ drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+ }
+ dev->vmalist = NULL;
+ }
+
+ /* Clear map area and mtrr information */
+ if (dev->maplist) {
+ for (i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#ifdef CONFIG_MTRR
+ if (map->mtrr >= 0) {
+ int retcode;
+ retcode = mtrr_del(map->mtrr,
+ map->offset,
+ map->size);
+ DRM_DEBUG("mtrr_del = %d\n", retcode);
+ }
+#endif
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+ case _DRM_SHM:
+ drm_free_pages((unsigned long)map->handle,
+ drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ break;
+ case _DRM_AGP:
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ drm_free(dev->maplist,
+ dev->map_count * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ }
+
+ if (dev->queuelist) {
+ for (i = 0; i < dev->queue_count; i++) {
+ if (dev->queuelist[i]) {
+ drm_waitlist_destroy(&dev->queuelist[i]->waitlist);
+ drm_free(dev->queuelist[i],
+ sizeof(*dev->queuelist[0]),
+ DRM_MEM_QUEUES);
+ dev->queuelist[i] = NULL;
+ }
+ }
+ drm_free(dev->queuelist,
+ dev->queue_slots * sizeof(*dev->queuelist),
+ DRM_MEM_QUEUES);
+ dev->queuelist = NULL;
+ }
+
+ drm_dma_takedown(dev);
+
+ dev->queue_count = 0;
+ if (dev->lock.hw_lock) {
+ dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.pid = 0;
+ wake_up_interruptible(&dev->lock.lock_queue);
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+/* i810_init is called via init_module at module load time, or via
+ * linux/init/main.c (this is not currently supported). */
+
+static int __init i810_init(void)
+{
+ int retcode;
+ drm_device_t *dev = &i810_device;
+
+ DRM_DEBUG("\n");
+
+ memset((void *)dev, 0, sizeof(*dev));
+ dev->count_lock = SPIN_LOCK_UNLOCKED;
+ sema_init(&dev->struct_sem, 1);
+
+#ifdef MODULE
+ drm_parse_options(i810);
+#endif
+ DRM_DEBUG("doing misc_register\n");
+ if ((retcode = misc_register(&i810_misc))) {
+ DRM_ERROR("Cannot register \"%s\"\n", I810_NAME);
+ return retcode;
+ }
+ dev->device = MKDEV(MISC_MAJOR, i810_misc.minor);
+ dev->name = I810_NAME;
+
+ DRM_DEBUG("doing mem init\n");
+ drm_mem_init();
+ DRM_DEBUG("doing proc init\n");
+ drm_proc_init(dev);
+ DRM_DEBUG("doing agp init\n");
+ dev->agp = drm_agp_init();
+ if (dev->agp == NULL) {
+ DRM_INFO("The i810 drm module requires the agpgart module"
+ " to function correctly.\nPlease load the agpgart"
+ " module before you load the i810 module.\n");
+ drm_proc_cleanup();
+ misc_deregister(&i810_misc);
+ i810_takedown(dev);
+ return -ENOMEM;
+ }
+ DRM_DEBUG("doing ctxbitmap init\n");
+ if((retcode = drm_ctxbitmap_init(dev))) {
+ DRM_ERROR("Cannot allocate memory for context bitmap.\n");
+ drm_proc_cleanup();
+ misc_deregister(&i810_misc);
+ i810_takedown(dev);
+ return retcode;
+ }
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+ I810_NAME,
+ I810_MAJOR,
+ I810_MINOR,
+ I810_PATCHLEVEL,
+ I810_DATE,
+ i810_misc.minor);
+
+ return 0;
+}
+
+/* i810_cleanup is called via cleanup_module at module unload time. */
+
+static void __exit i810_cleanup(void)
+{
+ drm_device_t *dev = &i810_device;
+
+ DRM_DEBUG("\n");
+
+ drm_proc_cleanup();
+ if (misc_deregister(&i810_misc)) {
+ DRM_ERROR("Cannot unload module\n");
+ } else {
+ DRM_INFO("Module unloaded\n");
+ }
+ drm_ctxbitmap_cleanup(dev);
+ i810_takedown(dev);
+ if (dev->agp) {
+ drm_agp_uninit();
+ drm_free(dev->agp, sizeof(*dev->agp), DRM_MEM_AGPLISTS);
+ dev->agp = NULL;
+ }
+}
+
+module_init(i810_init);
+module_exit(i810_cleanup);
+
+
+int i810_version(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_version_t version;
+ int len;
+
+ if (copy_from_user(&version,
+ (drm_version_t *)arg,
+ sizeof(version)))
+ return -EFAULT;
+
+#define DRM_COPY(name,value) \
+ len = strlen(value); \
+ if (len > name##_len) len = name##_len; \
+ name##_len = strlen(value); \
+ if (len && name) { \
+ if (copy_to_user(name, value, len)) \
+ return -EFAULT; \
+ }
+
+ version.version_major = I810_MAJOR;
+ version.version_minor = I810_MINOR;
+ version.version_patchlevel = I810_PATCHLEVEL;
+
+ DRM_COPY(version.name, I810_NAME);
+ DRM_COPY(version.date, I810_DATE);
+ DRM_COPY(version.desc, I810_DESC);
+
+ if (copy_to_user((drm_version_t *)arg,
+ &version,
+ sizeof(version)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev = &i810_device;
+ int retcode = 0;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (!(retcode = drm_open_helper(inode, filp, dev))) {
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_open);
+ spin_lock(&dev->count_lock);
+ if (!dev->open_count++) {
+ spin_unlock(&dev->count_lock);
+ return i810_setup(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+ return retcode;
+}
+
+int i810_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int retcode = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+ DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+ current->pid, dev->device, dev->open_count);
+
+ if (dev->lock.hw_lock && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+ && dev->lock.pid == current->pid) {
+ i810_reclaim_buffers(dev, priv->pid);
+ DRM_ERROR("Process %d dead, freeing lock for context %d\n",
+ current->pid,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ drm_lock_free(dev,
+ &dev->lock.hw_lock->lock,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+
+ /* FIXME: may require heavy-handed reset of
+ hardware at this point, possibly
+ processed via a callback to the X
+ server. */
+ } else if (dev->lock.hw_lock) {
+ /* The lock is required to reclaim buffers */
+ DECLARE_WAITQUEUE(entry, current);
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ retcode = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ dev->lock.pid = priv->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ break; /* Got lock */
+ }
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ schedule();
+ if (signal_pending(current)) {
+ retcode = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+ if(!retcode) {
+ i810_reclaim_buffers(dev, priv->pid);
+ drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT);
+ }
+ }
+ drm_fasync(-1, filp, 0);
+
+ down(&dev->struct_sem);
+ if (priv->prev) priv->prev->next = priv->next;
+ else dev->file_first = priv->next;
+ if (priv->next) priv->next->prev = priv->prev;
+ else dev->file_last = priv->prev;
+ up(&dev->struct_sem);
+
+ drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_close);
+ spin_lock(&dev->count_lock);
+ if (!--dev->open_count) {
+ if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+ DRM_ERROR("Device busy: %d %d\n",
+ atomic_read(&dev->ioctl_count),
+ dev->blocked);
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return i810_takedown(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return retcode;
+}
+
+/* drm_ioctl is called whenever a process performs an ioctl on /dev/drm. */
+
+int i810_ioctl(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ int nr = DRM_IOCTL_NR(cmd);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode = 0;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+
+ atomic_inc(&dev->ioctl_count);
+ atomic_inc(&dev->total_ioctl);
+ ++priv->ioctl_count;
+
+ DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+ current->pid, cmd, nr, dev->device, priv->authenticated);
+
+ if (nr >= I810_IOCTL_COUNT) {
+ retcode = -EINVAL;
+ } else {
+ ioctl = &i810_ioctls[nr];
+ func = ioctl->func;
+
+ if (!func) {
+ DRM_DEBUG("no function\n");
+ retcode = -EINVAL;
+ } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+ || (ioctl->auth_needed && !priv->authenticated)) {
+ retcode = -EACCES;
+ } else {
+ retcode = (func)(inode, filp, cmd, arg);
+ }
+ }
+
+ atomic_dec(&dev->ioctl_count);
+ return retcode;
+}
+
+int i810_unlock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d frees lock (%d holds)\n",
+ lock.context,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ atomic_inc(&dev->total_unlocks);
+ if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+ atomic_inc(&dev->total_contends);
+ drm_lock_transfer(dev, &dev->lock.hw_lock->lock, DRM_KERNEL_CONTEXT);
+ if (!dev->context_flag) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.lhld[drm_histogram_slot(get_cycles()
+ - dev->lck_start)]);
+#endif
+
+ unblock_all_signals();
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.h
--- linux-2.4.13/drivers/char/drm-4.0/i810_drv.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,225 @@
+/* i810_drv.h -- Private header for the Intel i810 driver -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#ifndef _I810_DRV_H_
+#define _I810_DRV_H_
+
+typedef struct drm_i810_buf_priv {
+ u32 *in_use;
+ int my_use_idx;
+ int currently_mapped;
+ void *virtual;
+ void *kernel_virtual;
+ int map_count;
+ struct vm_area_struct *vma;
+} drm_i810_buf_priv_t;
+
+typedef struct _drm_i810_ring_buffer{
+ int tail_mask;
+ unsigned long Start;
+ unsigned long End;
+ unsigned long Size;
+ u8 *virtual_start;
+ int head;
+ int tail;
+ int space;
+} drm_i810_ring_buffer_t;
+
+typedef struct drm_i810_private {
+ int ring_map_idx;
+ int buffer_map_idx;
+
+ drm_i810_ring_buffer_t ring;
+ drm_i810_sarea_t *sarea_priv;
+
+ unsigned long hw_status_page;
+ unsigned long counter;
+
+ atomic_t flush_done;
+ wait_queue_head_t flush_queue; /* Processes waiting until flush */
+ drm_buf_t *mmap_buffer;
+
+
+ u32 front_di1, back_di1, zi1;
+
+ int back_offset;
+ int depth_offset;
+ int w, h;
+ int pitch;
+} drm_i810_private_t;
+
+ /* i810_drv.c */
+extern int i810_version(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_open(struct inode *inode, struct file *filp);
+extern int i810_release(struct inode *inode, struct file *filp);
+extern int i810_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_unlock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* i810_dma.c */
+extern int i810_dma_schedule(drm_device_t *dev, int locked);
+extern int i810_getbuf(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_irq_install(drm_device_t *dev, int irq);
+extern int i810_irq_uninstall(drm_device_t *dev);
+extern int i810_control(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_lock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_dma_init(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_flush_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern void i810_reclaim_buffers(drm_device_t *dev, pid_t pid);
+extern int i810_getage(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg);
+extern int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma);
+extern int i810_copybuf(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_docopy(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* i810_bufs.c */
+extern int i810_addbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_infobufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_markbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_freebufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_addmap(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* i810_context.c */
+extern int i810_resctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_addctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_modctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_getctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_switchctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_newctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_rmctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+extern int i810_context_switch(drm_device_t *dev, int old, int new);
+extern int i810_context_switch_complete(drm_device_t *dev, int new);
+
+#define I810_VERBOSE 0
+
+
+int i810_dma_vertex(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+int i810_swap_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+int i810_clear_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23))
+#define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23))
+#define CMD_REPORT_HEAD (7<<23)
+#define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1)
+#define CMD_OP_BATCH_BUFFER ((0x0<<29)|(0x30<<23)|0x1)
+
+#define INST_PARSER_CLIENT 0x00000000
+#define INST_OP_FLUSH 0x02000000
+#define INST_FLUSH_MAP_CACHE 0x00000001
+
+
+#define BB1_START_ADDR_MASK (~0x7)
+#define BB1_PROTECTED (1<<0)
+#define BB1_UNPROTECTED (0<<0)
+#define BB2_END_ADDR_MASK (~0x7)
+
+#define I810REG_HWSTAM 0x02098
+#define I810REG_INT_IDENTITY_R 0x020a4
+#define I810REG_INT_MASK_R 0x020a8
+#define I810REG_INT_ENABLE_R 0x020a0
+
+#define LP_RING 0x2030
+#define HP_RING 0x2040
+#define RING_TAIL 0x00
+#define TAIL_ADDR 0x000FFFF8
+#define RING_HEAD 0x04
+#define HEAD_WRAP_COUNT 0xFFE00000
+#define HEAD_WRAP_ONE 0x00200000
+#define HEAD_ADDR 0x001FFFFC
+#define RING_START 0x08
+#define START_ADDR 0x00FFFFF8
+#define RING_LEN 0x0C
+#define RING_NR_PAGES 0x000FF000
+#define RING_REPORT_MASK 0x00000006
+#define RING_REPORT_64K 0x00000002
+#define RING_REPORT_128K 0x00000004
+#define RING_NO_REPORT 0x00000000
+#define RING_VALID_MASK 0x00000001
+#define RING_VALID 0x00000001
+#define RING_INVALID 0x00000000
+
+#define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19))
+#define SC_UPDATE_SCISSOR (0x1<<1)
+#define SC_ENABLE_MASK (0x1<<0)
+#define SC_ENABLE (0x1<<0)
+
+#define GFX_OP_SCISSOR_INFO ((0x3<<29)|(0x1d<<24)|(0x81<<16)|(0x1))
+#define SCI_YMIN_MASK (0xffff<<16)
+#define SCI_XMIN_MASK (0xffff<<0)
+#define SCI_YMAX_MASK (0xffff<<16)
+#define SCI_XMAX_MASK (0xffff<<0)
+
+#define GFX_OP_COLOR_FACTOR ((0x3<<29)|(0x1d<<24)|(0x1<<16)|0x0)
+#define GFX_OP_STIPPLE ((0x3<<29)|(0x1d<<24)|(0x83<<16))
+#define GFX_OP_MAP_INFO ((0x3<<29)|(0x1d<<24)|0x2)
+#define GFX_OP_DESTBUFFER_VARS ((0x3<<29)|(0x1d<<24)|(0x85<<16)|0x0)
+#define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
+#define GFX_OP_PRIMITIVE ((0x3<<29)|(0x1f<<24))
+
+#define CMD_OP_Z_BUFFER_INFO ((0x0<<29)|(0x16<<23))
+#define CMD_OP_DESTBUFFER_INFO ((0x0<<29)|(0x15<<23))
+
+#define BR00_BITBLT_CLIENT 0x40000000
+#define BR00_OP_COLOR_BLT 0x10000000
+#define BR00_OP_SRC_COPY_BLT 0x10C00000
+#define BR13_SOLID_PATTERN 0x80000000
+
+
+
+#endif
+
diff -urN linux-2.4.13/drivers/char/drm-4.0/init.c linux-2.4.13-lia/drivers/char/drm-4.0/init.c
--- linux-2.4.13/drivers/char/drm-4.0/init.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/init.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,113 @@
+/* init.c -- Setup/Cleanup for DRM -*- linux-c -*-
+ * Created: Mon Jan 4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_flags = 0;
+
+/* drm_parse_option parses a single option. See description for
+ drm_parse_options for details. */
+
+static void drm_parse_option(char *s)
+{
+ char *c, *r;
+
+ DRM_DEBUG("\"%s\"\n", s);
+ if (!s || !*s) return;
+ for (c = s; *c && *c != ':'; c++); /* find : or \0 */
+ if (*c) r = c + 1; else r = NULL; /* remember remainder */
+ *c = '\0'; /* terminate */
+ if (!strcmp(s, "noctx")) {
+ drm_flags |= DRM_FLAG_NOCTX;
+ DRM_INFO("Server-mediated context switching OFF\n");
+ return;
+ }
+ if (!strcmp(s, "debug")) {
+ drm_flags |= DRM_FLAG_DEBUG;
+ DRM_INFO("Debug messages ON\n");
+ return;
+ }
+ DRM_ERROR("\"%s\" is not a valid option\n", s);
+ return;
+}
+
+/* drm_parse_options parse the insmod "drm=" options, or the command-line
+ * options passed to the kernel via LILO. The grammar of the format is as
+ * follows:
+ *
+ * drm ::= 'drm=' option_list
+ * option_list ::= option [ ';' option_list ]
+ * option ::= 'device:' major
+ * | 'debug'
+ * | 'noctx'
+ * major ::= INTEGER
+ *
+ * Note that 's' contains option_list without the 'drm=' part.
+ *
+ * device=major,minor specifies the device number used for /dev/drm
+ * if major = 0 then the misc device is used
+ * if major = 0 and minor = 0 then dynamic misc allocation is used
+ * debug=on specifies that debugging messages will be printk'd
+ * debug=trace specifies that each function call will be logged via printk
+ * debug=off turns off all debugging options
+ *
+ */
+
+void drm_parse_options(char *s)
+{
+ char *h, *t, *n;
+
+ DRM_DEBUG("\"%s\"\n", s ?: "");
+ if (!s || !*s) return;
+
+ for (h = t = n = s; h && *h; h = n) {
+ for (; *t && *t != ';'; t++); /* find ; or \0 */
+ if (*t) n = t + 1; else n = NULL; /* remember next */
+ *t = '\0'; /* terminate */
+ drm_parse_option(h); /* parse */
+ }
+}
+
+/* drm_cpu_valid returns non-zero if the DRI will run on this CPU, and 0
+ * otherwise. */
+
+int drm_cpu_valid(void)
+{
+#if defined(__i386__)
+ if (boot_cpu_data.x86 == 3) return 0; /* No cmpxchg on a 386 */
+#endif
+#if defined(__sparc__) && !defined(__sparc_v9__)
+ if (1)
+ return 0; /* No cmpxchg before v9 sparc. */
+#endif
+ return 1;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ioctl.c linux-2.4.13-lia/drivers/char/drm-4.0/ioctl.c
--- linux-2.4.13/drivers/char/drm-4.0/ioctl.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ioctl.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,99 @@
+/* ioctl.c -- IOCTL processing for DRM -*- linux-c -*-
+ * Created: Fri Jan 8 09:01:26 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_irq_busid(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_irq_busid_t p;
+ struct pci_dev *dev;
+
+ if (copy_from_user(&p, (drm_irq_busid_t *)arg, sizeof(p)))
+ return -EFAULT;
+ dev = pci_find_slot(p.busnum, PCI_DEVFN(p.devnum, p.funcnum));
+ if (dev) p.irq = dev->irq;
+ else p.irq = 0;
+ DRM_DEBUG("%d:%d:%d => IRQ %d\n",
+ p.busnum, p.devnum, p.funcnum, p.irq);
+ if (copy_to_user((drm_irq_busid_t *)arg, &p, sizeof(p)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_getunique(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_unique_t u;
+
+ if (copy_from_user(&u, (drm_unique_t *)arg, sizeof(u)))
+ return -EFAULT;
+ if (u.unique_len >= dev->unique_len) {
+ if (copy_to_user(u.unique, dev->unique, dev->unique_len))
+ return -EFAULT;
+ }
+ u.unique_len = dev->unique_len;
+ if (copy_to_user((drm_unique_t *)arg, &u, sizeof(u)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_setunique(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_unique_t u;
+
+ if (dev->unique_len || dev->unique)
+ return -EBUSY;
+
+ if (copy_from_user(&u, (drm_unique_t *)arg, sizeof(u)))
+ return -EFAULT;
+
+ if (!u.unique_len || u.unique_len > 1024)
+ return -EINVAL;
+
+ dev->unique_len = u.unique_len;
+ dev->unique = drm_alloc(u.unique_len + 1, DRM_MEM_DRIVER);
+ if (copy_from_user(dev->unique, u.unique, dev->unique_len))
+ return -EFAULT;
+ dev->unique[dev->unique_len] = '\0';
+
+ dev->devname = drm_alloc(strlen(dev->name) + strlen(dev->unique) + 2,
+ DRM_MEM_DRIVER);
+ sprintf(dev->devname, "%s@%s", dev->name, dev->unique);
+
+ return 0;
+}
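[Editor's note: `drm_getunique` above uses a common length-negotiation pattern: the caller passes a buffer and its size, the kernel copies the string only if it fits, and always reports the true length so the caller can retry with a larger buffer. A userspace stand-in for that round trip (the function name and bus-id string are illustrative, not the DRM API):]

```c
#include <assert.h>
#include <string.h>

/* Copy `unique` into `buf` only if `buflen` is large enough, and always
 * return the real length -- mirroring the copy_to_user negotiation in
 * drm_getunique (probe with buflen 0, then call again with a big buffer). */
static size_t get_unique(const char *unique, char *buf, size_t buflen)
{
    size_t len = strlen(unique);

    if (buflen >= len)
        memcpy(buf, unique, len);   /* fits: hand the bytes back */
    return len;                     /* always report the true length */
}
```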
diff -urN linux-2.4.13/drivers/char/drm-4.0/lists.c linux-2.4.13-lia/drivers/char/drm-4.0/lists.c
--- linux-2.4.13/drivers/char/drm-4.0/lists.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/lists.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,218 @@
+/* lists.c -- Buffer list handling routines -*- linux-c -*-
+ * Created: Mon Apr 19 20:54:22 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_waitlist_create(drm_waitlist_t *bl, int count)
+{
+ if (bl->count) return -EINVAL;
+
+ bl->count = count;
+ bl->bufs = drm_alloc((bl->count + 2) * sizeof(*bl->bufs),
+ DRM_MEM_BUFLISTS);
+ bl->rp = bl->bufs;
+ bl->wp = bl->bufs;
+ bl->end = &bl->bufs[bl->count+1];
+ bl->write_lock = SPIN_LOCK_UNLOCKED;
+ bl->read_lock = SPIN_LOCK_UNLOCKED;
+ return 0;
+}
+
+int drm_waitlist_destroy(drm_waitlist_t *bl)
+{
+ if (bl->rp != bl->wp) return -EINVAL;
+ if (bl->bufs) drm_free(bl->bufs,
+ (bl->count + 2) * sizeof(*bl->bufs),
+ DRM_MEM_BUFLISTS);
+ bl->count = 0;
+ bl->bufs = NULL;
+ bl->rp = NULL;
+ bl->wp = NULL;
+ bl->end = NULL;
+ return 0;
+}
+
+int drm_waitlist_put(drm_waitlist_t *bl, drm_buf_t *buf)
+{
+ int left;
+ unsigned long flags;
+
+ left = DRM_LEFTCOUNT(bl);
+ if (!left) {
+ DRM_ERROR("Overflow while adding buffer %d from pid %d\n",
+ buf->idx, buf->pid);
+ return -EINVAL;
+ }
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = get_cycles();
+#endif
+ buf->list = DRM_LIST_WAIT;
+
+ spin_lock_irqsave(&bl->write_lock, flags);
+ *bl->wp = buf;
+ if (++bl->wp >= bl->end) bl->wp = bl->bufs;
+ spin_unlock_irqrestore(&bl->write_lock, flags);
+
+ return 0;
+}
+
+drm_buf_t *drm_waitlist_get(drm_waitlist_t *bl)
+{
+ drm_buf_t *buf;
+ unsigned long flags;
+
+ spin_lock_irqsave(&bl->read_lock, flags);
+ buf = *bl->rp;
+ if (bl->rp == bl->wp) {
+ spin_unlock_irqrestore(&bl->read_lock, flags);
+ return NULL;
+ }
+ if (++bl->rp >= bl->end) bl->rp = bl->bufs;
+ spin_unlock_irqrestore(&bl->read_lock, flags);
+
+ return buf;
+}
+
+int drm_freelist_create(drm_freelist_t *bl, int count)
+{
+ atomic_set(&bl->count, 0);
+ bl->next = NULL;
+ init_waitqueue_head(&bl->waiting);
+ bl->low_mark = 0;
+ bl->high_mark = 0;
+ atomic_set(&bl->wfh, 0);
+ bl->lock = SPIN_LOCK_UNLOCKED;
+ ++bl->initialized;
+ return 0;
+}
+
+int drm_freelist_destroy(drm_freelist_t *bl)
+{
+ atomic_set(&bl->count, 0);
+ bl->next = NULL;
+ return 0;
+}
+
+int drm_freelist_put(drm_device_t *dev, drm_freelist_t *bl, drm_buf_t *buf)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ if (!dma) {
+ DRM_ERROR("No DMA support\n");
+ return 1;
+ }
+
+ if (buf->waiting || buf->pending || buf->list == DRM_LIST_FREE) {
+ DRM_ERROR("Freed buffer %d: w%d, p%d, l%d\n",
+ buf->idx, buf->waiting, buf->pending, buf->list);
+ }
+ if (!bl) return 1;
+#if DRM_DMA_HISTOGRAM
+ buf->time_freed = get_cycles();
+ drm_histogram_compute(dev, buf);
+#endif
+ buf->list = DRM_LIST_FREE;
+
+ spin_lock(&bl->lock);
+ buf->next = bl->next;
+ bl->next = buf;
+ spin_unlock(&bl->lock);
+
+ atomic_inc(&bl->count);
+ if (atomic_read(&bl->count) > dma->buf_count) {
+ DRM_ERROR("%d of %d buffers free after addition of %d\n",
+ atomic_read(&bl->count), dma->buf_count, buf->idx);
+ return 1;
+ }
+ /* Check for high water mark */
+ if (atomic_read(&bl->wfh) && atomic_read(&bl->count)>=bl->high_mark) {
+ atomic_set(&bl->wfh, 0);
+ wake_up_interruptible(&bl->waiting);
+ }
+ return 0;
+}
+
+static drm_buf_t *drm_freelist_try(drm_freelist_t *bl)
+{
+ drm_buf_t *buf;
+
+ if (!bl) return NULL;
+
+ /* Get buffer */
+ spin_lock(&bl->lock);
+ if (!bl->next) {
+ spin_unlock(&bl->lock);
+ return NULL;
+ }
+ buf = bl->next;
+ bl->next = bl->next->next;
+ spin_unlock(&bl->lock);
+
+ atomic_dec(&bl->count);
+ buf->next = NULL;
+ buf->list = DRM_LIST_NONE;
+ if (buf->waiting || buf->pending) {
+ DRM_ERROR("Free buffer %d: w%d, p%d, l%d\n",
+ buf->idx, buf->waiting, buf->pending, buf->list);
+ }
+
+ return buf;
+}
+
+drm_buf_t *drm_freelist_get(drm_freelist_t *bl, int block)
+{
+ drm_buf_t *buf = NULL;
+ DECLARE_WAITQUEUE(entry, current);
+
+ if (!bl || !bl->initialized) return NULL;
+
+ /* Check for low water mark */
+ if (atomic_read(&bl->count) <= bl->low_mark) /* Became low */
+ atomic_set(&bl->wfh, 1);
+ if (atomic_read(&bl->wfh)) {
+ if (block) {
+ add_wait_queue(&bl->waiting, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!atomic_read(&bl->wfh)
+ && (buf = drm_freelist_try(bl))) break;
+ schedule();
+ if (signal_pending(current)) break;
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&bl->waiting, &entry);
+ }
+ return buf;
+ }
+
+ return drm_freelist_try(bl);
+}
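[Editor's note: the waitlist above is a fixed-size ring buffer with separate read and write spinlocks; it allocates `count + 2` slots and keeps one slot unused so that `rp == wp` unambiguously means "empty" rather than "full". A minimal userspace sketch of the same pointer discipline, without the locking (names are illustrative, not the DRM API):]

```c
#include <assert.h>
#include <stddef.h>

#define RING_SLOTS 8  /* usable capacity is RING_SLOTS - 1; one slot stays empty */

struct ring {
    int  bufs[RING_SLOTS];
    int *rp, *wp, *end;     /* read pointer, write pointer, one-past-last */
};

static void ring_init(struct ring *r)
{
    r->rp = r->wp = r->bufs;
    r->end = &r->bufs[RING_SLOTS];
}

/* Returns 0 on success, -1 if the ring is full. */
static int ring_put(struct ring *r, int v)
{
    int *next = (r->wp + 1 == r->end) ? r->bufs : r->wp + 1;

    if (next == r->rp)
        return -1;          /* full: advancing wp would collide with rp */
    *r->wp = v;
    r->wp = next;
    return 0;
}

/* Returns 0 on success, -1 if the ring is empty. */
static int ring_get(struct ring *r, int *out)
{
    if (r->rp == r->wp)
        return -1;          /* empty: nothing between rp and wp */
    *out = *r->rp;
    r->rp = (r->rp + 1 == r->end) ? r->bufs : r->rp + 1;
    return 0;
}
```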
diff -urN linux-2.4.13/drivers/char/drm-4.0/lock.c linux-2.4.13-lia/drivers/char/drm-4.0/lock.c
--- linux-2.4.13/drivers/char/drm-4.0/lock.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/lock.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,252 @@
+/* lock.c -- IOCTLs for locking -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_block(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ DRM_DEBUG("\n");
+ return 0;
+}
+
+int drm_unblock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ DRM_DEBUG("\n");
+ return 0;
+}
+
+int drm_lock_take(__volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
+ do {
+ old = *lock;
+ if (old & _DRM_LOCK_HELD) new = old | _DRM_LOCK_CONT;
+ else new = context | _DRM_LOCK_HELD;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ if (_DRM_LOCKING_CONTEXT(old) == context) {
+ if (old & _DRM_LOCK_HELD) {
+ if (context != DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("%d holds heavyweight lock\n",
+ context);
+ }
+ return 0;
+ }
+ }
+ if (new == (context | _DRM_LOCK_HELD)) {
+ /* Have lock */
+ return 1;
+ }
+ return 0;
+}
+
+/* This takes a lock forcibly and hands it to context. Should ONLY be used
+ inside *_unlock to give lock to kernel before calling *_dma_schedule. */
+int drm_lock_transfer(drm_device_t *dev,
+ __volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
+ dev->lock.pid = 0;
+ do {
+ old = *lock;
+ new = context | _DRM_LOCK_HELD;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ return 1;
+}
+
+int drm_lock_free(drm_device_t *dev,
+ __volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+ pid_t pid = dev->lock.pid;
+
+ dev->lock.pid = 0;
+ do {
+ old = *lock;
+ new = 0;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ if (_DRM_LOCK_IS_HELD(old) && _DRM_LOCKING_CONTEXT(old) != context) {
+ DRM_ERROR("%d freed heavyweight lock held by %d (pid %d)\n",
+ context,
+ _DRM_LOCKING_CONTEXT(old),
+ pid);
+ return 1;
+ }
+ wake_up_interruptible(&dev->lock.lock_queue);
+ return 0;
+}
+
+static int drm_flush_queue(drm_device_t *dev, int context)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_queue_t *q = dev->queuelist[context];
+
+ DRM_DEBUG("\n");
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) > 1) {
+ atomic_inc(&q->block_write);
+ add_wait_queue(&q->flush_queue, &entry);
+ atomic_inc(&q->block_count);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!DRM_BUFCOUNT(&q->waitlist)) break;
+ schedule();
+ if (signal_pending(current)) {
+ ret = -EINTR; /* Can't restart */
+ break;
+ }
+ }
+ atomic_dec(&q->block_count);
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&q->flush_queue, &entry);
+ }
+ atomic_dec(&q->use_count);
+ atomic_inc(&q->total_flushed);
+
+ /* NOTE: block_write is still incremented!
+ Use drm_flush_unblock_queue to decrement. */
+ return ret;
+}
+
+static int drm_flush_unblock_queue(drm_device_t *dev, int context)
+{
+ drm_queue_t *q = dev->queuelist[context];
+
+ DRM_DEBUG("\n");
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) > 1) {
+ if (atomic_read(&q->block_write)) {
+ atomic_dec(&q->block_write);
+ wake_up_interruptible(&q->write_queue);
+ }
+ }
+ atomic_dec(&q->use_count);
+ return 0;
+}
+
+int drm_flush_block_and_flush(drm_device_t *dev, int context,
+ drm_lock_flags_t flags)
+{
+ int ret = 0;
+ int i;
+
+ DRM_DEBUG("\n");
+
+ if (flags & _DRM_LOCK_FLUSH) {
+ ret = drm_flush_queue(dev, DRM_KERNEL_CONTEXT);
+ if (!ret) ret = drm_flush_queue(dev, context);
+ }
+ if (flags & _DRM_LOCK_FLUSH_ALL) {
+ for (i = 0; !ret && i < dev->queue_count; i++) {
+ ret = drm_flush_queue(dev, i);
+ }
+ }
+ return ret;
+}
+
+int drm_flush_unblock(drm_device_t *dev, int context, drm_lock_flags_t flags)
+{
+ int ret = 0;
+ int i;
+
+ DRM_DEBUG("\n");
+
+ if (flags & _DRM_LOCK_FLUSH) {
+ ret = drm_flush_unblock_queue(dev, DRM_KERNEL_CONTEXT);
+ if (!ret) ret = drm_flush_unblock_queue(dev, context);
+ }
+ if (flags & _DRM_LOCK_FLUSH_ALL) {
+ for (i = 0; !ret && i < dev->queue_count; i++) {
+ ret = drm_flush_unblock_queue(dev, i);
+ }
+ }
+
+ return ret;
+}
+
+int drm_finish(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int ret = 0;
+ drm_lock_t lock;
+
+ DRM_DEBUG("\n");
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+ ret = drm_flush_block_and_flush(dev, lock.context, lock.flags);
+ drm_flush_unblock(dev, lock.context, lock.flags);
+ return ret;
+}
+
+/* If we get here, it means that the process has called DRM_IOCTL_LOCK
+ without calling DRM_IOCTL_UNLOCK.
+
+ If the lock is not held, then let the signal proceed as usual.
+
+ If the lock is held, then set the contended flag and keep the signal
+ blocked.
+
+ Return 1 if the signal should be delivered normally.
+ Return 0 if the signal should be blocked. */
+
+int drm_notifier(void *priv)
+{
+ drm_sigdata_t *s = (drm_sigdata_t *)priv;
+ unsigned int old, new, prev;
+
+
+ /* Allow signal delivery if lock isn't held */
+ if (!_DRM_LOCK_IS_HELD(s->lock->lock)
+ || _DRM_LOCKING_CONTEXT(s->lock->lock) != s->context) return 1;
+
+ /* Otherwise, set flag to force call to
+ drmUnlock */
+ do {
+ old = s->lock->lock;
+ new = old | _DRM_LOCK_CONT;
+ prev = cmpxchg(&s->lock->lock, old, new);
+ } while (prev != old);
+ return 0;
+}
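[Editor's note: `drm_lock_take`, `drm_lock_transfer`, and `drm_notifier` above all use the same lock-free retry loop: read the old lock word, compute the new one, and `cmpxchg`; if another CPU changed the word in between, loop and recompute. The same pattern in portable C11 atomics (a sketch of the idiom under simplified bit definitions, not the kernel's `cmpxchg` or the real `_DRM_LOCK_*` values):]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define LOCK_HELD 0x80000000u
#define LOCK_CONT 0x40000000u

/* Try to take the lock for `context`.  Returns true if we now hold it;
 * if it was already held, just set the contention bit and return false. */
static bool lock_take(_Atomic unsigned int *lock, unsigned int context)
{
    unsigned int old, new;

    old = atomic_load(lock);
    do {
        if (old & LOCK_HELD)
            new = old | LOCK_CONT;          /* held: mark it contended */
        else
            new = context | LOCK_HELD;      /* free: claim it for context */
        /* On failure, compare_exchange reloads `old`, so we recompute. */
    } while (!atomic_compare_exchange_weak(lock, &old, new));

    return new == (context | LOCK_HELD);    /* did we end up holding it? */
}

static void lock_release(_Atomic unsigned int *lock)
{
    atomic_store(lock, 0);
}
```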
diff -urN linux-2.4.13/drivers/char/drm-4.0/memory.c linux-2.4.13-lia/drivers/char/drm-4.0/memory.c
--- linux-2.4.13/drivers/char/drm-4.0/memory.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/memory.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,486 @@
+/* memory.c -- Memory management wrappers for DRM -*- linux-c -*-
+ * Created: Thu Feb 4 14:00:34 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include <linux/config.h>
+#include "drmP.h"
+#include <linux/wrapper.h>
+
+typedef struct drm_mem_stats {
+ const char *name;
+ int succeed_count;
+ int free_count;
+ int fail_count;
+ unsigned long bytes_allocated;
+ unsigned long bytes_freed;
+} drm_mem_stats_t;
+
+static spinlock_t drm_mem_lock = SPIN_LOCK_UNLOCKED;
+static unsigned long drm_ram_available = 0; /* In pages */
+static unsigned long drm_ram_used = 0;
+static drm_mem_stats_t drm_mem_stats[] = {
+ [DRM_MEM_DMA] = { "dmabufs" },
+ [DRM_MEM_SAREA] = { "sareas" },
+ [DRM_MEM_DRIVER] = { "driver" },
+ [DRM_MEM_MAGIC] = { "magic" },
+ [DRM_MEM_IOCTLS] = { "ioctltab" },
+ [DRM_MEM_MAPS] = { "maplist" },
+ [DRM_MEM_VMAS] = { "vmalist" },
+ [DRM_MEM_BUFS] = { "buflist" },
+ [DRM_MEM_SEGS] = { "seglist" },
+ [DRM_MEM_PAGES] = { "pagelist" },
+ [DRM_MEM_FILES] = { "files" },
+ [DRM_MEM_QUEUES] = { "queues" },
+ [DRM_MEM_CMDS] = { "commands" },
+ [DRM_MEM_MAPPINGS] = { "mappings" },
+ [DRM_MEM_BUFLISTS] = { "buflists" },
+ [DRM_MEM_AGPLISTS] = { "agplist" },
+ [DRM_MEM_TOTALAGP] = { "totalagp" },
+ [DRM_MEM_BOUNDAGP] = { "boundagp" },
+ [DRM_MEM_CTXBITMAP] = { "ctxbitmap"},
+ { NULL, 0, } /* Last entry must be null */
+};
+
+void drm_mem_init(void)
+{
+ drm_mem_stats_t *mem;
+ struct sysinfo si;
+
+ for (mem = drm_mem_stats; mem->name; ++mem) {
+ mem->succeed_count = 0;
+ mem->free_count = 0;
+ mem->fail_count = 0;
+ mem->bytes_allocated = 0;
+ mem->bytes_freed = 0;
+ }
+
+ si_meminfo(&si);
+#if LINUX_VERSION_CODE < 0x020317
+ /* Changed to page count in 2.3.23 */
+ drm_ram_available = si.totalram >> PAGE_SHIFT;
+#else
+ drm_ram_available = si.totalram;
+#endif
+ drm_ram_used = 0;
+}
+
+/* drm_mem_info is called whenever a process reads /dev/drm/mem. */
+
+static int _drm_mem_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ drm_mem_stats_t *pt;
+
+ if (offset > 0) return 0; /* no partial requests */
+ len = 0;
+ *eof = 1;
+ DRM_PROC_PRINT(" total counts "
+ " | outstanding \n");
+ DRM_PROC_PRINT("type alloc freed fail bytes freed"
+ " | allocs bytes\n\n");
+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n",
+ "system", 0, 0, 0,
+ drm_ram_available << (PAGE_SHIFT - 10));
+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n",
+ "locked", 0, 0, 0, drm_ram_used >> 10);
+ DRM_PROC_PRINT("\n");
+ for (pt = drm_mem_stats; pt->name; pt++) {
+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu %10lu | %6d %10ld\n",
+ pt->name,
+ pt->succeed_count,
+ pt->free_count,
+ pt->fail_count,
+ pt->bytes_allocated,
+ pt->bytes_freed,
+ pt->succeed_count - pt->free_count,
+ (long)pt->bytes_allocated
+ - (long)pt->bytes_freed);
+ }
+
+ return len;
+}
+
+int drm_mem_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ int ret;
+
+ spin_lock(&drm_mem_lock);
+ ret = _drm_mem_info(buf, start, offset, len, eof, data);
+ spin_unlock(&drm_mem_lock);
+ return ret;
+}
+
+void *drm_alloc(size_t size, int area)
+{
+ void *pt;
+
+ if (!size) {
+ DRM_MEM_ERROR(area, "Allocating 0 bytes\n");
+ return NULL;
+ }
+
+ if (!(pt = kmalloc(size, GFP_KERNEL))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_allocated += size;
+ spin_unlock(&drm_mem_lock);
+ return pt;
+}
+
+void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area)
+{
+ void *pt;
+
+ if (!(pt = drm_alloc(size, area))) return NULL;
+ if (oldpt && oldsize) {
+ memcpy(pt, oldpt, oldsize);
+ drm_free(oldpt, oldsize, area);
+ }
+ return pt;
+}
+
+char *drm_strdup(const char *s, int area)
+{
+ char *pt;
+ int length = s ? strlen(s) : 0;
+
+ if (!(pt = drm_alloc(length+1, area))) return NULL;
+ strcpy(pt, s);
+ return pt;
+}
+
+void drm_strfree(const char *s, int area)
+{
+ unsigned int size;
+
+ if (!s) return;
+
+ size = 1 + (s ? strlen(s) : 0);
+ drm_free((void *)s, size, area);
+}
+
+void drm_free(void *pt, size_t size, int area)
+{
+ int alloc_count;
+ int free_count;
+
+ if (!pt) DRM_MEM_ERROR(area, "Attempt to free NULL pointer\n");
+ else kfree(pt);
+ spin_lock(&drm_mem_lock);
+ drm_mem_stats[area].bytes_freed += size;
+ free_count = ++drm_mem_stats[area].free_count;
+ alloc_count = drm_mem_stats[area].succeed_count;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(area, "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+}
+
+unsigned long drm_alloc_pages(int order, int area)
+{
+ unsigned long address;
+ unsigned long bytes = PAGE_SIZE << order;
+ unsigned long addr;
+ unsigned int sz;
+
+ spin_lock(&drm_mem_lock);
+ if ((drm_ram_used >> PAGE_SHIFT)
+ > (DRM_RAM_PERCENT * drm_ram_available) / 100) {
+ spin_unlock(&drm_mem_lock);
+ return 0;
+ }
+ spin_unlock(&drm_mem_lock);
+
+ address = __get_free_pages(GFP_KERNEL, order);
+ if (!address) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return 0;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_allocated += bytes;
+ drm_ram_used += bytes;
+ spin_unlock(&drm_mem_lock);
+
+
+ /* Zero outside the lock */
+ memset((void *)address, 0, bytes);
+
+ /* Reserve */
+ for (addr = address, sz = bytes;
+ sz > 0;
+ addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* Argument type changed in 2.4.0-test6/pre8 */
+ mem_map_reserve(virt_to_page(addr));
+#else
+ mem_map_reserve(MAP_NR(addr));
+#endif
+ }
+
+ return address;
+}
+
+void drm_free_pages(unsigned long address, int order, int area)
+{
+ unsigned long bytes = PAGE_SIZE << order;
+ int alloc_count;
+ int free_count;
+ unsigned long addr;
+ unsigned int sz;
+
+ if (!address) {
+ DRM_MEM_ERROR(area, "Attempt to free address 0\n");
+ } else {
+ /* Unreserve */
+ for (addr = address, sz = bytes;
+ sz > 0;
+ addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* Argument type changed in 2.4.0-test6/pre8 */
+ mem_map_unreserve(virt_to_page(addr));
+#else
+ mem_map_unreserve(MAP_NR(addr));
+#endif
+ }
+ free_pages(address, order);
+ }
+
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[area].free_count;
+ alloc_count = drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_freed += bytes;
+ drm_ram_used -= bytes;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(area,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+}
+
+void *drm_ioremap(unsigned long offset, unsigned long size, drm_device_t *dev)
+{
+ void *pt;
+
+ if (!size) {
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
+ "Mapping 0 bytes at 0x%08lx\n", offset);
+ return NULL;
+ }
+
+ if(dev->agp->cant_use_aperture == 0) {
+ goto standard_ioremap;
+ } else {
+ drm_map_t *map = NULL;
+ int i;
+
+ for(i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
+ break;
+ }
+
+ if(agpmem == NULL)
+ goto standard_ioremap;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
+ } else {
+ goto standard_ioremap;
+ }
+ }
+
+standard_ioremap:
+ if (!(pt = ioremap(offset, size))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_MAPPINGS].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+ }
+
+ioremap_success:
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count;
+ drm_mem_stats[DRM_MEM_MAPPINGS].bytes_allocated += size;
+ spin_unlock(&drm_mem_lock);
+ return pt;
+}
+
+void drm_ioremapfree(void *pt, unsigned long size, drm_device_t *dev)
+{
+ int alloc_count;
+ int free_count;
+
+ if (!pt)
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
+ "Attempt to free NULL pointer\n");
+ else if(dev->agp->cant_use_aperture == 0)
+ iounmap(pt);
+
+ spin_lock(&drm_mem_lock);
+ drm_mem_stats[DRM_MEM_MAPPINGS].bytes_freed += size;
+ free_count = ++drm_mem_stats[DRM_MEM_MAPPINGS].free_count;
+ alloc_count = drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+}
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+agp_memory *drm_alloc_agp(int pages, u32 type)
+{
+ agp_memory *handle;
+
+ if (!pages) {
+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Allocating 0 pages\n");
+ return NULL;
+ }
+
+ if ((handle = drm_agp_allocate_memory(pages, type))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_TOTALAGP].bytes_allocated
+ += pages << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ return handle;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_TOTALAGP].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+}
+
+int drm_free_agp(agp_memory *handle, int pages)
+{
+ int alloc_count;
+ int free_count;
+ int retval = -EINVAL;
+
+ if (!handle) {
+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP,
+ "Attempt to free NULL AGP handle\n");
+ return retval;
+ }
+
+ if (drm_agp_free_memory(handle)) {
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[DRM_MEM_TOTALAGP].free_count;
+ alloc_count = drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_TOTALAGP].bytes_freed
+ += pages << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+ return 0;
+ }
+ return retval;
+}
+
+int drm_bind_agp(agp_memory *handle, unsigned int start)
+{
+ int retcode = -EINVAL;
+
+ if (!handle) {
+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP,
+ "Attempt to bind NULL AGP handle\n");
+ return retcode;
+ }
+
+ if (!(retcode = drm_agp_bind_memory(handle, start))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_allocated
+ += handle->page_count << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ return retcode;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_BOUNDAGP].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return retcode;
+}
+
+int drm_unbind_agp(agp_memory *handle)
+{
+ int alloc_count;
+ int free_count;
+ int retcode = -EINVAL;
+
+ if (!handle) {
+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP,
+ "Attempt to unbind NULL AGP handle\n");
+ return retcode;
+ }
+
+ if ((retcode = drm_agp_unbind_memory(handle))) return retcode;
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[DRM_MEM_BOUNDAGP].free_count;
+ alloc_count = drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_freed
+ += handle->page_count << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+ return retcode;
+}
+#endif
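[Editor's note: `drm_alloc`/`drm_free` above wrap `kmalloc`/`kfree` purely to keep per-area statistics under a spinlock, so `/proc` can report leaks as `succeed_count - free_count` and outstanding bytes as `bytes_allocated - bytes_freed`. The bookkeeping pattern in miniature (userspace, one area, no locking; names are illustrative):]

```c
#include <assert.h>
#include <stdlib.h>

struct mem_stats {
    int           succeed_count, free_count, fail_count;
    unsigned long bytes_allocated, bytes_freed;
};

static struct mem_stats stats;   /* zero-initialized, like drm_mem_init */

static void *tracked_alloc(size_t size)
{
    void *pt = malloc(size);

    if (!pt) {
        ++stats.fail_count;
        return NULL;
    }
    ++stats.succeed_count;
    stats.bytes_allocated += size;
    return pt;
}

/* The caller remembers the size, as drm_free's callers do. */
static void tracked_free(void *pt, size_t size)
{
    free(pt);
    ++stats.free_count;
    stats.bytes_freed += size;
}

/* Outstanding bytes, as the /proc report computes them. */
static long tracked_outstanding(void)
{
    return (long)stats.bytes_allocated - (long)stats.bytes_freed;
}
```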
diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_bufs.c linux-2.4.13-lia/drivers/char/drm-4.0/mga_bufs.c
--- linux-2.4.13/drivers/char/drm-4.0/mga_bufs.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_bufs.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,629 @@
+/* mga_bufs.c -- IOCTLs to manage buffers -*- linux-c -*-
+ * Created: Thu Jan 6 01:47:26 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+#include "linux/un.h"
+
+
+int mga_addbufs_agp(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ drm_buf_entry_t *entry;
+ drm_buf_t *buf;
+ unsigned long offset;
+ unsigned long agp_offset;
+ int count;
+ int order;
+ int size;
+ int alignment;
+ int page_order;
+ int total;
+ int byte_count;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+ agp_offset = request.agp_start;
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+ byte_count = 0;
+
+ DRM_DEBUG("count: %d\n", count);
+ DRM_DEBUG("order: %d\n", order);
+ DRM_DEBUG("size: %d\n", size);
+ DRM_DEBUG("agp_offset: %ld\n", agp_offset);
+ DRM_DEBUG("alignment: %d\n", alignment);
+ DRM_DEBUG("page_order: %d\n", page_order);
+ DRM_DEBUG("total: %d\n", total);
+ DRM_DEBUG("byte_count: %d\n", byte_count);
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ /* This isn't necessarily a good limit, but we have to stop a dumb
+ 32 bit overflow problem below */
+
+ if ( count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ offset = 0;
+
+
+ while(entry->buf_count < count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+
+ buf->offset = offset; /* Hrm */
+ buf->bus_address = dev->agp->base + agp_offset + offset;
+ buf->address = (void *)(agp_offset + offset + dev->agp->base);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+
+ buf->dev_private = drm_alloc(sizeof(drm_mga_buf_priv_t),
+ DRM_MEM_BUFS);
+ buf->dev_priv_size = sizeof(drm_mga_buf_priv_t);
+
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ offset = offset + alignment;
+ entry->buf_count++;
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+
+ DRM_DEBUG("dma->buf_count : %d\n", dma->buf_count);
+
+ dma->byte_count += byte_count;
+
+ DRM_DEBUG("entry->buf_count : %d\n", entry->buf_count);
+
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+
+ DRM_DEBUG("count: %d\n", count);
+ DRM_DEBUG("order: %d\n", order);
+ DRM_DEBUG("size: %d\n", size);
+ DRM_DEBUG("agp_offset: %ld\n", agp_offset);
+ DRM_DEBUG("alignment: %d\n", alignment);
+ DRM_DEBUG("page_order: %d\n", page_order);
+ DRM_DEBUG("total: %d\n", total);
+ DRM_DEBUG("byte_count: %d\n", byte_count);
+
+ dma->flags = _DRM_DMA_USE_AGP;
+
+ DRM_DEBUG("dma->flags : %x\n", dma->flags);
+
+ return 0;
+}
+
+int mga_addbufs_pci(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int count;
+ int order;
+ int size;
+ int total;
+ int page_order;
+ drm_buf_entry_t *entry;
+ unsigned long page;
+ drm_buf_t *buf;
+ int alignment;
+ unsigned long offset;
+ int i;
+ int byte_count;
+ int page_count;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+
+ DRM_DEBUG("count = %d, size = %d (%d), order = %d, queue_count = %d\n",
+ request.count, request.size, size, order, dev->queue_count);
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if(count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->seglist = drm_alloc(count * sizeof(*entry->seglist),
+ DRM_MEM_SEGS);
+ if (!entry->seglist) {
+ drm_free(entry->buflist,
+ count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->seglist, 0, count * sizeof(*entry->seglist));
+
+ dma->pagelist = drm_realloc(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ DRM_DEBUG("pagelist: %d entries\n",
+ dma->page_count + (count << page_order));
+
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ byte_count = 0;
+ page_count = 0;
+ while (entry->buf_count < count) {
+ if (!(page = drm_alloc_pages(page_order, DRM_MEM_DMA))) break;
+ entry->seglist[entry->seg_count++] = page;
+ for (i = 0; i < (1 << page_order); i++) {
+ DRM_DEBUG("page %d @ 0x%08lx\n",
+ dma->page_count + page_count,
+ page + PAGE_SIZE * i);
+ dma->pagelist[dma->page_count + page_count++]
+ = page + PAGE_SIZE * i;
+ }
+ for (offset = 0;
+ offset + size <= total && entry->buf_count < count;
+ offset += alignment, ++entry->buf_count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = (dma->byte_count + byte_count + offset);
+ buf->address = (void *)(page + offset);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ DRM_DEBUG("buffer %d @ %p\n",
+ entry->buf_count, buf->address);
+ }
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+ dma->seg_count += entry->seg_count;
+ dma->page_count += entry->seg_count << page_order;
+ dma->byte_count += PAGE_SIZE * (entry->seg_count << page_order);
+
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+ return 0;
+}
+
+int mga_addbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_buf_desc_t request;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if(request.flags & _DRM_AGP_BUFFER)
+ return mga_addbufs_agp(inode, filp, cmd, arg);
+ else
+ return mga_addbufs_pci(inode, filp, cmd, arg);
+}
+
+int mga_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ int i;
+ int count;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_info_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) ++count;
+ }
+
+ if (request.count >= count) {
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) {
+ if (copy_to_user(&request.list[count].count,
+ &dma->bufs[i].buf_count,
+ sizeof(dma->bufs[0]
+ .buf_count)) ||
+ copy_to_user(&request.list[count].size,
+ &dma->bufs[i].buf_size,
+ sizeof(dma->bufs[0].buf_size)) ||
+ copy_to_user(&request.list[count].low_mark,
+ &dma->bufs[i]
+ .freelist.low_mark,
+ sizeof(dma->bufs[0]
+ .freelist.low_mark)) ||
+ copy_to_user(&request.list[count]
+ .high_mark,
+ &dma->bufs[i]
+ .freelist.high_mark,
+ sizeof(dma->bufs[0]
+ .freelist.high_mark)))
+ return -EFAULT;
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if (copy_to_user((drm_buf_info_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int mga_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request, (drm_buf_desc_t *)arg, sizeof(request)))
+ return -EFAULT;
+
+ order = drm_order(request.size);
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+ return -EINVAL;
+ if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+int mga_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_free_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0; i < request.count; i++) {
+ if (copy_from_user(&idx,
+ &request.list[i],
+ sizeof(idx)))
+ return -EFAULT;
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ idx, dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d freeing buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ drm_free_buffer(dev, buf);
+ }
+
+ return 0;
+}
+
+int mga_mapbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ int retcode = 0;
+ const int zero = 0;
+ unsigned long virtual;
+ unsigned long address;
+ drm_buf_map_t request;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_map_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if (request.count >= dma->buf_count) {
+ if(dma->flags & _DRM_DMA_USE_AGP) {
+ drm_mga_private_t *dev_priv = dev->dev_private;
+ drm_map_t *map = NULL;
+
+ map = dev->maplist[dev_priv->buffer_map_idx];
+ if (!map) {
+ retcode = -EINVAL;
+ goto done;
+ }
+
+ DRM_DEBUG("map->offset : %lx\n", map->offset);
+ DRM_DEBUG("map->size : %lx\n", map->size);
+ DRM_DEBUG("map->type : %d\n", map->type);
+ DRM_DEBUG("map->flags : %x\n", map->flags);
+ DRM_DEBUG("map->handle : %p\n", map->handle);
+ DRM_DEBUG("map->mtrr : %d\n", map->mtrr);
+ down_write(&current->mm->mmap_sem);
+ virtual = do_mmap(filp, 0, map->size,
+ PROT_READ|PROT_WRITE,
+ MAP_SHARED,
+ (unsigned long)map->offset);
+ up_write(&current->mm->mmap_sem);
+ } else {
+ down_write(&current->mm->mmap_sem);
+ virtual = do_mmap(filp, 0, dma->byte_count,
+ PROT_READ|PROT_WRITE, MAP_SHARED, 0);
+ up_write(&current->mm->mmap_sem);
+ }
+ if (virtual > -1024UL) {
+ /* Real error */
+ DRM_DEBUG("mmap error\n");
+ retcode = (signed long)virtual;
+ goto done;
+ }
+ request.virtual = (void *)virtual;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ if (copy_to_user(&request.list[i].idx,
+ &dma->buflist[i]->idx,
+ sizeof(request.list[0].idx))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].total,
+ &dma->buflist[i]->total,
+ sizeof(request.list[0].total))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].used,
+ &zero,
+ sizeof(zero))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ address = virtual + dma->buflist[i]->offset;
+ if (copy_to_user(&request.list[i].address,
+ &address,
+ sizeof(address))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ }
+ }
+ done:
+ request.count = dma->buf_count;
+ DRM_DEBUG("%d buffers, retcode = %d\n", request.count, retcode);
+
+ if (copy_to_user((drm_buf_map_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("retcode : %d\n", retcode);
+
+ return retcode;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_context.c linux-2.4.13-lia/drivers/char/drm-4.0/mga_context.c
--- linux-2.4.13/drivers/char/drm-4.0/mga_context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,209 @@
+/* mga_context.c -- IOCTLs for mga contexts -*- linux-c -*-
+ * Created: Mon Dec 13 09:51:35 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+
+static int mga_alloc_queue(drm_device_t *dev)
+{
+ return drm_ctxbitmap_next(dev);
+}
+
+int mga_context_switch(drm_device_t *dev, int old, int new)
+{
+ char buf[64];
+
+ atomic_inc(&dev->total_ctx);
+
+ if (test_and_set_bit(0, &dev->context_flag)) {
+ DRM_ERROR("Reentering -- FIXME\n");
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new == dev->last_context) {
+ clear_bit(0, &dev->context_flag);
+ return 0;
+ }
+
+ if (drm_flags & DRM_FLAG_NOCTX) {
+ mga_context_switch_complete(dev, new);
+ } else {
+ sprintf(buf, "C %d %d\n", old, new);
+ drm_write_string(dev, buf);
+ }
+
+ return 0;
+}
+
+int mga_context_switch_complete(drm_device_t *dev, int new)
+{
+ dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
+ dev->last_switch = jiffies;
+
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Lock isn't held after context switch\n");
+ }
+
+ /* If a context switch is ever initiated
+ when the kernel holds the lock, release
+ that lock here. */
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+ - dev->ctx_start)]);
+
+#endif
+ clear_bit(0, &dev->context_flag);
+ wake_up(&dev->context_wait);
+
+ return 0;
+}
+
+int mga_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+int mga_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ if ((ctx.handle = mga_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+ /* Skip kernel's context and get a new one. */
+ ctx.handle = mga_alloc_queue(dev);
+ }
+ if (ctx.handle == -1) {
+ return -ENOMEM;
+ }
+ DRM_DEBUG("%d\n", ctx.handle);
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int mga_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ /* This does nothing for the mga */
+ return 0;
+}
+
+int mga_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+ /* This is 0, because we don't handle any context flags */
+ ctx.flags = 0;
+ if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int mga_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return mga_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int mga_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ mga_context_switch_complete(dev, ctx.handle);
+
+ return 0;
+}
+
+int mga_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ if(ctx.handle == DRM_KERNEL_CONTEXT+1) priv->remove_auth_on_close = 1;
+
+ if(ctx.handle != DRM_KERNEL_CONTEXT) {
+ drm_ctxbitmap_free(dev, ctx.handle);
+ }
+
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/mga_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/mga_dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,1059 @@
+/* mga_dma.c -- DMA support for mga g200/g400 -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ * Keith Whitwell <keithw@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+#define MGA_REG(reg) 2
+#define MGA_BASE(reg) ((unsigned long) \
+ ((drm_device_t *)dev)->maplist[MGA_REG(reg)]->handle)
+#define MGA_ADDR(reg) (MGA_BASE(reg) + reg)
+#define MGA_DEREF(reg) *(__volatile__ int *)MGA_ADDR(reg)
+#define MGA_READ(reg) MGA_DEREF(reg)
+#define MGA_WRITE(reg,val) do { MGA_DEREF(reg) = val; } while (0)
+
+#define PDEA_pagpxfer_enable 0x2
+
+static int mga_flush_queue(drm_device_t *dev);
+
+static unsigned long mga_alloc_page(drm_device_t *dev)
+{
+ unsigned long address;
+
+ address = __get_free_page(GFP_KERNEL);
+ if(address == 0UL) {
+ return 0;
+ }
+ atomic_inc(&virt_to_page(address)->count);
+ set_bit(PG_reserved, &virt_to_page(address)->flags);
+
+ return address;
+}
+
+static void mga_free_page(drm_device_t *dev, unsigned long page)
+{
+ if(!page) return;
+ atomic_dec(&virt_to_page(page)->count);
+ clear_bit(PG_reserved, &virt_to_page(page)->flags);
+ free_page(page);
+ return;
+}
+
+static void mga_delay(void)
+{
+ return;
+}
+
+/* These are two age tags that will never be sent to
+ * the hardware */
+#define MGA_BUF_USED 0xffffffff
+#define MGA_BUF_FREE 0
+
+static int mga_freelist_init(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_t *buf;
+ drm_mga_buf_priv_t *buf_priv;
+ drm_mga_private_t *dev_priv = (drm_mga_private_t *)dev->dev_private;
+ drm_mga_freelist_t *item;
+ int i;
+
+ dev_priv->head = drm_alloc(sizeof(drm_mga_freelist_t), DRM_MEM_DRIVER);
+ if(dev_priv->head == NULL) return -ENOMEM;
+ memset(dev_priv->head, 0, sizeof(drm_mga_freelist_t));
+ dev_priv->head->age = MGA_BUF_USED;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ buf = dma->buflist[ i ];
+ buf_priv = buf->dev_private;
+ item = drm_alloc(sizeof(drm_mga_freelist_t),
+ DRM_MEM_DRIVER);
+ if(item == NULL) return -ENOMEM;
+ memset(item, 0, sizeof(drm_mga_freelist_t));
+ item->age = MGA_BUF_FREE;
+ item->prev = dev_priv->head;
+ item->next = dev_priv->head->next;
+ if(dev_priv->head->next != NULL)
+ dev_priv->head->next->prev = item;
+ if(item->next == NULL) dev_priv->tail = item;
+ item->buf = buf;
+ buf_priv->my_freelist = item;
+ buf_priv->discard = 0;
+ buf_priv->dispatched = 0;
+ dev_priv->head->next = item;
+ }
+
+ return 0;
+}
+
+static void mga_freelist_cleanup(drm_device_t *dev)
+{
+ drm_mga_private_t *dev_priv = (drm_mga_private_t *)dev->dev_private;
+ drm_mga_freelist_t *item;
+ drm_mga_freelist_t *prev;
+
+ item = dev_priv->head;
+ while(item) {
+ prev = item;
+ item = item->next;
+ drm_free(prev, sizeof(drm_mga_freelist_t), DRM_MEM_DRIVER);
+ }
+
+ dev_priv->head = dev_priv->tail = NULL;
+}
+
+/* Frees dispatch lock */
+static inline void mga_dma_quiescent(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_mga_private_t *dev_priv = (drm_mga_private_t *)dev->dev_private;
+ drm_mga_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ unsigned long end;
+ int i;
+
+ DRM_DEBUG("dispatch_status = 0x%02lx\n", dev_priv->dispatch_status);
+ end = jiffies + (HZ*3);
+ while(1) {
+ if(!test_and_set_bit(MGA_IN_DISPATCH,
+ &dev_priv->dispatch_status)) {
+ break;
+ }
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("irqs: %d wanted %d\n",
+ atomic_read(&dev->total_irq),
+ atomic_read(&dma->total_lost));
+ DRM_ERROR("lockup: dispatch_status = 0x%02lx,"
+ " jiffies = %lu, end = %lu\n",
+ dev_priv->dispatch_status, jiffies, end);
+ return;
+ }
+ for (i = 0 ; i < 2000 ; i++) mga_delay();
+ }
+ end = jiffies + (HZ*3);
+ DRM_DEBUG("quiescent status : %x\n", MGA_READ(MGAREG_STATUS));
+ while((MGA_READ(MGAREG_STATUS) & 0x00030001) != 0x00020000) {
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("irqs: %d wanted
^ permalink raw reply	[flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (81 preceding siblings ...)
2001-10-25 4:27 ` [Linux-ia64] kernel update (relative to 2.4.13) David Mosberger
@ 2001-10-25 4:30 ` David Mosberger
2001-10-25 5:26 ` Keith Owens
` (132 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-10-25 4:30 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch for 2.4.13 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.11-ia64-011010.diff*
change log:
- support for the readahead() syscall added in 2.4.13 (both for ia64 and ia32)
- console log level fix by Jesper Juhl
- half-hearted attempt at supporting reading of the "default LDT entry" in the ia32
modify_ldt() syscall; someone who understands what this is supposed to do
should take a look at this...
- palinfo update by Stephane Eranian
- die() fix by Keith Owens
- unaligned handler fix for rotating fp regs by Tony Luck
- ACPI fix to get AGP bus scanned again by Chris Ahna
- implemented the ia64 version of wbinvd() for ACPI; this hasn't been tested
and may not work; it shouldn't be an issue at the moment, as it is needed only
for ACPI functionality that is not supported on Itanium; still, someone who
knows ACPI better may want to take a look at this
- update PCI DMA interface to support page-based mapping/unmapping and
the optional DAC interface
This kernel has been tested with gcc-3.0 on Big Sur, Lion, and HP
simulator. Both UP and MP seem to compile fine. As usual, your
mileage may vary.
Enjoy,
--david
diff -urN linux-davidm/arch/i386/mm/fault.c linux-2.4.13-lia/arch/i386/mm/fault.c
--- linux-davidm/arch/i386/mm/fault.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/i386/mm/fault.c Wed Oct 24 18:11:25 2001
@@ -27,8 +27,6 @@
extern void die(const char *,struct pt_regs *,long);
-extern int console_loglevel;
-
/*
* Ugly, ugly, but the goto's result in better assembly..
*/
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S Wed Oct 24 18:11:48 2001
@@ -400,7 +400,7 @@
data8 sys_ni_syscall /* reserved for TUX */
data8 sys_ni_syscall /* reserved for Security */
data8 sys_gettid
- data8 sys_ni_syscall /* 225 */
+ data8 sys_readahead /* 225 */
data8 sys_ni_syscall
data8 sys_ni_syscall
data8 sys_ni_syscall
diff -urN linux-davidm/arch/ia64/ia32/ia32_ldt.c linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c
--- linux-davidm/arch/ia64/ia32/ia32_ldt.c Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c Wed Oct 24 18:12:38 2001
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Adapted from arch/i386/kernel/ldt.c
*/
@@ -60,6 +60,25 @@
}
static int
+read_default_ldt (void * ptr, unsigned long bytecount)
+{
+ unsigned long size;
+ int err;
+
+ /* XXX fix me: should return equivalent of default_ldt[0] */
+ err = 0;
+ size = 8;
+ if (size > bytecount)
+ size = bytecount;
+
+ err = size;
+ if (clear_user(ptr, size))
+ err = -EFAULT;
+
+ return err;
+}
+
+static int
write_ldt (void * ptr, unsigned long bytecount, int oldmode)
{
struct ia32_modify_ldt_ldt_s ldt_info;
@@ -116,6 +135,9 @@
break;
case 1:
ret = write_ldt(P(ptr), bytecount, 1);
+ break;
+ case 2:
+ ret = read_default_ldt(P(ptr), bytecount);
break;
case 0x11:
ret = write_ldt(P(ptr), bytecount, 0);
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.4.13-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/entry.S Wed Oct 24 18:13:32 2001
@@ -4,7 +4,7 @@
* Kernel entry points.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Asit Mallick <Asit.K.Mallick@intel.com>
@@ -15,7 +15,7 @@
* kernel stack. This allows us to handle interrupts without changing
* to physical mode.
*
- * Jonathan Nickin <nicklin@missioncriticallinux.com>
+ * Jonathan Nicklin <nicklin@missioncriticallinux.com>
* Patrick O'Rourke <orourke@missioncriticallinux.com>
* 11/07/2000
*/
@@ -1130,7 +1130,7 @@
data8 sys_clone2
data8 sys_getdents64
data8 sys_getunwind // 1215
- data8 ia64_ni_syscall
+ data8 sys_readahead
data8 ia64_ni_syscall
data8 ia64_ni_syscall
data8 ia64_ni_syscall
diff -urN linux-davidm/arch/ia64/kernel/fw-emu.c linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c
--- linux-davidm/arch/ia64/kernel/fw-emu.c Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c Wed Oct 24 18:13:46 2001
@@ -174,6 +174,43 @@
" ;;\n"
" mov ar.lc=r9\n"
" mov r8=r0\n"
+" ;;\n"
+"1: cmp.eq p6,p7=15,r28 /* PAL_PERF_MON_INFO */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" mov r8=0 /* status = 0 */\n"
+" movl r9 =0x12082004 /* generic=4 width=32 retired=8 cycles=18 */\n"
+" mov r10=0 /* reserved */\n"
+" mov r11=0 /* reserved */\n"
+" mov r16=0xffff /* implemented PMC */\n"
+" mov r17=0xffff /* implemented PMD */\n"
+" add r18=8,r29 /* second index */\n"
+" ;;\n"
+" st8 [r29]=r16,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r17,16 /* store implemented PMD */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" mov r16=0xf0 /* cycles count capable PMC */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" mov r17=0x10 /* retired bundles capable PMC */\n"
+" ;;\n"
+" st8 [r29]=r16,16 /* store cycles capable */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r17,16 /* store retired bundle capable */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
"1: br.cond.sptk.few rp\n"
"stacked:\n"
" br.ret.sptk.few rp\n"
diff -urN linux-davidm/arch/ia64/kernel/palinfo.c linux-2.4.13-lia/arch/ia64/kernel/palinfo.c
--- linux-davidm/arch/ia64/kernel/palinfo.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/palinfo.c Wed Oct 24 18:14:08 2001
@@ -6,12 +6,13 @@
* Intel IA-64 Architecture Software Developer's Manual v1.0.
*
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 05/26/2000 S.Eranian initial release
* 08/21/2000 S.Eranian updated to July 2000 PAL specs
* 02/05/2001 S.Eranian fixed module support
+ * 10/23/2001 S.Eranian updated pal_perf_mon_info bug fixes
*/
#include <linux/config.h>
#include <linux/types.h>
@@ -32,8 +33,9 @@
MODULE_AUTHOR("Stephane Eranian <eranian@hpl.hp.com>");
MODULE_DESCRIPTION("/proc interface to IA-64 PAL");
+MODULE_LICENSE("GPL");
-#define PALINFO_VERSION "0.4"
+#define PALINFO_VERSION "0.5"
#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
@@ -606,15 +608,6 @@
if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) return 0;
-#ifdef IA64_PAL_PERF_MON_INFO_BUG
- /*
- * This bug has been fixed in PAL 2.2.9 and higher
- */
- pm_buffer[5]=0x3;
- pm_info.pal_perf_mon_info_s.cycles = 0x12;
- pm_info.pal_perf_mon_info_s.retired = 0x08;
-#endif
-
p += sprintf(p, "PMC/PMD pairs : %d\n" \
"Counter width : %d bits\n" \
"Cycle event number : %d\n" \
@@ -636,6 +629,14 @@
p = bitregister_process(p, pm_buffer+8, 256);
p += sprintf(p, "\nRetired bundles count capable : ");
+
+#ifdef CONFIG_ITANIUM
+ /*
+ * PAL_PERF_MON_INFO reports that only PMC4 can be used to count CPU_CYCLES
+ * which is wrong, both PMC4 and PMD5 support it.
+ */
+ if (pm_buffer[12] == 0x10) pm_buffer[12]=0x30;
+#endif
p = bitregister_process(p, pm_buffer+12, 256);
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.4.13-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/process.c Wed Oct 24 18:14:43 2001
@@ -460,7 +460,7 @@
if (task->thread.pfm_context)
pfm_context_exit(task);
- if(atomic_read(&task->thread.pfm_notifiers_check) >0)
+ if (atomic_read(&task->thread.pfm_notifiers_check) > 0)
pfm_cleanup_notifiers(task);
}
#endif
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.4.13-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/traps.c Wed Oct 24 18:15:16 2001
@@ -87,23 +87,34 @@
void
die (const char *str, struct pt_regs *regs, long err)
{
- static spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+ static struct {
+ spinlock_t lock;
+ int lock_owner;
+ int lock_owner_depth;
+ } die = {
+ lock: SPIN_LOCK_UNLOCKED,
+ lock_owner: -1,
+ lock_owner_depth: 0
+ };
- console_verbose();
- spin_lock_irq(&die_lock);
- bust_spinlocks(1);
- printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
- show_regs(regs);
- bust_spinlocks(0);
- spin_unlock_irq(&die_lock);
-
- if (current->thread.flags & IA64_KERNEL_DEATH) {
- printk("die_if_kernel recursion detected.\n");
- sti();
- while (1);
+ if (die.lock_owner != smp_processor_id()) {
+ console_verbose();
+ spin_lock_irq(&die.lock);
+ die.lock_owner = smp_processor_id();
+ die.lock_owner_depth = 0;
+ bust_spinlocks(1);
}
- current->thread.flags |= IA64_KERNEL_DEATH;
- do_exit(SIGSEGV);
+
+ if (++die.lock_owner_depth < 3) {
+ printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
+ show_regs(regs);
+ } else
+ printk(KERN_ERR "Recursive die() failure, output suppressed\n");
+
+ bust_spinlocks(0);
+ die.lock_owner = -1;
+ spin_unlock_irq(&die.lock);
+ do_exit(SIGSEGV);
}
void
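The hunk above replaces the old IA64_KERNEL_DEATH flag with an owner/depth guard, so a fault taken while already inside die() neither deadlocks on the spinlock nor floods the console. A user-space sketch of that guard pattern (stand-in names, locking elided, not the kernel code itself):

```c
#include <assert.h>

/* User-space sketch of the die() guard above.  current_cpu stands in
 * for smp_processor_id(); the spinlock itself is elided. */
static int current_cpu = 0;

static struct {
    int lock_owner;        /* CPU holding the lock, -1 if none */
    int lock_owner_depth;  /* nesting depth on that CPU */
} die_guard = { -1, 0 };

static int messages_printed = 0;

static void die_sketch(void)
{
    if (die_guard.lock_owner != current_cpu) {
        /* first entry on this CPU: take the lock (elided) */
        die_guard.lock_owner = current_cpu;
        die_guard.lock_owner_depth = 0;
    }
    /* nested entries keep printing up to a depth limit, then stop */
    if (++die_guard.lock_owner_depth < 3)
        messages_printed++;   /* printk() + show_regs() in the kernel */
    /* the real function then unlocks and calls do_exit(SIGSEGV) */
}
```

A CPU that re-enters while it already owns the guard skips the (elided) lock acquisition, which is exactly what prevents the self-deadlock the old flag tried to detect after the fact.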
diff -urN linux-davidm/arch/ia64/kernel/unaligned.c linux-2.4.13-lia/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Wed Oct 24 18:54:52 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/unaligned.c Wed Oct 24 18:15:29 2001
@@ -5,6 +5,7 @@
* Copyright (C) 1999-2000 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
+ * 2001/10/11 Fix unaligned access to rotating registers in s/w pipelined loops.
* 2001/08/13 Correct size of extended floats (float_fsz) from 16 to 10 bytes.
* 2001/01/17 Add support emulation of unaligned kernel accesses.
*/
@@ -283,9 +284,19 @@
unsigned long rnats, nat_mask;
unsigned long on_kbs;
long sof = (regs->cr_ifs) & 0x7f;
+ long sor = 8 * ((regs->cr_ifs >> 14) & 0xf);
+ long rrb_gr = (regs->cr_ifs >> 18) & 0x7f;
+ long ridx;
+
+ if ((r1 - 32) > sor)
+ ridx = -sof + (r1 - 32);
+ else if ((r1 - 32) < (sor - rrb_gr))
+ ridx = -sof + (r1 - 32) + rrb_gr;
+ else
+ ridx = -sof + (r1 - 32) - (sor - rrb_gr);
- DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
- r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld ridx=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f, ridx);
if ((r1 - 32) >= sof) {
/* this should never happen, as the "rsvd register fault" has higher priority */
@@ -294,7 +305,7 @@
}
on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
- addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx);
if (addr >= kbs) {
/* the register is on the kernel backing store: easy... */
rnat_addr = ia64_rse_rnat_addr(addr);
@@ -319,12 +330,12 @@
return;
}
- bspstore = (unsigned long *) regs->ar_bspstore;
+ bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
bsp = ia64_rse_skip_regs(ubs_end, -sof);
- addr = ia64_rse_skip_regs(bsp, r1 - 32);
+ addr = ia64_rse_skip_regs(bsp, ridx + sof);
- DPRINT("ubs_end=%p bsp=%p addr=%px\n", (void *) ubs_end, (void *) bsp, (void *) addr);
+ DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
ia64_poke(current, sw, (unsigned long) ubs_end, (unsigned long) addr, val);
@@ -354,9 +365,19 @@
unsigned long rnats, nat_mask;
unsigned long on_kbs;
long sof = (regs->cr_ifs) & 0x7f;
+ long sor = 8 * ((regs->cr_ifs >> 14) & 0xf);
+ long rrb_gr = (regs->cr_ifs >> 18) & 0x7f;
+ long ridx;
+
+ if ((r1 - 32) > sor)
+ ridx = -sof + (r1 - 32);
+ else if ((r1 - 32) < (sor - rrb_gr))
+ ridx = -sof + (r1 - 32) + rrb_gr;
+ else
+ ridx = -sof + (r1 - 32) - (sor - rrb_gr);
- DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
- r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld ridx=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f, ridx);
if ((r1 - 32) >= sof) {
/* this should never happen, as the "rsvd register fault" has higher priority */
@@ -365,7 +386,7 @@
}
on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
- addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx);
if (addr >= kbs) {
/* the register is on the kernel backing store: easy... */
*val = *addr;
@@ -391,7 +412,7 @@
bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
bsp = ia64_rse_skip_regs(ubs_end, -sof);
- addr = ia64_rse_skip_regs(bsp, r1 - 32);
+ addr = ia64_rse_skip_regs(bsp, ridx + sof);
DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
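The ridx arithmetic added in both hunks undoes the rotating-register base (rrb.gr, decoded from cr.ifs) before indexing the RSE backing store. Pulled out as a stand-alone helper for illustration (the function name and test values are mine, not the kernel's):

```c
#include <assert.h>

/* Stand-alone transcription of the ridx computation above: map an
 * architectural GR number r (32..127) to an offset, relative to the
 * end of the current register frame, that undoes the rotation applied
 * by rrb.gr.  sof, sor and rrb_gr are register counts; in the kernel
 * they are decoded from cr.ifs. */
static long rotate_index(long r, long sof, long sor, long rrb_gr)
{
    if ((r - 32) > sor)
        return -sof + (r - 32);                  /* not a rotating register */
    else if ((r - 32) < (sor - rrb_gr))
        return -sof + (r - 32) + rrb_gr;         /* rotated up by the base */
    else
        return -sof + (r - 32) - (sor - rrb_gr); /* wrapped around */
}
```

With rrb_gr == 0 the middle branch degenerates to the old `-sof + (r1 - 32)` behaviour, which is why non-pipelined code was unaffected by the bug being fixed here.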
diff -urN linux-davidm/arch/parisc/kernel/traps.c linux-2.4.13-lia/arch/parisc/kernel/traps.c
--- linux-davidm/arch/parisc/kernel/traps.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/parisc/kernel/traps.c Wed Oct 24 18:17:29 2001
@@ -43,7 +43,6 @@
static inline void console_verbose(void)
{
- extern int console_loglevel;
console_loglevel = 15;
}
diff -urN linux-davidm/drivers/acpi/include/acutils.h linux-2.4.13-lia/drivers/acpi/include/acutils.h
--- linux-davidm/drivers/acpi/include/acutils.h Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/acutils.h Wed Oct 24 18:17:40 2001
@@ -383,6 +383,7 @@
/* Method name strings */
#define METHOD_NAME__HID "_HID"
+#define METHOD_NAME__CID "_CID"
#define METHOD_NAME__UID "_UID"
#define METHOD_NAME__ADR "_ADR"
#define METHOD_NAME__STA "_STA"
@@ -396,6 +397,11 @@
NATIVE_CHAR *object_name,
acpi_namespace_node *device_node,
acpi_integer *address);
+
+acpi_status
+acpi_ut_execute_CID (
+ acpi_namespace_node *device_node,
+ ACPI_DEVICE_ID *cid);
acpi_status
acpi_ut_execute_HID (
diff -urN linux-davidm/drivers/acpi/include/platform/acgcc.h linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h
--- linux-davidm/drivers/acpi/include/platform/acgcc.h Wed Oct 24 10:17:44 2001
+++ linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h Wed Oct 24 18:17:50 2001
@@ -42,11 +42,32 @@
/*! [Begin] no source code translation */
+#include <linux/interrupt.h>
+
+#include <asm/processor.h>
#include <asm/pal.h>
#define halt() ia64_pal_halt_light() /* PAL_HALT[_LIGHT] */
#define safe_halt() ia64_pal_halt(1) /* PAL_HALT */
+static inline void
+wbinvd (void)
+{
+ unsigned long flags, vector, position = 0;
+ long status;
+
+ do {
+ ia64_clear_ic(flags);
+ status = ia64_pal_cache_flush(0x3, (PAL_CACHE_FLUSH_INVALIDATE
+ | PAL_CACHE_FLUSH_CHK_INTRS),
+ &position, &vector);
+ local_irq_restore(flags);
+ if (status == 1) {
+ ia64_eoi();
+ hw_resend_irq(NULL, vector);
+ }
+ } while (status == 1);
+}
#define ACPI_ACQUIRE_GLOBAL_LOCK(GLptr, Acq) \
do { \
diff -urN linux-davidm/drivers/acpi/namespace/nsxfobj.c linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c
--- linux-davidm/drivers/acpi/namespace/nsxfobj.c Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c Wed Oct 24 18:18:06 2001
@@ -588,6 +588,7 @@
acpi_namespace_node *node;
u32 flags;
ACPI_DEVICE_ID device_id;
+ ACPI_DEVICE_ID compatible_id;
ACPI_GET_DEVICES_INFO *info;
@@ -628,7 +629,17 @@
}
if (STRNCMP (device_id.buffer, info->hid, sizeof (device_id.buffer)) != 0) {
- return (AE_OK);
+ status = acpi_ut_execute_CID (node, &compatible_id);
+ if (status == AE_NOT_FOUND) {
+ return (AE_OK);
+ }
+ else if (ACPI_FAILURE (status)) {
+ return (AE_CTRL_DEPTH);
+ }
+
+ if (STRNCMP (compatible_id.buffer, info->hid, sizeof (compatible_id.buffer)) != 0) {
+ return (AE_OK);
+ }
}
}
diff -urN linux-davidm/drivers/acpi/utilities/uteval.c linux-2.4.13-lia/drivers/acpi/utilities/uteval.c
--- linux-davidm/drivers/acpi/utilities/uteval.c Mon Sep 24 15:06:47 2001
+++ linux-2.4.13-lia/drivers/acpi/utilities/uteval.c Wed Oct 24 18:18:19 2001
@@ -115,6 +115,93 @@
/*******************************************************************************
*
+ * FUNCTION: Acpi_ut_execute_CID
+ *
+ * PARAMETERS: Device_node - Node for the device
+ * *Cid - Where the CID is returned
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Executes the _CID control method that returns the compatible
+ * ID of the device.
+ *
+ * NOTE: Internal function, no parameter validation
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ut_execute_CID (
+ acpi_namespace_node *device_node,
+ ACPI_DEVICE_ID *cid)
+{
+ acpi_operand_object *obj_desc;
+ acpi_status status;
+
+
+ FUNCTION_TRACE ("Ut_execute_CID");
+
+
+ /* Execute the method */
+
+ status = acpi_ns_evaluate_relative (device_node,
+ METHOD_NAME__CID, NULL, &obj_desc);
+ if (ACPI_FAILURE (status)) {
+ if (status == AE_NOT_FOUND) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "_CID on %4.4s was not found\n",
+ &device_node->name));
+ }
+
+ else {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "_CID on %4.4s failed %s\n",
+ &device_node->name, acpi_format_exception (status)));
+ }
+
+ return_ACPI_STATUS (status);
+ }
+
+ /* Did we get a return object? */
+
+ if (!obj_desc) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "No object was returned from _CID\n"));
+ return_ACPI_STATUS (AE_TYPE);
+ }
+
+ /*
+ * A _CID can return either a Number (32 bit compressed EISA ID) or
+ * a string
+ */
+ if ((obj_desc->common.type != ACPI_TYPE_INTEGER) &&
+ (obj_desc->common.type != ACPI_TYPE_STRING)) {
+ status = AE_TYPE;
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Type returned from _CID not a number or string: %s(%X) \n",
+ acpi_ut_get_type_name (obj_desc->common.type), obj_desc->common.type));
+ }
+
+ else {
+ if (obj_desc->common.type == ACPI_TYPE_INTEGER) {
+ /* Convert the Numeric CID to string */
+
+ acpi_ex_eisa_id_to_string ((u32) obj_desc->integer.value, cid->buffer);
+ }
+
+ else {
+ /* Copy the String CID from the returned object */
+
+ STRNCPY(cid->buffer, obj_desc->string.pointer, sizeof(cid->buffer));
+ }
+ }
+
+
+ /* On exit, we must delete the return object */
+
+ acpi_ut_remove_reference (obj_desc);
+
+ return_ACPI_STATUS (status);
+}
+
+/*******************************************************************************
+ *
* FUNCTION: Acpi_ut_execute_HID
*
* PARAMETERS: Device_node - Node for the device
diff -urN linux-davidm/drivers/pci/pci.c linux-2.4.13-lia/drivers/pci/pci.c
--- linux-davidm/drivers/pci/pci.c Wed Oct 24 18:55:31 2001
+++ linux-2.4.13-lia/drivers/pci/pci.c Wed Oct 24 10:21:19 2001
@@ -1552,10 +1552,10 @@
switch (rqst) {
case PM_SAVE_STATE:
- error = pci_pm_save_state((u32)data);
+ error = pci_pm_save_state((unsigned long)data);
break;
case PM_SUSPEND:
- error = pci_pm_suspend((u32)data);
+ error = pci_pm_suspend((unsigned long)data);
break;
case PM_RESUME:
error = pci_pm_resume();
@@ -1873,16 +1873,16 @@
int map, block;
if ((page = pool_find_page (pool, dma)) == 0) {
- printk (KERN_ERR "pci_pool_free %s/%s, %p/%x (bad dma)\n",
+ printk (KERN_ERR "pci_pool_free %s/%s, %p/%lx (bad dma)\n",
pool->dev ? pool->dev->slot_name : NULL,
- pool->name, vaddr, (int) (dma & 0xffffffff));
+ pool->name, vaddr, (unsigned long) dma);
return;
}
#ifdef CONFIG_PCIPOOL_DEBUG
if (((dma - page->dma) + (void *)page->vaddr) != vaddr) {
printk (KERN_ERR "pci_pool_free %s/%s, %p (bad vaddr)/%x\n",
pool->dev ? pool->dev->slot_name : NULL,
- pool->name, vaddr, (int) (dma & 0xffffffff));
+ pool->name, vaddr, dma);
return;
}
#endif
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.4.13-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h Wed Oct 24 18:54:53 2001
+++ linux-2.4.13-lia/include/asm-ia64/offsets.h Wed Oct 24 18:30:40 2001
@@ -8,7 +8,7 @@
*/
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 3920 /* 0xf50 */
+#define IA64_TASK_SIZE 3408 /* 0xd50 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
@@ -20,9 +20,9 @@
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 1472 /* 0x5c0 */
-#define IA64_TASK_THREAD_KSP_OFFSET 1472 /* 0x5c0 */
-#define IA64_TASK_PFM_MUST_BLOCK_OFFSET 2096 /* 0x830 */
+#define IA64_TASK_THREAD_OFFSET 976 /* 0x3d0 */
+#define IA64_TASK_THREAD_KSP_OFFSET 976 /* 0x3d0 */
+#define IA64_TASK_PFM_MUST_BLOCK_OFFSET 1600 /* 0x640 */
#define IA64_TASK_PID_OFFSET 220 /* 0xdc */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
diff -urN linux-davidm/include/asm-ia64/pal.h linux-2.4.13-lia/include/asm-ia64/pal.h
--- linux-davidm/include/asm-ia64/pal.h Wed Oct 24 18:54:53 2001
+++ linux-2.4.13-lia/include/asm-ia64/pal.h Wed Oct 24 18:20:46 2001
@@ -776,10 +776,12 @@
* initialized to zero before calling this for the first time..
*/
static inline s64
-ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress)
+ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress, u64 *vector)
{
struct ia64_pal_retval iprv;
PAL_CALL_IC_OFF(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
+ if (vector)
+ *vector = iprv.v0;
*progress = iprv.v1;
return iprv.status;
}
diff -urN linux-davidm/include/asm-ia64/pci.h linux-2.4.13-lia/include/asm-ia64/pci.h
--- linux-davidm/include/asm-ia64/pci.h Wed Oct 24 18:54:53 2001
+++ linux-2.4.13-lia/include/asm-ia64/pci.h Wed Oct 24 18:32:34 2001
@@ -10,9 +10,9 @@
#include <asm/scatterlist.h>
/*
- * Can be used to override the logic in pci_scan_bus for skipping
- * already-configured bus numbers - to be used for buggy BIOSes or
- * architectures with incomplete PCI setup by the loader.
+ * Can be used to override the logic in pci_scan_bus for skipping already-configured bus
+ * numbers - to be used for buggy BIOSes or architectures with incomplete PCI setup by the
+ * loader.
*/
#define pcibios_assign_all_busses() 0
@@ -56,6 +56,19 @@
{
return 1;
}
+
+#define pci_map_page(dev,pg,off,size,dir) \
+ pci_map_single((dev), page_address(pg) + (off), (size), (dir))
+#define pci_unmap_page(dev,dma_addr,size,dir) \
+ pci_unmap_single((dev), (dma_addr), (size), (dir)
+
+/* The ia64 platform always supports 64-bit addressing. */
+#define pci_dac_dma_supported(pci_dev, mask) (1)
+
+#define pci_dac_page_to_dma(dev,pg,off,dir) ((dma64_addr_t) page_to_bus(pg) + (off))
+#define pci_dac_dma_to_page(dev,dma_addr) (virt_to_page(bus_to_virt(dma_addr)))
+#define pci_dac_dma_to_offset(dev,dma_addr) ((dma_addr) & ~PAGE_MASK)
+#define pci_dac_dma_sync_single(dev,dma_addr,len,dir) do { /* nothing */ } while (0)
/* Return the index of the PCI controller for device PDEV. */
#define pci_controller_num(PDEV) (0)
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.4.13-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h Wed Oct 24 18:54:53 2001
+++ linux-2.4.13-lia/include/asm-ia64/processor.h Wed Oct 24 18:32:17 2001
@@ -170,7 +170,6 @@
#define IA64_THREAD_KRBS_SYNCED (__IA64_UL(1) << 5) /* krbs synced with process vm? */
#define IA64_THREAD_FPEMU_NOPRINT (__IA64_UL(1) << 6) /* don't log any fpswa faults */
#define IA64_THREAD_FPEMU_SIGFPE (__IA64_UL(1) << 7) /* send a SIGFPE for fpswa faults */
-#define IA64_KERNEL_DEATH (__IA64_UL(1) << 63) /* see die_if_kernel()... */
#define IA64_THREAD_UAC_SHIFT 3
#define IA64_THREAD_UAC_MASK (IA64_THREAD_UAC_NOPRINT | IA64_THREAD_UAC_SIGBUS)
@@ -958,6 +957,42 @@
asm ("mov %0=gp" : "=r"(val));
return val;
+}
+
+static inline void
+ia64_set_ibr (__u64 regnum, __u64 value)
+{
+ asm volatile ("mov ibr[%0]=%1" :: "r"(regnum), "r"(value));
+}
+
+static inline void
+ia64_set_dbr (__u64 regnum, __u64 value)
+{
+ asm volatile ("mov dbr[%0]=%1" :: "r"(regnum), "r"(value));
+#ifdef CONFIG_ITANIUM
+ asm volatile (";; srlz.d");
+#endif
+}
+
+static inline __u64
+ia64_get_ibr (__u64 regnum)
+{
+ __u64 retval;
+
+ asm volatile ("mov %0=ibr[%1]" : "=r"(retval) : "r"(regnum));
+ return retval;
+}
+
+static inline __u64
+ia64_get_dbr (__u64 regnum)
+{
+ __u64 retval;
+
+ asm volatile ("mov %0=dbr[%1]" : "=r"(retval) : "r"(regnum));
+#ifdef CONFIG_ITANIUM
+ asm volatile (";; srlz.d");
+#endif
+ return retval;
}
/* XXX remove the handcoded version once we have a sufficiently clever compiler... */
diff -urN linux-davidm/include/asm-ia64/unistd.h linux-2.4.13-lia/include/asm-ia64/unistd.h
--- linux-davidm/include/asm-ia64/unistd.h Wed Oct 24 18:54:53 2001
+++ linux-2.4.13-lia/include/asm-ia64/unistd.h Wed Oct 24 18:21:34 2001
@@ -205,6 +205,7 @@
#define __NR_clone2 1213
#define __NR_getdents64 1214
#define __NR_getunwind 1215
+#define __NR_readahead 1216
#if !defined(__ASSEMBLY__) && !defined(ASSEMBLER)
diff -urN linux-davidm/include/linux/kernel.h linux-2.4.13-lia/include/linux/kernel.h
--- linux-davidm/include/linux/kernel.h Wed Oct 10 16:32:16 2001
+++ linux-2.4.13-lia/include/linux/kernel.h Wed Oct 24 18:22:22 2001
@@ -36,6 +36,13 @@
#define KERN_INFO "<6>" /* informational */
#define KERN_DEBUG "<7>" /* debug-level messages */
+extern int console_printk[];
+
+#define console_loglevel (console_printk[0])
+#define default_message_loglevel (console_printk[1])
+#define minimum_console_loglevel (console_printk[2])
+#define default_console_loglevel (console_printk[3])
+
# define NORET_TYPE /**/
# define ATTRIB_NORET __attribute__((noreturn))
# define NORET_AND noreturn,
@@ -79,8 +86,6 @@
asmlinkage int printk(const char * fmt, ...)
__attribute__ ((format (printf, 1, 2)));
-
-extern int console_loglevel;
static inline void console_silent(void)
{
diff -urN linux-davidm/kernel/exec_domain.c linux-2.4.13-lia/kernel/exec_domain.c
--- linux-davidm/kernel/exec_domain.c Wed Oct 10 16:32:16 2001
+++ linux-2.4.13-lia/kernel/exec_domain.c Wed Oct 24 18:22:52 2001
@@ -196,8 +196,10 @@
put_exec_domain(oep);
+#if 0
printk(KERN_DEBUG "[%s:%d]: set personality to %lx\n",
current->comm, current->pid, personality);
+#endif
return 0;
}
diff -urN linux-davidm/kernel/printk.c linux-2.4.13-lia/kernel/printk.c
--- linux-davidm/kernel/printk.c Wed Oct 24 18:54:53 2001
+++ linux-2.4.13-lia/kernel/printk.c Wed Oct 24 18:29:37 2001
@@ -16,6 +16,7 @@
* 01Mar01 Andrew Morton <andrewm@uow.edu.au>
*/
+#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/tty.h>
#include <linux/tty_driver.h>
@@ -27,7 +28,11 @@
#include <asm/uaccess.h>
+#if 0
#define LOG_BUF_LEN (16384) /* This must be a power of two */
+#else
+#define LOG_BUF_LEN (65536) /* This must be a power of two */
+#endif
#define LOG_BUF_MASK (LOG_BUF_LEN-1)
/* printk's without a loglevel use this.. */
@@ -39,11 +44,12 @@
DECLARE_WAIT_QUEUE_HEAD(log_wait);
-/* Keep together for sysctl support */
-int console_loglevel = DEFAULT_CONSOLE_LOGLEVEL;
-int default_message_loglevel = DEFAULT_MESSAGE_LOGLEVEL;
-int minimum_console_loglevel = MINIMUM_CONSOLE_LOGLEVEL;
-int default_console_loglevel = DEFAULT_CONSOLE_LOGLEVEL;
+int console_printk[4] = {
+ DEFAULT_CONSOLE_LOGLEVEL, /* console_loglevel */
+ DEFAULT_MESSAGE_LOGLEVEL, /* default_message_loglevel */
+ MINIMUM_CONSOLE_LOGLEVEL, /* minimum_console_loglevel */
+ DEFAULT_CONSOLE_LOGLEVEL, /* default_console_loglevel */
+};
int oops_in_progress;
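The printk.c hunk above folds the four loglevel variables into one console_printk[] array so a table-driven interface (sysctl) can address them by index, while the old names keep working as macros in the matching kernel.h hunk. The aliasing trick in miniature (names suffixed _sketch; initial values are the usual 2.4 defaults):

```c
#include <assert.h>

/* One array, old names preserved as macros: writes through either
 * name hit the same storage, and a table-driven consumer can walk
 * the array by index. */
static int console_printk_sketch[4] = {
    7,  /* console_loglevel */
    4,  /* default_message_loglevel */
    1,  /* minimum_console_loglevel */
    7,  /* default_console_loglevel */
};

#define console_loglevel_sketch         (console_printk_sketch[0])
#define default_message_loglevel_sketch (console_printk_sketch[1])
```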
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (82 preceding siblings ...)
2001-10-25 4:30 ` David Mosberger
@ 2001-10-25 5:26 ` Keith Owens
2001-10-25 6:21 ` Keith Owens
` (131 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-10-25 5:26 UTC (permalink / raw)
To: linux-ia64
On Wed, 24 Oct 2001 21:30:42 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>An updated ia64 patch for 2.4.13 is now available at
>This kernel has been tested with gcc-3.0 on Big Sur, Lion, and HP
>simulator. Both UP and MP seem to compile fine. As usual, your
>mileage may vary.
Not many miles for me ;)
Index: 13.3/include/asm-ia64/pci.h
--- 13.3/include/asm-ia64/pci.h Thu, 25 Oct 2001 15:06:28 +1000 kaos (linux-2.4/t/2_pci.h 1.4 644)
+++ 13.3(w)/include/asm-ia64/pci.h Thu, 25 Oct 2001 15:25:33 +1000 kaos (linux-2.4/t/2_pci.h 1.4 644)
@@ -60,7 +60,7 @@ pci_dma_supported (struct pci_dev *hwdev
#define pci_map_page(dev,pg,off,size,dir) \
pci_map_single((dev), page_address(pg) + (off), (size), (dir))
#define pci_unmap_page(dev,dma_addr,size,dir) \
- pci_unmap_single((dev), (dma_addr), (size), (dir)
+ pci_unmap_single((dev), (dma_addr), (size), (dir))
/* The ia64 platform always supports 64-bit addressing. */
#define pci_dac_dma_supported(pci_dev, mask) (1)
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (83 preceding siblings ...)
2001-10-25 5:26 ` Keith Owens
@ 2001-10-25 6:21 ` Keith Owens
2001-10-25 6:44 ` Christoph Hellwig
` (130 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-10-25 6:21 UTC (permalink / raw)
To: linux-ia64
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Content-Type: text/plain; charset=us-ascii
On Wed, 24 Oct 2001 21:30:42 -0700,
David Mosberger <davidm@hpl.hp.com> wrote:
>An updated ia64 patch for 2.4.13 is now available at
>ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
> linux-2.4.13-ia64-011024.diff*
kdb v1.9-2.4.13-ia64-011024 is available in
ftp://oss.sgi.com/projects/kdb/download/ia64. Changelog extract.
2001-10-24 Keith Owens <kaos@sgi.com>
* Upgrade to kernel 2.4.13.
2001-10-14 Keith Owens <kaos@melbourne.sgi.com>
* More use of TMPPREFIX in top level Makefile to speed up NFS compiles.
* Correct repeat calculations in md/mds commands.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.4 (GNU/Linux)
Comment: Exmh version 2.1.1 10/15/1999
iD8DBQE7169/i4UHNye0ZOoRArkCAJ99FG3hO3Q3uQjIWSjkTEIog2yingCg06EH
EYDtZ2C8sVON+TERYHTtcL0=
=Yhbj
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (84 preceding siblings ...)
2001-10-25 6:21 ` Keith Owens
@ 2001-10-25 6:44 ` Christoph Hellwig
2001-10-25 19:55 ` Luck, Tony
` (129 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Christoph Hellwig @ 2001-10-25 6:44 UTC (permalink / raw)
To: linux-ia64
On Wed, Oct 24, 2001 at 09:30:42PM -0700, David Mosberger wrote:
> - half-hearted attempt at supporting reading of "default LDT entry" in ia32
> modify_ldt() syscall; someone who understands what this is supposed to do
> should take a look at this...
This is just needed for Xenix/286 binary emulation - I doubt we want to run
x286emul on ia64 anytime soon 8)
Christoph
--
Of course it doesn't work. We've performed a software upgrade.
^ permalink raw reply [flat|nested] 217+ messages in thread

* RE: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (85 preceding siblings ...)
2001-10-25 6:44 ` Christoph Hellwig
@ 2001-10-25 19:55 ` Luck, Tony
2001-10-25 20:20 ` David Mosberger
` (128 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Luck, Tony @ 2001-10-25 19:55 UTC (permalink / raw)
To: linux-ia64
2.4.13 builds (with Keith's patch) and boots for me. But
I'm seeing these weird messages early in boot:
ACPI 2.0 Root System Description Ptr at 0xe0000000000e2000
ACPI 2.0 XSDT at 0xe00000003ffd8030 (p=0x3ffd8030)
ACPI 2.0: Intel W460GXBS 0.1
Acpi cfg:bind to Boot time Acpi OSD
ACPI attempting to access kernel owned memory at 3FFD8030.
ACPI attempting to access kernel owned memory at 3FFD8030.
ACPI attempting to access kernel owned memory at 3FFD8068.
ACPI attempting to access kernel owned memory at 3FFD8068.
ACPI attempting to access kernel owned memory at 3FFD9EC0.
ACPI attempting to access kernel owned memory at 3FFD9EC0.
ACPI attempting to access kernel owned memory at 3FFD9F40.
ACPI attempting to access kernel owned memory at 3FFD8160.
ACPI attempting to access kernel owned memory at 3FFD8160.
Running on a 2-processor (B3-stepping) BigSur with 1GB RAM.
I'm using the 3.1 version of elilo.
-Tony
^ permalink raw reply [flat|nested] 217+ messages in thread

* RE: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (86 preceding siblings ...)
2001-10-25 19:55 ` Luck, Tony
@ 2001-10-25 20:20 ` David Mosberger
2001-10-26 14:36 ` Andreas Schwab
` (127 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-10-25 20:20 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 25 Oct 2001 12:55:09 -0700, "Luck, Tony" <tony.luck@intel.com> said:
Tony> 2.4.13 builds (with Keith's patch) and boots for me. But I'm
Tony> seeing these weird messages early in boot:
Tony> ACPI 2.0 Root System Description Ptr at 0xe0000000000e2000
Tony> ACPI 2.0 XSDT at 0xe00000003ffd8030 (p=0x3ffd8030)
Tony> ACPI 2.0: Intel W460GXBS 0.1
Tony> Acpi cfg:bind to Boot time Acpi OSD
Tony> ACPI attempting to access kernel owned memory at 3FFD8030.
Tony> ACPI attempting to access kernel owned memory at 3FFD8030.
Tony> ACPI attempting to access kernel owned memory at 3FFD8068.
Tony> ACPI attempting to access kernel owned memory at 3FFD8068.
Tony> ACPI attempting to access kernel owned memory at 3FFD9EC0.
Tony> ACPI attempting to access kernel owned memory at 3FFD9EC0.
Tony> ACPI attempting to access kernel owned memory at 3FFD9F40.
Tony> ACPI attempting to access kernel owned memory at 3FFD8160.
Tony> ACPI attempting to access kernel owned memory at 3FFD8160.
Don't worry about these for now. It's a known problem and on my todo
list (once the book is finished...).
The warnings are due to the fact that setup_arch() is not yet
registering the reserved memory regions (search for "XXX fix me" in
that routine and you'll see what I mean). Given the EFI memory map,
this will be easy to fix---just a small matter of programming. In the
meantime, it shouldn't have any negative effect (hotplugging with dumb
buses such as PCMCIA would be a problem, but since ISA isn't supported
anyhow...).
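What David describes, walking the memory map and registering every firmware-owned range as reserved, reduces to a loop like the following sketch. The descriptor layout and the reserve bookkeeping here are invented for illustration; the real setup_arch() fix would walk EFI memory descriptors:

```c
#include <assert.h>

/* Invented, simplified stand-in for an EFI memory descriptor. */
struct mem_desc_sketch {
    unsigned long start, len;
    int available;   /* 1 = usable RAM, 0 = firmware/reserved */
};

static int reserved_count = 0;

static void reserve_region_sketch(unsigned long start, unsigned long len)
{
    (void) start;
    (void) len;      /* real code would record the range */
    reserved_count++;
}

/* Register every non-available region so later allocators (and the
 * ACPI ownership check) know the firmware owns that memory. */
static void register_reserved_sketch(const struct mem_desc_sketch *map, int n)
{
    int i;

    for (i = 0; i < n; i++)
        if (!map[i].available)
            reserve_region_sketch(map[i].start, map[i].len);
}
```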
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (87 preceding siblings ...)
2001-10-25 20:20 ` David Mosberger
@ 2001-10-26 14:36 ` Andreas Schwab
2001-10-30 2:20 ` David Mosberger
` (126 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-10-26 14:36 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> An updated ia64 patch for 2.4.13 is now available at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
The patch adds a definition for bust_spinlocks in arch/ia64/kernel/traps.c
which is slightly different from the implementation in
lib/bust_spinlocks.c. Does it make sense to merge them?
Andreas.
--
Andreas Schwab "And now for something
Andreas.Schwab@suse.de completely different."
SuSE Labs, SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (88 preceding siblings ...)
2001-10-26 14:36 ` Andreas Schwab
@ 2001-10-30 2:20 ` David Mosberger
2001-11-02 1:35 ` William Lee Irwin III
` (125 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-10-30 2:20 UTC (permalink / raw)
To: linux-ia64
>>>>> On 26 Oct 2001 16:36:22 +0200, Andreas Schwab <schwab@suse.de> said:
Andreas> David Mosberger <davidm@hpl.hp.com> writes:
Andreas> |> An updated ia64 patch for 2.4.13 is now available at
Andreas> |> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
Andreas> The patch adds a definition for bust_spinlocks in
Andreas> arch/ia64/kernel/traps.c which is slightly different from
Andreas> the implementation in lib/bust_spinlocks.c. Does it make
Andreas> sense to merge them?
Not unless you make them exactly the same. ;-)
The issue is global_irq_lock. Its existence depends on the platform.
FYI, i386 does the same thing (just in a different place: mm/fault.c).
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (89 preceding siblings ...)
2001-10-30 2:20 ` David Mosberger
@ 2001-11-02 1:35 ` William Lee Irwin III
2001-11-06 1:23 ` David Mosberger
` (124 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: William Lee Irwin III @ 2001-11-02 1:35 UTC (permalink / raw)
To: linux-ia64
On Thu, Oct 25, 2001 at 01:20:05PM -0700, David Mosberger wrote:
> Don't worry about these for now. It's a known problem and on my todo
> list (once the book is finished...).
>
> The warnings are due to the fact that setup_arch() is not yet
> registering the reserved memory regions (search for "XXX fix me" in
> that routine and you'll see what I mean). Given the EFI memory map,
> this will be easy to fix---just a small matter of programming. In the
> meantime, it shouldn't have any negative effect (hotplugging with dumb
> buses such as PCMCIA would be a problem, but since ISA isn't supported
> anyhow...).
According to my analysis this is due to acpi_os_map_memory() attempting
to use the results of virt_to_page() prior to paging_init(), where
virt_to_page() attempts to index off of mem_map (which is not yet
initialized) and access the contents of the struct page from there
(which leads to an invalid pointer dereference or returns garbage).
The following check in acpi_os_map_memory() (in drivers/acpi/os.c) was
introduced somewhere after 2.4.5, and (IMHO) should be omitted:
if ((unsigned long) phys < virt_to_phys(high_memory)) {
struct page *page;
*virt = phys_to_virt((unsigned long) phys);
/* Check for stamping */
page = virt_to_page(*virt);
if(page && !test_bit(PG_reserved, &page->flags))
printk(KERN_WARNING "ACPI attempting to access kernel owned memory at %08lX.\n", (unsigned long)phys);
return AE_OK;
}
mem_map is initialized by free_area_init_core(), called from
paging_init().
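The ordering bug Bill analyzes, indexing mem_map through virt_to_page() before paging_init() has created it, can be modeled in a few lines of user-space C (all names here are illustrative stand-ins, not the kernel's data structures):

```c
#include <assert.h>
#include <stddef.h>

struct page_sketch { unsigned long flags; };

/* Stand-in for mem_map: NULL until "paging_init" has run. */
static struct page_sketch *mem_map_sketch = NULL;

/* virt_to_page()-style lookup with the guard an early-boot caller
 * would need: refuse to index a table that does not exist yet,
 * instead of dereferencing garbage. */
static struct page_sketch *virt_to_page_sketch(unsigned long pfn)
{
    if (mem_map_sketch == NULL)   /* before paging_init() */
        return NULL;
    return &mem_map_sketch[pfn];
}
```

The real kernel code has no such guard, which is why the check in acpi_os_map_memory() misfires when it runs this early; omitting the check, as Bill suggests, sidesteps the lookup entirely.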
If there is a cleaner way to repair this issue, I would be glad to
adopt it.
Cheers,
Bill
------------------
willir@us.ibm.com
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.4.13)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (90 preceding siblings ...)
2001-11-02 1:35 ` William Lee Irwin III
@ 2001-11-06 1:23 ` David Mosberger
2001-11-06 6:59 ` [Linux-ia64] kernel update (relative to 2.4.14) David Mosberger
` (123 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-11-06 1:23 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 1 Nov 2001 17:35:01 -0800, William Lee Irwin III <wli@holomorphy.com> said:
Bill> According to my analysis this is due to acpi_os_map_memory()
Bill> attempting to use the results of virt_to_page() prior to
Bill> paging_init(), where virt_to_page() attempts to index off of
Bill> mem_map (which is not yet initialized) and access the contents
Bill> of the struct page from there (which leads to an invalid
Bill> pointer dereference or returns garbage).
Bill> The following check in acpi_os_map_memory() (in
Bill> drivers/acpi/os.c) was introduced somewhere after 2.4.5, and
Bill> (IMHO) should be omitted:
Bill> if ((unsigned long) phys < virt_to_phys(high_memory)) {
Bill> 	struct page *page;
Bill> 	*virt = phys_to_virt((unsigned long) phys);
You're probably right. Jung-Ik, can you make this change and send me
a patch?
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to 2.4.14)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (91 preceding siblings ...)
2001-11-06 1:23 ` David Mosberger
@ 2001-11-06 6:59 ` David Mosberger
2001-11-07 1:48 ` Keith Owens
` (122 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-11-06 6:59 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch for 2.4.14 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.14-ia64-011105.diff*
change log:
- update for 2.4.14
- support non-legacy serial ports via ACPI
(Khalid Aziz & Alex Williamson)
- support 16MB as an alternate page size for the identity mapped region
- export flush_tlb_range to modules, by popular request
- in struct scatterlist, rename "orig_address" to "page" (yes,
that's cheesy, but SCSI insists on zeroing a member with such a
name...)
- increase IO_SPACE_LIMIT from 0xffff to ~0UL
This kernel has been tested with gcc-3.0 on Big Sur. Both UP and MP
seem to compile fine. As usual, your mileage may vary.
Enjoy,
--david
diff -urN linux-davidm/Documentation/Configure.help lia64/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Mon Nov 5 21:43:07 2001
+++ lia64/Documentation/Configure.help Mon Nov 5 21:02:05 2001
@@ -2589,6 +2589,14 @@
say N here to save some memory. You can also say Y if you have an
"intelligent" multiport card such as Cyclades, Digiboards, etc.
+Support for serial ports defined by ACPI tables
+CONFIG_SERIAL_ACPI
+ Legacy free machines may not have serial ports at the legacy COM1,
+ COM2 etc addresses. Serial ports on such machines are described by
+ the ACPI tables SPCR (Serial Port Console Redirection) table and
+ DBGP (Debug Port) table. Say Y here if you want to include support
+ for these serial ports.
+
Support for sharing serial interrupts
CONFIG_SERIAL_SHARE_IRQ
Some serial boards have hardware support which allows multiple dumb
diff -urN linux-davidm/arch/ia64/config.in lia64/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/config.in Mon Nov 5 21:04:18 2001
@@ -256,6 +256,10 @@
mainmenu_option next_comment
comment 'Kernel hacking'
+choice 'Physical memory granularity' \
+ "16MB CONFIG_IA64_GRANULE_16MB \
+ 64MB CONFIG_IA64_GRANULE_64MB" 64MB
+
bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
bool ' Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
diff -urN linux-davidm/arch/ia64/defconfig lia64/arch/ia64/defconfig
--- linux-davidm/arch/ia64/defconfig Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/defconfig Mon Nov 5 22:26:13 2001
@@ -40,7 +40,7 @@
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_IA64_BRL_EMU=y
-CONFIG_ITANIUM_BSTEP_SPECIFIC=y
+# CONFIG_ITANIUM_BSTEP_SPECIFIC is not set
CONFIG_IA64_L1_CACHE_SHIFT=6
CONFIG_IA64_MCA=y
CONFIG_PM=y
@@ -95,6 +95,7 @@
# CONFIG_IPV6 is not set
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
+# CONFIG_VLAN_8021Q is not set
#
#
@@ -127,7 +128,6 @@
#
# CONFIG_PNP is not set
# CONFIG_ISAPNP is not set
-# CONFIG_PNPBIOS is not set
#
# Block devices
@@ -298,7 +298,6 @@
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_NCR53C406A is not set
-# CONFIG_SCSI_NCR_D700 is not set
# CONFIG_SCSI_NCR53C7xx is not set
# CONFIG_SCSI_NCR53C8XX is not set
# CONFIG_SCSI_SYM53C8XX is not set
@@ -364,6 +363,7 @@
# CONFIG_NE2K_PCI is not set
# CONFIG_NE3210 is not set
# CONFIG_ES3210 is not set
+# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
@@ -374,7 +374,6 @@
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
# CONFIG_WINBOND_840 is not set
-# CONFIG_LAN_SAA9730 is not set
# CONFIG_NET_POCKET is not set
#
@@ -444,6 +443,7 @@
CONFIG_VT_CONSOLE=y
CONFIG_SERIAL=y
CONFIG_SERIAL_CONSOLE=y
+# CONFIG_SERIAL_ACPI is not set
# CONFIG_SERIAL_EXTENDED is not set
# CONFIG_SERIAL_NONSTANDARD is not set
CONFIG_UNIX98_PTYS=y
@@ -510,7 +510,6 @@
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
-# CONFIG_SONYPI is not set
#
# Ftape, the floppy tape device driver
@@ -596,11 +595,13 @@
CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
# CONFIG_JFFS_FS is not set
+# CONFIG_JFFS2_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_TMPFS is not set
# CONFIG_RAMFS is not set
CONFIG_ISO9660_FS=y
# CONFIG_JOLIET is not set
+# CONFIG_ZISOFS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_NTFS_FS is not set
@@ -643,6 +644,8 @@
# CONFIG_NCPFS_SMALLDOS is not set
# CONFIG_NCPFS_NLS is not set
# CONFIG_NCPFS_EXTRAS is not set
+# CONFIG_ZISOFS_FS is not set
+# CONFIG_ZLIB_FS_INFLATE is not set
#
# Partition Types
@@ -755,12 +758,14 @@
#
CONFIG_USB_DEVICEFS=y
# CONFIG_USB_BANDWIDTH is not set
+# CONFIG_USB_LONG_TIMEOUT is not set
#
# USB Controllers
#
-CONFIG_USB_UHCI_ALT=y
-CONFIG_USB_OHCI=y
+CONFIG_USB_UHCI=m
+# CONFIG_USB_UHCI_ALT is not set
+# CONFIG_USB_OHCI is not set
#
# USB Device Class drivers
@@ -768,15 +773,24 @@
# CONFIG_USB_AUDIO is not set
# CONFIG_USB_BLUETOOTH is not set
# CONFIG_USB_STORAGE is not set
+# CONFIG_USB_STORAGE_DEBUG is not set
+# CONFIG_USB_STORAGE_DATAFAB is not set
+# CONFIG_USB_STORAGE_FREECOM is not set
+# CONFIG_USB_STORAGE_ISD200 is not set
+# CONFIG_USB_STORAGE_DPCM is not set
+# CONFIG_USB_STORAGE_HP8200e is not set
+# CONFIG_USB_STORAGE_SDDR09 is not set
+# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
#
# USB Human Interface Devices (HID)
#
-# CONFIG_USB_HID is not set
-CONFIG_USB_KBD=y
-CONFIG_USB_MOUSE=y
+CONFIG_USB_HID=m
+CONFIG_USB_HIDDEV=y
+CONFIG_USB_KBD=m
+CONFIG_USB_MOUSE=m
# CONFIG_USB_WACOM is not set
#
@@ -786,11 +800,12 @@
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_SCANNER is not set
# CONFIG_USB_MICROTEK is not set
+# CONFIG_USB_HPUSBSCSI is not set
#
# USB Multimedia devices
#
-CONFIG_USB_IBMCAM=y
+# CONFIG_USB_IBMCAM is not set
# CONFIG_USB_OV511 is not set
# CONFIG_USB_PWC is not set
# CONFIG_USB_SE401 is not set
@@ -801,9 +816,9 @@
# USB Network adaptors
#
# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_KAWETH is not set
# CONFIG_USB_CATC is not set
# CONFIG_USB_CDCETHER is not set
-# CONFIG_USB_KAWETH is not set
# CONFIG_USB_USBNET is not set
#
@@ -815,9 +830,33 @@
# USB Serial Converter support
#
# CONFIG_USB_SERIAL is not set
+# CONFIG_USB_SERIAL_GENERIC is not set
+# CONFIG_USB_SERIAL_BELKIN is not set
+# CONFIG_USB_SERIAL_WHITEHEAT is not set
+# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
+# CONFIG_USB_SERIAL_EMPEG is not set
+# CONFIG_USB_SERIAL_FTDI_SIO is not set
+# CONFIG_USB_SERIAL_VISOR is not set
+# CONFIG_USB_SERIAL_IR is not set
+# CONFIG_USB_SERIAL_EDGEPORT is not set
+# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
+# CONFIG_USB_SERIAL_KEYSPAN is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set
+# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set
+# CONFIG_USB_SERIAL_MCT_U232 is not set
+# CONFIG_USB_SERIAL_PL2303 is not set
+# CONFIG_USB_SERIAL_CYBERJACK is not set
+# CONFIG_USB_SERIAL_XIRCOM is not set
+# CONFIG_USB_SERIAL_OMNINET is not set
#
-# USB misc drivers
+# USB Miscellaneous drivers
#
# CONFIG_USB_RIO500 is not set
@@ -829,6 +868,8 @@
#
# Kernel hacking
#
+# CONFIG_IA64_GRANULE_16MB is not set
+CONFIG_IA64_GRANULE_64MB=y
CONFIG_DEBUG_KERNEL=y
CONFIG_IA64_PRINT_HAZARDS=y
# CONFIG_DISABLE_VHPT is not set
diff -urN linux-davidm/arch/ia64/ia32/ia32_traps.c lia64/arch/ia64/ia32/ia32_traps.c
--- linux-davidm/arch/ia64/ia32/ia32_traps.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/ia32/ia32_traps.c Mon Nov 5 21:04:32 2001
@@ -4,7 +4,7 @@
* Copyright (C) 2000 Asit K. Mallick <asit.k.mallick@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
-/*
+ *
* 06/16/00 A. Mallick added siginfo for most cases (close to IA32)
* 09/29/00 D. Mosberger added ia32_intercept()
*/
diff -urN linux-davidm/arch/ia64/kernel/acpi.c lia64/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/kernel/acpi.c Mon Nov 5 21:04:58 2001
@@ -23,6 +23,9 @@
#include <linux/string.h>
#include <linux/types.h>
#include <linux/irq.h>
+#ifdef CONFIG_SERIAL_ACPI
+#include <linux/acpi_serial.h>
+#endif
#include <asm/acpi-ext.h>
#include <asm/acpikcfg.h>
@@ -44,6 +47,7 @@
void (*pm_idle) (void);
void (*pm_power_off) (void);
+asm (".weak iosapic_register_irq");
asm (".weak iosapic_register_legacy_irq");
asm (".weak iosapic_register_platform_irq");
asm (".weak iosapic_init");
@@ -397,6 +401,7 @@
# ifdef CONFIG_ACPI
acpi_xsdt_t *xsdt;
acpi_desc_table_hdr_t *hdrp;
+ acpi_madt_t *madt;
int tables, i;
if (strncmp(rsdp20->signature, ACPI_RSDP_SIG, ACPI_RSDP_SIG_LEN)) {
@@ -438,9 +443,76 @@
ACPI_MADT_SIG, ACPI_MADT_SIG_LEN) != 0)
continue;
- acpi20_parse_madt((acpi_madt_t *) hdrp);
+ /* Save MADT pointer for later */
+ madt = (acpi_madt_t *) hdrp;
+ acpi20_parse_madt(madt);
}
+#ifdef CONFIG_SERIAL_ACPI
+ /*
+ * Now we're interested in other tables. We want the iosapics already
+ * initialized, so we do it in a separate loop.
+ */
+ for (i = 0; i < tables; i++) {
+ hdrp = (acpi_desc_table_hdr_t *) __va(readl_unaligned(&xsdt->entry_ptrs[i]));
+ /*
+ * search for SPCR and DBGP table entries so we can enable
+ * non-pci interrupts to IO-SAPICs.
+ */
+ if (!strncmp(hdrp->signature, ACPI_SPCRT_SIG, ACPI_SPCRT_SIG_LEN) ||
+ !strncmp(hdrp->signature, ACPI_DBGPT_SIG, ACPI_DBGPT_SIG_LEN))
+ {
+ acpi_ser_t *spcr = (void *)hdrp;
+ unsigned long global_int;
+
+ setup_serial_acpi(hdrp);
+
+ /*
+ * ACPI is able to describe serial ports that live at non-standard
+ * memory space addresses and use SAPIC interrupts. If not also
+ * PCI devices, there would be no interrupt vector information for
+ * them. This checks for and fixes that situation.
+ */
+ if (spcr->length < sizeof(acpi_ser_t))
+ /* table is not long enough for full info, thus no int */
+ break;
+
+ /*
+ * If the device is not in PCI space, but uses a SAPIC interrupt,
+ * we need to program the SAPIC so that serial can autoprobe for
+ * the IA64 interrupt vector later on. If the device is in PCI
+ * space, it should already be setup via the PCI vectors
+ */
+ if (spcr->base_addr.space_id != ACPI_SERIAL_PCICONF_SPACE &&
+ spcr->int_type == ACPI_SERIAL_INT_SAPIC)
+ {
+ u32 irq_base;
+ char *iosapic_address;
+ int vector;
+
+ /* We have a UART in memory space with a SAPIC interrupt */
+ global_int = ( (spcr->global_int[3] << 24)
+ | (spcr->global_int[2] << 16)
+ | (spcr->global_int[1] << 8)
+ | spcr->global_int[0]);
+
+ if (!iosapic_register_irq)
+ continue;
+
+ /* which iosapic does this IRQ belong to? */
+ if (acpi20_which_iosapic(global_int, madt, &irq_base,
+ &iosapic_address) == 0)
+ {
+ vector = iosapic_register_irq(global_int,
+ 1, /* active high polarity */
+ 1, /* edge triggered */
+ irq_base,
+ iosapic_address);
+ }
+ }
+ }
+ }
+#endif
acpi_cf_terminate();
# ifdef CONFIG_SMP
diff -urN linux-davidm/arch/ia64/kernel/efi.c lia64/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/kernel/efi.c Mon Nov 5 21:05:27 2001
@@ -6,8 +6,8 @@
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999-2001 Hewlett-Packard Co.
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1999-2001 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* All EFI Runtime Services are not implemented yet as EFI only
* supports physical mode addressing on SoftSDV. This is to be fixed
@@ -234,7 +234,7 @@
* The only ITLB entry in region 7 that is used is the one installed by
* __start(). That entry covers a 64MB range.
*/
- mask = ~((1 << KERNEL_PG_SHIFT) - 1);
+ mask = ~((1 << KERNEL_TR_PAGE_SHIFT) - 1);
vaddr = PAGE_OFFSET + md->phys_addr;
/*
@@ -242,29 +242,32 @@
* mapping.
*
* PAL code is guaranteed to be aligned on a power of 2 between 4k and
- * 256KB. Also from the documentation, it seems like there is an implicit
- * guarantee that you will need only ONE ITR to map it. This implies that
- * the PAL code is always aligned on its size, i.e., the closest matching
- * page size supported by the TLB. Therefore PAL code is guaranteed never
- * to cross a 64MB unless it is bigger than 64MB (very unlikely!). So for
+ * 256KB and that only one ITR is needed to map it. This implies that the
+ * PAL code is always aligned on its size, i.e., the closest matching page
+ * size supported by the TLB. Therefore PAL code is guaranteed never to
+ * cross a 64MB unless it is bigger than 64MB (very unlikely!). So for
* now the following test is enough to determine whether or not we need a
* dedicated ITR for the PAL code.
*/
if ((vaddr & mask) == (KERNEL_START & mask)) {
- printk(__FUNCTION__ " : no need to install ITR for PAL code\n");
+ printk(__FUNCTION__ ": no need to install ITR for PAL code\n");
continue;
}
+ if (md->num_pages << 12 > IA64_GRANULE_SIZE)
+ panic("Woah! PAL code size bigger than a granule!");
+
+ mask = ~((1 << IA64_GRANULE_SHIFT) - 1);
printk("CPU %d: mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
smp_processor_id(), md->phys_addr, md->phys_addr + (md->num_pages << 12),
- vaddr & mask, (vaddr & mask) + KERNEL_PG_SIZE);
+ vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
/*
* Cannot write to CRx with PSR.ic=1
*/
ia64_clear_ic(flags);
ia64_itr(0x1, IA64_TR_PALCODE, vaddr & mask,
- pte_val(mk_pte_phys(md->phys_addr, PAGE_KERNEL)), KERNEL_PG_SHIFT);
+ pte_val(mk_pte_phys(md->phys_addr, PAGE_KERNEL)), IA64_GRANULE_SHIFT);
local_irq_restore(flags);
ia64_srlz_i();
}
diff -urN linux-davidm/arch/ia64/kernel/entry.S lia64/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/kernel/entry.S Mon Nov 5 21:06:38 2001
@@ -140,15 +140,14 @@
dep r20=0,in0,61,3 // physical address of "current"
;;
st8 [r22]=sp // save kernel stack pointer of old task
- shr.u r26=r20,KERNEL_PG_SHIFT
- mov r16=KERNEL_PG_NUM
+ shr.u r26=r20,IA64_GRANULE_SHIFT
+ shr.u r17=r20,KERNEL_TR_PAGE_SHIFT
;;
- cmp.ne p6,p7=r26,r16 // check whether r26 != KERNEL_PG_NUM
+ cmp.ne p6,p7=KERNEL_TR_PAGE_NUM,r17
adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
;;
/*
- * If we've already mapped this task's page, we can skip doing it
- * again.
+ * If we've already mapped this task's page, we can skip doing it again.
*/
(p6) cmp.eq p7,p6=r26,r27
(p6) br.cond.dpnt .map
@@ -176,7 +175,7 @@
;;
srlz.d
or r23=r25,r20 // construct PA | page properties
- mov r25=KERNEL_PG_SHIFT<<2
+ mov r25=IA64_GRANULE_SHIFT<<2
;;
mov cr.itir=r25
mov cr.ifa=in0 // VA of next task...
diff -urN linux-davidm/arch/ia64/kernel/head.S lia64/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/kernel/head.S Mon Nov 5 21:06:50 2001
@@ -63,17 +63,17 @@
* that maps the kernel's text and data:
*/
rsm psr.i | psr.ic
- mov r16=((ia64_rid(IA64_REGION_ID_KERNEL, PAGE_OFFSET) << 8) | (KERNEL_PG_SHIFT << 2))
+ mov r16=((ia64_rid(IA64_REGION_ID_KERNEL, PAGE_OFFSET) << 8) | (IA64_GRANULE_SHIFT << 2))
;;
srlz.i
- mov r18=KERNEL_PG_SHIFT<<2
- movl r17=PAGE_OFFSET + KERNEL_PG_NUM*KERNEL_PG_SIZE
+ mov r18=KERNEL_TR_PAGE_SHIFT<<2
+ movl r17=KERNEL_START
;;
mov rr[r17]=r16
mov cr.itir=r18
mov cr.ifa=r17
mov r16=IA64_TR_KERNEL
- movl r18=(KERNEL_PG_NUM*KERNEL_PG_SIZE | PAGE_KERNEL)
+ movl r18=((1 << KERNEL_TR_PAGE_SHIFT) | PAGE_KERNEL)
;;
srlz.i
;;
@@ -112,7 +112,7 @@
;;
#ifdef CONFIG_IA64_EARLY_PRINTK
- mov r3=(6<<8) | (KERNEL_PG_SHIFT<<2)
+ mov r3=(6<<8) | (IA64_GRANULE_SHIFT<<2)
movl r2=6<<61
;;
mov rr[r2]=r3
@@ -145,6 +145,7 @@
cmp4.ne isAP,isBP=r3,r0
;; // RAW on r2
extr r3=r2,0,61 // r3 = phys addr of task struct
+ mov r16=KERNEL_TR_PAGE_NUM
;;
// load the "current" pointer (r13) and ar.k6 with the current task
@@ -152,7 +153,7 @@
mov IA64_KR(CURRENT)=r3 // Physical address
// initialize k4 to a safe value (64-128MB is mapped by TR_KERNEL)
- mov IA64_KR(CURRENT_STACK)=1
+ mov IA64_KR(CURRENT_STACK)=r16
/*
* Reserve space at the top of the stack for "struct pt_regs". Kernel threads
* don't store interesting values in that structure, but the space still needs
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c lia64/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/kernel/ia64_ksyms.c Mon Nov 5 21:07:03 2001
@@ -69,6 +69,8 @@
#include <asm/pgalloc.h>
+EXPORT_SYMBOL(flush_tlb_range);
+
#ifdef CONFIG_SMP
EXPORT_SYMBOL(smp_flush_tlb_all);
diff -urN linux-davidm/arch/ia64/kernel/mca.c lia64/arch/ia64/kernel/mca.c
--- linux-davidm/arch/ia64/kernel/mca.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/kernel/mca.c Mon Nov 5 21:07:13 2001
@@ -1575,7 +1575,9 @@
ia64_log_proc_dev_err_info_print (sal_log_processor_info_t *slpi,
prfunc_t prfunc)
{
+#ifdef MCA_PRT_XTRA_DATA
size_t d_len = slpi->header.len - sizeof(sal_log_section_hdr_t);
+#endif
sal_processor_static_info_t *spsi;
int i;
sal_log_mod_error_info_t *p_data;
diff -urN linux-davidm/arch/ia64/lib/swiotlb.c lia64/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Tue Jul 31 10:30:08 2001
+++ lia64/arch/ia64/lib/swiotlb.c Mon Nov 5 22:17:35 2001
@@ -398,7 +398,7 @@
BUG();
for (i = 0; i < nelems; i++, sg++) {
- sg->orig_address = sg->address;
+ sg->page = sg->address;
if ((virt_to_phys(sg->address) & ~hwdev->dma_mask) != 0) {
sg->address = map_single(hwdev, sg->address, sg->length, direction);
}
@@ -419,9 +419,9 @@
BUG();
for (i = 0; i < nelems; i++, sg++)
- if (sg->orig_address != sg->address) {
+ if (sg->page != sg->address) {
unmap_single(hwdev, sg->address, sg->length, direction);
- sg->address = sg->orig_address;
+ sg->address = sg->page;
} else if (direction == PCI_DMA_FROMDEVICE)
mark_clean(sg->address, sg->length);
}
@@ -442,7 +442,7 @@
BUG();
for (i = 0; i < nelems; i++, sg++)
- if (sg->orig_address != sg->address)
+ if (sg->page != sg->address)
sync_single(hwdev, sg->address, sg->length, direction);
}
diff -urN linux-davidm/arch/ia64/mm/init.c lia64/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/mm/init.c Mon Nov 5 21:07:35 2001
@@ -276,7 +276,7 @@
ia64_clear_ic(flags);
rid = ia64_rid(IA64_REGION_ID_KERNEL, __IA64_UNCACHED_OFFSET);
- ia64_set_rr(__IA64_UNCACHED_OFFSET, (rid << 8) | (KERNEL_PG_SHIFT << 2));
+ ia64_set_rr(__IA64_UNCACHED_OFFSET, (rid << 8) | (IA64_GRANULE_SHIFT << 2));
rid = ia64_rid(IA64_REGION_ID_KERNEL, VMALLOC_START);
ia64_set_rr(VMALLOC_START, (rid << 8) | (PAGE_SHIFT << 2) | 1);
diff -urN linux-davidm/arch/ia64/mm/tlb.c lia64/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c Mon Nov 5 21:43:07 2001
+++ lia64/arch/ia64/mm/tlb.c Wed Oct 24 21:27:13 2001
@@ -79,7 +79,7 @@
flush_tlb_all();
}
-static void
+static inline void
ia64_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits)
{
static spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;
diff -urN linux-davidm/drivers/acpi/hardware/hwacpi.c lia64/drivers/acpi/hardware/hwacpi.c
--- linux-davidm/drivers/acpi/hardware/hwacpi.c Mon Nov 5 21:43:07 2001
+++ lia64/drivers/acpi/hardware/hwacpi.c Mon Nov 5 21:08:40 2001
@@ -221,11 +221,14 @@
/* Give the platform some time to react */
- acpi_os_stall (20000);
+ while (retries-- > 0) {
+ acpi_os_stall (20000);
- if (acpi_hw_get_mode () == mode) {
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
- status = AE_OK;
+ if (acpi_hw_get_mode () == mode) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
+ status = AE_OK;
+ break;
+ }
}
return_ACPI_STATUS (status);
diff -urN linux-davidm/drivers/acpi/include/acutils.h lia64/drivers/acpi/include/acutils.h
--- linux-davidm/drivers/acpi/include/acutils.h Mon Nov 5 21:43:07 2001
+++ lia64/drivers/acpi/include/acutils.h Mon Nov 5 21:08:49 2001
@@ -402,7 +402,7 @@
acpi_status
acpi_ut_execute_CID (
acpi_namespace_node *device_node,
- ACPI_DEVICE_ID *cid);
+ acpi_device_id *cid);
acpi_status
acpi_ut_execute_HID (
diff -urN linux-davidm/drivers/acpi/namespace/nsxfobj.c lia64/drivers/acpi/namespace/nsxfobj.c
--- linux-davidm/drivers/acpi/namespace/nsxfobj.c Mon Nov 5 21:43:07 2001
+++ lia64/drivers/acpi/namespace/nsxfobj.c Mon Nov 5 21:08:57 2001
@@ -548,6 +548,7 @@
acpi_namespace_node *node;
u32 flags;
acpi_device_id device_id;
+ acpi_device_id compatible_id;
acpi_get_devices_info *info;
diff -urN linux-davidm/drivers/acpi/utilities/uteval.c lia64/drivers/acpi/utilities/uteval.c
--- linux-davidm/drivers/acpi/utilities/uteval.c Mon Nov 5 21:43:07 2001
+++ lia64/drivers/acpi/utilities/uteval.c Mon Nov 5 21:09:07 2001
@@ -132,7 +132,7 @@
acpi_status
acpi_ut_execute_CID (
acpi_namespace_node *device_node,
- ACPI_DEVICE_ID *cid)
+ acpi_device_id *cid)
{
acpi_operand_object *obj_desc;
acpi_status status;
diff -urN linux-davidm/drivers/block/loop.c lia64/drivers/block/loop.c
--- linux-davidm/drivers/block/loop.c Mon Nov 5 18:28:46 2001
+++ lia64/drivers/block/loop.c Mon Nov 5 21:09:16 2001
@@ -207,7 +207,6 @@
index++;
pos += size;
UnlockPage(page);
- deactivate_page(page);
page_cache_release(page);
}
return 0;
@@ -218,7 +217,6 @@
kunmap(page);
unlock:
UnlockPage(page);
- deactivate_page(page);
page_cache_release(page);
fail:
return -1;
diff -urN linux-davidm/drivers/char/Config.in lia64/drivers/char/Config.in
--- linux-davidm/drivers/char/Config.in Mon Nov 5 21:43:07 2001
+++ lia64/drivers/char/Config.in Mon Nov 5 21:09:27 2001
@@ -16,6 +16,9 @@
tristate ' Dual serial port support' CONFIG_DUALSP_SERIAL
fi
fi
+if [ "$CONFIG_ACPI" = "y" ]; then
+ bool ' Support for serial ports defined by ACPI tables' CONFIG_SERIAL_ACPI
+fi
dep_mbool 'Extended dumb serial driver options' CONFIG_SERIAL_EXTENDED $CONFIG_SERIAL
if [ "$CONFIG_SERIAL_EXTENDED" = "y" ]; then
bool ' Support more than 4 serial ports' CONFIG_SERIAL_MANY_PORTS
diff -urN linux-davidm/drivers/char/Makefile lia64/drivers/char/Makefile
--- linux-davidm/drivers/char/Makefile Mon Nov 5 21:43:07 2001
+++ lia64/drivers/char/Makefile Mon Nov 5 21:09:36 2001
@@ -126,6 +126,7 @@
obj-$(CONFIG_VT) += vt.o vc_screen.o consolemap.o consolemap_deftbl.o $(CONSOLE) selection.o
obj-$(CONFIG_SERIAL) += $(SERIAL)
+obj-$(CONFIG_SERIAL_ACPI) += acpi_serial.o
obj-$(CONFIG_SERIAL_21285) += serial_21285.o
obj-$(CONFIG_SERIAL_SA1100) += serial_sa1100.o
obj-$(CONFIG_SERIAL_AMBA) += serial_amba.o
diff -urN linux-davidm/drivers/char/acpi_serial.c lia64/drivers/char/acpi_serial.c
--- linux-davidm/drivers/char/acpi_serial.c Wed Dec 31 16:00:00 1969
+++ lia64/drivers/char/acpi_serial.c Mon Nov 5 21:09:45 2001
@@ -0,0 +1,194 @@
+/*
+ * linux/drivers/char/acpi_serial.c
+ *
+ * Copyright (C) 2000 Hewlett-Packard Co.
+ * Copyright (C) 2000 Khalid Aziz <khalid_aziz@hp.com>
+ *
+ * Detect and initialize the headless console serial port defined in
+ * SPCR table and debug serial port defined in DBGP table
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/pm.h>
+#include <linux/acpi.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <asm/serial.h>
+#include <asm/io.h>
+#include <linux/acpi_serial.h>
+/*#include <asm/acpi-ext.h>*/
+
+#undef SERIAL_DEBUG_ACPI
+
+/*
+ * Query ACPI tables for a debug and a headless console serial
+ * port. If found, add them to rs_table[]. A pointer to either SPCR
+ * or DBGP table is passed as parameter. This function should be called
+ * before serial_console_init() is called to make sure the SPCR serial
+ * console will be available for use. IA-64 kernel calls this function
+ * from within acpi.c when it encounters SPCR or DBGP tables as it parses
+ * the ACPI 2.0 tables during bootup.
+ *
+ */
+void __init setup_serial_acpi(void *tablep)
+{
+ acpi_ser_t *acpi_ser_p;
+ struct serial_struct serial_req;
+ unsigned long iobase;
+ int global_sys_irq;
+
+#ifdef SERIAL_DEBUG_ACPI
+ printk("Entering setup_serial_acpi()\n");
+#endif
+
+ /* Now get the table */
+ if (tablep == NULL) {
+ return;
+ }
+
+ acpi_ser_p = (acpi_ser_t *)tablep;
+
+ /*
+ * Perform a sanity check on the table. Table should have a
+ * signature of "SPCR" or "DBGP" and it should be at least 52 bytes
+ * long.
+ */
+ if ((strncmp(acpi_ser_p->signature, ACPI_SPCRT_SIGNATURE,
+ ACPI_SIG_LEN) != 0) &&
+ (strncmp(acpi_ser_p->signature, ACPI_DBGPT_SIGNATURE,
+ ACPI_SIG_LEN) != 0)) {
+ return;
+ }
+ if (acpi_ser_p->length < 52) {
+ return;
+ }
+
+ iobase = (((u64) acpi_ser_p->base_addr.addrh) << 32) | acpi_ser_p->base_addr.addrl;
+ global_sys_irq = (acpi_ser_p->global_int[3] << 24) |
+ (acpi_ser_p->global_int[2] << 16) |
+ (acpi_ser_p->global_int[1] << 8) |
+ acpi_ser_p->global_int[0];
+
+#ifdef SERIAL_DEBUG_ACPI
+ printk("setup_serial_acpi(): table pointer = 0x%p\n", acpi_ser_p);
+ printk(" sig = '%c%c%c%c'\n",
+ acpi_ser_p->signature[0],
+ acpi_ser_p->signature[1],
+ acpi_ser_p->signature[2],
+ acpi_ser_p->signature[3]);
+ printk(" length = %d\n", acpi_ser_p->length);
+ printk(" Rev = %d\n", acpi_ser_p->rev);
+ printk(" Interface type = %d\n", acpi_ser_p->intfc_type);
+ printk(" Base address = 0x%lX\n", iobase);
+ printk(" IRQ = %d\n", acpi_ser_p->irq);
+ printk(" Global System Int = %d\n", global_sys_irq);
+ printk(" Baud rate = ");
+ switch (acpi_ser_p->baud) {
+ case ACPI_SERIAL_BAUD_9600:
+ printk("9600\n");
+ break;
+
+ case ACPI_SERIAL_BAUD_19200:
+ printk("19200\n");
+ break;
+
+ case ACPI_SERIAL_BAUD_57600:
+ printk("57600\n");
+ break;
+
+ case ACPI_SERIAL_BAUD_115200:
+ printk("115200\n");
+ break;
+
+ default:
+ printk("Huh (%d)\n", acpi_ser_p->baud);
+ break;
+
+ }
+ if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_PCICONF_SPACE) {
+ printk(" PCI serial port:\n");
+ printk(" Bus %d, Device %d, Vendor ID 0x%x, Dev ID 0x%x\n",
+ acpi_ser_p->pci_bus, acpi_ser_p->pci_dev,
+ acpi_ser_p->pci_vendor_id, acpi_ser_p->pci_dev_id);
+ }
+#endif
+
+ /*
+ * Now build a serial_req structure to update the entry in
+ * rs_table for the headless console port.
+ */
+ switch (acpi_ser_p->intfc_type) {
+ case ACPI_SERIAL_INTFC_16550:
+ serial_req.type = PORT_16550;
+ serial_req.baud_base = BASE_BAUD;
+ break;
+
+ case ACPI_SERIAL_INTFC_16450:
+ serial_req.type = PORT_16450;
+ serial_req.baud_base = BASE_BAUD;
+ break;
+
+ default:
+ serial_req.type = PORT_UNKNOWN;
+ break;
+ }
+ if (strncmp(acpi_ser_p->signature, ACPI_SPCRT_SIGNATURE,
+ ACPI_SIG_LEN) == 0) {
+ serial_req.line = ACPI_SERIAL_CONSOLE_PORT;
+ }
+ else if (strncmp(acpi_ser_p->signature, ACPI_DBGPT_SIGNATURE,
+ ACPI_SIG_LEN) == 0) {
+ serial_req.line = ACPI_SERIAL_DEBUG_PORT;
+ }
+ /*
+ * Check if this is an I/O mapped address or a memory mapped address
+ */
+ if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_MEM_SPACE) {
+ serial_req.port = 0;
+ serial_req.port_high = 0;
+ serial_req.iomem_base = (void *)ioremap(iobase, 64);
+ serial_req.io_type = SERIAL_IO_MEM;
+ }
+ else if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_IO_SPACE) {
+ serial_req.port = (unsigned long) iobase & 0xffffffff;
+ serial_req.port_high = (unsigned long)(((u64)iobase) >> 32);
+ serial_req.iomem_base = NULL;
+ serial_req.io_type = SERIAL_IO_PORT;
+ }
+ else if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_PCICONF_SPACE) {
+ printk("WARNING: No support for PCI serial console\n");
+ return;
+ }
+
+ /*
+ * If the table does not have IRQ information, use 0 for IRQ.
+ * This will force rs_init() to probe for IRQ.
+ */
+ if (acpi_ser_p->length < 53) {
+ serial_req.irq = 0;
+ }
+ else {
+ if (acpi_ser_p->int_type &
+ (ACPI_SERIAL_INT_APIC | ACPI_SERIAL_INT_SAPIC)) {
+ serial_req.irq = global_sys_irq;
+ }
+ else if (acpi_ser_p->int_type & ACPI_SERIAL_INT_PCAT) {
+ serial_req.irq = acpi_ser_p->irq;
+ }
+ }
+
+ serial_req.flags = ASYNC_SKIP_TEST | ASYNC_BOOT_AUTOCONF | ASYNC_AUTO_IRQ;
+ serial_req.xmit_fifo_size = serial_req.custom_divisor = 0;
+ serial_req.close_delay = serial_req.hub6 = serial_req.closing_wait = 0;
+ serial_req.iomem_reg_shift = 0;
+ if (early_serial_setup(&serial_req) < 0) {
+ printk("early_serial_setup() for ACPI serial console port failed\n");
+ return;
+ }
+
+#ifdef SERIAL_DEBUG_ACPI
+ printk("Leaving setup_serial_acpi()\n");
+#endif
+}
diff -urN linux-davidm/drivers/char/serial.c lia64/drivers/char/serial.c
--- linux-davidm/drivers/char/serial.c Mon Nov 5 18:28:47 2001
+++ lia64/drivers/char/serial.c Mon Nov 5 21:11:43 2001
@@ -85,6 +85,11 @@
* SERIAL_PARANOIA_CHECK
* Check the magic number for the async_structure where
* ever possible.
+ *
+ * CONFIG_SERIAL_ACPI
+ * Enable support for serial console port and serial
+ * debug port as defined by the SPCR and DBGP tables in
+ * ACPI 2.0.
*/
#include <linux/config.h>
@@ -113,6 +118,10 @@
#endif
#endif
+#ifdef CONFIG_SERIAL_ACPI
+#define ENABLE_SERIAL_ACPI
+#endif
+
#if defined(CONFIG_ISAPNP)|| (defined(CONFIG_ISAPNP_MODULE) && defined(MODULE))
#ifndef ENABLE_SERIAL_PNP
#define ENABLE_SERIAL_PNP
@@ -2355,7 +2364,7 @@
autoconfig(info->state);
if ((info->state->flags & ASYNC_AUTO_IRQ) &&
- (info->state->port != 0) &&
+ (info->state->port != 0 || info->state->iomem_base != 0) &&
(info->state->type != PORT_UNKNOWN)) {
irq = detect_uart_irq(info->state);
if (irq > 0)
@@ -3384,6 +3393,10 @@
" ISAPNP"
#define SERIAL_OPT
#endif
+#ifdef ENABLE_SERIAL_ACPI
+ " SERIAL_ACPI"
+#define SERIAL_OPT
+#endif
#ifdef SERIAL_OPT
" enabled\n";
#else
@@ -5475,13 +5488,22 @@
continue;
if ( (state->flags & ASYNC_BOOT_AUTOCONF)
&& (state->flags & ASYNC_AUTO_IRQ)
- && (state->port != 0))
+ && (state->port != 0 || state->iomem_base != 0))
state->irq = detect_uart_irq(state);
- printk(KERN_INFO "ttyS%02d%s at 0x%04lx (irq = %d) is a %s\n",
- state->line + SERIAL_DEV_OFFSET,
- (state->flags & ASYNC_FOURPORT) ? " FourPort" : "",
- state->port, state->irq,
- uart_config[state->type].name);
+ if (state->io_type == SERIAL_IO_MEM) {
+ printk(KERN_INFO"ttyS%02d%s at 0x%px (irq = %d) is a %s\n",
+ state->line + SERIAL_DEV_OFFSET,
+ (state->flags & ASYNC_FOURPORT) ? " FourPort" : "",
+ state->iomem_base, state->irq,
+ uart_config[state->type].name);
+ }
+ else {
+ printk(KERN_INFO "ttyS%02d%s at 0x%04lx (irq = %d) is a %s\n",
+ state->line + SERIAL_DEV_OFFSET,
+ (state->flags & ASYNC_FOURPORT) ? " FourPort" : "",
+ state->port, state->irq,
+ uart_config[state->type].name);
+ }
tty_register_devfs(&serial_driver, 0,
serial_driver.minor_start + state->line);
tty_register_devfs(&callout_driver, 0,
diff -urN linux-davidm/drivers/scsi/scsi_merge.c lia64/drivers/scsi/scsi_merge.c
--- linux-davidm/drivers/scsi/scsi_merge.c Mon Nov 5 18:29:00 2001
+++ lia64/drivers/scsi/scsi_merge.c Mon Nov 5 22:18:02 2001
@@ -942,6 +942,7 @@
}
}
count++;
+ memset(sgpnt + count - 1, 0, sizeof(*sgpnt));
sgpnt[count - 1].address = bh->b_data;
sgpnt[count - 1].page = NULL;
sgpnt[count - 1].length += bh->b_size;
diff -urN linux-davidm/drivers/video/fbmem.c lia64/drivers/video/fbmem.c
--- linux-davidm/drivers/video/fbmem.c Wed Oct 24 10:17:32 2001
+++ lia64/drivers/video/fbmem.c Mon Nov 5 21:12:42 2001
@@ -609,6 +609,8 @@
vma->vm_flags |= VM_IO;
#elif defined(__sh__)
pgprot_val(vma->vm_page_prot) &= ~_PAGE_CACHABLE;
+#elif defined(__ia64__)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
#else
#warning What do we have to do here??
#endif
diff -urN linux-davidm/include/asm-ia64/io.h lia64/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Mon Nov 5 21:43:09 2001
+++ lia64/include/asm-ia64/io.h Mon Nov 5 21:37:14 2001
@@ -14,7 +14,7 @@
* mistake somewhere.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*/
@@ -25,7 +25,12 @@
#define __IA64_UNCACHED_OFFSET 0xc000000000000000 /* region 6 */
-#define IO_SPACE_LIMIT 0xffff
+/*
+ * The legacy I/O space defined by the ia64 architecture supports only 65536 ports, but
+ * large machines may have multiple other I/O spaces so we can't place any a priori limit
+ * on IO_SPACE_LIMIT. These additional spaces are described in ACPI.
+ */
+#define IO_SPACE_LIMIT 0xffffffffffffffffUL
# ifdef __KERNEL__
diff -urN linux-davidm/include/asm-ia64/pci.h lia64/include/asm-ia64/pci.h
--- linux-davidm/include/asm-ia64/pci.h Mon Nov 5 21:43:09 2001
+++ lia64/include/asm-ia64/pci.h Mon Nov 5 22:18:23 2001
@@ -60,7 +60,7 @@
#define pci_map_page(dev,pg,off,size,dir) \
pci_map_single((dev), page_address(pg) + (off), (size), (dir))
#define pci_unmap_page(dev,dma_addr,size,dir) \
- pci_unmap_single((dev), (dma_addr), (size), (dir)
+ pci_unmap_single((dev), (dma_addr), (size), (dir))
/* The ia64 platform always supports 64-bit addressing. */
#define pci_dac_dma_supported(pci_dev, mask) (1)
diff -urN linux-davidm/include/asm-ia64/pgtable.h lia64/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Tue Jul 31 10:30:09 2001
+++ lia64/include/asm-ia64/pgtable.h Mon Nov 5 21:37:16 2001
@@ -9,7 +9,7 @@
* in <asm/page.h> (currently 8192).
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
@@ -466,11 +466,21 @@
# endif /* !__ASSEMBLY__ */
/*
- * Identity-mapped regions use a large page size. KERNEL_PG_NUM is the
- * number of the (large) page frame that mapps the kernel.
+ * Identity-mapped regions use a large page size. We'll call such large pages
+ * "granules". If you can think of a better name that's unambiguous, let me
+ * know...
+ */
+#if defined(CONFIG_IA64_GRANULE_64MB)
+# define IA64_GRANULE_SHIFT _PAGE_SIZE_64M
+#elif defined(CONFIG_IA64_GRANULE_16MB)
+# define IA64_GRANULE_SHIFT _PAGE_SIZE_16M
+#endif
+#define IA64_GRANULE_SIZE (1 << IA64_GRANULE_SHIFT)
+/*
+ * log2() of the page size we use to map the kernel image (IA64_TR_KERNEL):
*/
-#define KERNEL_PG_SHIFT _PAGE_SIZE_64M
-#define KERNEL_PG_SIZE (1 << KERNEL_PG_SHIFT)
-#define KERNEL_PG_NUM ((KERNEL_START - PAGE_OFFSET) / KERNEL_PG_SIZE)
+#define KERNEL_TR_PAGE_SHIFT _PAGE_SIZE_64M
+#define KERNEL_TR_PAGE_SIZE (1 << KERNEL_TR_PAGE_SHIFT)
+#define KERNEL_TR_PAGE_NUM ((KERNEL_START - PAGE_OFFSET) / KERNEL_TR_PAGE_SIZE)
#endif /* _ASM_IA64_PGTABLE_H */
diff -urN linux-davidm/include/asm-ia64/scatterlist.h lia64/include/asm-ia64/scatterlist.h
--- linux-davidm/include/asm-ia64/scatterlist.h Wed Oct 24 10:18:00 2001
+++ lia64/include/asm-ia64/scatterlist.h Mon Nov 5 22:17:49 2001
@@ -2,13 +2,13 @@
#define _ASM_IA64_SCATTERLIST_H
/*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
struct scatterlist {
char *address; /* location data is to be transferred to */
- char *orig_address; /* Save away the original buffer address (used by pci-dma.c) */
+ void *page; /* stupid: SCSI code insists on a member of this name... */
unsigned int length; /* buffer length */
};
diff -urN linux-davidm/include/linux/acpi_serial.h lia64/include/linux/acpi_serial.h
--- linux-davidm/include/linux/acpi_serial.h Wed Dec 31 16:00:00 1969
+++ lia64/include/linux/acpi_serial.h Mon Nov 5 21:14:00 2001
@@ -0,0 +1,103 @@
+/*
+ * linux/include/linux/acpi_serial.h
+ *
+ * Copyright (C) 2000 Hewlett-Packard Co.
+ * Copyright (C) 2000 Khalid Aziz <khalid_aziz@hp.com>
+ *
+ * Definitions for ACPI defined serial ports (headless console and
+ * debug ports)
+ *
+ */
+
+extern void setup_serial_acpi(void *);
+
+/* ACPI table signatures */
+#define ACPI_SPCRT_SIGNATURE "SPCR"
+#define ACPI_DBGPT_SIGNATURE "DBGP"
+
+/* Interface type as defined in ACPI serial port tables */
+#define ACPI_SERIAL_INTFC_16550 0
+#define ACPI_SERIAL_INTFC_16450 1
+
+/* Interrupt types for ACPI serial port tables */
+#define ACPI_SERIAL_INT_PCAT 0x01
+#define ACPI_SERIAL_INT_APIC 0x02
+#define ACPI_SERIAL_INT_SAPIC 0x04
+
+/* Baud rates as defined in ACPI serial port tables */
+#define ACPI_SERIAL_BAUD_9600 3
+#define ACPI_SERIAL_BAUD_19200 4
+#define ACPI_SERIAL_BAUD_57600 6
+#define ACPI_SERIAL_BAUD_115200 7
+
+/* Parity as defined in ACPI serial port tables */
+#define ACPI_SERIAL_PARITY_NONE 0
+
+/* Flow control methods as defined in ACPI serial port tables */
+#define ACPI_SERIAL_FLOW_DCD 0x01
+#define ACPI_SERIAL_FLOW_RTS 0x02
+#define ACPI_SERIAL_FLOW_XON 0x04
+
+/* Terminal types as defined in ACPI serial port tables */
+#define ACPI_SERIAL_TERM_VT100 0
+#define ACPI_SERIAL_TERM_VT100X 1
+
+/* PCI Flags as defined by SPCR table */
+#define ACPI_SERIAL_PCIFLAG_PNP 0x00000001
+
+/* Space ID as defined in base address structure in ACPI serial port tables */
+#define ACPI_SERIAL_MEM_SPACE 0
+#define ACPI_SERIAL_IO_SPACE 1
+#define ACPI_SERIAL_PCICONF_SPACE 2
+
+/*
+ * Generic Register Address Structure - as defined by Microsoft
+ * in http://www.microsoft.com/hwdev/onnow/download/LFreeACPI.doc
+ *
+*/
+typedef struct {
+ u8 space_id;
+ u8 bit_width;
+ u8 bit_offset;
+ u8 resv;
+ u32 addrl;
+ u32 addrh;
+} gen_regaddr;
+
+/* Space ID for generic register address structure */
+#define REGADDR_SPACE_SYSMEM 0
+#define REGADDR_SPACE_SYSIO 1
+#define REGADDR_SPACE_PCICONFIG 2
+
+/* Serial Port Console Redirection and Debug Port Table formats */
+typedef struct {
+ u8 signature[4];
+ u32 length;
+ u8 rev;
+ u8 chksum;
+ u8 oemid[6];
+ u8 oem_tabid[8];
+ u32 oem_rev;
+ u8 creator_id[4];
+ u32 creator_rev;
+ u8 intfc_type;
+ u8 resv1[3];
+ gen_regaddr base_addr;
+ u8 int_type;
+ u8 irq;
+ u8 global_int[4];
+ u8 baud;
+ u8 parity;
+ u8 stop_bits;
+ u8 flow_ctrl;
+ u8 termtype;
+ u8 language;
+ u16 pci_dev_id;
+ u16 pci_vendor_id;
+ u8 pci_bus;
+ u8 pci_dev;
+ u8 pci_func;
+ u8 pci_flags[4];
+ u8 pci_seg;
+ u32 resv2;
+} acpi_ser_t;
diff -urN linux-davidm/include/linux/serial.h lia64/include/linux/serial.h
--- linux-davidm/include/linux/serial.h Fri Aug 10 18:13:47 2001
+++ lia64/include/linux/serial.h Mon Nov 5 21:37:52 2001
@@ -182,5 +182,11 @@
/* Allow complicated architectures to specify rs_table[] at run time */
extern int early_serial_setup(struct serial_struct *req);
+#ifdef CONFIG_ACPI
+/* tty ports reserved for the ACPI serial console port and debug port */
+#define ACPI_SERIAL_CONSOLE_PORT 4
+#define ACPI_SERIAL_DEBUG_PORT 5
+#endif
+
#endif /* __KERNEL__ */
#endif /* _LINUX_SERIAL_H */
diff -urN linux-davidm/kernel/printk.c lia64/kernel/printk.c
--- linux-davidm/kernel/printk.c Mon Nov 5 21:43:09 2001
+++ lia64/kernel/printk.c Mon Nov 5 21:16:22 2001
@@ -25,11 +25,10 @@
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h> /* For in_interrupt() */
-#include <linux/config.h>
#include <asm/uaccess.h>
-#ifdef CONFIG_MULTIQUAD
+#if defined(CONFIG_MULTIQUAD) || defined(CONFIG_IA64)
#define LOG_BUF_LEN (65536)
#elif defined(CONFIG_SMP)
#define LOG_BUF_LEN (32768)
diff -urN linux-davidm/mm/memory.c lia64/mm/memory.c
--- linux-davidm/mm/memory.c Mon Nov 5 21:43:09 2001
+++ lia64/mm/memory.c Mon Nov 5 21:16:33 2001
@@ -1338,7 +1338,7 @@
if (pmd) {
pte_t * pte = pte_alloc(mm, pmd, address);
if (pte)
- return handle_pte_fault(mm, vma, address, write_access, pte);
+ return handle_pte_fault(mm, vma, address, access_type, pte);
}
spin_unlock(&mm->page_table_lock);
return -1;
* Re: [Linux-ia64] kernel update (relative to 2.4.14)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (92 preceding siblings ...)
2001-11-06 6:59 ` [Linux-ia64] kernel update (relative to 2.4.14) David Mosberger
@ 2001-11-07 1:48 ` Keith Owens
2001-11-07 2:47 ` David Mosberger
` (121 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-11-07 1:48 UTC (permalink / raw)
To: linux-ia64
On Mon, 5 Nov 2001 22:59:53 -0800,
David Mosberger <davidm@hpl.hp.com> wrote:
> linux-2.4.14-ia64-011105.diff*
> - support non-legacy serial ports via ACPI
Select CONFIG_SERIAL_ACPI, get these errors.
acpi.c: In function `acpi20_parse':
acpi.c:462: `ACPI_SPCRT_SIG' undeclared (first use in this function)
acpi.c:462: (Each undeclared identifier is reported only once
acpi.c:462: for each function it appears in.)
acpi.c:462: `ACPI_SPCRT_SIG_LEN' undeclared (first use in this function)
acpi.c:463: `ACPI_DBGPT_SIG' undeclared (first use in this function)
acpi.c:463: `ACPI_DBGPT_SIG_LEN' undeclared (first use in this function)
acpi.c:404: warning: `madt' might be used uninitialized in this function
* Re: [Linux-ia64] kernel update (relative to 2.4.14)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (93 preceding siblings ...)
2001-11-07 1:48 ` Keith Owens
@ 2001-11-07 2:47 ` David Mosberger
2001-11-27 5:24 ` [Linux-ia64] kernel update (relative to 2.4.16) David Mosberger
` (120 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-11-07 2:47 UTC (permalink / raw)
To: linux-ia64
>>>>> On Wed, 07 Nov 2001 12:48:20 +1100, Keith Owens <kaos@ocs.com.au> said:
Keith> On Mon, 5 Nov 2001 22:59:53 -0800,
Keith> David Mosberger <davidm@hpl.hp.com> wrote:
>> linux-2.4.14-ia64-011105.diff*
>> - support non-legacy serial ports via ACPI
Keith> Select CONFIG_SERIAL_ACPI, get these errors.
Keith> acpi.c: In function `acpi20_parse':
Keith> acpi.c:462: `ACPI_SPCRT_SIG' undeclared (first use in this function)
Keith> acpi.c:462: (Each undeclared identifier is reported only once
Keith> acpi.c:462: for each function it appears in.)
Keith> acpi.c:462: `ACPI_SPCRT_SIG_LEN' undeclared (first use in this function)
Keith> acpi.c:463: `ACPI_DBGPT_SIG' undeclared (first use in this function)
Keith> acpi.c:463: `ACPI_DBGPT_SIG_LEN' undeclared (first use in this function)
Keith> acpi.c:404: warning: `madt' might be used uninitialized in this function
Yes, the patch that was mailed to me was missing the attached hunk.
If you apply it, the kernel should compile.
--david
--- clean/include/asm-ia64/acpi-ext.h Tue Nov 6 10:21:28 2001
+++ merge/include/asm-ia64/acpi-ext.h Tue Nov 6 10:48:10 2001
@@ -195,6 +195,12 @@
#define ACPI20_ENTRY_PIS_CPEI 3
#define ACPI_MAX_PLATFORM_IRQS 4
+#define ACPI_SPCRT_SIG "SPCR"
+#define ACPI_SPCRT_SIG_LEN 4
+
+#define ACPI_DBGPT_SIG "DBGP"
+#define ACPI_DBGPT_SIG_LEN 4
+
extern int acpi20_parse(acpi20_rsdp_t *);
extern int acpi_parse(acpi_rsdp_t *);
extern const char *acpi_get_sysname (void);
* [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (94 preceding siblings ...)
2001-11-07 2:47 ` David Mosberger
@ 2001-11-27 5:24 ` David Mosberger
2001-11-27 13:04 ` Andreas Schwab
` (119 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-11-27 5:24 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch for 2.4.16 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
linux-2.4.16-ia64-011126.diff*
change log:
- update for 2.4.15/2.4.16:
- add show_trace_task()
- lots of SGI SN updates (most in "sn" subdirectories)
- IA-32 brk() fix by Don Dugger
- drop some of the more cosmetic fixes (they will be in the 2.5 tree
instead)
- fix EARLY_PRINTK (Tony Luck, I think)
- in iosapic.c, move RTE programming to a saner place (Alex Williamson)
- fix alignment of bootmap
- move machvec_init() a little earlier and call platform_cpu_init() at
end of cpu_init() (Jack Steiner)
- swiotlb.c: support "page" member in scatterlist
- fix asm clobbers in spinlock.h
- add more ACPI tables to asm-ia64/acpi-ext.h
This kernel has been tested with gcc-3.0 on Big Sur and HP Ski
simulator. As usual, your mileage may vary.
Enjoy,
--david
* Re: [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (95 preceding siblings ...)
2001-11-27 5:24 ` [Linux-ia64] kernel update (relative to 2.4.16) David Mosberger
@ 2001-11-27 13:04 ` Andreas Schwab
2001-11-27 17:02 ` John Hesterberg
` (118 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2001-11-27 13:04 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@hpl.hp.com> writes:
|> An updated ia64 patch for 2.4.16 is now available at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
|>
|> linux-2.4.16-ia64-011126.diff*
|>
|> change log:
|>
|> - update for 2.4.15/2.4.16:
|> - add show_trace_task()
|> - lots of SGI SN updates (most in "sn" subdirectories)
|> - IA-32 brk() fix by Don Dugger
|> - drop some of the more cosmetic fixes (they will be in the 2.5 tree
|> instead)
|> - fix EARLY_PRINTK (Tony Luck, I think)
|> - in iosapic.c, move RTE programming to a saner place (Alex Williamson)
|> - fix alignment of bootmap
|> - move machvec_init() a little earlier and call platform_cpu_init() at
|> end of cpu_init() (Jack Steiner)
|> - swiotlb.c: support "page" member in scatterlist
|> - fix asm clobbers in spinlock.h
|> - add more ACPI tables to asm-ia64/acpi-ext.h
|>
|> This kernel has been tested with gcc-3.0 on Big Sur and HP Ski
|> simulator. As usual, your mileage may vary.
You have renamed numnodes in mm/numa.c, but not in include/linux/mmzone.h,
and there are many occurrences of "extern int num_compact_nodes" all over
the place. I'm wondering why it was renamed in the first place.
Andreas.
--
Andreas Schwab "And now for something
Andreas.Schwab@suse.de completely different."
SuSE Labs, SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
* Re: [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (96 preceding siblings ...)
2001-11-27 13:04 ` Andreas Schwab
@ 2001-11-27 17:02 ` John Hesterberg
2001-11-27 22:03 ` John Hesterberg
` (117 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: John Hesterberg @ 2001-11-27 17:02 UTC (permalink / raw)
To: linux-ia64
On Tue, Nov 27, 2001 at 02:04:15PM +0100, Andreas Schwab wrote:
> David Mosberger <davidm@hpl.hp.com> writes:
>
> |> An updated ia64 patch for 2.4.16 is now available at
> |> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
> |>
> |> linux-2.4.16-ia64-011126.diff*
> |>
> |> change log:
> |>
> |> - update for 2.4.15/2.4.16:
> |> - add show_trace_task()
> |> - lots of SGI SN updates (most in "sn" subdirectories)
> |> - IA-32 brk() fix by Don Dugger
> |> - drop some of the more cosmetic fixes (they will be in the 2.5 tree
> |> instead)
> |> - fix EARLY_PRINTK (Tony Luck, I think)
> |> - in iosapic.c, move RTE programming to a saner place (Alex Williamson)
> |> - fix alignment of bootmap
> |> - move machvec_init() a little earlier and call platform_cpu_init() at
> |> end of cpu_init() (Jack Steiner)
> |> - swiotlb.c: support "page" member in scatterlist
> |> - fix asm clobbers in spinlock.h
> |> - add more ACPI tables to asm-ia64/acpi-ext.h
> |>
> |> This kernel has been tested with gcc-3.0 on Big Sur and HP Ski
> |> simulator. As usual, your mileage may vary.
>
> You have renamed numnodes in mm/numa.c, but not in include/linux/mmzone.h,
> and there are many occurrences of "extern int num_compact_nodes" all over
> the place. I'm wondering why it was renamed in the first place.
>
> Andreas.
It was changed because numnodes is ambiguous. This is part of a
significant numa/discontig memory change that will be submitted to
the ia64 2.5 tree. This little piece came in as part of the SGI SN
updates: the new name is clearer, the SN code has lots of references
to it, and only a couple exist in the non-SN code.
Of course, as you point out, I didn't get the mmzone.h change in
right; I'll send an update for that.
The numa/discontig work is part of the Linux Scalability Effort
that is on sourceforge. For details, see:
http://sourceforge.net/projects/discontig
The latest patch is:
http://prdownloads.sourceforge.net/discontig/discontig-2.4.14.gz
John
* Re: [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (97 preceding siblings ...)
2001-11-27 17:02 ` John Hesterberg
@ 2001-11-27 22:03 ` John Hesterberg
2001-11-29 0:41 ` David Mosberger
` (116 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: John Hesterberg @ 2001-11-27 22:03 UTC (permalink / raw)
To: linux-ia64
> > You have renamed numnodes in mm/numa.c, but not in include/linux/mmzone.h,
> > and there are many occurrences of "extern int num_compact_nodes" all over
> > the place. I'm wondering why it was renamed in the first place.
> >
> > Andreas.
>
> It was changed because numnodes is ambiguous. This is part of a
> significant numa/discontig memory change that will be submitted to
> the ia64 2.5 tree. This little piece came in as part of the SGI SN
> updates, since it's more clear, and the SN code has lots of
> references to it, and only a couple in the non-SN code.
Shoot. I didn't notice that other platforms also use this.
I didn't mean to bite off that much. Below is a patch putting
numnodes back in the non-SN files, and I'll let the numa/discontig
effort sort out if they really want to change the name.
John
diff -Naur 2.4.16-ia64/include/asm-ia64/acpi-ext.h 16i/include/asm-ia64/acpi-ext.h
--- 2.4.16-ia64/include/asm-ia64/acpi-ext.h Tue Nov 27 15:26:31 2001
+++ 16i/include/asm-ia64/acpi-ext.h Tue Nov 27 21:44:11 2001
@@ -297,7 +297,7 @@
extern int pxm_to_nid_map[MAX_PXM_DOMAINS]; /* _PXM to logical node ID map */
extern int nid_to_pxm_map[PLAT_MAX_COMPACT_NODES]; /* logical node ID to _PXM map */
-extern int num_compact_nodes; /* total number of nodes in system */
+extern int numnodes; /* total number of nodes in system */
extern int num_memory_chunks; /* total number of memory chunks */
/*
diff -Naur 2.4.16-ia64/mm/numa.c 16i/mm/numa.c
--- 2.4.16-ia64/mm/numa.c Tue Nov 27 15:26:32 2001
+++ 16i/mm/numa.c Tue Nov 27 21:40:38 2001
@@ -9,7 +9,7 @@
#include <linux/mmzone.h>
#include <linux/spinlock.h>
-int num_compact_nodes = 1; /* Initialized for UMA platforms */
+int numnodes = 1; /* Initialized for UMA platforms */
static bootmem_data_t contig_bootmem_data;
pg_data_t contig_page_data = { bdata: &contig_bootmem_data };
* Re: [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (98 preceding siblings ...)
2001-11-27 22:03 ` John Hesterberg
@ 2001-11-29 0:41 ` David Mosberger
2001-12-05 15:25 ` [Linux-ia64] kernel update (relative to 2.4.10) n0ano
` (115 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-11-29 0:41 UTC (permalink / raw)
To: linux-ia64
The ia64 patch for 2.4.16 has been updated (in the usual place). The
patch fixes the NUMA issue reported by Andreas. There were also a
couple of cleanups that should, in theory, make it possible to use
this source tree for building both x86 and ia64 kernels. Thanks to
John Hesterberg for some of those patches. I suspect this will make
some people happy and others unhappy (since it slightly increases the
patch size), but the additional changes required to the x86-specific
files are so small that it shouldn't be a major issue.
--david
* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (99 preceding siblings ...)
2001-11-29 0:41 ` David Mosberger
@ 2001-12-05 15:25 ` n0ano
2001-12-15 5:13 ` [Linux-ia64] kernel update (relative to 2.4.16) David Mosberger
` (114 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: n0ano @ 2001-12-05 15:25 UTC (permalink / raw)
To: linux-ia64
Sorry about this out-of-date message. Due to my job upheaval at VA,
there was a period when this list had no administrator. That got
resolved today, and I've been clearing out old administrivia.
I probably should have just deep-sixed this message; oh well.
On Tue, Sep 25, 2001 at 12:13:28AM -0700, David Mosberger wrote:
> Here is a long-awaited kernel update which brings us in sync with
> 2.4.10. As usual, it's available at
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
>
> linux-2.4.10-ia64-010924.diff*
>
--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
n0ano@indstorage.com
Ph: 303/652-0870x117
* [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (100 preceding siblings ...)
2001-12-05 15:25 ` [Linux-ia64] kernel update (relative to 2.4.10) n0ano
@ 2001-12-15 5:13 ` David Mosberger
2001-12-15 8:12 ` Keith Owens
` (113 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-12-15 5:13 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch for 2.4.16 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.16-ia64-011214.diff*
change log (relative to 2.4.16+ia64-011128):
- Add /proc/sal/ support (Jesse Barnes)
- Add memory barrier before releasing a lock with clear_bit (Jack Steiner).
- Cache-align BKL (Jack Steiner).
- Fix kernel exit path to deliver only one signal at a time.
- Disable interrupts during re-scheduling & signal delivery checking.
- Fix performance bug in SW I/O TLB (Tony Luck)
- Fix couple of buglets in simeth and simserial (Stephane & me)
- Don't call consoles on APs until CPU is fully initialized
(Andrew Morton, NOMURA, Jun'ichi, me)
Enjoy,
--david
diff -urN linux-davidm/arch/ia64/kernel/Makefile lia64-2.4/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Mon Nov 26 11:18:20 2001
+++ lia64-2.4/arch/ia64/kernel/Makefile Fri Dec 14 15:48:59 2001
@@ -14,7 +14,7 @@
export-objs := ia64_ksyms.o
obj-y := acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_ia64.o irq_lsapic.o ivt.o \
- machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
+ machvec.o pal.o process.o perfmon.o ptrace.o sal.o salinfo.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
obj-$(CONFIG_IA64_GENERIC) += iosapic.o
obj-$(CONFIG_IA64_DIG) += iosapic.o
diff -urN linux-davidm/arch/ia64/kernel/entry.S lia64-2.4/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Mon Nov 26 11:18:20 2001
+++ lia64-2.4/arch/ia64/kernel/entry.S Tue Dec 4 19:35:07 2001
@@ -521,6 +521,8 @@
;;
mov.ret.sptk rp=r14,.restart
.restart:
+ // need_resched and signals atomic test
+(pUser) rsm psr.i
adds r17=IA64_TASK_NEED_RESCHED_OFFSET,r13
adds r18=IA64_TASK_SIGPENDING_OFFSET,r13
#ifdef CONFIG_PERFMON
@@ -539,8 +541,6 @@
(pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0?
(pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0?
;;
- adds r2=PT(R8)+16,r12
- adds r3=PT(R9)+16,r12
#ifdef CONFIG_PERFMON
(p9) br.call.spnt.many b7=pfm_block_on_overflow
#endif
@@ -549,7 +549,10 @@
#else
(p7) br.call.spnt.many b7=schedule
#endif
-(p8) br.call.spnt.many b7=handle_signal_delivery // check & deliver pending signals
+(p8) br.call.spnt.many rp=handle_signal_delivery // check & deliver pending signals (once)
+ ;;
+.ret9: adds r2=PT(R8)+16,r12
+ adds r3=PT(R9)+16,r12
;;
// start restoring the state saved on the kernel stack (struct pt_regs):
ld8.fill r8=[r2],16
@@ -582,7 +585,7 @@
ld8.fill r30=[r2],16
ld8.fill r31=[r3],16
;;
- rsm psr.i | psr.ic // initiate turning off of interrupts & interruption collection
+ rsm psr.i | psr.ic // initiate turning off of interrupt and interruption collection
invala // invalidate ALAT
;;
ld8 r1=[r2],16 // ar.ccv
@@ -601,7 +604,7 @@
mov ar.fpsr=r13
mov b0=r14
;;
- srlz.i // ensure interrupts & interruption collection are off
+ srlz.i // ensure interruption collection is off
mov b7=r15
;;
bsw.0 // switch back to bank 0
diff -urN linux-davidm/arch/ia64/kernel/irq.c lia64-2.4/arch/ia64/kernel/irq.c
--- linux-davidm/arch/ia64/kernel/irq.c Fri Dec 14 20:03:42 2001
+++ lia64-2.4/arch/ia64/kernel/irq.c Mon Dec 3 11:07:11 2001
@@ -291,6 +291,7 @@
break;
/* Duh, we have to loop. Release the lock to avoid deadlocks */
+ smp_mb__before_clear_bit(); /* need barrier before releasing lock... */
clear_bit(0,&global_irq_lock);
for (;;) {
diff -urN linux-davidm/arch/ia64/kernel/sal.c lia64-2.4/arch/ia64/kernel/sal.c
--- linux-davidm/arch/ia64/kernel/sal.c Mon Nov 26 11:18:21 2001
+++ lia64-2.4/arch/ia64/kernel/sal.c Fri Dec 14 15:50:57 2001
@@ -18,7 +18,8 @@
#include <asm/sal.h>
#include <asm/pal.h>
-spinlock_t sal_lock = SPIN_LOCK_UNLOCKED;
+spinlock_t sal_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
+unsigned long sal_platform_features;
static struct {
void *addr; /* function entry point */
@@ -76,7 +77,7 @@
return str;
}
-static void __init
+static void __init
ia64_sal_handler_init (void *entry_point, void *gpval)
{
/* fill in the SAL procedure descriptor and point ia64_sal to it: */
@@ -102,7 +103,7 @@
if (strncmp(systab->signature, "SST_", 4) != 0)
printk("bad signature in system table!");
- /*
+ /*
* revisions are coded in BCD, so %x does the job for us
*/
printk("SAL v%x.%02x: oem=%.32s, product=%.32s\n",
@@ -152,12 +153,12 @@
case SAL_DESC_PLATFORM_FEATURE:
{
struct ia64_sal_desc_platform_feature *pf = (void *) p;
+ sal_platform_features = pf->feature_mask;
printk("SAL: Platform features ");
- if (pf->feature_mask & (1 << 0))
+ if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_BUS_LOCK)
printk("BusLock ");
-
- if (pf->feature_mask & (1 << 1)) {
+ if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT) {
printk("IRQ_Redirection ");
#ifdef CONFIG_SMP
if (no_int_routing)
@@ -166,15 +167,17 @@
smp_int_redirect |= SMP_IRQ_REDIRECTION;
#endif
}
- if (pf->feature_mask & (1 << 2)) {
+ if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT) {
printk("IPI_Redirection ");
#ifdef CONFIG_SMP
- if (no_int_routing)
+ if (no_int_routing)
smp_int_redirect &= ~SMP_IPI_REDIRECTION;
else
smp_int_redirect |= SMP_IPI_REDIRECTION;
#endif
}
+ if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT)
+ printk("ITC_Drift ");
printk("\n");
break;
}
diff -urN linux-davidm/arch/ia64/kernel/salinfo.c lia64-2.4/arch/ia64/kernel/salinfo.c
--- linux-davidm/arch/ia64/kernel/salinfo.c Wed Dec 31 16:00:00 1969
+++ lia64-2.4/arch/ia64/kernel/salinfo.c Fri Dec 14 20:07:14 2001
@@ -0,0 +1,105 @@
+/*
+ * salinfo.c
+ *
+ * Creates entries in /proc/sal for various system features.
+ *
+ * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved.
+ *
+ * 10/30/2001 jbarnes@sgi.com copied much of Stephane's palinfo
+ * code to create this file
+ */
+
+#include <linux/types.h>
+#include <linux/proc_fs.h>
+#include <linux/module.h>
+
+#include <asm/sal.h>
+
+MODULE_AUTHOR("Jesse Barnes <jbarnes@sgi.com>");
+MODULE_DESCRIPTION("/proc interface to IA-64 SAL features");
+MODULE_LICENSE("GPL");
+
+int salinfo_read(char *page, char **start, off_t off, int count, int *eof, void *data);
+
+typedef struct {
+ const char *name; /* name of the proc entry */
+ unsigned long feature; /* feature bit */
+ struct proc_dir_entry *entry; /* registered entry (removal) */
+} salinfo_entry_t;
+
+/*
+ * List {name,feature} pairs for every entry in /proc/sal/<feature>
+ * that this module exports
+ */
+static salinfo_entry_t salinfo_entries[]={
+ { "bus_lock", IA64_SAL_PLATFORM_FEATURE_BUS_LOCK, },
+ { "irq_redirection", IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT, },
+ { "ipi_redirection", IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT, },
+ { "itc_drift", IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT, },
+};
+
+#define NR_SALINFO_ENTRIES (sizeof(salinfo_entries)/sizeof(salinfo_entry_t))
+
+/*
+ * One for each feature and one more for the directory entry...
+ */
+static struct proc_dir_entry *salinfo_proc_entries[NR_SALINFO_ENTRIES + 1];
+
+static int __init
+salinfo_init(void)
+{
+ struct proc_dir_entry *salinfo_dir; /* /proc/sal dir entry */
+ struct proc_dir_entry **sdir = salinfo_proc_entries; /* keeps track of every entry */
+ int i;
+
+ salinfo_dir = proc_mkdir("sal", NULL);
+
+ for (i=0; i < NR_SALINFO_ENTRIES; i++) {
+ /* pass the feature bit in question as misc data */
+ *sdir++ = create_proc_read_entry (salinfo_entries[i].name, 0, salinfo_dir,
+ salinfo_read, (void *)salinfo_entries[i].feature);
+ }
+ *sdir++ = salinfo_dir;
+
+ return 0;
+}
+
+static void __exit
+salinfo_exit(void)
+{
+ int i = 0;
+
+ for (i = 0; i < NR_SALINFO_ENTRIES ; i++) {
+ if (salinfo_proc_entries[i])
+ remove_proc_entry (salinfo_proc_entries[i]->name, NULL);
+ }
+}
+
+/*
+ * 'data' contains an integer that corresponds to the feature we're
+ * testing
+ */
+static int
+salinfo_read(char *page, char **start, off_t off, int count, int *eof, void *data)
+{
+ int len = 0;
+
+ MOD_INC_USE_COUNT;
+
+ len = sprintf(page, (sal_platform_features & (unsigned long)data) ? "1\n" : "0\n");
+
+ if (len <= off+count) *eof = 1;
+
+ *start = page + off;
+ len -= off;
+
+ if (len>count) len = count;
+ if (len<0) len = 0;
+
+ MOD_DEC_USE_COUNT;
+
+ return len;
+}
+
+module_init(salinfo_init);
+module_exit(salinfo_exit);
diff -urN linux-davidm/arch/ia64/kernel/smp.c lia64-2.4/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c Mon Nov 26 11:18:24 2001
+++ lia64-2.4/arch/ia64/kernel/smp.c Fri Dec 14 14:13:03 2001
@@ -51,7 +51,7 @@
#include <asm/mca.h>
/* The 'big kernel lock' */
-spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
+spinlock_t kernel_flag __cacheline_aligned = SPIN_LOCK_UNLOCKED;
/*
* Structure and data for smp_call_function(). This is designed to minimise static memory
diff -urN linux-davidm/arch/ia64/lib/swiotlb.c lia64-2.4/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Fri Dec 14 20:03:42 2001
+++ lia64-2.4/arch/ia64/lib/swiotlb.c Mon Dec 3 13:52:19 2001
@@ -27,6 +27,16 @@
#define ALIGN(val, align) ((unsigned long) \
(((unsigned long) (val) + ((align) - 1)) & ~((align) - 1)))
+#define OFFSET(val,align) ((unsigned long) \
+ ( (val) & ( (align) - 1)))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2. What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE 128
+
/*
* log of the size of each IO TLB slab. The number of slabs is command line controllable.
*/
@@ -65,10 +75,15 @@
setup_io_tlb_npages (char *str)
{
io_tlb_nslabs = simple_strtoul(str, NULL, 0) << (PAGE_SHIFT - IO_TLB_SHIFT);
+
+ /* avoid tail segment of size < IO_TLB_SEGSIZE */
+ io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+
return 1;
}
__setup("swiotlb=", setup_io_tlb_npages);
+
/*
* Statically reserve bounce buffer space and initialize bounce buffer data structures for
* the software IO TLB used to implement the PCI DMA API.
@@ -88,12 +103,12 @@
/*
* Allocate and initialize the free list array. This array is used
- * to find contiguous free memory regions of size 2^IO_TLB_SHIFT between
- * io_tlb_start and io_tlb_end.
+ * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+ * between io_tlb_start and io_tlb_end.
*/
io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
for (i = 0; i < io_tlb_nslabs; i++)
- io_tlb_list[i] = io_tlb_nslabs - i;
+ io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
io_tlb_index = 0;
io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
@@ -120,7 +135,7 @@
if (size > (1 << PAGE_SHIFT))
stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
else
- stride = nslots;
+ stride = 1;
if (!nslots)
BUG();
@@ -147,7 +162,8 @@
for (i = index; i < index + nslots; i++)
io_tlb_list[i] = 0;
- for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
+ for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1)
+ && io_tlb_list[i]; i--)
io_tlb_list[i] = ++count;
dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
@@ -213,7 +229,8 @@
*/
spin_lock_irqsave(&io_tlb_lock, flags);
{
- int count = ((index + nslots) < io_tlb_nslabs ? io_tlb_list[index + nslots] : 0);
+ int count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+ io_tlb_list[index + nslots] : 0);
/*
* Step 1: return the slots to the free list, merging the slots with
* superceeding slots
@@ -224,7 +241,8 @@
* Step 2: merge the returned slots with the preceeding slots, if
* available (non zero)
*/
- for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--)
+ for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) &&
+ io_tlb_list[i]; i--)
io_tlb_list[i] = ++count;
}
spin_unlock_irqrestore(&io_tlb_lock, flags);
diff -urN linux-davidm/drivers/char/simserial.c lia64-2.4/drivers/char/simserial.c
--- linux-davidm/drivers/char/simserial.c Fri Dec 14 20:03:44 2001
+++ lia64-2.4/drivers/char/simserial.c Mon Dec 3 11:36:02 2001
@@ -958,7 +958,7 @@
state->port, state->irq);
}
-int rs_read_proc(char *page, char **start, off_t off, int count,
+static int rs_read_proc(char *page, char **start, off_t off, int count,
int *eof, void *data)
{
int i, len = 0, l;
diff -urN linux-davidm/drivers/net/simeth.c lia64-2.4/drivers/net/simeth.c
--- linux-davidm/drivers/net/simeth.c Fri Dec 14 20:03:44 2001
+++ lia64-2.4/drivers/net/simeth.c Fri Dec 14 14:36:09 2001
@@ -1,16 +1,14 @@
/*
* Simulated Ethernet Driver
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 Stephane Eranain <eranian@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 Stephane Eranain <eranian@hpl.hp.com>
*/
#include <linux/config.h>
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/in.h>
-#include <linux/malloc.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/errno.h>
@@ -26,7 +24,6 @@
#include <asm/system.h>
#include <asm/irq.h>
-
#define SIMETH_RECV_MAX 10
/*
@@ -47,12 +44,7 @@
#define NETWORK_INTR 8
-/*
- * This structure is need for the module version
- * It hasn't been tested yet
- */
struct simeth_local {
- struct net_device *next_module;
struct net_device_stats stats;
int simfd; /* descriptor in the simulator */
};
@@ -67,7 +59,7 @@
static void set_multicast_list(struct net_device *dev);
static int simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr);
-static char *simeth_version="v0.2";
+static char *simeth_version="0.3";
/*
* This variable is used to establish a mapping between the Linux/ia64 kernel
@@ -89,7 +81,7 @@
static volatile unsigned int card_count; /* how many cards "found" so far */
-static int simeth_debug=0; /* set to 1 to get debug information */
+static int simeth_debug; /* set to 1 to get debug information */
/*
* Used to catch IFF_UP & IFF_DOWN events
@@ -122,7 +114,15 @@
int __init
simeth_probe (void)
{
- return simeth_probe1();
+ int r;
+
+ printk("simeth: v%s\n", simeth_version);
+
+ r = simeth_probe1();
+
+ if (r == 0) register_netdevice_notifier(&simeth_dev_notifier);
+
+ return r;
}
extern long ia64_ssc (long, long, long, long, int);
@@ -194,7 +194,7 @@
int fd, i;
/*
- * XXX Fix me
+ * XXX Fix me
* let's support just one card for now
*/
if (test_and_set_bit(0, &card_count))
@@ -225,7 +225,6 @@
local = dev->priv;
local->simfd = fd; /* keep track of underlying file descriptor */
- local->next_module = NULL;
dev->open = simeth_open;
dev->stop = simeth_close;
@@ -236,24 +235,13 @@
/* Fill in the fields of the device structure with ethernet-generic values. */
ether_setup(dev);
- printk("simeth: %s alpha\n", simeth_version);
printk("%s: hosteth=%s simfd=%d, HwAddr", dev->name, simeth_device, local->simfd);
for(i = 0; i < ETH_ALEN; i++) {
printk(" %2.2x", dev->dev_addr[i]);
}
printk(", IRQ %d\n", dev->irq);
-#ifdef MODULE
- local->next_module = simeth_dev;
- simeth_dev = dev;
-#endif
- /*
- * XXX Fix me
- * would not work with more than one device !
- */
- register_netdevice_notifier(&simeth_dev_notifier);
-
- return 0;
+ return 0;
}
/*
@@ -268,7 +256,6 @@
}
netif_start_queue(dev);
- MOD_INC_USE_COUNT;
return 0;
}
@@ -355,8 +342,6 @@
free_irq(dev->irq, dev);
- MOD_DEC_USE_COUNT;
-
return 0;
}
@@ -545,52 +530,4 @@
}
#endif
-
-#ifdef MODULE
-static int
-simeth_init(void)
-{
- unsigned int cards_found = 0;
-
- /* iterate over probe */
-
- while ( simeth_probe1() == 0 ) cards_found++;
-
- return cards_found ? 0 : -ENODEV;
-}
-
-
-int
-init_module(void)
-{
- simeth_dev = NULL;
-
- /* the register_netdev is done "indirectly by ether_initdev() */
-
- return simeth_init();
-}
-
-void
-cleanup_module(void)
-{
- struct net_device *next;
-
- while ( simeth_dev ) {
-
- next = ((struct simeth_private *)simeth_dev->priv)->next_module;
-
- unregister_netdev(simeth_dev);
-
- kfree(simeth_dev);
-
- simeth_dev = next;
- }
- /*
- * XXX fix me
- * not clean wihen multiple devices
- */
- unregister_netdevice_notifier(&simeth_dev_notifier);
-}
-#else /* !MODULE */
__initcall(simeth_probe);
-#endif /* !MODULE */
diff -urN linux-davidm/include/asm-ia64/bitops.h lia64-2.4/include/asm-ia64/bitops.h
--- linux-davidm/include/asm-ia64/bitops.h Tue Jul 31 10:30:09 2001
+++ lia64-2.4/include/asm-ia64/bitops.h Fri Dec 14 17:16:48 2001
@@ -57,10 +57,10 @@
}
/*
- * clear_bit() doesn't provide any barrier for the compiler.
+ * clear_bit() has "acquire" semantics.
*/
#define smp_mb__before_clear_bit() smp_mb()
-#define smp_mb__after_clear_bit() smp_mb()
+#define smp_mb__after_clear_bit() do { /* skip */; } while (0)
/**
* clear_bit - Clears a bit in memory
diff -urN linux-davidm/include/asm-ia64/hardirq.h lia64-2.4/include/asm-ia64/hardirq.h
--- linux-davidm/include/asm-ia64/hardirq.h Fri Dec 14 20:03:45 2001
+++ lia64-2.4/include/asm-ia64/hardirq.h Fri Dec 14 17:21:51 2001
@@ -70,6 +70,7 @@
/* if we didn't own the irq lock, just ignore.. */
if (global_irq_holder == cpu) {
global_irq_holder = NO_PROC_ID;
+ smp_mb__before_clear_bit(); /* need barrier before releasing lock... */
clear_bit(0,&global_irq_lock);
}
}
diff -urN linux-davidm/include/asm-ia64/sal.h lia64-2.4/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h Mon Nov 26 11:19:18 2001
+++ lia64-2.4/include/asm-ia64/sal.h Fri Dec 14 17:57:29 2001
@@ -149,6 +149,7 @@
#define IA64_SAL_PLATFORM_FEATURE_BUS_LOCK (1 << 0)
#define IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT (1 << 1)
#define IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT (1 << 2)
+#define IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT (1 << 3)
typedef struct ia64_sal_desc_platform_feature {
u8 type;
@@ -775,5 +776,7 @@
*scratch_buf_size_needed = isrv.v1;
return isrv.status;
}
+
+extern unsigned long sal_platform_features;
#endif /* _ASM_IA64_PAL_H */
diff -urN linux-davidm/include/asm-ia64/spinlock.h lia64-2.4/include/asm-ia64/spinlock.h
--- linux-davidm/include/asm-ia64/spinlock.h Fri Dec 14 20:03:47 2001
+++ lia64-2.4/include/asm-ia64/spinlock.h Fri Dec 14 17:21:40 2001
@@ -159,10 +159,10 @@
:: "r"(rw) : "ar.ccv", "p7", "r2", "r29", "memory"); \
} while(0)
-/*
- * clear_bit() has "acq" semantics; we're really need "rel" semantics,
- * but for simplicity, we simply do a fence for now...
- */
-#define write_unlock(x) ({clear_bit(31, (x)); mb();})
+#define write_unlock(x) \
+({ \
+ smp_mb__before_clear_bit(); /* need barrier before releasing lock... */ \
+ clear_bit(31, (x)); \
+})
#endif /* _ASM_IA64_SPINLOCK_H */
diff -urN linux-davidm/include/asm-ia64/system.h lia64-2.4/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Mon Nov 26 11:19:18 2001
+++ lia64-2.4/include/asm-ia64/system.h Fri Dec 14 11:02:39 2001
@@ -405,6 +405,10 @@
ia64_psr(ia64_task_regs(prev))->dfh = 1; \
__switch_to(prev,next,last); \
} while (0)
+
+/* Return true if this CPU can call the console drivers in printk() */
+#define arch_consoles_callable() (cpu_online_map & (1UL << smp_processor_id()))
+
#else
# define switch_to(prev,next,last) do { \
ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
diff -urN linux-davidm/kernel/printk.c lia64-2.4/kernel/printk.c
--- linux-davidm/kernel/printk.c Fri Dec 14 20:03:47 2001
+++ lia64-2.4/kernel/printk.c Fri Dec 14 11:02:39 2001
@@ -38,6 +38,10 @@
#define LOG_BUF_MASK (LOG_BUF_LEN-1)
+#ifndef arch_consoles_callable
+#define arch_consoles_callable() (1)
+#endif
+
/* printk's without a loglevel use this.. */
#define DEFAULT_MESSAGE_LOGLEVEL 4 /* KERN_WARNING */
@@ -445,6 +449,14 @@
log_level_unknown = 1;
}
+ if (!arch_consoles_callable()) {
+ /*
+ * On some architectures, the consoles are not usable
+ * on secondary CPUs early in the boot process.
+ */
+ spin_unlock_irqrestore(&logbuf_lock, flags);
+ goto out;
+ }
if (!down_trylock(&console_sem)) {
/*
* We own the drivers. We can drop the spinlock and let
@@ -461,6 +473,7 @@
*/
spin_unlock_irqrestore(&logbuf_lock, flags);
}
+out:
return printed_len;
}
EXPORT_SYMBOL(printk);
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (101 preceding siblings ...)
2001-12-15 5:13 ` [Linux-ia64] kernel update (relative to 2.4.16) David Mosberger
@ 2001-12-15 8:12 ` Keith Owens
2001-12-16 12:21 ` [Linux-ia64] kernel update (relative to 2.4.10) Zach, Yoav
` (112 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2001-12-15 8:12 UTC (permalink / raw)
To: linux-ia64
On Fri, 14 Dec 2001 21:13:46 -0800,
David Mosberger <davidm@hpl.hp.com> wrote:
>An updated ia64 patch for 2.4.16 is now available at
>ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
>
> linux-2.4.16-ia64-011214.diff*
David, any reason you did not put that diff under /v2.4?
^ permalink raw reply [flat|nested] 217+ messages in thread
* RE: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (102 preceding siblings ...)
2001-12-15 8:12 ` Keith Owens
@ 2001-12-16 12:21 ` Zach, Yoav
2001-12-17 17:11 ` n0ano
` (111 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Zach, Yoav @ 2001-12-16 12:21 UTC (permalink / raw)
To: linux-ia64
On September 25, 2001, David Mosberger wrote :
> Here is a long-awaited kernel update which brings us in sync with
> 2.4.10. As usual, it's available at
...
> with these fixes in place, I'm able to successfully run the ia32
> versions of realplay, OpenOffice, mozilla, netscape, acrobat, strace,
> Intel compilers, etc. But as usual, your mileage may vary.
I'm trying to run the Intel compiler for IA32 on an IA64 machine, kernel
version 2.4.10. Compilation works fine, but on the link stage I get errors
which indicate that the linker is trying to link IA64 libraries with the
IA32 object file created at the compilation stage. For instance -
/opt/intel/compiler60/ia32/lib/libcxa.so.1(*ABS*+0x722f0): multiple
definition of `_DYNAMIC'
/usr/lib/crt1.o(.dynamic+0x0): first defined here
and, more explicitly put -
ld: hello.o: linking 64-bit files with 32-bit files
Bad value: failed to merge target specific data of file hello.o
I guess it's only an environment variable or command-line switch that fixes
the problem. Does anyone know how to fix it?
TIA,
Yoav.
-----------------------------------------------
Yoav Zach
Phone: 972 4 865 5112
Mail: yoav.zach@intel.com
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.4.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (103 preceding siblings ...)
2001-12-16 12:21 ` [Linux-ia64] kernel update (relative to 2.4.10) Zach, Yoav
@ 2001-12-17 17:11 ` n0ano
2001-12-26 21:15 ` [Linux-ia64] kernel update (relative to 2.4.16) David Mosberger
` (110 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: n0ano @ 2001-12-17 17:11 UTC (permalink / raw)
To: linux-ia64
Yoav-
It looks to me like your Intel compiler thinks it's the default
compiler and it is therefore going to the default places to find
things, like libraries and what not. I know how to fix this
for GCC but I'm not that familiar with the Intel compiler anymore.
The best solution is to install the Intel compiler in such a way
that it finds the appropriate IA32 objects. You'll probably have
to talk to your Intel compiler supplier about that. (I just noticed
that you have an Intel address. Talk to the compiler group, they
can tell you how to do this.)
As an interim solution I believe that the Intel compiler driver
program, `cc', merely spawns off different programs to do the
different phases of the compilation process, passing appropriate
parameters to the different programs. If memory serves the
parameter `-#' will print out each command and its arguments.
You can do this, see what the actual link command is and then
manually execute the linker with parameters that point to IA32
objects.
On Sun, Dec 16, 2001 at 02:21:27PM +0200, Zach, Yoav wrote:
>
> On September 25, 2001, David Mosberger wrote :
>
> > Here is a long-awaited kernel update which brings us in sync with
> > 2.4.10. As usual, it's available at
> ...
> > with these fixes in place, I'm able to successfully run the ia32
> > versions of realplay, OpenOffice, mozilla, netscape, acrobat, strace,
> > Intel compilers, etc. But as usual, your mileage may vary.
>
>
> I'm trying to run the Intel compiler for IA32 on an IA64 machine, kernel
> version 2.4.10. Compilation works fine, but on the link stage I get errors
> which indicate that the linker is trying to link IA64 libraries with the
> IA32 object file created at the compilation stage. For instance -
>
> /opt/intel/compiler60/ia32/lib/libcxa.so.1(*ABS*+0x722f0): multiple
> definition of `_DYNAMIC'
> /usr/lib/crt1.o(.dynamic+0x0): first defined here
>
> and, more explicitly put -
>
> ld: hello.o: linking 64-bit files with 32-bit files
> Bad value: failed to merge target specific data of file hello.o
>
> I guess it's only an environment variable or command-line switch that fixes
> the problem. Does anyone know how to fix it?
>
>
> TIA,
> Yoav.
>
> -----------------------------------------------
> Yoav Zach
> Phone: 972 4 865 5112
> Mail: yoav.zach@intel.com
>
> _______________________________________________
> Linux-IA64 mailing list
> Linux-IA64@linuxia64.org
> http://lists.linuxia64.org/lists/listinfo/linux-ia64
--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
n0ano@indstorage.com
Ph: 303/652-0870x117
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.4.16)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (104 preceding siblings ...)
2001-12-17 17:11 ` n0ano
@ 2001-12-26 21:15 ` David Mosberger
2001-12-27 6:38 ` [Linux-ia64] kernel update (relative to v2.4.17) David Mosberger
` (109 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-12-26 21:15 UTC (permalink / raw)
To: linux-ia64
>>>>> On Sat, 15 Dec 2001 19:12:23 +1100, Keith Owens <kaos@ocs.com.au> said:
Keith> On Fri, 14 Dec 2001 21:13:46 -0800, David Mosberger
Keith> <davidm@hpl.hp.com> wrote:
>> An updated ia64 patch for 2.4.16 is now available at
>> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
>>
>> linux-2.4.16-ia64-011214.diff*
Keith> David, any reason you did not put that diff under /v2.4?
Old habits + lack of sleep. ;-}
It's fixed now.
Thanks,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to v2.4.17)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (105 preceding siblings ...)
2001-12-26 21:15 ` [Linux-ia64] kernel update (relative to 2.4.16) David Mosberger
@ 2001-12-27 6:38 ` David Mosberger
2001-12-27 8:09 ` j-nomura
` (108 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2001-12-27 6:38 UTC (permalink / raw)
To: linux-ia64
The ia64 patch for v2.4.17 is now at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
linux-2.4.17-ia64-011226.diff*
This is mostly a sync up with 2.4.17 and has few other changes (as far
as I recall, anyhow...). The patch has been tested only on Big Sur,
but I don't expect any major problems with Lion (or HP simulator).
(Yeah, famous last words...).
Happy New Year,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to v2.4.17)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (106 preceding siblings ...)
2001-12-27 6:38 ` [Linux-ia64] kernel update (relative to v2.4.17) David Mosberger
@ 2001-12-27 8:09 ` j-nomura
2001-12-27 21:59 ` Christian Groessler
` (107 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: j-nomura @ 2001-12-27 8:09 UTC (permalink / raw)
To: linux-ia64
Hi,
> The ia64 patch for v2.4.17 is now at
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
I had to apply the patch below to compile with CONFIG_DEVFS_GUID.
Index: fs/devfs/base.c
===================================================================
RCS file: /home/cvsadm/cvsroot/linux/fs/devfs/base.c,v
retrieving revision 1.1.1.16.2.1
diff -u -r1.1.1.16.2.1 base.c
--- fs/devfs/base.c 2001/12/27 07:21:15 1.1.1.16.2.1
+++ fs/devfs/base.c 2001/12/27 07:53:18
@@ -2186,7 +2186,7 @@
slave = master->slave;
if (slave) {
master->slave = NULL;
- unregister (slave);
+ unregister (slave->parent, slave);
};
}
#endif /* CONFIG_DEVFS_GUID */
Best regards.
--
NOMURA, Jun'ichi <j-nomura@ce.jp.nec.com, nomura@hpc.bs1.fc.nec.co.jp>
HPC Operating System Group, 1st Computers Software Division,
Computers Software Operations Unit, NEC Solutions.
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to v2.4.17)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (107 preceding siblings ...)
2001-12-27 8:09 ` j-nomura
@ 2001-12-27 21:59 ` Christian Groessler
2001-12-31 3:13 ` Matt_Domsch
` (106 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Christian Groessler @ 2001-12-27 21:59 UTC (permalink / raw)
To: linux-ia64
Hi,
I had to make the following change for it to compile on UP:
--- arch/ia64/kernel/setup.c.org Thu Dec 27 22:39:49 2001
+++ arch/ia64/kernel/setup.c Thu Dec 27 22:49:45 2001
@@ -488,7 +488,9 @@ identify_cpu (struct cpuinfo_ia64 *c)
cpuid.bits[i] = ia64_get_cpuid(i);
memcpy(c->vendor, cpuid.field.vendor, 16);
+#ifdef CONFIG_SMP
c->processor = smp_processor_id();
+#endif
c->ppn = cpuid.field.ppn;
c->number = cpuid.field.number;
c->revision = cpuid.field.revision;
regards,
chris
^ permalink raw reply [flat|nested] 217+ messages in thread
* RE: [Linux-ia64] kernel update (relative to v2.4.17)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (108 preceding siblings ...)
2001-12-27 21:59 ` Christian Groessler
@ 2001-12-31 3:13 ` Matt_Domsch
2002-01-07 11:30 ` j-nomura
` (105 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Matt_Domsch @ 2001-12-31 3:13 UTC (permalink / raw)
To: linux-ia64
> I had to apply the patch below to compile with CONFIG_DEVFS_GUID.
For what are you using CONFIG_DEVFS_GUID? It's going to go away soon by
agreement of the author...
-Matt
--
Matt Domsch
Sr. Software Engineer
Dell Linux Solutions www.dell.com/linux
#1 US Linux Server provider with 24.5% (IDC Dec 2001)
#2 Worldwide Linux Server provider with 18.5% (IDC Dec 2001)
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to v2.4.17)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (109 preceding siblings ...)
2001-12-31 3:13 ` Matt_Domsch
@ 2002-01-07 11:30 ` j-nomura
2002-02-08 7:02 ` [Linux-ia64] kernel update (relative to 2.5.3) David Mosberger
` (104 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: j-nomura @ 2002-01-07 11:30 UTC (permalink / raw)
To: linux-ia64
> For what aer you using CONFIG_DEVFS_GUID? It's going to go away soon by
> agreement of the author...
I used it for GPT related testing/trouble-shooting as an easy way to see
whether linux takes the broken partition table as GPT or not.
I'm ok if it'll be removed.
Best regards.
--
NOMURA, Jun'ichi <j-nomura@ce.jp.nec.com, nomura@hpc.bs1.fc.nec.co.jp>
HPC Operating System Group, 1st Computers Software Division,
Computers Software Operations Unit, NEC Solutions.
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.5.3)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (110 preceding siblings ...)
2002-01-07 11:30 ` j-nomura
@ 2002-02-08 7:02 ` David Mosberger
2002-02-27 1:47 ` [Linux-ia64] kernel update (relative to 2.4.18) David Mosberger
` (103 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-02-08 7:02 UTC (permalink / raw)
To: linux-ia64
A quick patch relative to v2.5.3 is now at ftp.CC.kernel.org
in /pub/linux/kernel/ports/ia64/v2.5/:
linux-2.5.3-ia64-020207.gz
It's mostly a sync up with the many v2.5.3 changes plus assorted other
patches folks have sent me over the last couple of days and weeks
(including a large perfmon update from Stephane).
Caveats: this should give you a working kernel for the HP simulator
and Big Sur, but I have tried SMP only and haven't done a lot of
testing. I think NFS may be broken at the moment. Also, this kernel
by default turns off execute permission on the stack and data
segments. For applications that (incorrectly) assume that data is
executable by default, a workaround can be enabled by turning on a bit
in the ELF executable header. The attached program can be used to
manipulate the bit in question.
IMPORTANT: The current XFree86 server suffers from the problem of assuming
data is executable. To fix this, run the command:
# chatr -E /usr/X11R6/bin/XFree86
and you should be all set.
Enjoy,
--david
<-- chatr.c -->
#include <elf.h>
#include <fcntl.h>
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#ifndef EF_IA_64_LINUX_EXECUTABLE_STACK
# define EF_IA_64_LINUX_EXECUTABLE_STACK 0x1
#endif
static struct option long_options[] = {
{"executable-stack", 0, 0, 'E'},
{"help", 0, 0, 'h'},
{"no-executable-stack", 0, 0, 'e'},
};
static const char *prog_name;
static void
usage (FILE *fp)
{
fprintf (fp, "Usage: %s [-eEh] files...\n"
"\t-e: mark stack and data of image as not executable\n"
"\t-E: mark stack and data of image as executable\n"
"\t-h: print this help message\n", prog_name);
}
static void
update_file (const char *filename, int executable_stack)
{
Elf64_Ehdr ehdr;
ssize_t ret;
int fd;
fd = open (filename, executable_stack ? O_RDWR : O_RDONLY);
if (fd < 0)
{
perror (filename);
exit (-1);
}
ret = read (fd, &ehdr, sizeof (ehdr));
if (ret != sizeof (ehdr))
{
if (ret < 0)
perror (filename);
else
fprintf (stderr, "%s: short read\n", filename);
exit (-1);
}
if (executable_stack == 0)
{
printf ("%s:\n\tstack and data executable: %s\n", filename,
(ehdr.e_flags & EF_IA_64_LINUX_EXECUTABLE_STACK) ? "yes" : "no");
return;
}
if (executable_stack == 1)
ehdr.e_flags |= EF_IA_64_LINUX_EXECUTABLE_STACK;
else
ehdr.e_flags &= ~EF_IA_64_LINUX_EXECUTABLE_STACK;
if (lseek (fd, 0, SEEK_SET) < 0)
{
perror ("lseek");
exit (-1);
}
ret = write (fd, &ehdr, sizeof (ehdr));
if (ret != sizeof (ehdr))
{
perror ("write");
exit (-1);
}
}
int
main (int argc, char **argv)
{
int ch, executable_stack = 0;
extern int optind;
prog_name = argv[0];
if (argc < 2)
{
usage (stderr);
exit (-1);
}
while (1)
{
ch = getopt_long (argc, argv, "eEh", long_options, NULL);
if (ch == -1)
break;
switch (ch)
{
case 'e': executable_stack = -1; break;
case 'E': executable_stack = 1; break;
case 'h': usage (stdout); exit (0);
}
}
while (optind < argc)
update_file (argv[optind++], executable_stack);
return 0;
}
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (111 preceding siblings ...)
2002-02-08 7:02 ` [Linux-ia64] kernel update (relative to 2.5.3) David Mosberger
@ 2002-02-27 1:47 ` David Mosberger
2002-02-28 4:40 ` Peter Chubb
` (102 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-02-27 1:47 UTC (permalink / raw)
To: linux-ia64
The latest ia64 patch is now available at ftp.CC.kernel.org
in /pub/linux/kernel/ports/ia64/v2.4/:
linux-2.4.18-ia64-020226.gz
IMPORTANT:
Starting with this patch, the stack and data pages are no longer
executable by default! This will hopefully increase resilience
against buffer-overflow attacks (though it's no guarantee) and also
has the benefit that it lets us drop the lazy-execute bit support.
The latter has two implications: we can use the standard page-fault
handler again and applications that generate code on the fly are no
longer penalized with extra page faults. On the downside, there is
currently a known bug in the XFree86 server where it incorrectly
assumes that data pages are executable by default. For this reason,
you'll need to use the chatr utility which I posted earlier to mark
the XFree86 server as having executable data. A command of the form:
$ chatr --executable-stack /usr/X11R6/bin/XFree86
should do the trick (source code for chatr is appended). You must do
this as otherwise XFree86 will die while trying to load some of its
modules (it normally dies in the scanpci module).
Other than that, here is a summary of changes (my apologies if I
missed anything or misattributed a patch):
- better ia64 IRQ affinity support (Erich Focht)
- include Firewire configuration for ia64 (Bdale Garbee/Grant Grundler)
- fix ia32 GDT initialization (Don Dugger)
- implement ia32 emulation for SIOCGIFCONF (Don Dugger)
- make page fault handler pass reason for a SIGSEGV/SIGILL in
siginfo.si_isr (see include/asm-ia64/siginfo.h for details)
- /proc/efivars update (Matt Domsch)
- fix I/O SAPIC irq routing (makes ACPI power button events work)
(J.I. Lee)
- fix exception handler to not assume that archdata_start is
non-NULL for all modules (Bjorn Helgaas)
- don't define _HAVE_ARCH_IPV6_CSUM as we don't really have it
(Bdale Garbee/Grant Grundler)
- drop .bias from ld4.bias instruction in spinlock code to avoid
CPU errata (Asit Mallick)
- MCA update (Jenna Hall)
- update perfmon to v1.0 (Stephane Eranian)
- add PTRACE_GETREGS/PTRACE_SETREGS (Yoav Zach)
- fix handling of .label_state/.copy_state in kernel unwinder
- move initial kernel stack from region 4 down to region 3
- bump gcc3.x inlining threshold to 2000 to get things to
compile with gcc3.1
- don't make ia64 stack and data executable by default and
drop lazy-execute bit support; restore original interface
for page-fault handler
- fix clone2() stack initialization bug found by Asit Mallick
- fix siginfo kernel data leaking
- fix alternate TLB miss handlers to not lose Exception Deferral bit
This patch has been tested on Big Sur and HP Ski simulator in SMP and
UP and with gcc3.0 and gcc2.96. YMMV.
Oh, I also included in the ia64 patch the personality fix which was
accidentally dropped from 2.4.18 (it was in -rc4). Also, I synced
the drm-4.0 code with Marcelo's tree, but I haven't done any testing
on it.
Enjoy,
--david
<-- chatr.c -->
#include <elf.h>
#include <fcntl.h>
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#ifndef EF_IA_64_LINUX_EXECUTABLE_STACK
# define EF_IA_64_LINUX_EXECUTABLE_STACK 0x1
#endif
static struct option long_options[] = {
{"executable-stack", 0, 0, 'E'},
{"help", 0, 0, 'h'},
{"no-executable-stack", 0, 0, 'e'},
};
static const char *prog_name;
static void
usage (FILE *fp)
{
fprintf (fp, "Usage: %s [-eEh] files...\n"
"\t-e: mark stack and data of image as not executable\n"
"\t-E: mark stack and data of image as executable\n"
"\t-h: print this help message\n", prog_name);
}
static void
update_file (const char *filename, int executable_stack)
{
Elf64_Ehdr ehdr;
ssize_t ret;
int fd;
fd = open (filename, executable_stack ? O_RDWR : O_RDONLY);
if (fd < 0)
{
perror (filename);
exit (-1);
}
ret = read (fd, &ehdr, sizeof (ehdr));
if (ret != sizeof (ehdr))
{
if (ret < 0)
perror (filename);
else
fprintf (stderr, "%s: short read\n", filename);
exit (-1);
}
if (executable_stack == 0)
{
printf ("%s:\n\tstack and data executable: %s\n", filename,
(ehdr.e_flags & EF_IA_64_LINUX_EXECUTABLE_STACK) ? "yes" : "no");
return;
}
if (executable_stack == 1)
ehdr.e_flags |= EF_IA_64_LINUX_EXECUTABLE_STACK;
else
ehdr.e_flags &= ~EF_IA_64_LINUX_EXECUTABLE_STACK;
if (lseek (fd, 0, SEEK_SET) < 0)
{
perror ("lseek");
exit (-1);
}
ret = write (fd, &ehdr, sizeof (ehdr));
if (ret != sizeof (ehdr))
{
perror ("write");
exit (-1);
}
}
int
main (int argc, char **argv)
{
int ch, executable_stack = 0;
extern int optind;
prog_name = argv[0];
if (argc < 2)
{
usage (stderr);
exit (-1);
}
while (1)
{
ch = getopt_long (argc, argv, "eEh", long_options, NULL);
if (ch == -1)
break;
switch (ch)
{
case 'e': executable_stack = -1; break;
case 'E': executable_stack = 1; break;
case 'h': usage (stdout); exit (0);
}
}
while (optind < argc)
update_file (argv[optind++], executable_stack);
return 0;
}
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (112 preceding siblings ...)
2002-02-27 1:47 ` [Linux-ia64] kernel update (relative to 2.4.18) David Mosberger
@ 2002-02-28 4:40 ` Peter Chubb
2002-02-28 19:19 ` David Mosberger
` (101 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-02-28 4:40 UTC (permalink / raw)
To: linux-ia64
>>>>> "David" == David Mosberger <davidm@hpl.hp.com> writes:
David> The latest ia64 patch is now available at ftp.CC.kernel.org in
David> /pub/linux/kernel/ports/ia64/v2.4/:
With this patched kernel, one cannot configure the loop device when
using the HP simulator.
I moved the line that sourced the block-device config script outside
the HP_SIM guards, as in the appended patch.
--- /usr/src/linux-2.4.18/arch/ia64/config.in Sat Nov 10 09:26:17 2001
+++ arch/ia64/config.in Thu Feb 28 15:11:51 2002
@@ -134,7 +134,7 @@
source drivers/mtd/Config.in
source drivers/pnp/Config.in
-source drivers/block/Config.in
source drivers/message/i2o/Config.in
source drivers/md/Config.in
@@ -152,6 +152,7 @@
endmenu
fi # !HP_SIM
+source drivers/block/Config.in
mainmenu_option next_comment
comment 'SCSI support'
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (113 preceding siblings ...)
2002-02-28 4:40 ` Peter Chubb
@ 2002-02-28 19:19 ` David Mosberger
2002-03-06 22:33 ` Peter Chubb
` (100 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-02-28 19:19 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 28 Feb 2002 15:40:53 +1100, Peter Chubb <peter@chubb.wattle.id.au> said:
>>>>> "David" = David Mosberger <davidm@hpl.hp.com> writes:
David> The latest ia64 patch is now available at ftp.CC.kernel.org in
David> /pub/linux/kernel/ports/ia64/v2.4/:
Peter> With this patched kernel, one cannot configure the loop device when
Peter> using the HP simulator.
Peter> I moved the line that sourced the block-device config script outside
Peter> the HP_SIM guards, as in the appended patch.
Hmmh, I'm a bit reluctant to do this, as it will pull in all the other
drivers, 99% of which won't build. How about just including the
loopback prompt directly in the ia64 config? If this works, send me a
patch and I'll include it. (Yes, I realize that's kludgy, but
presumably this will all get cleaned up if and when the new kernel
configuration makes it into 2.5).
--david
* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (114 preceding siblings ...)
2002-02-28 19:19 ` David Mosberger
@ 2002-03-06 22:33 ` Peter Chubb
2002-03-08 6:38 ` [Linux-ia64] kernel update (relative to 2.5.5) David Mosberger
` (99 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-03-06 22:33 UTC (permalink / raw)
To: linux-ia64
>>>>> "David" = David Mosberger <davidm@hpl.hp.com> writes:
David> The latest ia64 patch is now available at ftp.CC.kernel.org in
David> /pub/linux/kernel/ports/ia64/v2.4/:
Peter> With this patched kernel, one cannot configure the loop device
Peter> when using the HP simulator.
Peter> I moved the line that sourced the block-device config script
Peter> outside the HP_SIM guards, as in the appended patch.
David> Hmmh, I'm a bit reluctant to do this, as it will pull in all
David> the other drivers, 99% of which won't build. How about just
David> including the loopback prompt directly in the ia64 config? If
David> this works, send me a patch and I'll include it. (Yes, I
David> realize that's kludgy, but presumably this will all get cleaned
David> up if and when the new kernel configuration makes it into 2.5).
OK, here it is. We probably need to do something similar for the SCSI
config. This patch is against a 2.4.18 kernel with the IA64 patch.
--- linux/arch/ia64/config.in Thu Mar 7 09:21:50 2002
+++ linux-local/arch/ia64/config.in Thu Mar 7 09:28:08 2002
@@ -152,6 +152,17 @@
fi
endmenu
+else # ! HP_SIM
+mainmenu_option next_comment
+comment 'Block devices'
+tristate 'Loopback device support' CONFIG_BLK_DEV_LOOP
+dep_tristate 'Network block device support' CONFIG_BLK_DEV_NBD $CONFIG_NET
+
+tristate 'RAM disk support' CONFIG_BLK_DEV_RAM
+if [ "$CONFIG_BLK_DEV_RAM" = "y" -o "$CONFIG_BLK_DEV_RAM" = "m" ]; then
+ int ' Default RAM disk size' CONFIG_BLK_DEV_RAM_SIZE 4096
+fi
+endmenu
fi # !HP_SIM
mainmenu_option next_comment
* [Linux-ia64] kernel update (relative to 2.5.5)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (115 preceding siblings ...)
2002-03-06 22:33 ` Peter Chubb
@ 2002-03-08 6:38 ` David Mosberger
2002-03-09 11:08 ` Keith Owens
` (98 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-03-08 6:38 UTC (permalink / raw)
To: linux-ia64
OK, I finally got around to taking another stab at 2.5.5. The result
can be found at ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/
in file:
linux-2.5.5-ia64-020307.diff.gz
This is mostly a sync up with 2.5.5, though all the changes that made
it into 2.4.18 should be there as well. With this patch, the task
memory has the following layout:
- "current" points to "struct task_struct"
- "struct thread_info" follows directly above "struct task_struct"
- rest of the memory is used for the kernel stack
I tested this on a dual Big Sur and the HP Ski simulator (SMP only).
--david
* Re: [Linux-ia64] kernel update (relative to 2.5.5)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (116 preceding siblings ...)
2002-03-08 6:38 ` [Linux-ia64] kernel update (relative to 2.5.5) David Mosberger
@ 2002-03-09 11:08 ` Keith Owens
2002-04-26 7:15 ` [Linux-ia64] kernel update (relative to v2.5.10) David Mosberger
` (97 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2002-03-09 11:08 UTC (permalink / raw)
To: linux-ia64
On Thu, 7 Mar 2002 22:38:20 -0800,
David Mosberger <davidm@napali.hpl.hp.com> wrote:
>OK, I finally got around to taking another stab at 2.5.5. The result
>can be found at ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/
>in file:
>
> linux-2.5.5-ia64-020307.diff.gz
# bzcat linux-2.5.5-ia64-020307.diff.bz2 | patch -p1 -E --quiet
The next patch would create the file arch/mips/.gdbinit,
which already exists! Assume -R? [n]
Apply anyway? [n]
1 out of 1 hunk ignored -- saving rejects to file arch/mips/.gdbinit.rej
No big deal, the rejected patch is just trying to create exactly the
same contents.
* [Linux-ia64] kernel update (relative to v2.5.10)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (117 preceding siblings ...)
2002-03-09 11:08 ` Keith Owens
@ 2002-04-26 7:15 ` David Mosberger
2002-05-31 6:08 ` [Linux-ia64] kernel update (relative to v2.5.18) David Mosberger
` (96 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-04-26 7:15 UTC (permalink / raw)
To: linux-ia64
The latest ia64 kernel patch is at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/ in
file:
linux-2.5.10-ia64-020426.diff.gz
This hasn't been tested very thoroughly but assuming bitkeeper didn't
let me down, the patch should work. A summary of changes is below.
--david
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.11)
ia64: Fix merge errors in do_csum().
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.10)
ia64: Revert compile-time optimization for bzero().
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.9)
ia64: Add missing .prologue directive to ip_fast_csum().
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.8)
ia64: Add IA64_ISR_CODE_* macros from Ken's patch.
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.7)
ia64: Treat lfetch.fault like speculative loads as is required by the
architecture definition. Patch by Ken Chen.
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.6)
ia64: Change "McKinley" to "Itanium 2" for user-visible strings.
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.5)
ia64: Add optimized ip_fast_csum() by Ken Chen and merge his cleanups
to do_csum.S.
<elenstev@mesatop.com> (02/04/25 1.531.11.4)
[PATCH] This patch adds a help text for CONFIG_IA64_GRANULE_16MB.
<davidm@wailua.hpl.hp.com> (02/04/25 1.531.11.3)
ia64: Make default behavior of SIGURG to ignore the signal, as per SUS.
<davidm@wailua.hpl.hp.com> (02/04/24 1.531.11.2)
ia64: Update comments for E2BIG and TCSETS like for x86.
<davidm@wailua.hpl.hp.com> (02/04/24 1.531.11.1)
ia64: Correct unwind info for signal trampoline.
<davidm@wailua.hpl.hp.com> (02/04/23 1.489.12.3)
mpi_raid.h:
Rename: drivers/message/fusion/mpi_raid.h -> drivers/message/fusion/lsi/mpi_raid.h
<davidm@wailua.hpl.hp.com> (02/04/23 1.489.12.2)
Make fusion driver work on 2.5.
<davidm@wailua.hpl.hp.com> (02/04/23 1.489.11.5)
ia64: Make ACPI work again.
<davidm@wailua.hpl.hp.com> (02/04/23 1.489.3.8)
ia64: Send SIGILL for break operands in range 0x3f000 to 0x3ffff to
simplify dynamic bundle patching.
<davidm@wailua.hpl.hp.com> (02/04/19 1.489.11.4)
Misc. ACPI and merge fixes.
<davidm@wailua.hpl.hp.com> (02/04/19 1.489.3.7)
ia64: Fix ACPI/IOSAPIC breakage introduced by big ACPI update.
<peter@chubb.wattle.id.au> (02/04/19 1.489.3.6)
[PATCH] The attached patch cleans up IA32 support a little.
As it's impossible at present to compile and use IA32 support as a
module, disallow that; and also provide dummy functions to remove
compilation warnings if CONFIG_IA32_SUPPORT is off.
<asit.k.mallick@intel.com> (02/04/19 1.489.3.5)
[PATCH] Don't prefetch beyond end of patch to avoid bringing in cache-lines needlessly.
<davidm@wailua.hpl.hp.com> (02/04/19 1.489.11.3)
Cset exclude: davidm@napali.hpl.hp.com|ChangeSet|20020315234532|54616
<davidm@wailua.hpl.hp.com> (02/04/19 1.456.7.6)
agpgart_be.c:
ia64: Add missing parenthesis for the pdev
argument in "unused" attribute of the setup
routines.
<davidm@wailua.hpl.hp.com> (02/04/19 1.489.12.1)
Update with latest version.
<davidm@wailua.hpl.hp.com> (02/04/18 1.489.11.2)
ia64: Fix typo that prevented _PRT from being found on zx1.
<davidm@wailua.hpl.hp.com> (02/04/18 1.489.3.4)
ia64: Include config file for fusion driver in arch/ia64/config.in.
<steiner@sgi.com> (02/04/16 1.489.3.3)
[PATCH] ia64: Correct copyright message in shub_md.h.
<davidm@wailua.hpl.hp.com> (02/04/16 1.489.3.2)
ia64: Fix alloc_consistent() for zx1 platform to return <4GB memory as per
DMA-mapping.txt. Patch by Alex Williamson.
* [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (118 preceding siblings ...)
2002-04-26 7:15 ` [Linux-ia64] kernel update (relative to v2.5.10) David Mosberger
@ 2002-05-31 6:08 ` David Mosberger
2002-06-06 2:01 ` Peter Chubb
` (95 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-05-31 6:08 UTC (permalink / raw)
To: linux-ia64
The latest ia64 kernel patch is at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/ in file:
linux-2.5.18-ia64-020530.diff.gz
This kernel seems to work quite well both on Itanium 1 and 2 and the
HP simulator. I didn't move all the way to 2.5.19 since it seems to
have some (small) problems which I expect Linus to fix quickly.
Enjoy,
--david
* [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (119 preceding siblings ...)
2002-05-31 6:08 ` [Linux-ia64] kernel update (relative to v2.5.18) David Mosberger
@ 2002-06-06 2:01 ` Peter Chubb
2002-06-06 3:16 ` David Mosberger
` (94 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-06-06 2:01 UTC (permalink / raw)
To: linux-ia64
>>>>> "David" = David Mosberger <davidm@napali.hpl.hp.com> writes:
David> The latest ia64 kernel patch is at
David> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/ in file:
David> linux-2.5.18-ia64-020530.diff.gz
Thanks David, but it doesn't compile for me. I'm seeing these
problems:
-- perfmon_itanium.h is missing. I used the one from the 2.4
tree.
-- With CONFIG_GENERIC, sba_iommu.c still doesn't compile
because of the removal of the address field from the scatterlist.
-- Again with CONFIG_GENERIC, there are problems:
include/asm/machvec.h:83:12: warning: pasting "machvec_hpsim" and "."
does not give a valid preprocessing token
include/asm/machvec_init.h:26: initializer element is not constant
include/asm/machvec_init.h:26: (near initialization for `machvec_hpsim.dma_supported')
include/asm/machvec_init.h:26: `__ia64_mmiob' undeclared here (not in a function)
include/asm/machvec_init.h:26: initializer element is not constant
etc.
-- I think you need this little patch in ia64/kernel/signal.c
otherwise the usual case will fall off the bottom of the
function without a return value.
--- /tmp/geta23662 Wed Jun 5 15:31:23 2002
+++ linux-2.5.18-patched/arch/ia64/kernel/signal.c Wed Jun 5 15:31:22 2002
@@ -146,6 +146,7 @@
if (from->si_code < 0) {
if (__copy_to_user(to, from, sizeof(siginfo_t)))
return -EFAULT;
+ return 0;
} else {
int err;
-- asm/suspend.h doesn't exist, so do_mounts.c won't
compile. I just created an empty file.
-- With CONFIG_DIG I managed to get a compile, but haven't
yet had a chance to try it.
--
Peter C peterc@gelato.unsw.edu.au
You are lost in a maze of BitKeeper repositories, all almost the same.
* Re: [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (120 preceding siblings ...)
2002-06-06 2:01 ` Peter Chubb
@ 2002-06-06 3:16 ` David Mosberger
2002-06-07 21:54 ` Bjorn Helgaas
` (93 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-06-06 3:16 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 06 Jun 2002 12:01:06 +1000, Peter Chubb <peter@chubb.wattle.id.au> said:
Peter> -- perfmon_itanium.h is missing. I used the one from the 2.4
Peter> tree.
My mistake. I added it now.
Peter> -- With CONFIG_GENERIC, sba_iommu.c still doesn't
Peter> compile because of the removal of the address field from the
Peter> scatterlist.
Yes. Grant said he's going to work on this.
Peter> -- Again with CONFIG_GENERIC, there are problems:
Peter> include/asm/machvec.h:83:12: warning: pasting "machvec_hpsim"
Peter> and "." does not give a valid preprocessing token
Peter> include/asm/machvec_init.h:26: initializer element is not
Peter> constant include/asm/machvec_init.h:26: (near initialization
Peter> for `machvec_hpsim.dma_supported')
Peter> include/asm/machvec_init.h:26: `__ia64_mmiob' undeclared here
Peter> (not in a function) include/asm/machvec_init.h:26:
Peter> initializer element is not constant
I don't use GENERIC myself usually. A patch is always welcome, of course.
Peter> -- I think you need this little patch in
Peter> ia64/kernel/signal.c otherwise the usual case will fall off
Peter> the bottom of the function without a return value.
Yes, indeed. Thanks for catching that.
Peter> -- asm/suspend.h doesn't exist, so do_mounts.c won't
Peter> compile. I just created an empty file.
Ah, yes. It's arguably a bug in 2.5, but it seems like Linus wants
each arch to add that file, even if it's unused. I'll re-check next
time I work on 2.5 and add it if it's still needed.
--david
* Re: [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (121 preceding siblings ...)
2002-06-06 3:16 ` David Mosberger
@ 2002-06-07 21:54 ` Bjorn Helgaas
2002-06-07 22:07 ` Bjorn Helgaas
` (92 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Bjorn Helgaas @ 2002-06-07 21:54 UTC (permalink / raw)
To: linux-ia64
> -- Again with CONFIG_GENERIC, there are problems:
>
> include/asm/machvec.h:83:12: warning: pasting "machvec_hpsim" and "."
> does not give a valid preprocessing token
> include/asm/machvec_init.h:26: initializer element is not constant
> include/asm/machvec_init.h:26: (near initialization for
> `machvec_hpsim.dma_supported') include/asm/machvec_init.h:26:
> `__ia64_mmiob' undeclared here (not in a function)
> include/asm/machvec_init.h:26: initializer element is not constant
I forgot that you had run into these problems too. If you haven't
already fixed them, you can try the attached patch which I sent to
David this morning.
Grant also fixed the sba_iommu.c problems, that patch is coming
soon. Both are required to make the generic kernel build.
--
Bjorn Helgaas - bjorn_helgaas at hp.com
Linux Systems Operation R&D
Hewlett-Packard Company
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/arch/ia64/hp/zx1/hpzx1_machvec.c linux-2.5.18-ia64-020530/arch/ia64/hp/zx1/hpzx1_machvec.c
--- linux-2.5.18-ia64-020530.orig/arch/ia64/hp/zx1/hpzx1_machvec.c Fri May 24 19:55:24 2002
+++ linux-2.5.18-ia64-020530/arch/ia64/hp/zx1/hpzx1_machvec.c Thu Jun 6 10:37:14 2002
@@ -1,4 +1,2 @@
#define MACHVEC_PLATFORM_NAME hpzx1
#include <asm/machvec_init.h>
-#define MACHVEC_PLATFORM_NAME hpzx1
-#include <asm/machvec_init.h>
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/arch/ia64/lib/io.c linux-2.5.18-ia64-020530/arch/ia64/lib/io.c
--- linux-2.5.18-ia64-020530.orig/arch/ia64/lib/io.c Fri May 24 19:55:16 2002
+++ linux-2.5.18-ia64-020530/arch/ia64/lib/io.c Thu Jun 6 11:07:16 2002
@@ -87,6 +87,12 @@
__ia64_outl(val, port);
}
+void
+ia64_mmiob (void)
+{
+ __ia64_mmiob();
+}
+
/* define aliases: */
asm (".global __ia64_inb, __ia64_inw, __ia64_inl");
@@ -98,5 +104,8 @@
asm ("__ia64_outb = ia64_outb");
asm ("__ia64_outw = ia64_outw");
asm ("__ia64_outl = ia64_outl");
+
+asm (".global __ia64_mmiob");
+asm ("__ia64_mmiob = ia64_mmiob");
#endif /* CONFIG_IA64_GENERIC */
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/include/asm-ia64/machvec.h linux-2.5.18-ia64-020530/include/asm-ia64/machvec.h
--- linux-2.5.18-ia64-020530.orig/include/asm-ia64/machvec.h Thu Jun 6 10:11:46 2002
+++ linux-2.5.18-ia64-020530/include/asm-ia64/machvec.h Fri Jun 7 10:49:22 2002
@@ -210,6 +210,7 @@
extern ia64_mv_pci_dma_sync_single swiotlb_sync_single;
extern ia64_mv_pci_dma_sync_sg swiotlb_sync_sg;
extern ia64_mv_pci_dma_address swiotlb_dma_address;
+extern ia64_mv_pci_dma_supported swiotlb_pci_dma_supported;
/*
* Define default versions so we can extend machvec for new platforms without having
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/include/asm-ia64/machvec_init.h linux-2.5.18-ia64-020530/include/asm-ia64/machvec_init.h
--- linux-2.5.18-ia64-020530.orig/include/asm-ia64/machvec_init.h Thu Jun 6 10:22:38 2002
+++ linux-2.5.18-ia64-020530/include/asm-ia64/machvec_init.h Fri Jun 7 10:49:29 2002
@@ -16,6 +16,7 @@
extern ia64_mv_outb_t __ia64_outb;
extern ia64_mv_outw_t __ia64_outw;
extern ia64_mv_outl_t __ia64_outl;
+extern ia64_mv_mmiob_t __ia64_mmiob;
#define MACHVEC_HELPER(name) \
struct ia64_machine_vector machvec_##name __attribute__ ((unused, __section__ (".machvec"))) \
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/include/asm-ia64/page.h linux-2.5.18-ia64-020530/include/asm-ia64/page.h
--- linux-2.5.18-ia64-020530.orig/include/asm-ia64/page.h Thu Jun 6 09:54:16 2002
+++ linux-2.5.18-ia64-020530/include/asm-ia64/page.h Thu Jun 6 14:35:21 2002
@@ -56,6 +56,18 @@
flush_dcache_page(page); \
} while (0)
+/*
+ * Note: the MAP_NR_*() macro can't use __pa() because MAP_NR_*(X) MUST
+ * map to something >= max_mapnr if X is outside the identity mapped
+ * kernel space.
+ */
+
+/*
+ * The dense variant can be used as long as the size of memory holes isn't
+ * very big.
+ */
+#define MAP_NR_DENSE(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
+
#define pfn_valid(pfn) ((pfn) < max_mapnr)
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
* Re: [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (122 preceding siblings ...)
2002-06-07 21:54 ` Bjorn Helgaas
@ 2002-06-07 22:07 ` Bjorn Helgaas
2002-06-09 10:34 ` Steffen Persvold
` (91 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Bjorn Helgaas @ 2002-06-07 22:07 UTC (permalink / raw)
To: linux-ia64
> -- With CONFIG_GENERIC, sba_iommu.c still doesn't compile
> because of the removal of the address field from the scatterlist.
Here's a patch to make sba_iommu work again.
I added dma_address and dma_length to struct scatterlist and
removed orig_address. This brings IA64 in line with most other
architectures, but required a few changes to swiotlb.
Grant Grundler did the sba_iommu.c updates.
Note that this isn't *quite* enough to make the generic kernel
work on ZX1 boxes, because the ACPI in 2.5.18 barfs on a
ZX1 _CRS method.
David, I've tested both the swiotlb (on i2000 and ZX1) and
sba_iommu (on ZX1, with a kludge for the ACPI problem),
and they seem to work fine.
--
Bjorn Helgaas - bjorn_helgaas at hp.com
Linux Systems Operation R&D
Hewlett-Packard Company
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/arch/ia64/hp/common/sba_iommu.c linux-sg/arch/ia64/hp/common/sba_iommu.c
--- linux-2.5.18-ia64-020530.orig/arch/ia64/hp/common/sba_iommu.c Tue Jun 4 11:24:07 2002
+++ linux-sg/arch/ia64/hp/common/sba_iommu.c Fri Jun 7 11:05:07 2002
@@ -2,6 +2,7 @@
** IA64 System Bus Adapter (SBA) I/O MMU manager
**
** (c) Copyright 2002 Alex Williamson
+** (c) Copyright 2002 Grant Grundler
** (c) Copyright 2002 Hewlett-Packard Company
**
** Portions (c) 2000 Grant Grundler (from parisc I/O MMU code)
@@ -110,7 +111,7 @@
*/
#define DELAYED_RESOURCE_CNT 16
-#define DEFAULT_DMA_HINT_REG 0
+#define DEFAULT_DMA_HINT_REG(d) 0
#define ZX1_FUNC_ID_VALUE ((PCI_DEVICE_ID_HP_ZX1_SBA << 16) | PCI_VENDOR_ID_HP)
#define ZX1_MC_ID ((PCI_DEVICE_ID_HP_ZX1_MC << 16) | PCI_VENDOR_ID_HP)
@@ -216,9 +217,10 @@
static int reserve_sba_gart = 1;
static struct pci_dev sac_only_dev;
-#define sba_sg_iova(sg) (sg->address)
+#define sba_sg_address(sg) (page_address((sg)->page) + (sg)->offset)
#define sba_sg_len(sg) (sg->length)
-#define sba_sg_buffer(sg) (sg->orig_address)
+#define sba_sg_iova(sg) (sg->dma_address)
+#define sba_sg_iova_len(sg) (sg->dma_length)
/* REVISIT - fix me for multiple SBAs/IOCs */
#define GET_IOC(dev) (sba_list->ioc)
@@ -232,7 +234,7 @@
** rather than the HW. I/O MMU allocation alogorithms can be
** faster with smaller size is (to some degree).
*/
-#define DMA_CHUNK_SIZE (BITS_PER_LONG*PAGE_SIZE)
+#define DMA_CHUNK_SIZE (BITS_PER_LONG*IOVP_SIZE)
/* Looks nice and keeps the compiler happy */
#define SBA_DEV(d) ((struct sba_device *) (d))
@@ -255,7 +257,7 @@
* sba_dump_tlb - debugging only - print IOMMU operating parameters
* @hpa: base address of the IOMMU
*
- * Print the size/location of the IO MMU PDIR.
+ * Print the size/location of the IO MMU Pdir.
*/
static void
sba_dump_tlb(char *hpa)
@@ -273,12 +275,12 @@
#ifdef ASSERT_PDIR_SANITY
/**
- * sba_dump_pdir_entry - debugging only - print one IOMMU PDIR entry
+ * sba_dump_pdir_entry - debugging only - print one IOMMU Pdir entry
* @ioc: IO MMU structure which owns the pdir we are interested in.
* @msg: text to print ont the output line.
* @pide: pdir index.
*
- * Print one entry of the IO MMU PDIR in human readable form.
+ * Print one entry of the IO MMU Pdir in human readable form.
*/
static void
sba_dump_pdir_entry(struct ioc *ioc, char *msg, uint pide)
@@ -360,25 +362,25 @@
* print the SG list so we can verify it's correct by hand.
*/
static void
-sba_dump_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
+sba_dump_sg(struct ioc *ioc, struct scatterlist *startsg, int nents)
{
while (nents-- > 0) {
- printk(" %d : %08lx/%05x %p\n",
+ printk(" %d : DMA %08lx/%05x CPU %p\n",
nents,
(unsigned long) sba_sg_iova(startsg),
- sba_sg_len(startsg),
- sba_sg_buffer(startsg));
+ sba_sg_iova_len(startsg),
+ sba_sg_address(startsg));
startsg++;
}
}
static void
-sba_check_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
+sba_check_sg(struct ioc *ioc, struct scatterlist *startsg, int nents)
{
struct scatterlist *the_sg = startsg;
int the_nents = nents;
while (the_nents-- > 0) {
- if (sba_sg_buffer(the_sg) == 0x0UL)
+ if (sba_sg_address(the_sg) == 0x0UL)
sba_dump_sg(NULL, startsg, nents);
the_sg++;
}
@@ -404,7 +406,6 @@
#define SBA_IOVA(ioc,iovp,offset,hint_reg) ((ioc->ibase) | (iovp) | (offset) | ((hint_reg)<<(ioc->hint_shift_pdir)))
#define SBA_IOVP(ioc,iova) (((iova) & ioc->hint_mask_pdir) & ~(ioc->ibase))
-/* FIXME : review these macros to verify correctness and usage */
#define PDIR_INDEX(iovp) ((iovp)>>IOVP_SHIFT)
#define RESMAP_MASK(n) ~(~0UL << (n))
@@ -412,7 +413,7 @@
/**
- * sba_search_bitmap - find free space in IO PDIR resource bitmap
+ * sba_search_bitmap - find free space in IO Pdir resource bitmap
* @ioc: IO MMU structure which owns the pdir we are interested in.
* @bits_wanted: number of entries we need.
*
@@ -449,7 +450,7 @@
** We need the alignment to invalidate I/O TLB using
** SBA HW features in the unmap path.
*/
- unsigned long o = 1 << get_order(bits_wanted << PAGE_SHIFT);
+ unsigned long o = 1 << get_order(bits_wanted << IOVP_SHIFT);
uint bitshiftcnt = ROUNDUP(ioc->res_bitshift, o);
unsigned long mask;
@@ -495,7 +496,7 @@
/**
- * sba_alloc_range - find free bits and mark them in IO PDIR resource bitmap
+ * sba_alloc_range - find free bits and mark them in IO Pdir resource bitmap
* @ioc: IO MMU structure which owns the pdir we are interested in.
* @size: number of bytes to create a mapping for
*
@@ -557,7 +558,7 @@
/**
- * sba_free_range - unmark bits in IO PDIR resource bitmap
+ * sba_free_range - unmark bits in IO Pdir resource bitmap
* @ioc: IO MMU structure which owns the pdir we are interested in.
* @iova: IO virtual address which was previously allocated.
* @size: number of bytes to create a mapping for
@@ -604,14 +605,14 @@
/**
- * sba_io_pdir_entry - fill in one IO PDIR entry
- * @pdir_ptr: pointer to IO PDIR entry
- * @vba: Virtual CPU address of buffer to map
+ * sba_io_pdir_entry - fill in one IO Pdir entry
+ * @pdir_ptr: pointer to IO Pdir entry
+ * @phys_page: phys CPU address of page to map
*
* SBA Mapping Routine
*
- * Given a virtual address (vba, arg1) sba_io_pdir_entry()
- * loads the I/O PDIR entry pointed to by pdir_ptr (arg0).
+ * Given a physical address (phys_page, arg1) sba_io_pdir_entry()
+ * loads the I/O Pdir entry pointed to by pdir_ptr (arg0).
* Each IO Pdir entry consists of 8 bytes as shown below
* (LSB = bit 0):
*
@@ -623,20 +624,12 @@
* V = Valid Bit
* U = Unused
* PPN = Physical Page Number
- *
- * The physical address fields are filled with the results of virt_to_phys()
- * on the vba.
*/
-#if 1
-#define sba_io_pdir_entry(pdir_ptr, vba) *pdir_ptr = ((vba & ~0xE000000000000FFFULL) | 0x80000000000000FFULL)
-#else
-void SBA_INLINE
-sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
-{
- *pdir_ptr = ((vba & ~0xE000000000000FFFULL) | 0x80000000000000FFULL);
-}
-#endif
+#define SBA_VALID_MASK 0x80000000000000FFULL
+#define sba_io_pdir_entry(pdir_ptr, phys_page) *pdir_ptr = (phys_page | SBA_VALID_MASK)
+#define sba_io_page(pdir_ptr) (*pdir_ptr & ~SBA_VALID_MASK)
+
#ifdef ENABLE_MARK_CLEAN
/**
@@ -660,12 +653,12 @@
#endif
/**
- * sba_mark_invalid - invalidate one or more IO PDIR entries
+ * sba_mark_invalid - invalidate one or more IO Pdir entries
* @ioc: IO MMU structure which owns the pdir we are interested in.
* @iova: IO Virtual Address mapped earlier
* @byte_cnt: number of bytes this mapping covers.
*
- * Marking the IO PDIR entry(ies) as Invalid and invalidate
+ * Marking the IO Pdir entry(ies) as Invalid and invalidate
* corresponding IO TLB entry. The PCOM (Purge Command Register)
* is to purge stale entries in the IO TLB when unmapping entries.
*
@@ -700,14 +693,14 @@
iovp |= IOVP_SHIFT; /* set "size" field for PCOM */
/*
- ** clear I/O PDIR entry "valid" bit
+ ** clear I/O Pdir entry "valid" bit
** Do NOT clear the rest - save it for debugging.
** We should only clear bits that have previously
** been enabled.
*/
- ioc->pdir_base[off] &= ~(0x80000000000000FFULL);
+ ioc->pdir_base[off] &= ~SBA_VALID_MASK;
} else {
- u32 t = get_order(byte_cnt) + PAGE_SHIFT;
+ u32 t = get_order(byte_cnt) + IOVP_SHIFT;
iovp |= t;
ASSERT(t <= 31); /* 2GB! Max value of "size" field */
@@ -716,7 +709,7 @@
/* verify this pdir entry is enabled */
ASSERT(ioc->pdir_base[off] >> 63);
/* clear I/O Pdir entry "valid" bit first */
- ioc->pdir_base[off] &= ~(0x80000000000000FFULL);
+ ioc->pdir_base[off] &= ~SBA_VALID_MASK;
off++;
byte_cnt -= IOVP_SIZE;
} while (byte_cnt > 0);
@@ -744,7 +737,7 @@
u64 *pdir_start;
int pide;
#ifdef ALLOW_IOV_BYPASS
- unsigned long pci_addr = virt_to_phys(addr);
+ unsigned long phys_addr = virt_to_phys(addr);
#endif
ioc = GET_IOC(dev);
@@ -754,7 +747,7 @@
/*
** Check if the PCI device can DMA to ptr... if so, just return ptr
*/
- if ((pci_addr & ~dev->dma_mask) == 0) {
+ if ((phys_addr & ~dev->dma_mask) == 0) {
/*
** Device is bit capable of DMA'ing to the buffer...
** just return the PCI address of ptr
@@ -765,8 +758,8 @@
spin_unlock_irqrestore(&ioc->res_lock, flags);
#endif
DBG_BYPASS("sba_map_single() bypass mask/addr: 0x%lx/0x%lx\n",
- dev->dma_mask, pci_addr);
- return pci_addr;
+ dev->dma_mask, phys_addr);
+ return phys_addr;
}
#endif
@@ -799,7 +792,8 @@
while (size > 0) {
ASSERT(((u8 *)pdir_start)[7] == 0); /* verify availability */
- sba_io_pdir_entry(pdir_start, (unsigned long) addr);
+
+ sba_io_pdir_entry(pdir_start, virt_to_phys(addr));
DBG_RUN(" pdir 0x%p %lx\n", pdir_start, *pdir_start);
@@ -812,7 +806,7 @@
sba_check_pdir(ioc,"Check after sba_map_single()");
#endif
spin_unlock_irqrestore(&ioc->res_lock, flags);
- return SBA_IOVA(ioc, iovp, offset, DEFAULT_DMA_HINT_REG);
+ return SBA_IOVA(ioc, iovp, offset, DEFAULT_DMA_HINT_REG(direction));
}
/**
@@ -866,6 +860,29 @@
size += offset;
size = ROUNDUP(size, IOVP_SIZE);
+#ifdef ENABLE_MARK_CLEAN
+ /*
+ ** Don't need to hold the spinlock while telling VM pages are "clean".
+ ** The pages are "busy" in the resource map until we mark them free.
+ ** But tell VM pages are clean *before* releasing the resource
+ ** in order to avoid race conditions.
+ */
+ if (direction == PCI_DMA_FROMDEVICE) {
+ u32 iovp = (u32) SBA_IOVP(ioc,iova);
+ unsigned int pide = PDIR_INDEX(iovp);
+ u64 *pdirp = &(ioc->pdir_base[pide]);
+ size_t byte_cnt = size;
+ void *addr;
+
+ do {
+ addr = phys_to_virt(sba_io_page(pdirp));
+ mark_clean(addr, min(byte_cnt, IOVP_SIZE));
+ pdirp++;
+ byte_cnt -= IOVP_SIZE;
+ } while (byte_cnt > 0);
+ }
+#endif
+
spin_lock_irqsave(&ioc->res_lock, flags);
#ifdef CONFIG_PROC_FS
ioc->usingle_calls++;
@@ -891,40 +908,7 @@
sba_free_range(ioc, iova, size);
READ_REG(ioc->ioc_hpa+IOC_PCOM); /* flush purges */
#endif /* DELAYED_RESOURCE_CNT == 0 */
-#ifdef ENABLE_MARK_CLEAN
- if (direction == PCI_DMA_FROMDEVICE) {
- u32 iovp = (u32) SBA_IOVP(ioc,iova);
- int off = PDIR_INDEX(iovp);
- void *addr;
-
- if (size <= IOVP_SIZE) {
- addr = phys_to_virt(ioc->pdir_base[off] &
- ~0xE000000000000FFFULL);
- mark_clean(addr, size);
- } else {
- size_t byte_cnt = size;
-
- do {
- addr = phys_to_virt(ioc->pdir_base[off] &
- ~0xE000000000000FFFULL);
- mark_clean(addr, min(byte_cnt, IOVP_SIZE));
- off++;
- byte_cnt -= IOVP_SIZE;
-
- } while (byte_cnt > 0);
- }
- }
-#endif
spin_unlock_irqrestore(&ioc->res_lock, flags);
-
- /* XXX REVISIT for 2.5 Linux - need syncdma for zero-copy support.
- ** For Astro based systems this isn't a big deal WRT performance.
- ** As long as 2.4 kernels copyin/copyout data from/to userspace,
- ** we don't need the syncdma. The issue here is I/O MMU cachelines
- ** are *not* coherent in all cases. May be hwrev dependent.
- ** Need to investigate more.
- asm volatile("syncdma");
- */
}
@@ -980,242 +964,109 @@
}
-/*
-** Since 0 is a valid pdir_base index value, can't use that
-** to determine if a value is valid or not. Use a flag to indicate
-** the SG list entry contains a valid pdir index.
-*/
-#define PIDE_FLAG 0x1UL
-
#ifdef DEBUG_LARGE_SG_ENTRIES
int dump_run_sg = 0;
#endif
-
-/**
- * sba_fill_pdir - write allocated SG entries into IO PDIR
- * @ioc: IO MMU structure which owns the pdir we are interested in.
- * @startsg: list of IOVA/size pairs
- * @nents: number of entries in startsg list
- *
- * Take preprocessed SG list and write corresponding entries
- * in the IO PDIR.
- */
-
-static SBA_INLINE int
-sba_fill_pdir(
- struct ioc *ioc,
- struct scatterlist *startsg,
- int nents)
-{
- struct scatterlist *dma_sg = startsg; /* pointer to current DMA */
- int n_mappings = 0;
- u64 *pdirp = 0;
- unsigned long dma_offset = 0;
-
- dma_sg--;
- while (nents-- > 0) {
- int cnt = sba_sg_len(startsg);
- sba_sg_len(startsg) = 0;
-
-#ifdef DEBUG_LARGE_SG_ENTRIES
- if (dump_run_sg)
- printk(" %2d : %08lx/%05x %p\n",
- nents,
- (unsigned long) sba_sg_iova(startsg), cnt,
- sba_sg_buffer(startsg)
- );
-#else
- DBG_RUN_SG(" %d : %08lx/%05x %p\n",
- nents,
- (unsigned long) sba_sg_iova(startsg), cnt,
- sba_sg_buffer(startsg)
- );
-#endif
- /*
- ** Look for the start of a new DMA stream
- */
- if ((u64)sba_sg_iova(startsg) & PIDE_FLAG) {
- u32 pide = (u64)sba_sg_iova(startsg) & ~PIDE_FLAG;
- dma_offset = (unsigned long) pide & ~IOVP_MASK;
- sba_sg_iova(startsg) = 0;
- dma_sg++;
- sba_sg_iova(dma_sg) = (char *)(pide | ioc->ibase);
- pdirp = &(ioc->pdir_base[pide >> IOVP_SHIFT]);
- n_mappings++;
- }
-
- /*
- ** Look for a VCONTIG chunk
- */
- if (cnt) {
- unsigned long vaddr = (unsigned long) sba_sg_buffer(startsg);
- ASSERT(pdirp);
-
- /* Since multiple Vcontig blocks could make up
- ** one DMA stream, *add* cnt to dma_len.
- */
- sba_sg_len(dma_sg) += cnt;
- cnt += dma_offset;
- dma_offset=0; /* only want offset on first chunk */
- cnt = ROUNDUP(cnt, IOVP_SIZE);
-#ifdef CONFIG_PROC_FS
- ioc->msg_pages += cnt >> IOVP_SHIFT;
-#endif
- do {
- sba_io_pdir_entry(pdirp, vaddr);
- vaddr += IOVP_SIZE;
- cnt -= IOVP_SIZE;
- pdirp++;
- } while (cnt > 0);
- }
- startsg++;
- }
-#ifdef DEBUG_LARGE_SG_ENTRIES
- dump_run_sg = 0;
-#endif
- return(n_mappings);
-}
-
-
-/*
-** Two address ranges are DMA contiguous *iff* "end of prev" and
-** "start of next" are both on a page boundry.
-**
-** (shift left is a quick trick to mask off upper bits)
-*/
-#define DMA_CONTIG(__X, __Y) \
- (((((unsigned long) __X) | ((unsigned long) __Y)) << (BITS_PER_LONG - PAGE_SHIFT)) == 0UL)
+#define SG_ENT_VIRT_PAGE(sg) page_address((sg)->page)
+#define SG_ENT_PHYS_PAGE(SG) virt_to_phys(SG_ENT_VIRT_PAGE(SG))
/**
* sba_coalesce_chunks - preprocess the SG list
* @ioc: IO MMU structure which owns the pdir we are interested in.
- * @startsg: list of IOVA/size pairs
+ * @startsg: input=SG list output=DMA addr/len pairs filled in
* @nents: number of entries in startsg list
+ * @direction: R/W or both.
*
- * First pass is to walk the SG list and determine where the breaks are
- * in the DMA stream. Allocates PDIR entries but does not fill them.
- * Returns the number of DMA chunks.
- *
- * Doing the fill seperate from the coalescing/allocation keeps the
- * code simpler. Future enhancement could make one pass through
- * the sglist do both.
+ * Walk the SG list and determine where the breaks are in the DMA stream.
+ * Allocate IO Pdir resources and fill them in separate loop.
+ * Returns the number of DMA streams used for output IOVA list.
+ * Note each DMA stream can consume multiple IO Pdir entries.
+ *
+ * Code is written assuming some coalescing is possible.
*/
static SBA_INLINE int
-sba_coalesce_chunks( struct ioc *ioc,
- struct scatterlist *startsg,
- int nents)
-{
- struct scatterlist *vcontig_sg; /* VCONTIG chunk head */
- unsigned long vcontig_len; /* len of VCONTIG chunk */
- unsigned long vcontig_end;
- struct scatterlist *dma_sg; /* next DMA stream head */
- unsigned long dma_offset, dma_len; /* start/len of DMA stream */
+sba_coalesce_chunks(struct ioc *ioc, struct scatterlist *startsg,
+ int nents, int direction)
+{
+ struct scatterlist *dma_sg = startsg; /* return array */
int n_mappings = 0;
- while (nents > 0) {
- unsigned long vaddr = (unsigned long) (startsg->address);
+ ASSERT(nents > 1);
+
+ do {
+ unsigned int dma_cnt = 1; /* number of pages in DMA stream */
+ unsigned int pide; /* index into IO Pdir array */
+ u64 *pdirp; /* pointer into IO Pdir array */
+ unsigned long dma_offset, dma_len; /* cumulative DMA stream */
/*
** Prepare for first/next DMA stream
*/
- dma_sg = vcontig_sg = startsg;
- dma_len = vcontig_len = vcontig_end = sba_sg_len(startsg);
- vcontig_end += vaddr;
- dma_offset = vaddr & ~IOVP_MASK;
-
- /* PARANOID: clear entries */
- sba_sg_buffer(startsg) = sba_sg_iova(startsg);
- sba_sg_iova(startsg) = 0;
- sba_sg_len(startsg) = 0;
+ dma_len = sba_sg_len(startsg);
+ dma_offset = sba_sg_address(startsg);
+ startsg++;
+ nents--;
/*
- ** This loop terminates one iteration "early" since
- ** it's always looking one "ahead".
+ ** We want to know how many entries can be coalesced
+ ** before trying to allocate IO Pdir space.
+ ** IOVAs can then be allocated "naturally" aligned
+ ** to take advantage of the block IO TLB flush.
*/
- while (--nents > 0) {
- unsigned long vaddr; /* tmp */
-
- startsg++;
+ while (nents) {
+ unsigned int end_offset = dma_offset + dma_len;
- /* catch brokenness in SCSI layer */
- ASSERT(startsg->length <= DMA_CHUNK_SIZE);
+ /* prev entry must end on a page boundary */
+ if (end_offset & IOVP_MASK)
+ break;
- /*
- ** First make sure current dma stream won't
- ** exceed DMA_CHUNK_SIZE if we coalesce the
- ** next entry.
- */
- if (((dma_len + dma_offset + startsg->length + ~IOVP_MASK) & IOVP_MASK) > DMA_CHUNK_SIZE)
+ /* next entry start on a page boundary? */
+ if (startsg->offset)
break;
/*
- ** Then look for virtually contiguous blocks.
- **
- ** append the next transaction?
+ ** make sure current dma stream won't exceed
+ ** DMA_CHUNK_SIZE if coalescing entries.
*/
- vaddr = (unsigned long) sba_sg_iova(startsg);
- if (vcontig_end == vaddr)
- {
- vcontig_len += sba_sg_len(startsg);
- vcontig_end += sba_sg_len(startsg);
- dma_len += sba_sg_len(startsg);
- sba_sg_buffer(startsg) = (char *)vaddr;
- sba_sg_iova(startsg) = 0;
- sba_sg_len(startsg) = 0;
- continue;
- }
+ if (((end_offset + startsg->length + ~IOVP_MASK)
+ & IOVP_MASK)
+ > DMA_CHUNK_SIZE)
+ break;
-#ifdef DEBUG_LARGE_SG_ENTRIES
- dump_run_sg = (vcontig_len > IOVP_SIZE);
-#endif
+ dma_len += sba_sg_len(startsg);
+ startsg++;
+ nents--;
+ dma_cnt++;
+ }
- /*
- ** Not virtually contigous.
- ** Terminate prev chunk.
- ** Start a new chunk.
- **
- ** Once we start a new VCONTIG chunk, dma_offset
- ** can't change. And we need the offset from the first
- ** chunk - not the last one. Ergo Successive chunks
- ** must start on page boundaries and dove tail
- ** with it's predecessor.
- */
- sba_sg_len(vcontig_sg) = vcontig_len;
+ ASSERT(dma_len <= DMA_CHUNK_SIZE);
- vcontig_sg = startsg;
- vcontig_len = sba_sg_len(startsg);
+ /* allocate IO Pdir resource.
+ ** returns index into (u64) IO Pdir array.
+ ** IOVA is formed from this.
+ */
+ pide = sba_alloc_range(ioc, dma_cnt << IOVP_SHIFT);
+ pdirp = &(ioc->pdir_base[pide]);
- /*
- ** 3) do the entries end/start on page boundaries?
- ** Don't update vcontig_end until we've checked.
- */
- if (DMA_CONTIG(vcontig_end, vaddr))
- {
- vcontig_end = vcontig_len + vaddr;
- dma_len += vcontig_len;
- sba_sg_buffer(startsg) = (char *)vaddr;
- sba_sg_iova(startsg) = 0;
- continue;
- } else {
- break;
- }
+ /* fill_pdir: write stream into IO Pdir */
+ while (dma_cnt--) {
+ sba_io_pdir_entry(pdirp, SG_ENT_PHYS_PAGE(startsg));
+ startsg++;
+ pdirp++;
}
- /*
- ** End of DMA Stream
- ** Terminate last VCONTIG block.
- ** Allocate space for DMA stream.
- */
- sba_sg_len(vcontig_sg) = vcontig_len;
- dma_len = (dma_len + dma_offset + ~IOVP_MASK) & IOVP_MASK;
- ASSERT(dma_len <= DMA_CHUNK_SIZE);
- sba_sg_iova(dma_sg) = (char *) (PIDE_FLAG
- | (sba_alloc_range(ioc, dma_len) << IOVP_SHIFT)
- | dma_offset);
+ /* "output" IOVA */
+ sba_sg_iova(dma_sg) = SBA_IOVA(ioc,
+ ((dma_addr_t) pide << IOVP_SHIFT),
+ dma_offset,
+ DEFAULT_DMA_HINT_REG(direction));
+ sba_sg_iova_len(dma_sg) = dma_len;
+
+ dma_sg++;
n_mappings++;
- }
+ } while (nents);
return n_mappings;
}
@@ -1223,7 +1074,7 @@
/**
* sba_map_sg - map Scatter/Gather list
- * @dev: instance of PCI owned by the driver that's asking.
+ * @dev: instance of PCI device owned by the driver that's asking.
* @sglist: array of buffer/length pairs
* @nents: number of entries in list
* @direction: R/W or both.
@@ -1234,42 +1085,46 @@
int direction)
{
struct ioc *ioc;
- int coalesced, filled = 0;
+ int filled = 0;
unsigned long flags;
#ifdef ALLOW_IOV_BYPASS
struct scatterlist *sg;
#endif
- DBG_RUN_SG("%s() START %d entries\n", __FUNCTION__, nents);
+ DBG_RUN_SG("%s() START %d entries, 0x%p,0x%x\n", __FUNCTION__, nents,
+ sba_sg_address(sglist), sba_sg_len(sglist));
+
ioc = GET_IOC(dev);
ASSERT(ioc);
#ifdef ALLOW_IOV_BYPASS
if (dev->dma_mask >= ioc->dma_mask) {
- for (sg = sglist ; filled < nents ; filled++, sg++){
- sba_sg_buffer(sg) = sba_sg_iova(sg);
- sba_sg_iova(sg) = (char *)virt_to_phys(sba_sg_buffer(sg));
+ for (sg = sglist ; filled < nents ; filled++, sg++) {
+ sba_sg_iova(sg) = virt_to_phys(sba_sg_address(sg));
+ sba_sg_iova_len(sg) = sba_sg_len(sg);
}
#ifdef CONFIG_PROC_FS
spin_lock_irqsave(&ioc->res_lock, flags);
ioc->msg_bypass++;
spin_unlock_irqrestore(&ioc->res_lock, flags);
#endif
+ DBG_RUN_SG("%s() DONE %d mappings bypassed\n", __FUNCTION__, filled);
return filled;
}
#endif
/* Fast path single entry scatterlists. */
if (nents == 1) {
- sba_sg_buffer(sglist) = sba_sg_iova(sglist);
sba_sg_iova(sglist) = (char *)sba_map_single(dev,
- sba_sg_buffer(sglist),
+ sba_sg_iova(sglist),
sba_sg_len(sglist), direction);
+ sba_sg_iova_len(sglist) = sba_sg_len(sglist);
#ifdef CONFIG_PROC_FS
/*
** Should probably do some stats counting, but trying to
** be precise quickly starts wasting CPU time.
*/
#endif
+ DBG_RUN_SG("%s() DONE 1 mapping\n", __FUNCTION__);
return 1;
}
@@ -1286,26 +1141,11 @@
#ifdef CONFIG_PROC_FS
ioc->msg_calls++;
#endif
-
- /*
- ** First coalesce the chunks and allocate I/O pdir space
- **
- ** If this is one DMA stream, we can properly map using the
- ** correct virtual address associated with each DMA page.
- ** w/o this association, we wouldn't have coherent DMA!
- ** Access to the virtual address is what forces a two pass algorithm.
- */
- coalesced = sba_coalesce_chunks(ioc, sglist, nents);
/*
- ** Program the I/O Pdir
- **
- ** map the virtual addresses to the I/O Pdir
- ** o dma_address will contain the pdir index
- ** o dma_len will contain the number of bytes to map
- ** o address contains the virtual address.
+ ** coalesce and program the I/O Pdir
*/
- filled = sba_fill_pdir(ioc, sglist, nents);
+ filled = sba_coalesce_chunks(ioc, sglist, nents, direction);
#ifdef ASSERT_PDIR_SANITY
if (sba_check_pdir(ioc,"Check after sba_map_sg()"))
@@ -1317,7 +1157,6 @@
spin_unlock_irqrestore(&ioc->res_lock, flags);
- ASSERT(coalesced == filled);
DBG_RUN_SG("%s() DONE %d mappings\n", __FUNCTION__, filled);
return filled;
@@ -1341,8 +1180,8 @@
unsigned long flags;
#endif
- DBG_RUN_SG("%s() START %d entries, %p,%x\n",
- __FUNCTION__, nents, sba_sg_buffer(sglist), sglist->length);
+ DBG_RUN_SG("%s() START %d entries, 0x%p,0x%x\n",
+ __FUNCTION__, nents, sba_sg_address(sglist), sba_sg_len(sglist));
ioc = GET_IOC(dev);
ASSERT(ioc);
@@ -1360,7 +1199,7 @@
while (sba_sg_len(sglist) && nents--) {
sba_unmap_single(dev, (dma_addr_t)sba_sg_iova(sglist),
- sba_sg_len(sglist), direction);
+ sba_sg_iova_len(sglist), direction);
#ifdef CONFIG_PROC_FS
/*
** This leaves inconsistent data in the stats, but we can't
@@ -1368,7 +1207,7 @@
** were coalesced to a single entry. The stats are fun,
** but speed is more important.
*/
- ioc->usg_pages += (((u64)sba_sg_iova(sglist) & ~IOVP_MASK) + sba_sg_len(sglist) + IOVP_SIZE - 1) >> PAGE_SHIFT;
+ ioc->usg_pages += (((u64)sba_sg_iova(sglist) & ~IOVP_MASK) + sba_sg_len(sglist) + IOVP_SIZE - 1) >> IOVP_SHIFT;
#endif
++sglist;
}
@@ -1429,12 +1268,12 @@
__FUNCTION__, ioc->ioc_hpa, iova_space_size>>20,
iov_order + PAGE_SHIFT, ioc->pdir_size);
- /* FIXME : DMA HINTs not used */
+ /* XXX DMA HINTs not used */
ioc->hint_shift_pdir = iov_order + PAGE_SHIFT;
ioc->hint_mask_pdir = ~(0x3 << (iov_order + PAGE_SHIFT));
- ioc->pdir_base =
- pdir_base = (void *) __get_free_pages(GFP_KERNEL, get_order(pdir_size));
+ ioc->pdir_base = pdir_base =
+ (void *) __get_free_pages(GFP_KERNEL, get_order(pdir_size));
if (NULL == pdir_base)
{
panic(__FILE__ ":%s() could not allocate I/O Page Table\n", __FUNCTION__);
@@ -1452,20 +1291,8 @@
/* build IMASK for IOC and Elroy */
iova_space_mask = 0xffffffff;
- iova_space_mask <<= (iov_order + PAGE_SHIFT);
+ iova_space_mask <<= (iov_order + IOVP_SHIFT);
-#ifdef CONFIG_IA64_HP_PROTO
- /*
- ** REVISIT - this is a kludge, but we won't be supporting anything but
- ** zx1 2.0 or greater for real. When fw is in shape, ibase will
- ** be preprogrammed w/ the IOVA hole base and imask will give us
- ** the size.
- */
- if ((sba_dev->hw_rev & 0xFF) < 0x20) {
- DBG_INIT("%s() Found SBA rev < 2.0, setting IOVA base to 0. This device will not be supported in the future.\n", __FUNCTION__);
- ioc->ibase = 0x0;
- } else
-#endif
ioc->ibase = READ_REG(ioc->ioc_hpa + IOC_IBASE) & 0xFFFFFFFEUL;
ioc->imask = iova_space_mask; /* save it */
@@ -1474,7 +1301,7 @@
__FUNCTION__, ioc->ibase, ioc->imask);
/*
- ** FIXME: Hint registers are programmed with default hint
+ ** XXX DMA HINT registers are programmed with default hint
** values during boot, so hints should be sane even if we
** can't reprogram them the way drivers want.
*/
@@ -1487,8 +1314,8 @@
*/
ioc->imask |= 0xFFFFFFFF00000000UL;
- /* Set I/O PDIR Page size to system page size */
- switch (PAGE_SHIFT) {
+ /* Set I/O Pdir page size to system page size */
+ switch (IOVP_SHIFT) {
case 12: /* 4K */
tcnfg = 0;
break;
@@ -1636,7 +1463,7 @@
res_word = (int)(index / BITS_PER_LONG);
mask = 0x1UL << (index - (res_word * BITS_PER_LONG));
res_ptr[res_word] |= mask;
- sba_dev->ioc[i].pdir_base[PDIR_INDEX(reserved_iov)] = (0x80000000000000FFULL | reserved_iov);
+ sba_dev->ioc[i].pdir_base[PDIR_INDEX(reserved_iov)] = (SBA_VALID_MASK | reserved_iov);
}
}
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/arch/ia64/lib/swiotlb.c linux-sg/arch/ia64/lib/swiotlb.c
--- linux-2.5.18-ia64-020530.orig/arch/ia64/lib/swiotlb.c Tue Jun 4 11:24:07 2002
+++ linux-sg/arch/ia64/lib/swiotlb.c Fri Jun 7 11:00:04 2002
@@ -415,18 +415,20 @@
swiotlb_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
{
void *addr;
+ unsigned long pci_addr;
int i;
if (direction == PCI_DMA_NONE)
BUG();
for (i = 0; i < nelems; i++, sg++) {
- sg->orig_address = SG_ENT_VIRT_ADDRESS(sg);
- if ((SG_ENT_PHYS_ADDRESS(sg) & ~hwdev->dma_mask) != 0) {
- addr = map_single(hwdev, sg->orig_address, sg->length, direction);
- sg->page = virt_to_page(addr);
- sg->offset = (u64) addr & ~PAGE_MASK;
- }
+ addr = SG_ENT_VIRT_ADDRESS(sg);
+ pci_addr = virt_to_phys(addr);
+ if ((pci_addr & ~hwdev->dma_mask) != 0)
+ sg->dma_address = map_single(hwdev, addr, sg->length, direction);
+ else
+ sg->dma_address = pci_addr;
+ sg->dma_length = sg->length;
}
return nelems;
}
@@ -444,12 +446,10 @@
BUG();
for (i = 0; i < nelems; i++, sg++)
- if (sg->orig_address != SG_ENT_VIRT_ADDRESS(sg)) {
- unmap_single(hwdev, SG_ENT_VIRT_ADDRESS(sg), sg->length, direction);
- sg->page = virt_to_page(sg->orig_address);
- sg->offset = (u64) sg->orig_address & ~PAGE_MASK;
- } else if (direction == PCI_DMA_FROMDEVICE)
- mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->length);
+ if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+ unmap_single(hwdev, sg->dma_address, sg->dma_length, direction);
+ else if (direction == PCI_DMA_FROMDEVICE)
+ mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
}
/*
@@ -468,14 +468,14 @@
BUG();
for (i = 0; i < nelems; i++, sg++)
- if (sg->orig_address != SG_ENT_VIRT_ADDRESS(sg))
- sync_single(hwdev, SG_ENT_VIRT_ADDRESS(sg), sg->length, direction);
+ if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+ sync_single(hwdev, sg->dma_address, sg->dma_length, direction);
}
unsigned long
swiotlb_dma_address (struct scatterlist *sg)
{
- return SG_ENT_PHYS_ADDRESS(sg);
+ return sg->dma_address;
}
/*
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/include/asm-ia64/pci.h linux-sg/include/asm-ia64/pci.h
--- linux-2.5.18-ia64-020530.orig/include/asm-ia64/pci.h Thu Jun 6 17:03:56 2002
+++ linux-sg/include/asm-ia64/pci.h Fri Jun 7 11:04:29 2002
@@ -90,7 +90,7 @@
/* Return the index of the PCI controller for device PDEV. */
#define pci_controller_num(PDEV) (0)
-#define sg_dma_len(sg) ((sg)->length)
+#define sg_dma_len(sg) ((sg)->dma_length)
#define HAVE_PCI_MMAP
extern int pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma,
diff -u -r -X /home/helgaas/exclude linux-2.5.18-ia64-020530.orig/include/asm-ia64/scatterlist.h linux-sg/include/asm-ia64/scatterlist.h
--- linux-2.5.18-ia64-020530.orig/include/asm-ia64/scatterlist.h Fri May 24 19:55:16 2002
+++ linux-sg/include/asm-ia64/scatterlist.h Fri Jun 7 11:00:04 2002
@@ -7,12 +7,12 @@
*/
struct scatterlist {
- char *orig_address; /* for use by swiotlb */
-
- /* These two are only valid if ADDRESS member of this struct is NULL. */
struct page *page;
unsigned int offset;
unsigned int length; /* buffer length */
+
+ dma_addr_t dma_address;
+ unsigned int dma_length;
};
#define ISA_DMA_THRESHOLD (~0UL)
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (123 preceding siblings ...)
2002-06-07 22:07 ` Bjorn Helgaas
@ 2002-06-09 10:34 ` Steffen Persvold
2002-06-14 3:12 ` Peter Chubb
` (90 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Steffen Persvold @ 2002-06-09 10:34 UTC (permalink / raw)
To: linux-ia64
On Fri, 7 Jun 2002, Bjorn Helgaas wrote:
> > -- With CONFIG_GENERIC, sba_iommu.c still doesn't compile
> > because of the removal of the address field from the scatterlist.
>
> Here's a patch to make sba_iommu work again.
>
> I added dma_address and dma_length to struct scatterlist and
> removed orig_address. This brings IA64 in line with most other
> architectures, but required a few changes to swiotlb.
>
> Grant Grundler did the sba_iommu.c updates.
>
> Note that this isn't *quite* enough to make the generic kernel
> work on ZX1 boxes, because the ACPI in 2.5.18 barfs on a
> ZX1 _CRS method.
>
> David, I've tested both the swiotlb (on i2000 and ZX1) and
> sba_iommu (on ZX1, with a kludge for the ACPI problem),
> and they seem to work fine.
Hi Bjorn,
Do you have any estimate when this kludge for the ACPI problem on ZX1 is
available to the public ? Do the DIG version work on ZX1 now, or does
it barf on the same ACPI problem ?
Regards,
--
Steffen Persvold | Scalable Linux Systems | Try out the world's best
mailto:sp@scali.com | http://www.scali.com | performing MPI implementation:
Tel: (+47) 2262 8950 | Olaf Helsets vei 6 | - ScaMPI 1.13.8 -
Fax: (+47) 2262 8951 | N0621 Oslo, NORWAY | >320MBytes/s and <4uS latency
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.5.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (124 preceding siblings ...)
2002-06-09 10:34 ` Steffen Persvold
@ 2002-06-14 3:12 ` Peter Chubb
2002-06-22 8:57 ` [Linux-ia64] kernel update (relative to 2.4.18) David Mosberger
` (89 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-06-14 3:12 UTC (permalink / raw)
To: linux-ia64
+/*
+ * Note: the MAP_NR_*() macro can't use __pa() because MAP_NR_*(X) MUST
+ * map to something >= max_mapnr if X is outside the identity mapped
+ * kernel space.
+ */
+
+/*
+ * The dense variant can be used as long as the size of memory holes isn't
+ * very big.
+ */
+#define MAP_NR_DENSE(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
+
Bjorn,
Given that the only current use of MAP_NR_DENSE is in
machvec.c: map_nr_dense() why not open-code it there, instead of using
the MACRO? (So
unsigned long
map_nr_dense(unsigned long addr) {
return (addr - PAGE_OFFSET) >> PAGE_SHIFT;
}
)
And move the comment there too instead of in page.h.
It might make things a little cleaner that way.
Peter C
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (125 preceding siblings ...)
2002-06-14 3:12 ` Peter Chubb
@ 2002-06-22 8:57 ` David Mosberger
2002-06-22 9:25 ` David Mosberger
` (88 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-06-22 8:57 UTC (permalink / raw)
To: linux-ia64
The latest ia64 kernel patch for 2.4.18 is at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
linux-2.4.18-ia64-020622.diff.gz
There are some important bug fixes in this patch:
- munmap near end of per-region mappable address space no
longer causes virtual memory corruption
- the kernel used to inherit all PSR bits across fork/exec,
which is a Bad Thing; this has been fixed
A more detailed changelog is below.
The kernel should work well both on Itanium 1 and Itanium 2. One
important change is that the kernel now ignores all memory that lies
on a granule ("large kernel page") that contains a hole or something
other than normal memory (e.g., memory-mapped I/O registers). The net
effect of this is that on most of today's machines, about 64MB of
memory get thrown away because of the stupid <64MB legacy memory
region (which contains a hole due to ISA BIOS space). With 1GB of
memory, 64MB is about 6% which is more than some of you might be
willing to throw away. If this is an issue, you can select a granule
size of 16MB (CONFIG_IA64_GRANULE_16MB) and then correspondingly less
memory will be thrown away. Reasonably well-designed (i.e., 100%
legacy-free machines) won't throw away any memory, so in the long
term, this issue should go away. In the meantime, buy some stock of
DRAM companies... ;-)
Support for the hp zx1-based machines should be pretty much complete
now. The virtual memory map works well, as does the 64KB page size.
Two formerly missing drivers (bcm5700 gig ethernet and fm801 sound
chip) have been added.
I have tested this kernel on hp zx1, big sur, and hp ski simulator.
As usual, your mileage will vary. Especially the new code for
ignoring memory due to holes is a potential source of nervousness. I
tried to be careful about getting all the corner-cases correct, but be
sure to test this out before letting it loose on someone else...
Enjoy,
--david
PS: Sorry, still no perfmon_mckinley.h. Hopefully it won't be long before
we can release that...
PPS: I'll be at the kernel workshop/OLS next week, so I won't be able to
do much while on travel.
ChangeLog
- Install NaT-page at address zero to speed up speculation
across NULL pointers (Ken Chen).
- Update hp zx1 specific code (Bjorn Helgaas, Alex Williamson,
Matthew Willcox)
- ACPI sync up (Matthew Willcox, Bjorn Helgaas)
- Ignore memory that's on a granule (large kernel page) that contains
holes or non-memory regions (e.g., uncached/writecombine regions).
- Widen I/O SAPIC base irq numbers to 32 bits (KOCHI Takayoshi)
- Sync with latest perfmon (Stephane Eranian)
- Stop inheriting certain PSR bits (e.g., PSR.DB) across
clone2() and fork() (Stephane Eranian & yours truly)
- Don't clear siginfo.si_addr for SIGTRAP caused by debug
breakpoint
- McKinley-tuned copy_user(), merged with McKinley-tuned memcpy()
(Ken Chen)
- Fix linker script so GOT gets properly aligned (not just
__gp). This avoids spurious "gp out-of-range" errors.
- Clean up fusion driver (based on suggestion by Keith Owens &
Grant Grundler)
- Add bcm5700 gig ethernet driver
- Add preliminary driver for ForteMedia FM801 sound chip
(Martin Petersen).
- Fix bootmem so machine boots much faster if there are large
holes in the physical memory map.
- Fix vm-corruption due to munmap() near end of mappable address space
- Finish virtual mem_map integration.
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (126 preceding siblings ...)
2002-06-22 8:57 ` [Linux-ia64] kernel update (relative to 2.4.18) David Mosberger
@ 2002-06-22 9:25 ` David Mosberger
2002-06-22 10:05 ` Steffen Persvold
` (87 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-06-22 9:25 UTC (permalink / raw)
To: linux-ia64
The latest patch is available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4 in file:
linux-2.4.18-ia64-020622.diff.gz
This kernel should work well both on Itanium 1 and Itanium 2,
including hp zx1-based machines. 64KB page size should also work well
as does the virtual mem_map, which is needed for zx1-based machines
with >1GB of memory. This patch also fixes the
munmap-near-end-of-mappable-space bug.
IMPORTANT: To avoid memory attribute aliasing issues and the risk of
triggering prefetches on memory holes, the kernel now ignores all
memory inside granules ("large kernel pages") that contain holes or
something other than normal memory (e.g., memory-mapped I/O regions).
On today's systems, this means that the kernel tends to throw away
~64MB of memory (or about 6% on a 1GB machine). If you want to
minimize the amount of memory being thrown away, you can choose a
granule size of 16MB instead (this will increase alternate TLB miss
handling overhead, however).
Enjoy,
--david
PS: I'll be at the kernel workshop/OLS next week, so don't expect quick
responses.
ChangeLog
- Install NaT-page at address zero to speed up speculation
across NULL pointers (Ken Chen).
- Update hp zx1 specific code (Bjorn Helgaas, Alex Williamson,
Matthew Willcox)
- ACPI sync up (Matthew Willcox, Bjorn Helgaas)
- Ignore memory that's on a granule (large kernel page) that contains
holes or non-memory regions (e.g., uncached/writecombine regions).
- Widen I/O SAPIC base irq numbers to 32 bits (KOCHI Takayoshi)
- Sync with latest perfmon (Stephane Eranian)
- Stop inheriting certain PSR bits (e.g., PSR.DB) across clone2() and
fork() (Stephane Eranian & yours truly)
- Don't clear siginfo.si_addr for SIGTRAP caused by debug breakpoint
- McKinley-tuned copy_user(), merged with McKinley-tuned memcpy()
(Ken Chen)
- Fix linker script so GOT gets properly aligned (not just __gp).
This avoids spurious "gp out-of-range" errors.
- Clean up fusion driver (based on suggestion by Keith Owens &
Grant Grundler)
- Add bcm5700 gig ethernet driver
- Add preliminary driver for ForteMedia FM801 sound chip (Martin
Petersen).
- Fix bootmem so machine boots much faster if there are large holes in
the physical memory map.
- Fix vm-corruption due to munmap() near end of mappable address space
- Finish virtual mem_map integration.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (127 preceding siblings ...)
2002-06-22 9:25 ` David Mosberger
@ 2002-06-22 10:05 ` Steffen Persvold
2002-06-22 19:03 ` David Mosberger
` (86 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Steffen Persvold @ 2002-06-22 10:05 UTC (permalink / raw)
To: linux-ia64
On Sat, 22 Jun 2002, David Mosberger wrote:
> The latest ia64 kernel patch for 2.4.18 is at
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
>
> linux-2.4.18-ia64-020622.diff.gz
>
> There are some important bug fixes in this patch:
>
> - munmap near end of per-region mappable address space no
> longer causes virtual memory corruption
> - the kernel used to inherit all PSR bits across fork/exec,
> which is a Bad Thing; this has been fixed
>
Hi David,
Have you tried CONFIG_IA64_GENERIC=y ? This is what I get :
gcc -D__KERNEL__ -I/usr/src/linux-2.4.18-020622/include -Wall
-Wstrict-prototypes -Wno-trigraphs -g -O2 -fomit-frame-pointer
-fno-strict-aliasing -fno-common -pipe -ffixed-r13
-mfixed-range=f10-f15,f32-f127 -falign-functions=32 -mconstant-gp
-D__KERNEL__ -I/usr/src/linux-2.4.18-020622/include -c -o bootloader.o
bootloader.c
In file included from bootloader.c:54:
../kernel/fw-emu.c:246:9: #error Not implemented yet...
../kernel/fw-emu.c:257:9: #error Not implemented yet...
bootloader.c: In function `cons_write':
bootloader.c:62: warning: implicit declaration of function `ssc'
make[1]: *** [bootloader.o] Error 1
make[1]: Leaving directory `/usr/src/linux-2.4.18-020622/arch/ia64/boot'
make: *** [boot] Error 2
Seems to be related to efi_get_time() and efi_reset_system() for all other
platforms than CONFIG_IA64_HP_SIM. Is this intended ?
Regards,
--
Steffen Persvold | Scalable Linux Systems | Try out the world's best
mailto:sp@scali.com | http://www.scali.com | performing MPI implementation:
Tel: (+47) 2262 8950 | Olaf Helsets vei 6 | - ScaMPI 1.13.8 -
Fax: (+47) 2262 8951 | N0621 Oslo, NORWAY | >320MBytes/s and <4uS latency
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (128 preceding siblings ...)
2002-06-22 10:05 ` Steffen Persvold
@ 2002-06-22 19:03 ` David Mosberger
2002-06-22 19:33 ` Andreas Schwab
` (85 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-06-22 19:03 UTC (permalink / raw)
To: linux-ia64
>>>>> On Sat, 22 Jun 2002 12:05:50 +0200 (CEST), Steffen Persvold <sp@scali.com> said:
Steffen> Have you tried CONFIG_IA64_GENERIC=y ?
No, I don't test GENERIC myself and need the help of others for
testing all reasonable configurations. Patches welcome, of course.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (129 preceding siblings ...)
2002-06-22 19:03 ` David Mosberger
@ 2002-06-22 19:33 ` Andreas Schwab
2002-07-08 22:08 ` Kimio Suganuma
` (84 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2002-06-22 19:33 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@napali.hpl.hp.com> writes:
|> The latest ia64 kernel patch for 2.4.18 is at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
|>
|> linux-2.4.18-ia64-020622.diff.gz
This is undefined (from drivers/acpi/tables.c):
+ entry = (acpi_table_entry_header *)
+ ((unsigned long) entry += entry->length);
entry is modified twice without an intervening sequence point.
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (130 preceding siblings ...)
2002-06-22 19:33 ` Andreas Schwab
@ 2002-07-08 22:08 ` Kimio Suganuma
2002-07-08 22:14 ` David Mosberger
` (83 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Kimio Suganuma @ 2002-07-08 22:08 UTC (permalink / raw)
To: linux-ia64
Hi David,
> ChangeLog
> - Fix bootmem so machine boots much faster if there are large holes in
> the physical memory map.
This change seemed to cause a problem on my machine.
When a system has physical memory >4G, IO TLB is allocated above 4G
like this;
Placing software IO TLB between 0xe00000010222c000 - 0xe00000010242c000
and then swiotlb_map_single() caused PANIC.
IO TLB is allocated by alloc_bootmem_low_pages(), and this is just a
macro calling __alloc_bootmem().
#define alloc_bootmem_low_pages(x) \
__alloc_bootmem((x), PAGE_SIZE, 0)
Once alloc_bootmem() is called, "last_success" is set to an address > 4G,
and alloc_bootmem_low_pages() then starts allocating memory > 4G. :-(
alloc_bootmem_low_pages() doesn't guarantee that it allocates memory < 4G,
but swiotlb expects it to allocate very low memory.
Here is a patch for mm/bootmem.c to fix the problem.
Any comment?
*** bootmem.c.bk Mon Jul 8 13:59:29 2002
--- bootmem.c Mon Jul 8 14:00:37 2002
***************
*** 168,179 ****
if (goal && (goal >= bdata->node_boot_start) &&
((goal >> PAGE_SHIFT) < bdata->node_low_pfn)) {
preferred = goal - bdata->node_boot_start;
} else
preferred = 0;
- if (last_success >= preferred)
- preferred = last_success;
-
preferred = ((preferred + align - 1) & ~(align - 1)) >> PAGE_SHIFT;
preferred += offset;
areasize = (size+PAGE_SIZE-1)/PAGE_SIZE;
--- 168,179 ----
if (goal && (goal >= bdata->node_boot_start) &&
((goal >> PAGE_SHIFT) < bdata->node_low_pfn)) {
preferred = goal - bdata->node_boot_start;
+
+ if (last_success >= preferred)
+ preferred = last_success;
} else
preferred = 0;
preferred = ((preferred + align - 1) & ~(align - 1)) >> PAGE_SHIFT;
preferred += offset;
areasize = (size+PAGE_SIZE-1)/PAGE_SIZE;
Regards,
Kimi
--
Kimio Suganuma <k-suganuma@mvj.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (131 preceding siblings ...)
2002-07-08 22:08 ` Kimio Suganuma
@ 2002-07-08 22:14 ` David Mosberger
2002-07-20 7:08 ` [Linux-ia64] kernel update (relative to v2.4.18) David Mosberger
` (82 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-08 22:14 UTC (permalink / raw)
To: linux-ia64
>>>>> On Mon, 08 Jul 2002 15:08:17 -0700, Kimio Suganuma <k-suganuma@mvj.biglobe.ne.jp> said:
>> - Fix bootmem so machine boots much faster if there are large
>> holes in the physical memory map.
Kimio> This change seemed to cause a problem on my machine. When a
Kimio> system has physical memory >4G, IO TLB is allocated above 4G
Kimio> like this;
Kimio> Placing software IO TLB between 0xe00000010222c000 -
Kimio> 0xe00000010242c000
Kimio> and then swiotlb_map_single() caused PANIC.
Kimio> IO TLB is allocated by alloc_bootmem_low_pages(), and this is
Kimio> just a macro calling __alloc_bootmem().
Kimio> #define alloc_bootmem_low_pages(x) \ __alloc_bootmem((x),
Kimio> PAGE_SIZE, 0)
Kimio> Once alloc_bootmem() has been called, "last_success" is set to
Kimio> an address above 4G, and from then on alloc_bootmem_low_pages()
Kimio> allocates memory above 4G as well. :-(
Kimio> alloc_bootmem_low_pages() doesn't guarantee an allocation below
Kimio> 4G, but swiotlb expects it to return very low memory.
Kimio> Here is a patch for mm/bootmem.c to fix the problem. Any
Kimio> comment?
Good catch. Thanks for the fix!
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (132 preceding siblings ...)
2002-07-08 22:14 ` David Mosberger
@ 2002-07-20 7:08 ` David Mosberger
2002-07-22 11:54 ` Andreas Schwab
` (81 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-20 7:08 UTC (permalink / raw)
To: linux-ia64
The latest ia64 kernel patch is at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
linux-2.4.18-ia64-020719.diff.gz
This patch contains mostly bug fixes for Itanium 2 machines and a few
additional drivers. The problem reported by Alex, where the
McKinley-tuned memcpy() caused VGA console corruption, has been worked
around by not using memcpy() at all for accessing the framebuffer
(using memcpy() there was arguably wrong in the first place).
However, I do not yet fully understand what triggered this problem;
it would be good to make sure it really isn't a bug in the new memcpy().
Ken, if you have a chance to look into it, I'd be interested to know
what you find.
I tried this kernel on the Intel Tiger platform, the hp Ski simulator,
and hp zx1 platform and it seems to work quite well.
Enjoy,
--david
ChangeLog
- Make "xconfig" work again (Keith Owens, Khalid Aziz)
- Add HCDP ACPI serial line support (Khalid Aziz)
- Drop extraneous set_rte() call from iosapic.c (Alex Williamson)
- Fix perfmon SMP initialization ordering bug (Stephane Eranian)
- Include perfmon_mckinley.h now that it's public info (Stephane Eranian)
- Implement pcibios_enable_device for PCI hotplug (Takayoshi Kochi)
- Fix McKinley versions of copy_user()/memcpy() (Ken Chen)
- Fix two small kernel unwinder bugs (based on patch by Richard Henderson)
- Fix mem.c driver to skip over memory-holes (Alex Williamson, I think)
- Include eepro1000 and Tigon3 support (various, plus bug fixes by
Grant Grundler for these and other gigE cards)
- Update FM801 sound driver (Martin Petersen)
- Fix flush_tlb_pgtables() to avoid denial-of-service attacks (Arun
Sharma)
- Add PCI ID for Quadro4 card
- Fix Radeon and 3dfx fbdev code to work on McKinley
- Make VGA framebuffer code use stupid word-by-word copy routines, not
regular memcpy() routines
- Fix page allocator to not confuse memory holes with real memory
(John Marvin & me)
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (133 preceding siblings ...)
2002-07-20 7:08 ` [Linux-ia64] kernel update (relative to v2.4.18) David Mosberger
@ 2002-07-22 11:54 ` Andreas Schwab
2002-07-22 12:31 ` Keith Owens
` (80 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2002-07-22 11:54 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@napali.hpl.hp.com> writes:
|> The latest ia64 kernel patch is at
|> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
|>
|> linux-2.4.18-ia64-020719.diff.gz
Why are you explicitly disabling CONFIG_BLK_DEV_LOOP, CONFIG_BLK_DEV_NBD
and CONFIG_BLK_DEV_RAM?
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (134 preceding siblings ...)
2002-07-22 11:54 ` Andreas Schwab
@ 2002-07-22 12:31 ` Keith Owens
2002-07-22 12:34 ` Andreas Schwab
` (79 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2002-07-22 12:31 UTC (permalink / raw)
To: linux-ia64
On Mon, 22 Jul 2002 13:54:36 +0200,
Andreas Schwab <schwab@suse.de> wrote:
>Why are you explicitly disabling CONFIG_BLK_DEV_LOOP, CONFIG_BLK_DEV_NBD
>and CONFIG_BLK_DEV_RAM?
make xconfig was complaining that those variables appeared elsewhere in
the menu but were not being set in one branch of the menu tree.
Explicitly setting them to "n" removes the xconfig warnings; the
variables were already implicitly set to n.
OTOH, if they should be selected by the user then they need to appear
in both branches of the menu tree. Looking at the code it could go
either way. David - was there a reason that these options were omitted
when "$CONFIG_IA64_HP_SIM" = "n"?
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (135 preceding siblings ...)
2002-07-22 12:31 ` Keith Owens
@ 2002-07-22 12:34 ` Andreas Schwab
2002-07-22 12:54 ` Keith Owens
` (78 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2002-07-22 12:34 UTC (permalink / raw)
To: linux-ia64
Keith Owens <kaos@ocs.com.au> writes:
|> On Mon, 22 Jul 2002 13:54:36 +0200,
|> Andreas Schwab <schwab@suse.de> wrote:
|> >Why are you explicitly disabling CONFIG_BLK_DEV_LOOP, CONFIG_BLK_DEV_NBD
|> >and CONFIG_BLK_DEV_RAM?
|>
|> make xconfig was complaining that those variables appeared elsewhere in
|> the menu but were not being set in one branch of the menu tree.
|> Explicitly setting them to "n" removes the xconfig warnings, the
|> variables were already implicitly set to n.
Where? Why should an Itanium configuration not be able to use a loop
device or a ram disk???
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (136 preceding siblings ...)
2002-07-22 12:34 ` Andreas Schwab
@ 2002-07-22 12:54 ` Keith Owens
2002-07-22 18:05 ` David Mosberger
` (77 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2002-07-22 12:54 UTC (permalink / raw)
To: linux-ia64
On Mon, 22 Jul 2002 14:34:30 +0200,
Andreas Schwab <schwab@suse.de> wrote:
>Why should an Itanium configuration not be able to use a loop
>device or a ram disk???
You are right; I only fixed the symptom, not the cause. The real
problem looks like an xconfig bug. I will dig into it tomorrow, when I
have access to an ia64.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (137 preceding siblings ...)
2002-07-22 12:54 ` Keith Owens
@ 2002-07-22 18:05 ` David Mosberger
2002-07-22 23:54 ` Kimio Suganuma
` (76 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-22 18:05 UTC (permalink / raw)
To: linux-ia64
>>>>> On Mon, 22 Jul 2002 13:54:36 +0200, Andreas Schwab <schwab@suse.de> said:
Andreas> David Mosberger <davidm@napali.hpl.hp.com> writes: |> The
Andreas> latest ia64 kernel patch is at |>
Andreas> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in
Andreas> file: |> |> linux-2.4.18-ia64-020719.diff.gz
Andreas> Why are you explicitly disabling CONFIG_BLK_DEV_LOOP,
Andreas> CONFIG_BLK_DEV_NBD and CONFIG_BLK_DEV_RAM?
I generally trust Keith's build-environment related fixes. Turned out
to be a mistake in this particular case. I just uploaded a new
patch:
linux-2.4.18-ia64-020722.diff.gz
It goes back to the more drastic solution of turning off "xconfig"
altogether (it tells you to use "menuconfig" instead). I don't care
about "xconfig" myself, but of course will welcome patches that fix
the problems for real.
The new patch also has the additional #define's needed for the tg3 driver.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (138 preceding siblings ...)
2002-07-22 18:05 ` David Mosberger
@ 2002-07-22 23:54 ` Kimio Suganuma
2002-07-23 1:00 ` Keith Owens
` (75 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Kimio Suganuma @ 2002-07-22 23:54 UTC (permalink / raw)
To: linux-ia64
Hi,
On Sat, 20 Jul 2002 00:08:22 -0700
David Mosberger <davidm@napali.hpl.hp.com> wrote:
> The latest ia64 kernel patch is at
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
>
> linux-2.4.18-ia64-020719.diff.gz
With this version, my system doesn't boot without increasing the
swiotlb size; it booted fine with the default swiotlb in the previous
version. I found that the eepro100 driver tries to grab
0x610 bytes * 1024 = 0x184000 bytes of swiotlb space at boot, which
exhausts the swiotlb.
I guess the problem might be caused by this modification;
> - Include eepro1000 and Tigon3 support (various, plus bug fixes by
> Grant Grundler for these and other gigE cards)
In linux-2.4.18/drivers/net/eepro100.c, RX_RING_SIZE is changed to 1024!!
-------------
/* A few values that may be tweaked. */
/* The ring sizes should be a power of two for efficiency. */
-#define TX_RING_SIZE 32
-#define RX_RING_SIZE 32
+#define TX_RING_SIZE 64
+#define RX_RING_SIZE 1024
--------------
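As a sanity check on the numbers above, here is a trivial C sketch of
the footprint arithmetic (rx_bounce_bytes is a hypothetical helper for
illustration, not eepro100 driver code):

```c
#include <stdint.h>

/* Each RX descriptor pins one receive buffer in the software IO TLB,
 * so the bounce-buffer footprint grows linearly with RX_RING_SIZE. */
static uint64_t rx_bounce_bytes(uint64_t buf_bytes, uint64_t ring_size)
{
    return buf_bytes * ring_size;
}
```

With the 0x610-byte buffers quoted above, a ring of 1024 pins 0x184000
bytes of swiotlb space, whereas the old ring of 32 needed only 0xc200.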
If this modification is here to stay, I think the default swiotlb size
should be increased. Any comments?
Regards,
Kimi
--
Kimio Suganuma <k-suganuma@mvj.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (139 preceding siblings ...)
2002-07-22 23:54 ` Kimio Suganuma
@ 2002-07-23 1:00 ` Keith Owens
2002-07-23 1:10 ` David Mosberger
` (74 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2002-07-23 1:00 UTC (permalink / raw)
To: linux-ia64
On Mon, 22 Jul 2002 11:05:50 -0700,
David Mosberger <davidm@napali.hpl.hp.com> wrote:
>I just uploaded a new patch:
>
> linux-2.4.18-ia64-020722.diff.gz
>
>It goes back to the more drastic solution of turning off "xconfig"
>altogether (it tells you to use "menuconfig" instead). I don't care
>about "xconfig" myself, but of course will welcome patches that fix
>the problems for real.
>
>The new patch also has the additional #define's needed for the tg3 driver.
020722 is identical to 020719, except that the Changelog at the start
has been removed. If you can wait for a couple of hours, I will have
the real fix for the ia64 xconfig problem.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (140 preceding siblings ...)
2002-07-23 1:00 ` Keith Owens
@ 2002-07-23 1:10 ` David Mosberger
2002-07-23 1:21 ` Matthew Wilcox
` (73 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 1:10 UTC (permalink / raw)
To: linux-ia64
>>>>> On Mon, 22 Jul 2002 16:54:23 -0700, Kimio Suganuma <k-suganuma@mvj.biglobe.ne.jp> said:
Kimio> Hi, On Sat, 20 Jul 2002 00:08:22 -0700 David Mosberger
Kimio> <davidm@napali.hpl.hp.com> wrote:
>> The latest ia64 kernel patch is at
>> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/ in file:
>>
>> linux-2.4.18-ia64-020719.diff.gz
Kimio> In this version, my system doesn't boot without increasing
Kimio> swiotlb. It was able to boot with default swiotlb in the
Kimio> previous version.
Kimio> I found eepro100 driver was trying to get swiotlb of (0x610
Kimio> bytes * 1024 = 0x184000) at booting, and this caused swiotlb
Kimio> shortage.
Kimio> I guess the problem might be caused by this modification;
>> - Include eepro1000 and Tigon3 support (various, plus bug fixes
>> by Grant Grundler for these and other gigE cards)
Kimio> In linux-2.4.18/drivers/net/eepro100.c, RX_RING_SIZE is
Kimio> changed to 1024!!
Argh, I'm sorry about that. This wasn't my intention. I think I
misread e100 and e1000 and picked that one up by accident.
I'll fix that and update the 2.4.18 patch shortly. I would like to
have a final 2.4.18 patch without silly issues like these. (Assuming
2.4.19 shows up relatively soon).
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (141 preceding siblings ...)
2002-07-23 1:10 ` David Mosberger
@ 2002-07-23 1:21 ` Matthew Wilcox
2002-07-23 1:28 ` David Mosberger
` (72 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Matthew Wilcox @ 2002-07-23 1:21 UTC (permalink / raw)
To: linux-ia64
On Mon, Jul 22, 2002 at 06:10:11PM -0700, David Mosberger wrote:
> Argh, I'm sorry about that. This wasn't my intention. I think I
> misread e100 and e1000 and picked that one up by accident.
>
> I'll fix that and update the 2.4.18 patch shortly. I would like to
> have a final 2.4.18 patch without silly issues like these. (Assuming
> 2.4.19 shows up relatively soon).
Well, the eepro100 patch does fix a real problem on those cards. Is
there a real problem with increasing the default swiotlb size?
--
Revolutions do not require corporate support.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (142 preceding siblings ...)
2002-07-23 1:21 ` Matthew Wilcox
@ 2002-07-23 1:28 ` David Mosberger
2002-07-23 1:35 ` Grant Grundler
` (71 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 1:28 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 23 Jul 2002 02:21:39 +0100, Matthew Wilcox <willy@debian.org> said:
Matthew> On Mon, Jul 22, 2002 at 06:10:11PM -0700, David Mosberger
Matthew> wrote:
>> Argh, I'm sorry about that. This wasn't my intention. I think I
>> misread e100 and e1000 and picked that one up by accident.
>>
>> I'll fix that and update the 2.4.18 patch shortly. I would like
>> to have a final 2.4.18 patch without silly issues like these.
>> (Assuming 2.4.19 shows up relatively soon).
Matthew> well, the eepro100 patch does fix a real problem on those
Matthew> cards. is there a real problem with increasing the default
Matthew> swiotlb size?
My understanding is that the 1024 value is way bigger than it really
needs to be. I don't want to bloat the kernel for no good reason. If
someone presents some real data as to what needs to be used here, I'll
be happy to adjust things accordingly.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (143 preceding siblings ...)
2002-07-23 1:28 ` David Mosberger
@ 2002-07-23 1:35 ` Grant Grundler
2002-07-23 3:09 ` Keith Owens
` (70 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Grant Grundler @ 2002-07-23 1:35 UTC (permalink / raw)
To: linux-ia64
Kimio Suganuma wrote:
> I found eepro100 driver was trying to get swiotlb of
> (0x610 bytes * 1024 = 0x184000) at booting, and this caused
> swiotlb shortage.
>
> I guess the problem might be caused by this modification;
>
> > - Include eepro1000 and Tigon3 support (various, plus bug fixes by
> > Grant Grundler for these and other gigE cards)
Later versions of the eepro100 patch used an RX_RING_SIZE of 64, so
I'd guess it's safe to use a value smaller than 1024.
grant
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (144 preceding siblings ...)
2002-07-23 1:35 ` Grant Grundler
@ 2002-07-23 3:09 ` Keith Owens
2002-07-23 5:04 ` David Mosberger
` (69 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2002-07-23 3:09 UTC (permalink / raw)
To: linux-ia64
On Mon, 22 Jul 2002 11:05:50 -0700,
David Mosberger <davidm@napali.hpl.hp.com> wrote:
>It goes back to the more drastic solution of turning off "xconfig"
>altogether (it tells you to use "menuconfig" instead). I don't care
>about "xconfig" myself, but of course will welcome patches that fix
>the problems for real.
Memo to self: xconfig is a can of worms, try to avoid working on it.
The current configuration mini-language (CML1) has lots of hidden
restrictions. One of them is that you cannot reliably select a
variable in two different menus. arch/ia64/config.in is breaking this
rule in several places, some of which work, others do not. This patch -
* Indents arch/ia64/config.in so I can see what is dependent on HP_SIM.
The bulk of the patch is indent changes, if this is a problem I can
do another patch without the indent changes, but the result is much
less readable.
* Moves force setting of DEVFS for SN[12] to fs/Config.in.
* Removes dependency on CONFIG_DRM_AGP. That variable does not exist
and breaks xconfig for ia64. This is a generic 2.4.18/2.4.19-rc bug
and has been sent to Marcelo.
* Removes the attempt to set BLOCK variables from arch/ia64/config.in
and adds the dependency on HP_SIM to drivers/block/Config.in. This
is the only method that stands any chance of working for xconfig.
ACPI is already done this way for the same reason: the arch
dependencies are inside the ACPI menu, not in the arch menu.
* Moves force settings of ACPI variables based on HP_SIM to _after_
HP_SIM is actually defined. General config bug, not xconfig specific.
Index: 18.102/fs/Config.in
--- 18.102/fs/Config.in Wed, 23 Jan 2002 08:52:06 +1100 kaos (linux-2.4/m/b/39_Config.in 1.2.1.2.1.8 644)
+++ 18.102(w)/fs/Config.in Tue, 23 Jul 2002 12:39:57 +1000 kaos (linux-2.4/m/b/39_Config.in 1.2.1.2.1.8 644)
@@ -61,7 +61,11 @@ tristate 'OS/2 HPFS file system support'
bool '/proc file system support' CONFIG_PROC_FS
-dep_bool '/dev file system support (EXPERIMENTAL)' CONFIG_DEVFS_FS $CONFIG_EXPERIMENTAL
+if [ "$CONFIG_IA64_SGI_SN1" = "y" -o "$CONFIG_IA64_SGI_SN2" = "y" ]; then
+ dep_bool '/dev file system support (EXPERIMENTAL)' CONFIG_DEVFS_FS $CONFIG_EXPERIMENTAL
+else
+ define_bool CONFIG_DEVFS_FS y
+fi
dep_bool ' Automatically mount at boot' CONFIG_DEVFS_MOUNT $CONFIG_DEVFS_FS
dep_bool ' Debug devfs' CONFIG_DEVFS_DEBUG $CONFIG_DEVFS_FS
Index: 18.102/drivers/char/Config.in
--- 18.102/drivers/char/Config.in Mon, 22 Jul 2002 11:29:07 +1000 kaos (linux-2.4/b/c/3_Config.in 1.2.1.1.4.12.1.2 644)
+++ 18.102(w)/drivers/char/Config.in Tue, 23 Jul 2002 12:31:28 +1000 kaos (linux-2.4/b/c/3_Config.in 1.2.1.1.4.12.1.2 644)
@@ -211,7 +211,7 @@ if [ "$CONFIG_FTAPE" != "n" ]; then
fi
endmenu
-dep_tristate '/dev/agpgart (AGP Support)' CONFIG_AGP $CONFIG_DRM_AGP
+tristate '/dev/agpgart (AGP Support)' CONFIG_AGP
if [ "$CONFIG_AGP" != "n" ]; then
bool ' Intel 440LX/BX/GX and I815/I830M/I840/I850 support' CONFIG_AGP_INTEL
if [ "$CONFIG_IA64" != "n" ]; then
Index: 18.102/drivers/block/Config.in
--- 18.102/drivers/block/Config.in Mon, 17 Sep 2001 11:13:57 +1000 kaos (linux-2.4/c/c/37_Config.in 1.2.2.1 644)
+++ 18.102(w)/drivers/block/Config.in Tue, 23 Jul 2002 13:05:00 +1000 kaos (linux-2.4/c/c/37_Config.in 1.2.2.1 644)
@@ -4,6 +4,8 @@
mainmenu_option next_comment
comment 'Block devices'
+if [ "$CONFIG_IA64_HP_SIM" != "y" ]; then
+
tristate 'Normal PC floppy disk support' CONFIG_BLK_DEV_FD
if [ "$CONFIG_AMIGA" = "y" ]; then
tristate 'Amiga floppy support' CONFIG_AMIGA_FLOPPY
@@ -36,6 +38,7 @@ fi
dep_tristate 'Compaq SMART2 support' CONFIG_BLK_CPQ_DA $CONFIG_PCI
dep_tristate 'Compaq Smart Array 5xxx support' CONFIG_BLK_CPQ_CISS_DA $CONFIG_PCI
dep_tristate 'Mylex DAC960/DAC1100 PCI RAID Controller support' CONFIG_BLK_DEV_DAC960 $CONFIG_PCI
+fi # HP_SIM
tristate 'Loopback device support' CONFIG_BLK_DEV_LOOP
dep_tristate 'Network block device support' CONFIG_BLK_DEV_NBD $CONFIG_NET
Index: 18.102/arch/ia64/config.in
--- 18.102/arch/ia64/config.in Mon, 22 Jul 2002 11:29:07 +1000 kaos (linux-2.4/s/c/38_config.in 1.1.2.1.2.2.3.1.1.6.1.1 644)
+++ 18.102(w)/arch/ia64/config.in Tue, 23 Jul 2002 13:01:43 +1000 kaos (linux-2.4/s/c/38_config.in 1.1.2.1.2.2.3.1.1.6.1.1 644)
@@ -26,13 +26,6 @@ define_bool CONFIG_SBUS n
define_bool CONFIG_RWSEM_GENERIC_SPINLOCK y
define_bool CONFIG_RWSEM_XCHGADD_ALGORITHM n
-if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
- define_bool CONFIG_ACPI y
- define_bool CONFIG_ACPI_EFI y
- define_bool CONFIG_ACPI_INTERPRETER y
- define_bool CONFIG_ACPI_KERNEL_CONFIG y
-fi
-
choice 'IA-64 processor type' \
"Itanium CONFIG_ITANIUM \
Itanium-2 CONFIG_MCKINLEY" Itanium
@@ -89,10 +82,6 @@ if [ "$CONFIG_IA64_SGI_SN1" = "y" -o "$C
bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN_SIM
bool ' Enable autotest (llsc). Option to run cache test instead of booting' \
CONFIG_IA64_SGI_AUTOTEST n
- define_bool CONFIG_DEVFS_FS y
- if [ "$CONFIG_DEVFS_FS" = "y" ]; then
- bool ' Enable DEVFS Debug Code' CONFIG_DEVFS_DEBUG n
- fi
bool ' Enable protocol mode for the L1 console' CONFIG_SERIAL_SGI_L1_PROTOCOL y
define_bool CONFIG_DISCONTIGMEM y
define_bool CONFIG_IA64_MCA y
@@ -117,22 +106,26 @@ tristate 'Kernel support for ELF binarie
tristate 'Kernel support for MISC binaries' CONFIG_BINFMT_MISC
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
+ source drivers/acpi/Config.in
-source drivers/acpi/Config.in
-
-bool 'PCI support' CONFIG_PCI
-source drivers/pci/Config.in
-
-bool 'Support for hot-pluggable devices' CONFIG_HOTPLUG
-if [ "$CONFIG_HOTPLUG" = "y" ]; then
- source drivers/hotplug/Config.in
- source drivers/pcmcia/Config.in
-else
- define_bool CONFIG_PCMCIA n
-fi
-
-source drivers/parport/Config.in
+ bool 'PCI support' CONFIG_PCI
+ source drivers/pci/Config.in
+ bool 'Support for hot-pluggable devices' CONFIG_HOTPLUG
+ if [ "$CONFIG_HOTPLUG" = "y" ]; then
+ source drivers/hotplug/Config.in
+ source drivers/pcmcia/Config.in
+ else
+ define_bool CONFIG_PCMCIA n
+ fi
+
+ source drivers/parport/Config.in
+
+else # !HP_SIM
+ define_bool CONFIG_ACPI y
+ define_bool CONFIG_ACPI_EFI y
+ define_bool CONFIG_ACPI_INTERPRETER y
+ define_bool CONFIG_ACPI_KERNEL_CONFIG y
fi # !HP_SIM
endmenu
@@ -142,43 +135,30 @@ if [ "$CONFIG_NET" = "y" ]; then
fi
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
+ source drivers/mtd/Config.in
+ source drivers/pnp/Config.in
+fi # !HP_SIM
-define_tristate CONFIG_BLK_DEV_LOOP n
-define_tristate CONFIG_BLK_DEV_NBD n
-define_tristate CONFIG_BLK_DEV_RAM n
-
-source drivers/mtd/Config.in
-source drivers/pnp/Config.in
source drivers/block/Config.in
-source drivers/ieee1394/Config.in
-source drivers/message/i2o/Config.in
-source drivers/md/Config.in
-source drivers/message/fusion/Config.in
-mainmenu_option next_comment
-comment 'ATA/IDE/MFM/RLL support'
-
-tristate 'ATA/IDE/MFM/RLL support' CONFIG_IDE
-
-if [ "$CONFIG_IDE" != "n" ]; then
- source drivers/ide/Config.in
-else
- define_bool CONFIG_BLK_DEV_IDE_MODES n
- define_bool CONFIG_BLK_DEV_HD n
-fi
-endmenu
-
-else # ! HP_SIM
-mainmenu_option next_comment
-comment 'Block devices'
-tristate 'Loopback device support' CONFIG_BLK_DEV_LOOP
-dep_tristate 'Network block device support' CONFIG_BLK_DEV_NBD $CONFIG_NET
-
-tristate 'RAM disk support' CONFIG_BLK_DEV_RAM
-if [ "$CONFIG_BLK_DEV_RAM" = "y" -o "$CONFIG_BLK_DEV_RAM" = "m" ]; then
- int ' Default RAM disk size' CONFIG_BLK_DEV_RAM_SIZE 4096
-fi
-endmenu
+if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
+ source drivers/ieee1394/Config.in
+ source drivers/message/i2o/Config.in
+ source drivers/md/Config.in
+ source drivers/message/fusion/Config.in
+
+ mainmenu_option next_comment
+ comment 'ATA/IDE/MFM/RLL support'
+
+ tristate 'ATA/IDE/MFM/RLL support' CONFIG_IDE
+
+ if [ "$CONFIG_IDE" != "n" ]; then
+ source drivers/ide/Config.in
+ else
+ define_bool CONFIG_BLK_DEV_IDE_MODES n
+ define_bool CONFIG_BLK_DEV_HD n
+ fi
+ endmenu
fi # !HP_SIM
mainmenu_option next_comment
@@ -193,36 +173,36 @@ endmenu
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
-if [ "$CONFIG_NET" = "y" ]; then
- mainmenu_option next_comment
- comment 'Network device support'
-
- bool 'Network device support' CONFIG_NETDEVICES
- if [ "$CONFIG_NETDEVICES" = "y" ]; then
- source drivers/net/Config.in
- fi
- endmenu
-fi
-
-source net/ax25/Config.in
-
-mainmenu_option next_comment
-comment 'ISDN subsystem'
-
-tristate 'ISDN support' CONFIG_ISDN
-if [ "$CONFIG_ISDN" != "n" ]; then
- source drivers/isdn/Config.in
-fi
-endmenu
-
-mainmenu_option next_comment
-comment 'CD-ROM drivers (not for SCSI or IDE/ATAPI drives)'
-
-bool 'Support non-SCSI/IDE/ATAPI drives' CONFIG_CD_NO_IDESCSI
-if [ "$CONFIG_CD_NO_IDESCSI" != "n" ]; then
- source drivers/cdrom/Config.in
-fi
-endmenu
+ if [ "$CONFIG_NET" = "y" ]; then
+ mainmenu_option next_comment
+ comment 'Network device support'
+
+ bool 'Network device support' CONFIG_NETDEVICES
+ if [ "$CONFIG_NETDEVICES" = "y" ]; then
+ source drivers/net/Config.in
+ fi
+ endmenu
+ fi
+
+ source net/ax25/Config.in
+
+ mainmenu_option next_comment
+ comment 'ISDN subsystem'
+
+ tristate 'ISDN support' CONFIG_ISDN
+ if [ "$CONFIG_ISDN" != "n" ]; then
+ source drivers/isdn/Config.in
+ fi
+ endmenu
+
+ mainmenu_option next_comment
+ comment 'CD-ROM drivers (not for SCSI or IDE/ATAPI drives)'
+
+ bool 'Support non-SCSI/IDE/ATAPI drives' CONFIG_CD_NO_IDESCSI
+ if [ "$CONFIG_CD_NO_IDESCSI" != "n" ]; then
+ source drivers/cdrom/Config.in
+ fi
+ endmenu
fi # !HP_SIM
@@ -251,20 +231,20 @@ fi
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
-mainmenu_option next_comment
-comment 'Sound'
+ mainmenu_option next_comment
+ comment 'Sound'
-tristate 'Sound card support' CONFIG_SOUND
-if [ "$CONFIG_SOUND" != "n" ]; then
- source drivers/sound/Config.in
-fi
-endmenu
-
-source drivers/usb/Config.in
-
-if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- source net/bluetooth/Config.in
-fi
+ tristate 'Sound card support' CONFIG_SOUND
+ if [ "$CONFIG_SOUND" != "n" ]; then
+ source drivers/sound/Config.in
+ fi
+ endmenu
+
+ source drivers/usb/Config.in
+
+ if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ source net/bluetooth/Config.in
+ fi
fi # !HP_SIM
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (145 preceding siblings ...)
2002-07-23 3:09 ` Keith Owens
@ 2002-07-23 5:04 ` David Mosberger
2002-07-23 5:58 ` Keith Owens
` (68 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 5:04 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 23 Jul 2002 13:09:24 +1000, Keith Owens <kaos@ocs.com.au> said:
Keith> Memo to self: xconfig is a can of worms, try to avoid working
Keith> on it.
Ain't that the truth!
Keith> * Indents arch/ia64/config.in so I can see what is dependent
Keith> on HP_SIM. The bulk of the patch is indent changes, if this
Keith> is a problem I can do another patch without the indent
Keith> changes, but the result is much less readable.
Keith> * Moves force settings of ACPI variables based on HP_SIM to
Keith> _after_ HP_SIM is actually defined. General config bug, not
Keith> xconfig specific.
Good points. I picked these up (though with as little reordering as
possible, to avoid introducing new subtle bugs).
Keith> * Removes dependency on CONFIG_DRM_AGP. That variable does
Keith> not exist and breaks xconfig for ia64. This is a generic
Keith> 2.4.18/2.4.19-rc bug and has been sent to Marcelo.
Makes sense. I see you already submitted this fix to Marcelo. I picked
it up, too.
Keith> * Moves force setting of DEVFS for SN[12] to fs/Config.in.
Keith> * Removes the attempt to set BLOCK variables from
Keith> arch/ia64/config.in and adds the dependency on HP_SIM to
Keith> drivers/block/Config.in. This is the only method that stands
Keith> any chance of working for xconfig. ACPI is already done this
Keith> way for the same reason, the arch dependencies are inside the
Keith> ACPI menu, not in the arch menu.
Now these I really don't like. I don't think it's good to clutter
platform-independent config files with platform-specific info just
because xconfig is broken. As far as I know, these things work
perfectly fine with "oldconfig", "config", and "menuconfig". (I don't
feel as strongly about this for DEVFS as I do for HP_SIM; but clearly
HP_SIM is such a special case that I'd prefer to keep it in the
ia64-specific config file.)
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (146 preceding siblings ...)
2002-07-23 5:04 ` David Mosberger
@ 2002-07-23 5:58 ` Keith Owens
2002-07-23 6:15 ` David Mosberger
` (67 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Keith Owens @ 2002-07-23 5:58 UTC (permalink / raw)
To: linux-ia64
On Mon, 22 Jul 2002 22:04:16 -0700,
David Mosberger <davidm@napali.hpl.hp.com> wrote:
>>>>>> On Tue, 23 Jul 2002 13:09:24 +1000, Keith Owens <kaos@ocs.com.au> said:
> Keith> * Moves force setting of DEVFS for SN[12] to fs/Config.in.
>
> Keith> * Removes the attempt to set BLOCK variables from
> Keith> arch/ia64/config.in and adds the dependency on HP_SIM to
> Keith> drivers/block/Config.in. This is the only method that stands
> Keith> any chance of working for xconfig. ACPI is already done this
> Keith> way for the same reason, the arch dependencies are inside the
> Keith> ACPI menu, not in the arch menu.
>
>Now these I really don't like. I don't think it's good to clutter
>platform-independent config files with platform-specific info just
>because xconfig is broken. As far as I know, these things work
>perfectly fine with "oldconfig", "config", and "menuconfig". (I don't
>feel as strongly about this for DEVFS as I do for HP_SIM; but clearly
>HP_SIM is such a special case that I'd prefer to keep it in the
>ia64-specific config file.)
I agree, but given the restrictions of CML1 there is no choice. The
problem is not just xconfig; it affects menuconfig as well. Before
this patch, arch/ia64/config.in set DEVFS=y for SN[12], but there was
nothing stopping a user from changing DEVFS in the fs menu. Other
cross-menu dependencies have the same problem with any config system
that allows out-of-order evaluation.
CML1 does _not_ enforce dependencies across menus, the dependencies
have to be in a single menu. This was one of the driving forces behind
CML2, to get some method for doing cross menu constraints. Without
CML2, we have to scatter arch dependent code over generic config menus :(.
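In CML1 terms, the fix is to express the constraint inside the menu that owns the symbol. A hypothetical fragment in config.in syntax (the symbol names are illustrative, not taken from the actual patch):

```
# drivers/block/Config.in (sketch): the HP simulator dependency is
# stated in the menu that owns the symbol, so menuconfig/xconfig
# evaluating menus out of order cannot let the user override it
if [ "$CONFIG_IA64_HP_SIM" = "y" ]; then
   define_bool CONFIG_HP_SIMSCSI y
fi
```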
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (147 preceding siblings ...)
2002-07-23 5:58 ` Keith Owens
@ 2002-07-23 6:15 ` David Mosberger
2002-07-23 12:09 ` Andreas Schwab
` (66 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 6:15 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 23 Jul 2002 11:00:14 +1000, Keith Owens <kaos@ocs.com.au> said:
Keith> On Mon, 22 Jul 2002 11:05:50 -0700, David Mosberger
Keith> <davidm@napali.hpl.hp.com> wrote:
>> I just uploaded a new patch:
>>
>> linux-2.4.18-ia64-020722.diff.gz
>>
>> It goes back to the more drastic solution of turning off
>> "xconfig" altogether (it tells you to use "menuconfig" instead).
>> I don't care about "xconfig" myself, but of course will welcome
>> patches that fix the problems for real.
>>
>> The new patch also has the additional #define's needed for the
>> tg3 driver.
Keith> 020722 is identical to 020719, except that the Changelog at
Keith> the start has been removed. If you can wait for a couple of
Keith> hours, I will have the real fix for the ia64 xconfig problem.
OK, the linux-2.4.18-ia64-020722.diff.gz patch now should contain the
latest bits.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (148 preceding siblings ...)
2002-07-23 6:15 ` David Mosberger
@ 2002-07-23 12:09 ` Andreas Schwab
2002-07-23 15:38 ` Wichmann, Mats D
` (65 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2002-07-23 12:09 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@napali.hpl.hp.com> writes:
|> ChangeLog
|>
|> - Make "xconfig" work again (Keith Owens, Khalid Aziz)
|> - Add HCDP ACPI serial line support (Khalid Aziz)
|> - Drop extraneous set_rte() call from iosapic.c (Alex Williamson)
|> - Fix perfmon SMP initialization ordering bug (Stephane Eranian)
|> - Include perfmon_mckinley.h now that it's public info (Stephane Eranian)
|> - Implement pcibios_enable_device for PCI hotplug (Takayoshi Kochi)
Compiling pcihpacpi as a module requires acpi_walk_namespace to be
exported from the kernel.
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
^ permalink raw reply [flat|nested] 217+ messages in thread* RE: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (149 preceding siblings ...)
2002-07-23 12:09 ` Andreas Schwab
@ 2002-07-23 15:38 ` Wichmann, Mats D
2002-07-23 16:17 ` David Mosberger
` (64 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Wichmann, Mats D @ 2002-07-23 15:38 UTC (permalink / raw)
To: linux-ia64
> OK, the linux-2.4.18-ia64-020722.diff.gz patch now should contain the
> latest bits.
Does this mean the 0722 patch has been reissued?
^ permalink raw reply [flat|nested] 217+ messages in thread* RE: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (150 preceding siblings ...)
2002-07-23 15:38 ` Wichmann, Mats D
@ 2002-07-23 16:17 ` David Mosberger
2002-07-23 16:28 ` David Mosberger
` (63 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 16:17 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 23 Jul 2002 08:38:56 -0700, "Wichmann, Mats D" <mats.d.wichmann@intel.com> said:
>> OK, the linux-2.4.18-ia64-020722.diff.gz patch now should contain
>> the latest bits.
Mats> Does this mean the 0722 patch has been reissued?
Yes. The old file was bad (I accidentally diff'd the wrong directories).
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (151 preceding siblings ...)
2002-07-23 16:17 ` David Mosberger
@ 2002-07-23 16:28 ` David Mosberger
2002-07-23 16:30 ` David Mosberger
` (62 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 16:28 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 23 Jul 2002 15:58:58 +1000, Keith Owens <kaos@ocs.com.au> said:
Keith> I agree, but given the restrictions of CML1, there is no
Keith> choice. The problem is not just xconfig, it affects
Keith> menuconfig as well. Before this patch, arch/ia64/config.in
Keith> set DEVFS=y for SN[12] but there was nothing stopping a user
Keith> changing DEVFS in the fs menu. Other cross menu dependencies
Keith> have the same problem with any config system that allows out
Keith> of order evaluation.
Keith> CML1 does _not_ enforce dependencies across menus, the
Keith> dependencies have to be in a single menu. This was one of
Keith> the driving forces behind CML2, to get some method for doing
Keith> cross menu constraints. Without CML2, we have to scatter
Keith> arch dependent code over generic config menus :(.
I see your point about DEVFS, but the simulator part should be OK. In
that case, we never even include drivers/ide/Config.in.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (152 preceding siblings ...)
2002-07-23 16:28 ` David Mosberger
@ 2002-07-23 16:30 ` David Mosberger
2002-07-23 18:08 ` KOCHI, Takayoshi
` (61 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-07-23 16:30 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 23 Jul 2002 14:09:55 +0200, Andreas Schwab <schwab@suse.de> said:
Andreas> Compiling pcihpacpi as module requires acpi_walk_namespace
Andreas> to be exported from the kernel.
Patch, please?
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (153 preceding siblings ...)
2002-07-23 16:30 ` David Mosberger
@ 2002-07-23 18:08 ` KOCHI, Takayoshi
2002-07-23 19:17 ` Andreas Schwab
` (60 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: KOCHI, Takayoshi @ 2002-07-23 18:08 UTC (permalink / raw)
To: linux-ia64
On Tue, 23 Jul 2002 14:09:55 +0200
Andreas Schwab <schwab@suse.de> wrote:
> David Mosberger <davidm@napali.hpl.hp.com> writes:
>
> |> ChangeLog
> |>
> |> - Make "xconfig" work again (Keith Owens, Khalid Aziz)
> |> - Add HCDP ACPI serial line support (Khalid Aziz)
> |> - Drop extraneous set_rte() call from iosapic.c (Alex Williamson)
> |> - Fix perfmon SMP initialization ordering bug (Stephane Eranian)
> |> - Include perfmon_mckinley.h now that it's public info (Stephane Eranian)
> |> - Implement pcibios_enable_device for PCI hotplug (Takayoshi Kochi)
>
> Compiling pcihpacpi as module requires acpi_walk_namespace to be
> exported from the kernel.
Excuse me,
pcihpacpi is an old module that *cannot* do real PCI hotplug and
isn't included in 2.4.18 ia64 patch (it's included after 2.4.19-pre2).
So don't bother David:)
Recently I posted a new version of the ACPI PCI hotplug driver to the
PCI hotplug maintainer; it works on some PCI hot-pluggable platforms,
but you may need some extra patches.
For those who would like to play with PCI hotplug with ACPI,
I'll post patches necessary for the latest 2.4.18 patch later on
today.
Thanks,
--
KOCHI, Takayoshi <t-kouchi@cq.jp.nec.com/t-kouchi@mvf.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (154 preceding siblings ...)
2002-07-23 18:08 ` KOCHI, Takayoshi
@ 2002-07-23 19:17 ` Andreas Schwab
2002-07-24 4:30 ` KOCHI, Takayoshi
` (59 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2002-07-23 19:17 UTC (permalink / raw)
To: linux-ia64
"KOCHI, Takayoshi" <t-kouchi@mvf.biglobe.ne.jp> writes:
|> On Tue, 23 Jul 2002 14:09:55 +0200
|> Andreas Schwab <schwab@suse.de> wrote:
|>
|> > David Mosberger <davidm@napali.hpl.hp.com> writes:
|> >
|> > |> ChangeLog
|> > |>
|> > |> - Make "xconfig" work again (Keith Owens, Khalid Aziz)
|> > |> - Add HCDP ACPI serial line support (Khalid Aziz)
|> > |> - Drop extraneous set_rte() call from iosapic.c (Alex Williamson)
|> > |> - Fix perfmon SMP initialization ordering bug (Stephane Eranian)
|> > |> - Include perfmon_mckinley.h now that it's public info (Stephane Eranian)
|> > |> - Implement pcibios_enable_device for PCI hotplug (Takayoshi Kochi)
|> >
|> > Compiling pcihpacpi as module requires acpi_walk_namespace to be
|> > exported from the kernel.
|>
|> Excuse me,
|>
|> pcihpacpi is an old module that *cannot* do real PCI hotplug and
|> isn't included in 2.4.18 ia64 patch (it's included after 2.4.19-pre2).
|> So don't bother David:)
Sorry, I didn't know that, just wanted to report the fact.
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.4.18)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (155 preceding siblings ...)
2002-07-23 19:17 ` Andreas Schwab
@ 2002-07-24 4:30 ` KOCHI, Takayoshi
2002-08-22 13:42 ` [Linux-ia64] kernel update (relative to 2.4.19) Bjorn Helgaas
` (58 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: KOCHI, Takayoshi @ 2002-07-24 4:30 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 3816 bytes --]
Hi All,
Here's a port of the ACPI PCI hotplug driver I posted to Greg K-H
last week.
This patch should apply against 2.4.18 + david's 020722 patch.
I tested this on a McKinley SDV, but it may not work depending on
firmware version.
acpi/acpi_ksyms.c | 1
acpi/events/evrgnini.c | 60 +
hotplug/Config.in | 1
hotplug/Makefile | 17
hotplug/acpiphp.h | 322 ++++++++++
hotplug/acpiphp_core.c | 470 +++++++++++++++
hotplug/acpiphp_glue.c | 1514 +++++++++++++++++++++++++++++++++++++++++++++++++
hotplug/acpiphp_pci.c | 763 ++++++++++++++++++++++++
hotplug/acpiphp_res.c | 708 ++++++++++++++++++++++
pci/names.c | 10
10 files changed, 3859 insertions(+), 7 deletions(-)
Although the patch to acpi_ksyms.c exports `acpi_walk_namespace',
modules will still fail to resolve `pci_pin_to_vector' (which
is in iosapic.c).
So please compile ACPI PCI hotplug statically into the kernel.
This ACPI PCI hotplug driver implements minimal functionality, and
its status is experimental. Please do not use it for production
purposes (but patches to bring it up to production level are welcome :)
This driver does not yet handle a PCI-to-PCI bridge on a card (such as
multi-port ethernet cards). So please test it with cards that have no
PCI-to-PCI bridge (multi-function cards are OK).
How to use:
First of all, after booting the kernel, you will have to mount `pcihpfs'
somewhere.
# mount -t pcihpfs none /mnt/somewhere
Then you can find several ACPIxx (xx is a number) directories under the
mountpoint. If you can't find any directories under it, your platform
doesn't support PCI hotplug, or it's my bug :)
Each PCI slot probably has a tab called an "MRL" (manually-operated
retention latch) and green/amber LEDs.
You can turn off a PCI card by doing
# echo 0 > /mnt/somewhere/ACPIxx/power
Or, if your platform has a push-button near the slot, pushing it may
initiate the hot-remove process.
Opening an MRL means an immediate shutdown of the power supply and is
not recommended.
Typically, inserting a card and closing the MRL will initiate the
hot-add process. If the card is successfully added to the system, the
green LED will be on; otherwise, the amber LED will be on.
The hotplug subsystem will then bind the inserted device to an
appropriate driver or run /sbin/hotplug. Even if there's no driver
for the device, it is at least visible through /proc/pci and
/proc/bus/pci/xx/yy.
There's a GUI program to control PCI hotplug. You can find it at
http://www.kroah.com/linux/hotplug/
but I haven't tested it yet...
Caveats:
If your PCI device is bound to a driver that is not hotplug-aware,
the hot-removal process may fail. For hotplugging, a driver should
support the new PCI driver interface (struct pci_driver). For related
information, see
http://www.uwsg.indiana.edu/hypermail/linux/kernel/0207.1/1078.html
If your card's driver doesn't call pci_enable_device() at startup,
your card may not operate correctly: I/O port and memory-mapped I/O
space access is not enabled after PCI hotplug, so PCI device drivers
are responsible for enabling it.
(For example, drivers/net/acenic.c doesn't call pci_enable_device() at
startup, but enables access by itself...)
Related sites:
SourceForge PCI hotplug for Linux project
http://sourceforge.net/projects/pcihpd
If there are problems (perhaps many), please report them to me or
pcihpd-discuss@lists.sourceforge.net, with full dmesg output.
If you don't see any legal problem with it, attaching /proc/acpi/dsdt
is very helpful.
On Tue, 23 Jul 2002 11:08:42 -0700
"KOCHI, Takayoshi" <t-kouchi@mvf.biglobe.ne.jp> wrote:
> For those who would like to play with PCI hotplug with ACPI,
> I'll post patches necessary for the latest 2.4.18 patch later on
> today.
Thanks,
--
KOCHI, Takayoshi <t-kouchi@cq.jp.nec.com/t-kouchi@mvf.biglobe.ne.jp>
[-- Attachment #2: acpiphp-ia64-0722.diff.gz --]
[-- Type: application/octet-stream, Size: 21567 bytes --]
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.19)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (156 preceding siblings ...)
2002-07-24 4:30 ` KOCHI, Takayoshi
@ 2002-08-22 13:42 ` Bjorn Helgaas
2002-08-22 14:22 ` Wichmann, Mats D
` (57 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Bjorn Helgaas @ 2002-08-22 13:42 UTC (permalink / raw)
To: linux-ia64
The latest ia64 kernel patch is available here:
ftp://ftp.kernel.org/pub/linux/kernel/ports/v2.4/linux-2.4.19-ia64-020821.diff.gz
Note that this patch retains support for the Microsoft-encumbered SPCR
and DBGP ACPI tables, even though they have been removed from 2.4.19.
I'd like to remove this soon, though.
Bjorn
ChangeLog
- 2.4.19 changes, just as a heads up:
- Support for HCDP (DIG64 Headless Console Debug Port) added.
- Support for SPCR & DBGP (CONFIG_SERIAL_ACPI) was removed from
2.4.19. I kept it in the ia64 patch because some people still
rely on it, but I'd like to remove it eventually.
- Remove hpsim_console simcons_wait_key() due to struct console changes.
- Fix really_local_irq_count() typo (David Mosberger).
- SYM53C8XX version 2 fixes (?).
- Fix flush_tlb_page problem (UP only) (Dan Magenheimer, David Mosberger).
- ACPI: prefer WB mapping over UC (a region may support both) (?).
- Print FPSWA revision (Takayoshi Kochi).
- copy_user fix (Ken Chen).
- Makefile fixes (David Lombard).
- SGI devfs config fixes for xconfig (Keith Owens).
- Fix CPU bitmask truncation to 32 bits.
- do_profile() cleanup (David Mosberger).
- SMP floating-point context switch optimization (Asit Mallick).
- Fix corruption of perfmon registers by ptrace (Stephane Eranian).
- Disable PCI decoding while sizing BARs (Tatsuya Tsurukawa).
- Detect keyboard controller via ACPI, acpi.c cleanup.
- Add early printk support for UARTs.
- ForteMedia FM801 sound driver update (Martin Petersen).
- Make ACPI GPE0/GPE1 optional (Chris McDermott).
- IOSAPIC cleanup (Takayoshi Kochi):
- Cleanup irq/vector/pin terminology
- Remove gsi_to_vector_map[]
- Remove ACPI irq = vector assumption
- Dynamic vector allocation
- Add iosapic PCI segment support.
- Add pcibios PCI segment support.
- Add support for ACPI _TRA (PCI memory space only for now).
- Add platform vector hook in pcibios_enable_device.
- Detect HP sba_iommu via ACPI, support multiple IOCs.
- Correct memory_lseek return (David Mosberger).
- Correct /proc/<pid>/mem lseek return.
- Fix GPT RAID autodetect (only worked for first partition) (Alex Williamson).
- Move efi.h from include/asm-ia64 to include/linux (Matt Domsch).
- Temporarily restore ACPI SPCR/DBGP support (was removed from 2.4.19).
- defconfig update ("generic" kernel, virtual mem_map, add drivers for
fusion mpt, aic7xxx, sym53c8xx (ver 2), tulip, e1000, tigon 3,
zx1 AGPGART).
^ permalink raw reply [flat|nested] 217+ messages in thread* RE: [Linux-ia64] kernel update (relative to 2.4.19)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (157 preceding siblings ...)
2002-08-22 13:42 ` [Linux-ia64] kernel update (relative to 2.4.19) Bjorn Helgaas
@ 2002-08-22 14:22 ` Wichmann, Mats D
2002-08-22 15:29 ` Bjorn Helgaas
` (56 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Wichmann, Mats D @ 2002-08-22 14:22 UTC (permalink / raw)
To: linux-ia64
> The latest ia64 kernel patch is available here:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/v2.4/linux-2.4.19-ia64-020821.diff.gz
I think, rather:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/linux-2.4.19-ia64-020821.diff.gz
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.19)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (158 preceding siblings ...)
2002-08-22 14:22 ` Wichmann, Mats D
@ 2002-08-22 15:29 ` Bjorn Helgaas
2002-08-23 4:52 ` KOCHI, Takayoshi
` (55 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Bjorn Helgaas @ 2002-08-22 15:29 UTC (permalink / raw)
To: linux-ia64
> I think, rather:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/linux-2.4.19-ia64-020821.diff.gz
Of course you're right. I cut and pasted to avoid that error, but ...
that's about how my week has gone. Thanks, Mats.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.19)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (159 preceding siblings ...)
2002-08-22 15:29 ` Bjorn Helgaas
@ 2002-08-23 4:52 ` KOCHI, Takayoshi
2002-08-23 10:10 ` Andreas Schwab
` (54 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: KOCHI, Takayoshi @ 2002-08-23 4:52 UTC (permalink / raw)
To: linux-ia64
Hi Bjorn,
I have comments on arch/ia64/kernel/acpi.c:
o acpi_get_current_resources
Since some version (around Apr. 2002), ACPI CA has had automatic
memory allocation: if acpi_buffer's length is ACPI_ALLOCATE_BUFFER,
memory is allocated automatically.
So a separate acpi_get_crs makes little sense; we can use
acpi_get_current_resources directly. Returning -ENOMEM as an
acpi_status is also wrong.
o void * as a pointer to byte array
I was taught that doing arithmetic on a void * is a gcc extension and
should be avoided. I'm not sure whether this is still true in C99.
Maybe you'll reject this because the original is simpler and works
anyway.
- res = buf->pointer + *offset;
+ res = (acpi_resource *)((char *) buf->pointer + *offset);
o acpi_resource's id and length
It is really counter-intuitive, but you can't count on length
member of acpi_resource:( In acpi_get_crs_addr(), there's a
corner case that resource doesn't have any memory resource
records. In that case, it will go into infinite loop.
If you encounter ACPI_RSTYPE_END_TAG, it's the end
of resources.
+ case ACPI_RSTYPE_END_TAG:
+ return;
+ break;
o I made some functions static
o I rewrote acpi_dispose_crs with acpi_os_free
(not important :)
Index: lia64-2.4/arch/ia64/kernel/acpi.c
diff -u lia64-2.4/arch/ia64/kernel/acpi.c:1.1.1.15 lia64-2.4/arch/ia64/kernel/acpi.c:1.1.1.15.4.1
--- lia64-2.4/arch/ia64/kernel/acpi.c:1.1.1.15 Thu Aug 22 10:39:56 2002
+++ lia64-2.4/arch/ia64/kernel/acpi.c Thu Aug 22 20:55:08 2002
@@ -112,33 +112,7 @@
#ifdef CONFIG_ACPI
-/**
- * acpi_get_crs - Return the current resource settings for a device
- * obj: A handle for this device
- * buf: A buffer to be populated by this call.
- *
- * Pass a valid handle, typically obtained by walking the namespace and a
- * pointer to an allocated buffer, and this function will fill in the buffer
- * with a list of acpi_resource structures.
- */
-acpi_status
-acpi_get_crs (acpi_handle obj, acpi_buffer *buf)
-{
- acpi_status result;
- buf->length = 0;
- buf->pointer = NULL;
-
- result = acpi_get_current_resources(obj, buf);
- if (result != AE_BUFFER_OVERFLOW)
- return result;
- buf->pointer = kmalloc(buf->length, GFP_KERNEL);
- if (!buf->pointer)
- return -ENOMEM;
-
- return acpi_get_current_resources(obj, buf);
-}
-
-acpi_resource *
+static acpi_resource *
acpi_get_crs_next (acpi_buffer *buf, int *offset)
{
acpi_resource *res;
@@ -146,12 +120,12 @@
if (*offset >= buf->length)
return NULL;
- res = buf->pointer + *offset;
+ res = (acpi_resource *)((char *) buf->pointer + *offset);
*offset += res->length;
return res;
}
-acpi_resource_data *
+static acpi_resource_data *
acpi_get_crs_type (acpi_buffer *buf, int *offset, int type)
{
for (;;) {
@@ -163,12 +137,6 @@
}
}
-void
-acpi_dispose_crs (acpi_buffer *buf)
-{
- kfree(buf->pointer);
-}
-
static void
acpi_get_crs_addr (acpi_buffer *buf, int type, u64 *base, u64 *length, u64 *tra)
{
@@ -210,6 +178,9 @@
return;
}
break;
+ case ACPI_RSTYPE_END_TAG:
+ return;
+ break;
}
}
}
@@ -218,13 +189,14 @@
acpi_get_addr_space(acpi_handle obj, u8 type, u64 *base, u64 *length, u64 *tra)
{
acpi_status status;
- acpi_buffer buf;
+ acpi_buffer buf = { .length = ACPI_ALLOCATE_BUFFER,
+ .pointer = NULL };
*base = 0;
*length = 0;
*tra = 0;
- status = acpi_get_crs(obj, &buf);
+ status = acpi_get_current_resources(obj, &buf);
if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "Unable to get _CRS data on object\n");
return status;
@@ -232,7 +204,7 @@
acpi_get_crs_addr(&buf, type, base, length, tra);
- acpi_dispose_crs(&buf);
+ acpi_os_free(buf.pointer);
return AE_OK;
}
@@ -254,7 +226,8 @@
{
int i, offset = 0;
acpi_status status;
- acpi_buffer buf;
+ acpi_buffer buf = { .length = ACPI_ALLOCATE_BUFFER,
+ .pointer = NULL };
acpi_resource_vendor *res;
acpi_hp_vendor_long *hp_res;
efi_guid_t vendor_guid;
@@ -262,7 +235,7 @@
*csr_base = 0;
*csr_length = 0;
- status = acpi_get_crs(obj, &buf);
+ status = acpi_get_current_resources(obj, &buf);
if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "Unable to get _CRS data on object\n");
return status;
@@ -271,7 +244,7 @@
res = (acpi_resource_vendor *)acpi_get_crs_type(&buf, &offset, ACPI_RSTYPE_VENDOR);
if (!res) {
printk(KERN_ERR PREFIX "Failed to find config space for device\n");
- acpi_dispose_crs(&buf);
+ acpi_os_free(buf.pointer);
return AE_NOT_FOUND;
}
@@ -279,14 +252,14 @@
if (res->length != HP_CCSR_LENGTH || hp_res->guid_id != HP_CCSR_TYPE) {
printk(KERN_ERR PREFIX "Unknown Vendor data\n");
- acpi_dispose_crs(&buf);
+ acpi_os_free(buf.pointer);
return AE_TYPE; /* Revisit error? */
}
memcpy(&vendor_guid, hp_res->guid, sizeof(efi_guid_t));
if (efi_guidcmp(vendor_guid, HP_CCSR_GUID) != 0) {
printk(KERN_ERR PREFIX "Vendor GUID does not match\n");
- acpi_dispose_crs(&buf);
+ acpi_os_free(buf.pointer);
return AE_TYPE; /* Revisit error? */
}
@@ -295,7 +268,7 @@
*csr_length |= ((u64)(hp_res->csr_length[i]) << (i * 8));
}
- acpi_dispose_crs(&buf);
+ acpi_os_free(buf.pointer);
return AE_OK;
}
Thanks,
--
KOCHI, Takayoshi <t-kouchi@cq.jp.nec.com/t-kouchi@mvf.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.19)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (160 preceding siblings ...)
2002-08-23 4:52 ` KOCHI, Takayoshi
@ 2002-08-23 10:10 ` Andreas Schwab
2002-08-30 5:42 ` [Linux-ia64] kernel update (relative to v2.5.32) David Mosberger
` (53 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Andreas Schwab @ 2002-08-23 10:10 UTC (permalink / raw)
To: linux-ia64
Hi!
Here is a patch to kill many warnings.
--- linux/arch/ia64/hp/common/sba_iommu.c.~1~
+++ linux/arch/ia64/hp/common/sba_iommu.c
@@ -626,7 +626,7 @@ mark_clean (void *addr, size_t size)
pg_addr = PAGE_ALIGN((unsigned long) addr);
end = (unsigned long) addr + size;
while (pg_addr + PAGE_SIZE <= end) {
- struct page *page = virt_to_page(pg_addr);
+ struct page *page = virt_to_page((void *)pg_addr);
set_bit(PG_arch_1, &page->flags);
pg_addr += PAGE_SIZE;
}
--- linux/arch/ia64/kernel/machvec.c.~1~
+++ linux/arch/ia64/kernel/machvec.c
@@ -15,7 +15,7 @@ struct ia64_machine_vector ia64_mv;
* into a memory map index.
*/
unsigned long
-map_nr_dense (unsigned long addr)
+map_nr_dense (void *addr)
{
return MAP_NR_DENSE(addr);
}
--- linux/arch/ia64/lib/swiotlb.c.~1~
+++ linux/arch/ia64/lib/swiotlb.c
@@ -351,7 +351,7 @@ mark_clean (void *addr, size_t size)
pg_addr = PAGE_ALIGN((unsigned long) addr);
end = (unsigned long) addr + size;
while (pg_addr + PAGE_SIZE <= end) {
- struct page *page = virt_to_page(pg_addr);
+ struct page *page = virt_to_page((void *)pg_addr);
set_bit(PG_arch_1, &page->flags);
pg_addr += PAGE_SIZE;
}
--- linux/arch/ia64/mm/init.c.~1~
+++ linux/arch/ia64/mm/init.c
@@ -114,8 +114,8 @@ free_initmem (void)
addr = (unsigned long) &__init_begin;
for (; addr < (unsigned long) &__init_end; addr += PAGE_SIZE) {
- clear_bit(PG_reserved, &virt_to_page(addr)->flags);
- set_page_count(virt_to_page(addr), 1);
+ clear_bit(PG_reserved, &virt_to_page((void *)addr)->flags);
+ set_page_count(virt_to_page((void *)addr), 1);
free_page(addr);
++totalram_pages;
}
@@ -164,10 +164,10 @@ free_initrd_mem(unsigned long start, uns
printk(KERN_INFO "Freeing initrd memory: %ldkB freed\n", (end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
- if (!VALID_PAGE(virt_to_page(start)))
+ if (!VALID_PAGE(virt_to_page((void *)start)))
continue;
- clear_bit(PG_reserved, &virt_to_page(start)->flags);
- set_page_count(virt_to_page(start), 1);
+ clear_bit(PG_reserved, &virt_to_page((void *)start)->flags);
+ set_page_count(virt_to_page((void *)start), 1);
free_page(start);
++totalram_pages;
}
@@ -554,7 +554,7 @@ count_reserved_pages (u64 start, u64 end
unsigned long *count = arg;
struct page *pg;
- for (pg = virt_to_page(start); pg < virt_to_page(end); ++pg)
+ for (pg = virt_to_page((void *)start); pg < virt_to_page((void *)end); ++pg)
if (PageReserved(pg))
++num_reserved;
*count += num_reserved;
--- linux/arch/ia64/sn/kernel/setup.c.~1~
+++ linux/arch/ia64/sn/kernel/setup.c
@@ -137,7 +137,7 @@ char drive_info[4*16];
* virt_to_page() (asm-ia64/page.h), among other things.
*/
unsigned long
-sn_map_nr (unsigned long addr)
+sn_map_nr (void *addr)
{
return MAP_NR_DISCONTIG(addr);
}
--- linux/include/asm-ia64/machvec.h.~1~
+++ linux/include/asm-ia64/machvec.h
@@ -25,7 +25,7 @@ typedef void ia64_mv_cpu_init_t(void);
typedef void ia64_mv_irq_init_t (void);
typedef void ia64_mv_pci_fixup_t (int);
typedef void ia64_mv_pci_enable_device_t (struct pci_dev *);
-typedef unsigned long ia64_mv_map_nr_t (unsigned long);
+typedef unsigned long ia64_mv_map_nr_t (void *);
typedef void ia64_mv_mca_init_t (void);
typedef void ia64_mv_mca_handler_t (void);
typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to v2.5.32)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (161 preceding siblings ...)
2002-08-23 10:10 ` Andreas Schwab
@ 2002-08-30 5:42 ` David Mosberger
2002-08-30 17:26 ` KOCHI, Takayoshi
` (52 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-08-30 5:42 UTC (permalink / raw)
To: linux-ia64
OK, the latest ia64 patch should show up shortly at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5
in file: linux-2.5.32-ia64-020829.diff.gz
Please see http://lia64.bkbits.net:8080/to-linus-2.5
for change log entries.
Note: I cobbled together a version of 8250_hcdp.c which ought to work,
but I don't have firmware to test it with, so it's quite possible it
doesn't work quite right. At the moment, this should affect only hp
zx1-based machines.
Note 2: You may have heard that Linus took out the big stick once
again and replaced the new & sort-of-improved IDE subsystem with the
old (and reliable?) one. I haven't done much testing, but this kernel
at least can read from an IDE CD-ROM again.
Oh, PS/2 keyboards & mice don't seem to work too well at the moment.
USB seems to work fine, though.
Enjoy,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.5.32)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (162 preceding siblings ...)
2002-08-30 5:42 ` [Linux-ia64] kernel update (relative to v2.5.32) David Mosberger
@ 2002-08-30 17:26 ` KOCHI, Takayoshi
2002-08-30 19:00 ` David Mosberger
` (51 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: KOCHI, Takayoshi @ 2002-08-30 17:26 UTC (permalink / raw)
To: linux-ia64
Hi David,
On Thu, 29 Aug 2002 22:42:23 -0700
David Mosberger <davidm@napali.hpl.hp.com> wrote:
> OK, the latest ia64 patch should show up shortly at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5
>
> in file: linux-2.5.32-ia64-020829.diff.gz
Compile fails at arch/ia64/kernel/init_task.c when I use
RedHat's gcc 2.96.
Is it time to consider gcc 3.x as the default compiler for ia64?
Thanks,
--
KOCHI, Takayoshi <t-kouchi@cq.jp.nec.com/t-kouchi@mvf.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.5.32)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (163 preceding siblings ...)
2002-08-30 17:26 ` KOCHI, Takayoshi
@ 2002-08-30 19:00 ` David Mosberger
2002-09-18 3:25 ` Peter Chubb
` (50 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-08-30 19:00 UTC (permalink / raw)
To: linux-ia64
>>>>> On Fri, 30 Aug 2002 10:26:10 -0700, "KOCHI, Takayoshi" <t-kouchi@mvf.biglobe.ne.jp> said:
>> Compile fails at arch/ia64/kernel/init_task.c when I use
>> RedHat's gcc 2.96.
>> Is it time to consider gcc 3.x as the default compiler for ia64?
Well, I'm not sure I'd go that far. Even gcc3.1 crashed for me at
some point (I think it was on ide-floppy.c). I haven't had time to
look into this one yet. Perhaps gcc-3.2 has this fixed already.
If someone sends me a patch to make things compile with gcc 2.96, I'll
certainly apply it (assuming it's a reasonable patch). I just don't
use gcc 2.96 myself anymore.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.5.32)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (164 preceding siblings ...)
2002-08-30 19:00 ` David Mosberger
@ 2002-09-18 3:25 ` Peter Chubb
2002-09-18 3:32 ` David Mosberger
` (49 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-09-18 3:25 UTC (permalink / raw)
To: linux-ia64
Here's a patch. The problem was the struct initialisers change in
other files --- it meant that the INIT_THREAD_IA32 macro didn't line
up properly with the members it was meant to initialise.
--- /tmp/geta19551 2002-09-18 13:18:27.000000000 +1000
+++ linux-2.5-ia64/include/asm-ia64/processor.h 2002-09-18 13:14:55.000000000 +1000
@@ -236,7 +236,7 @@
__u64 ssd; /* IA32 stack selector descriptor */
__u64 old_k1; /* old value of ar.k1 */
__u64 old_iob; /* old IOBase value */
-# define INIT_THREAD_IA32 0, 0, 0x17800000037fULL, 0, 0, 0, 0, 0, 0,
+# define INIT_THREAD_IA32 .eflag = 0, .fsr = 0, .fcr = 0x17800000037fULL, .fir = 0, .fdr = 0, .csd = 0, .ssd = 0, .old_k1 = 0, .old_iob = 0,
#else
# define INIT_THREAD_IA32
#endif /* CONFIG_IA32_SUPPORT */
@@ -248,7 +248,7 @@
atomic_t pfm_notifiers_check; /* when >0, will cleanup ctx_notify_task in tasklist */
atomic_t pfm_owners_check; /* when >0, will cleanup ctx_owner in tasklist */
void *pfm_smpl_buf_list; /* list of sampling buffers to vfree */
-# define INIT_THREAD_PM {0, }, {0, }, 0, NULL, {0}, {0}, NULL,
+# define INIT_THREAD_PM .pmc = {0, }, .pmd = {0, }, .pfm_ovfl_block_reset = 0, .pfm_context = NULL, .pfm_notifiers_check = {0}, .pfm_owners_check = {0}, .pfm_smpl_buf_list = NULL,
#else
# define INIT_THREAD_PM
#endif
@@ -258,16 +258,16 @@
};
#define INIT_THREAD { \
- flags: 0, \
- ksp: 0, \
- map_base: DEFAULT_MAP_BASE, \
- task_size: DEFAULT_TASK_SIZE, \
- siginfo: 0, \
+ .flags= 0, \
+ .ksp= 0, \
+ .map_base= DEFAULT_MAP_BASE, \
+ .task_size= DEFAULT_TASK_SIZE, \
+ .siginfo= 0, \
INIT_THREAD_IA32 \
INIT_THREAD_PM \
- dbr: {0, }, \
- ibr: {0, }, \
- fph: {{{{0}}}, } \
+ .dbr= {0, }, \
+ .ibr= {0, }, \
+ .fph= {{{{0}}}, } \
}
#define start_thread(regs,new_ip,new_sp) do { \
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to v2.5.32)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (165 preceding siblings ...)
2002-09-18 3:25 ` Peter Chubb
@ 2002-09-18 3:32 ` David Mosberger
2002-09-18 6:54 ` [Linux-ia64] kernel update (relative to 2.5.35) David Mosberger
` (48 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-09-18 3:32 UTC (permalink / raw)
To: linux-ia64
>>>>> On Wed, 18 Sep 2002 13:25:06 +1000, Peter Chubb <peter@chubb.wattle.id.au> said:
Peter> Here's a patch. The problem was the struct initialisers
Peter> change in other files --- it meant that the
Peter> INIT_THREAD_IA32 macro didn't line up properly with the
Peter> members it was meant to initialise.
I already have this fixed in my tree (2.5.35+, soon...). I found it
because of a compiler warning, so I didn't realize this was
responsible for triggering the gcc2.96 problem.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.5.35)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (166 preceding siblings ...)
2002-09-18 3:32 ` David Mosberger
@ 2002-09-18 6:54 ` David Mosberger
2002-09-28 21:48 ` [Linux-ia64] kernel update (relative to 2.5.39) David Mosberger
` (47 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-09-18 6:54 UTC (permalink / raw)
To: linux-ia64
A new ia64 patch is now at ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/
in file:
linux-2.5.35-ia64-020917.diff.gz
Most important changes:
- added Rohit's huge page patch (now you too can have 4GB
pages... ;-)
- new clone() flags are (should be) supported now
See http://lia64.bkbits.net:8080/to-linus-2.5 for a more detailed
change log.
Seems to work fine on HP Ski and Big Sur. It's not working on zx1
machines at the moment (I'll fix that next, but that will probably be
with 2.5.36 or wherever Linus is by then...)
Enjoy,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (167 preceding siblings ...)
2002-09-18 6:54 ` [Linux-ia64] kernel update (relative to 2.5.35) David Mosberger
@ 2002-09-28 21:48 ` David Mosberger
2002-09-30 23:28 ` Peter Chubb
` (46 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-09-28 21:48 UTC (permalink / raw)
To: linux-ia64
A new ia64 patch is now available at
ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/ in file:
linux-2.5.39-ia64-020928.gz
For a detailed changelog etc., see
http://lia64.bkbits.net:8080/to-linus-2.5 as usual.
The major change in this patch is that I switched the IOSAPIC code
over to use the ACPI subsystem. This brings us more in sync with the
x86 tree and avoids code duplication. I tested this both on Big Sur
and zx1 and it seems to work fine. The integration is not perfect
yet, because on ia64 linux the irq number is still different from the
ACPI GSI (for no good reason, IMHO). Hopefully, someone can take a
look into changing this.
I'd like to encourage all developers to try out this kernel. We don't
want to end up in a situation where dozens of new bugs are discovered
a day before v2.6 is released (not that this is going to happen very
soon, but we are approaching feature freeze very quickly...).
Enjoy,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (168 preceding siblings ...)
2002-09-28 21:48 ` [Linux-ia64] kernel update (relative to 2.5.39) David Mosberger
@ 2002-09-30 23:28 ` Peter Chubb
2002-09-30 23:49 ` David Mosberger
` (45 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-09-30 23:28 UTC (permalink / raw)
To: linux-ia64
With the new patch, I see lots of warnings compiling the various
filesystems (particularly ext3 and reiserfs), as ino_t is defined as
an int on IA64, but is printed in error messages as a long (it's a
long on most other platforms).
There are two ways I can think of to fix this:
1. Change the definition of ino_t to be a long, not an int, as
was done in the sparc64 and ppc64 code. The comments in
asm/posix_types.h say these types are used for communication to
userspace, so maybe that's not an option any more?
2. Add casts to long everywhere an ino_t is printed out. This
approach is ugly, and requires quite a few changes; and would
have to be repeated every time other architecture-maintainers
add new printks.
Is there a better, third, way?
--
Dr Peter Chubb peterc@gelato.unsw.edu.au
You are lost in a maze of BitKeeper repositories, all almost the same.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (169 preceding siblings ...)
2002-09-30 23:28 ` Peter Chubb
@ 2002-09-30 23:49 ` David Mosberger
2002-10-01 4:26 ` Peter Chubb
` (44 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-09-30 23:49 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 1 Oct 2002 09:28:12 +1000, Peter Chubb <peter@chubb.wattle.id.au> said:
Peter> With the new patch, I see lots of warnings compiling the
Peter> various filesystems (particularly ext3 and reiserfs), as
Peter> ino_t is defined as an int on IA64, but is printed in error
Peter> messages as a long (it's a long on most other platforms).
Peter> There are two ways I can think of to fix this:
Peter> 1. Change the definition of ino_t to be a long, not an
Peter> int, as was done in the sparc64 and ppc64 code. The comments
Peter> in asm/posix_types.h say these types are used for
Peter> communication to userspace, so maybe that's not an option any
Peter> more?
Peter> 2. Add casts to long everywhere an ino_t is printed
Peter> out. This approach is ugly, and requires quite a few
Peter> changes; and would have to be repeated every time other
Peter> architecture-maintainers add new printks.
Peter> Is there a better, third, way?
If we can do it without breaking apps, I'm all for changing ino_t to
64 bits. Note that "struct stat" already defines the st_ino member as
"unsigned long". Also, glibc declares ino_t as a 64-bit type.
The thing to do would be to check all system calls that directly or
indirectly depend on ino_t and verify that they'd be OK (and let's not
forget about checking ioctl()...). The x86 emulation layer would also
have to be checked to make sure that all ino_t values are properly
converted.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (170 preceding siblings ...)
2002-09-30 23:49 ` David Mosberger
@ 2002-10-01 4:26 ` Peter Chubb
2002-10-01 5:19 ` David Mosberger
` (43 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-10-01 4:26 UTC (permalink / raw)
To: linux-ia64
To get the new patched version to compile without a VGA card, (using
serial console) I needed this patch:
--- /tmp/geta5135 2002-10-01 14:02:36.000000000 +1000
+++ linux-2.5-ia64-merge/kernel/printk.c 2002-10-01 14:01:47.000000000 +1000
@@ -718,9 +718,10 @@
#ifdef CONFIG_IA64_EARLY_PRINTK
+#include <asm/io.h>
+
# ifdef CONFIG_IA64_EARLY_PRINTK_VGA
-#include <asm/io.h>
#define VGABASE ((char *)0xc0000000000b8000)
#define VGALINES 24
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (171 preceding siblings ...)
2002-10-01 4:26 ` Peter Chubb
@ 2002-10-01 5:19 ` David Mosberger
2002-10-03 2:33 ` Jes Sorensen
` (42 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-10-01 5:19 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 1 Oct 2002 14:26:25 +1000, Peter Chubb <peter@chubb.wattle.id.au> said:
Peter> To get the new patched version to compile without a VGA card,
Peter> (using serial console) I needed this patch:
Looks good to me. I applied it.
Thanks,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (172 preceding siblings ...)
2002-10-01 5:19 ` David Mosberger
@ 2002-10-03 2:33 ` Jes Sorensen
2002-10-03 2:46 ` KOCHI, Takayoshi
` (41 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Jes Sorensen @ 2002-10-03 2:33 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@napali.hpl.hp.com> writes:
> >>>>> On Tue, 1 Oct 2002 09:28:12 +1000, Peter Chubb <peter@chubb.wattle.id.au> said:
>
> Peter> 1. Change the definition of ino_t to be a long, not an
> Peter> int, as was done in the sparc64 and ppc64 code. The comments
> Peter> in asm/posix_types.h say these types are used for
> Peter> communication to userspace, so maybe that's not an option any
> Peter> more?
>
> [snip]
>
> If we can do it without breaking apps, I'm all for changing ino_t to
> 64 bits. Note that "struct stat" already defines the st_ino member as
> "unsigned long". Also, glibc declares ino_t as a 64-bit type.
I don't think there should be many (if any) cases where this happens,
but I agree it should be validated. Since we're in little endian mode,
we might be lucky that natural data alignment inside structs will save
us as well ;-)
Cheers,
Jes
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (173 preceding siblings ...)
2002-10-03 2:33 ` Jes Sorensen
@ 2002-10-03 2:46 ` KOCHI, Takayoshi
2002-10-13 23:39 ` Peter Chubb
` (40 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: KOCHI, Takayoshi @ 2002-10-03 2:46 UTC (permalink / raw)
To: linux-ia64
Hi,
On Sat, 28 Sep 2002 14:48:37 -0700
David Mosberger <davidm@napali.hpl.hp.com> wrote:
> The major change in this patch is that I switched the IOSAPIC code
> over to use the ACPI subsystem. This brings us more in sync with the
> x86 tree and avoids code duplication. I tested this both on Big Sur
> and zx1 and it seems to work fine. The integration is not perfect
> yet, because on ia64 linux the irq number is still different from the
> ACPI GSI (for no good reason, IMHO). Hopefully, someone can take a
> look into changing this.
I'll look into it.
Thanks,
--
KOCHI, Takayoshi <t-kouchi@cq.jp.nec.com/t-kouchi@mvf.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (174 preceding siblings ...)
2002-10-03 2:46 ` KOCHI, Takayoshi
@ 2002-10-13 23:39 ` Peter Chubb
2002-10-17 11:46 ` Jes Sorensen
` (39 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2002-10-13 23:39 UTC (permalink / raw)
To: linux-ia64
>>>>> "Jes" = Jes Sorensen <jes@trained-monkey.org> writes:
Jes> David Mosberger <davidm@napali.hpl.hp.com> writes:
>> >>>>> On Tue, 1 Oct 2002 09:28:12 +1000, Peter Chubb
>> <peter@chubb.wattle.id.au> said:
>>
Peter> 1. Change the definition of ino_t to be a long, not an int, as
Peter> was done in the sparc64 and ppc64 code. The comments in
Peter> asm/posix_types.h say these types are used for communication to
Peter> userspace, so maybe that's not an option any more?
>> [snip]
>>
>> If we can do it without breaking apps, I'm all for changing ino_t
>> to 64 bits. Note that "struct stat" already defines the st_ino
>> member as "unsigned long". Also, glibc declares ino_t as a 64-bit
>> type.
Looks like this is now a non-issue:
ChangeSet@1.785.1.1, 2002-10-12 12:09:25-07:00,
rth@splat.sfbay.redhat.com
Fix warnings of the form
warning: long int format, different type arg (arg 5)
by casting ino_t arguments to unsigned long for printf formats.
In some instances, change %ld to %lu.
Peter C
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.5.39)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (175 preceding siblings ...)
2002-10-13 23:39 ` Peter Chubb
@ 2002-10-17 11:46 ` Jes Sorensen
2002-11-01 6:18 ` [Linux-ia64] kernel update (relative to 2.5.45) David Mosberger
` (38 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Jes Sorensen @ 2002-10-17 11:46 UTC (permalink / raw)
To: linux-ia64
>>>>> "Peter" = Peter Chubb <peter@chubb.wattle.id.au> writes:
>>>>> "Jes" = Jes Sorensen <jes@trained-monkey.org> writes:
>>> If we can do it without breaking apps, I'm all for changing ino_t to
>>> 64 bits. Note that "struct stat" already defines the st_ino member
>>> as "unsigned long". Also, glibc declares ino_t as a 64-bit type.
Peter> Looks like this is now a non-issue:
This is probably the least of the problems (if there is one); compiler
warnings for printk's don't matter all that much.
Jes
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.5.45)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (176 preceding siblings ...)
2002-10-17 11:46 ` Jes Sorensen
@ 2002-11-01 6:18 ` David Mosberger
2002-12-11 4:44 ` [Linux-ia64] kernel update (relative to 2.4.20) Bjorn Helgaas
` (37 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-11-01 6:18 UTC (permalink / raw)
To: linux-ia64
The latest ia64 patch is now at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/
in file:
linux-2.5.45-ia64-021031.diff.gz
Apart from various fixes and sync-ups (see log at
http://lia64.bkbits.net:8080/to-linus-2.5 for details), the most
notable change is that the per-CPU data area is now always 64KB in
size, no matter what the normal page size. This was needed because a
per-CPU data structure in the scheduler is very large (>4KB), so a
single page was no longer sufficient.
This patch has been tested on Ski simulator, Big Sur (2-way), and
zx2000 (1-way).
Enjoy,
--david
PS: Don't try to build the NFS v4 server---it doesn't build. Also,
"make xconfig" now requires a ton of Qt libraries. "make menuconfig"
works pretty much as usual, though.
^ permalink raw reply [flat|nested] 217+ messages in thread* [Linux-ia64] kernel update (relative to 2.4.20)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (177 preceding siblings ...)
2002-11-01 6:18 ` [Linux-ia64] kernel update (relative to 2.5.45) David Mosberger
@ 2002-12-11 4:44 ` Bjorn Helgaas
2002-12-12 2:00 ` Matthew Wilcox
` (36 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Bjorn Helgaas @ 2002-12-11 4:44 UTC (permalink / raw)
To: linux-ia64
The latest ia64 kernel patch for Linux 2.4.20 is available here:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/linux-2.4.20-ia64-021210.diff.gz
The current 2.4-based ia64 tree is also available as a BitKeeper
repository. You can browse the changelog and the source files with a
normal browser at:
http://lia64.bkbits.net:8080/linux-ia64-2.4
or you can use the BitKeeper tools to maintain a local copy of the
tree like this:
$ bk clone bk://lia64.bkbits.net/linux-ia64-2.4 linux-ia64-2.4
Note that the SPCR and DBGP support has been removed. In the opinion
of HP lawyers, that support contained Microsoft intellectual property,
so they requested its removal.
Bjorn
Changes since the 2.4.19-ia64-020821 patch:
* Driver changes:
- bcm: dropped (you may be able to use tg3 instead).
- e1000: dropped 4.1.7, adopted upstream (currently 4.4.12-k1)
- e100: RX_ALIGN now upstream; only last_rx_time fix in ia64 patch.
- acenic: dropped ia64 patches; upstream has most or all.
- forte: dropped 1.45, adopted upstream (currently 1.55).
* SPCR/DBGP support removed (encumbered by Microsoft IP; use HCDP instead).
* McKinley A-step config doc removed (code was already gone).
* HP prototype code removed (Matthew Wilcox).
* include/asm-ia64/offsets.h removed.
* sim{eth,scsi,serial} (drivers for HP simulator) moved to arch/ia64/hp/sim/.
* drivers/media/{radio,video}/dummy.c removed.
* Support for /dev/mem write-coalescing mappings removed.
* Support for non-cached mappings of main memory removed.
* Support scatterlist page/offset in sba_iommu.
* AGP/DRM rework to make it more presentable.
- DRM: dropped obsolete #ifdef __alpha__ diffs.
- DRM: r128, radeon: made all 460GX checks run-time, not compile-time.
* ACPI CRS cleanup (Takayoshi Kochi).
* ACPI debug fixes (Takayoshi Kochi).
* Fix many warnings (Andreas Schwab).
* Fix I/O macros (inb, outb, etc) (Andreas Schwab, David Mosberger).
* If more than NR_CPUS found, ignore extras.
* Add generic RAID xor routines with prefetch (Matthew Wilcox).
* Discard *.text.exit and *.data.exit sections (Matthew Wilcox).
* Fix edge-triggered IRQ handling (David Mosberger).
* Alternate signal stack fixes (David Mosberger).
* VFS extended attribute syscall numbers (Andreas Gruenbacher).
* binfmt argv[1] preservation (David Mosberger).
* Preserve FP registers around firmware calls (John Marvin, David Mosberger).
* FPU load/save optimization (Fenghua Yu).
* Syscalls for Extended Attribute VFS infrastructure (Andreas Gruenbacher).
* Use virtual mem map automatically when needed (John Marvin).
* Fix mremap when returning "negative" addresses (Matt Chapman, David Mosberger).
* Add breakpoint hook for simulator (Peter Chubb, David Mosberger).
* Fix memcpy to return destination address (Ken Chen).
* Fix EFI handling of complicated memory maps (David Mosberger).
* Fix ACPI global lock acquire/release (David Mosberger).
* __init/__devinit fixup for PCI hotplug (Jung-Ik Lee).
* Fix TLB flushing for multi-threaded address spaces on SMP (David Mosberger).
* Bugfixes and cleanup in MCA logging (Jenna Hall).
* Poll for corrected platform errors if no CPE interrupt (Alex Williamson).
* MCA and data corruption fixes in HP ZX1 IOMMU driver (Alex Williamson).
- NOTE: SGI pci_dma.c requires corresponding changes.
* HP ZX1 AGP bridge detected via ACPI, not fake PCI devices.
* Perfmon update to version 1.2 (Stephane Eranian).
* Scan PCI buses 0-255 (not 0-254).
* Skip blind PCI probe when root bridges reported by ACPI.
* ACPI: backport bugfix so we see all 460GX PCI root bridges.
* Save/restore FP state in IA32 exception handling (Venkatesh Pallipadi).
* IA32 ptrace: support xmm regs, bug fixes (Venkatesh Pallipadi).
* Fix several unaligned access problems (David Mosberger).
* Fix fork/ptrace deadlock (David Mosberger).
The following major pieces of the ia64 patch are unchanged since
2.4.19-ia64-020821:
* ACPI: CA version 20020517 (upstream has version 20011018).
* qla1280: 3.23 Beta (upstream has 3.00-Beta).
* qla2x00: 4.31.7b (not in upstream).
NOTE: I removed the mmap support for MAP_WRITECOMBINED and
MAP_NONCACHED to avoid issues with memory attribute aliasing.
The only user of these that I know about is XFree86, which still
seems functional when we ignore the attributes it requests. I'm
very interested in any problems caused by this change.
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.20)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (178 preceding siblings ...)
2002-12-11 4:44 ` [Linux-ia64] kernel update (relative to 2.4.20) Bjorn Helgaas
@ 2002-12-12 2:00 ` Matthew Wilcox
2002-12-13 17:36 ` Bjorn Helgaas
` (35 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Matthew Wilcox @ 2002-12-12 2:00 UTC (permalink / raw)
To: linux-ia64
On Tue, Dec 10, 2002 at 09:44:12PM -0700, Bjorn Helgaas wrote:
> The latest ia64 kernel patch for Linux 2.4.20 is available here:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.4/linux-2.4.20-ia64-021210.diff.gz
I've uploaded a version of the latest ACPI code as a patch against this
release to http://sourceforge.net/project/showfiles.php?group_id=6832
--
"It's not Hollywood. War is real, war is primarily not about defeat or
victory, it is about death. I've seen thousands and thousands of dead bodies.
Do you think I want to have an academic debate on this subject?" -- Robert Fisk
^ permalink raw reply [flat|nested] 217+ messages in thread* Re: [Linux-ia64] kernel update (relative to 2.4.20)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (179 preceding siblings ...)
2002-12-12 2:00 ` Matthew Wilcox
@ 2002-12-13 17:36 ` Bjorn Helgaas
2002-12-21 9:00 ` [Linux-ia64] kernel update (relative to 2.5.52) David Mosberger
` (34 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Bjorn Helgaas @ 2002-12-13 17:36 UTC (permalink / raw)
To: linux-ia64
The linux-2.4.20-ia64-021210 patch broke the ski simulator kernel.
The attached additional patch fixes the problem, and also incorporates
the 2.5 simscsi code that auto-detects the simulated disk size, so you
don't need a 1G image. Note that I also adopted the 2.5 config
symbols.
Sorry for the inconvenience.
Bjorn
diff -u -urN linux-2.4.20-ia64-021210/arch/ia64/config.in linux-ski/arch/ia64/config.in
--- linux-2.4.20-ia64-021210/arch/ia64/config.in 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/arch/ia64/config.in 2002-12-13 10:03:28.000000000 -0700
@@ -250,10 +250,10 @@
mainmenu_option next_comment
comment 'Simulated drivers'
- bool 'Simulated Ethernet ' CONFIG_SIMETH
- bool 'Simulated serial driver support' CONFIG_SIM_SERIAL
+ bool 'Simulated Ethernet ' CONFIG_HP_SIMETH
+ bool 'Simulated serial driver support' CONFIG_HP_SIMSERIAL
if [ "$CONFIG_SCSI" != "n" ]; then
- bool 'Simulated SCSI disk' CONFIG_SCSI_SIM
+ bool 'Simulated SCSI disk' CONFIG_HP_SIMSCSI
fi
endmenu
fi
diff -u -urN linux-2.4.20-ia64-021210/arch/ia64/hp/sim/Makefile linux-ski/arch/ia64/hp/sim/Makefile
--- linux-2.4.20-ia64-021210/arch/ia64/hp/sim/Makefile 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/arch/ia64/hp/sim/Makefile 2002-12-13 10:04:07.000000000 -0700
@@ -12,8 +12,4 @@
obj-y := hpsim_console.o hpsim_irq.o hpsim_setup.o
obj-$(CONFIG_IA64_GENERIC) += hpsim_machvec.o
-obj-$(CONFIG_HP_SIMETH) += simeth.o
-obj-$(CONFIG_HP_SIMSERIAL) += simserial.o
-obj-$(CONFIG_HP_SIMSCSI) += simscsi.o
-
include $(TOPDIR)/Rules.make
diff -u -urN linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simeth.c linux-ski/arch/ia64/hp/sim/simeth.c
--- linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simeth.c 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/arch/ia64/hp/sim/simeth.c 1969-12-31 17:00:00.000000000 -0700
@@ -1,533 +0,0 @@
-/*
- * Simulated Ethernet Driver
- *
- * Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 Stephane Eranian <eranian@hpl.hp.com>
- */
-#include <linux/config.h>
-#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/types.h>
-#include <linux/in.h>
-#include <linux/string.h>
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/interrupt.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/inetdevice.h>
-#include <linux/if_ether.h>
-#include <linux/if_arp.h>
-#include <linux/skbuff.h>
-#include <linux/notifier.h>
-#include <asm/bitops.h>
-#include <asm/system.h>
-#include <asm/irq.h>
-
-#define SIMETH_RECV_MAX 10
-
-/*
- * Maximum possible received frame for Ethernet.
- * We preallocate an sk_buff of that size to avoid costly
- * memcpy for temporary buffer into sk_buff. We do basically
- * what's done in other drivers, like eepro with a ring.
- * The difference is, of course, that we don't have real DMA !!!
- */
-#define SIMETH_FRAME_SIZE ETH_FRAME_LEN
-
-
-#define SSC_NETDEV_PROBE 100
-#define SSC_NETDEV_SEND 101
-#define SSC_NETDEV_RECV 102
-#define SSC_NETDEV_ATTACH 103
-#define SSC_NETDEV_DETACH 104
-
-#define NETWORK_INTR 8
-
-struct simeth_local {
- struct net_device_stats stats;
- int simfd; /* descriptor in the simulator */
-};
-
-static int simeth_probe1(void);
-static int simeth_open(struct net_device *dev);
-static int simeth_close(struct net_device *dev);
-static int simeth_tx(struct sk_buff *skb, struct net_device *dev);
-static int simeth_rx(struct net_device *dev);
-static struct net_device_stats *simeth_get_stats(struct net_device *dev);
-static void simeth_interrupt(int irq, void *dev_id, struct pt_regs * regs);
-static void set_multicast_list(struct net_device *dev);
-static int simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr);
-
-static char *simeth_version="0.3";
-
-/*
- * This variable is used to establish a mapping between the Linux/ia64 kernel
- * and the host linux kernel.
- *
- * As of today, we support only one card, even though most of the code
- * is ready for many more. The mapping is then:
- * linux/ia64 -> linux/x86
- * eth0 -> eth1
- *
- * In the future, with some string operations, we could easily support up
- * to 10 cards (0-9).
- *
- * The default mapping can be changed on the kernel command line by
- * specifying simeth=ethX (or whatever string you want).
- */
-static char *simeth_device="eth0"; /* default host interface to use */
-
-
-
-static volatile unsigned int card_count; /* how many cards "found" so far */
-static int simeth_debug; /* set to 1 to get debug information */
-
-/*
- * Used to catch IFF_UP & IFF_DOWN events
- */
-static struct notifier_block simeth_dev_notifier = {
- simeth_device_event,
- 0
-};
-
-
-/*
- * Function used when using a kernel command line option.
- *
- * Format: simeth=interface_name (like eth0)
- */
-static int __init
-simeth_setup(char *str)
-{
- simeth_device = str;
- return 1;
-}
-
-__setup("simeth=", simeth_setup);
-
-/*
- * Function used to probe for simeth devices when not installed
- * as a loadable module
- */
-
-int __init
-simeth_probe (void)
-{
- int r;
-
- printk("simeth: v%s\n", simeth_version);
-
- r = simeth_probe1();
-
- if (r == 0) register_netdevice_notifier(&simeth_dev_notifier);
-
- return r;
-}
-
-extern long ia64_ssc (long, long, long, long, int);
-extern void ia64_ssc_connect_irq (long intr, long irq);
-
-static inline int
-netdev_probe(char *name, unsigned char *ether)
-{
- return ia64_ssc(__pa(name), __pa(ether), 0,0, SSC_NETDEV_PROBE);
-}
-
-
-static inline int
-netdev_connect(int irq)
-{
- /* XXX Fix me
- * this does not support multiple cards
- * also no return value
- */
- ia64_ssc_connect_irq(NETWORK_INTR, irq);
- return 0;
-}
-
-static inline int
-netdev_attach(int fd, int irq, unsigned int ipaddr)
-{
- /* this puts the host interface in the right mode (start interrupting) */
- return ia64_ssc(fd, ipaddr, 0,0, SSC_NETDEV_ATTACH);
-}
-
-
-static inline int
-netdev_detach(int fd)
-{
- /*
- * inactivate the host interface (don't interrupt anymore) */
- return ia64_ssc(fd, 0,0,0, SSC_NETDEV_DETACH);
-}
-
-static inline int
-netdev_send(int fd, unsigned char *buf, unsigned int len)
-{
- return ia64_ssc(fd, __pa(buf), len, 0, SSC_NETDEV_SEND);
-}
-
-static inline int
-netdev_read(int fd, unsigned char *buf, unsigned int len)
-{
- return ia64_ssc(fd, __pa(buf), len, 0, SSC_NETDEV_RECV);
-}
-
-/*
- * Function shared with module code, so cannot be in init section
- *
- * So far this function "detects" only one card (test_&_set) but could
- * be extended easily.
- *
- * Return:
- * - -ENODEV if no device found
- * - -ENOMEM if out of memory
- * - 0 otherwise
- */
-static int
-simeth_probe1(void)
-{
- unsigned char mac_addr[ETH_ALEN];
- struct simeth_local *local;
- struct net_device *dev;
- int fd, i;
-
- /*
- * XXX Fix me
- * let's support just one card for now
- */
- if (test_and_set_bit(0, &card_count))
- return -ENODEV;
-
- /*
- * check with the simulator for the device
- */
- fd = netdev_probe(simeth_device, mac_addr);
- if (fd == -1)
- return -ENODEV;
-
- dev = init_etherdev(NULL, sizeof(struct simeth_local));
- if (!dev)
- return -ENOMEM;
-
- memcpy(dev->dev_addr, mac_addr, sizeof(mac_addr));
-
- dev->irq = ia64_alloc_vector();
-
- /*
- * attach the interrupt in the simulator, this does not enable interrupts
- * until a netdev_attach() is called
- */
- netdev_connect(dev->irq);
-
- memset(dev->priv, 0, sizeof(struct simeth_local));
-
- local = dev->priv;
- local->simfd = fd; /* keep track of underlying file descriptor */
-
- dev->open = simeth_open;
- dev->stop = simeth_close;
- dev->hard_start_xmit = simeth_tx;
- dev->get_stats = simeth_get_stats;
- dev->set_multicast_list = set_multicast_list; /* not yet used */
-
- /* Fill in the fields of the device structure with ethernet-generic values. */
- ether_setup(dev);
-
- printk("%s: hosteth=%s simfd=%d, HwAddr", dev->name, simeth_device, local->simfd);
- for(i = 0; i < ETH_ALEN; i++) {
- printk(" %2.2x", dev->dev_addr[i]);
- }
- printk(", IRQ %d\n", dev->irq);
-
- return 0;
-}
-
-/*
- * actually binds the device to an interrupt vector
- */
-static int
-simeth_open(struct net_device *dev)
-{
- if (request_irq(dev->irq, simeth_interrupt, 0, "simeth", dev)) {
- printk ("simeth: unable to get IRQ %d.\n", dev->irq);
- return -EAGAIN;
- }
-
- netif_start_queue(dev);
-
- return 0;
-}
-
-/* copied from lapbether.c */
-static __inline__ int dev_is_ethdev(struct net_device *dev)
-{
- return ( dev->type == ARPHRD_ETHER && strncmp(dev->name, "dummy", 5));
-}
-
-
-/*
- * Handler for IFF_UP or IFF_DOWN
- *
- * The reason for that is that we don't want to be interrupted when the
- * interface is down. There is no way to unconnect in the simulator. Instead
- * we use this function to shutdown packet processing in the frame filter
- * in the simulator. Thus no interrupts are generated
- *
- *
- * That's also the place where we pass the IP address of this device to the
- * simulator so that that we can start filtering packets for it
- *
- * There may be a better way of doing this, but I don't know which yet.
- */
-static int
-simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr)
-{
- struct net_device *dev = (struct net_device *)ptr;
- struct simeth_local *local;
- struct in_device *in_dev;
- struct in_ifaddr **ifap = NULL;
- struct in_ifaddr *ifa = NULL;
- int r;
-
-
- if ( ! dev ) {
- printk(KERN_WARNING "simeth_device_event dev=0\n");
- return NOTIFY_DONE;
- }
-
- if ( event != NETDEV_UP && event != NETDEV_DOWN ) return NOTIFY_DONE;
-
- /*
- * Check whether or not it's for an ethernet device
- *
- * XXX Fixme: This works only as long as we support one
- * type of ethernet device.
- */
- if ( !dev_is_ethdev(dev) ) return NOTIFY_DONE;
-
- if ((in_dev=dev->ip_ptr) != NULL) {
- for (ifap=&in_dev->ifa_list; (ifa=*ifap) != NULL; ifap=&ifa->ifa_next)
- if (strcmp(dev->name, ifa->ifa_label) == 0) break;
- }
- if ( ifa == NULL ) {
- printk("simeth_open: can't find device %s's ifa\n", dev->name);
- return NOTIFY_DONE;
- }
-
- printk("simeth_device_event: %s ipaddr=0x%x\n", dev->name, htonl(ifa->ifa_local));
-
- /*
- * XXX Fix me
- * if the device was up, and we're simply reconfiguring it, not sure
- * we get DOWN then UP.
- */
-
- local = dev->priv;
- /* now do it for real */
- r = event == NETDEV_UP ?
- netdev_attach(local->simfd, dev->irq, htonl(ifa->ifa_local)):
- netdev_detach(local->simfd);
-
- printk("simeth: netdev_attach/detach: event=%s ->%d\n", event == NETDEV_UP ? "attach":"detach", r);
-
- return NOTIFY_DONE;
-}
-
-static int
-simeth_close(struct net_device *dev)
-{
- netif_stop_queue(dev);
-
- free_irq(dev->irq, dev);
-
- return 0;
-}
-
-/*
- * Only used for debug
- */
-static void
-frame_print(unsigned char *from, unsigned char *frame, int len)
-{
- int i;
-
- printk("%s: (%d) %02x", from, len, frame[0] & 0xff);
- for(i=1; i < 6; i++ ) {
- printk(":%02x", frame[i] &0xff);
- }
- printk(" %2x", frame[6] &0xff);
- for(i=7; i < 12; i++ ) {
- printk(":%02x", frame[i] &0xff);
- }
- printk(" [%02x%02x]\n", frame[12], frame[13]);
-
- for(i=14; i < len; i++ ) {
- printk("%02x ", frame[i] &0xff);
- if ( (i%10)==0) printk("\n");
- }
- printk("\n");
-}
-
-
-/*
- * Function used to transmit a frame; it is the very last one on the path
- * before going to the simulator.
- */
-static int
-simeth_tx(struct sk_buff *skb, struct net_device *dev)
-{
- struct simeth_local *local = (struct simeth_local *)dev->priv;
-
-#if 0
- /* ensure we have at least ETH_ZLEN bytes (min frame size) */
- unsigned int length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
- /* Where do the extra padding bytes come from in the skbuff? */
-#else
- /* the real driver in the host system is going to take care of that
- * or maybe it's the NIC itself.
- */
- unsigned int length = skb->len;
-#endif
-
- local->stats.tx_bytes += skb->len;
- local->stats.tx_packets++;
-
-
- if (simeth_debug > 5) frame_print("simeth_tx", skb->data, length);
-
- netdev_send(local->simfd, skb->data, length);
-
- /*
- * we are synchronous on write, so we don't simulate a
- * transmit complete interrupt, thus we don't need to arm a tx
- */
-
- dev_kfree_skb(skb);
- return 0;
-}
-
-static inline struct sk_buff *
-make_new_skb(struct net_device *dev)
-{
- struct sk_buff *nskb;
-
- /*
- * The +2 is used to make sure that the IP header is nicely
- * aligned (on 4byte boundary I assume 14+2=16)
- */
- nskb = dev_alloc_skb(SIMETH_FRAME_SIZE + 2);
- if ( nskb == NULL ) {
- printk(KERN_NOTICE "%s: memory squeeze. dropping packet.\n", dev->name);
- return NULL;
- }
- nskb->dev = dev;
-
- skb_reserve(nskb, 2); /* Align IP on 16 byte boundaries */
-
- skb_put(nskb,SIMETH_FRAME_SIZE);
-
- return nskb;
-}
-
-/*
- * called from interrupt handler to process a received frame
- */
-static int
-simeth_rx(struct net_device *dev)
-{
- struct simeth_local *local;
- struct sk_buff *skb;
- int len;
- int rcv_count = SIMETH_RECV_MAX;
-
- local = (struct simeth_local *)dev->priv;
- /*
- * the loop concept has been borrowed from other drivers;
- * it looks like a throttling mechanism to avoid pushing too many
- * packets at one time into the stack, making sure we can process them
- * upstream and make forward progress overall
- */
- do {
- if ( (skb=make_new_skb(dev)) == NULL ) {
- printk(KERN_NOTICE "%s: memory squeeze. dropping packet.\n", dev->name);
- local->stats.rx_dropped++;
- return 0;
- }
- /*
- * Read only one frame at a time
- */
- len = netdev_read(local->simfd, skb->data, SIMETH_FRAME_SIZE);
- if ( len == 0 ) {
- if ( simeth_debug > 0 ) printk(KERN_WARNING "%s: count=%d netdev_read=0\n", dev->name, SIMETH_RECV_MAX-rcv_count);
- break;
- }
-#if 0
- /*
- * XXX Fix me
- * Should really do a csum+copy here
- */
- memcpy(skb->data, frame, len);
-#endif
- skb->protocol = eth_type_trans(skb, dev);
-
- if ( simeth_debug > 6 ) frame_print("simeth_rx", skb->data, len);
-
- /*
- * push the packet up & trigger software interrupt
- */
- netif_rx(skb);
-
- local->stats.rx_packets++;
- local->stats.rx_bytes += len;
-
- } while ( --rcv_count );
-
- return len; /* 0 = nothing left to read, otherwise, we can try again */
-}
-
-/*
- * Interrupt handler (Yes, we can do it too !!!)
- */
-static void
-simeth_interrupt(int irq, void *dev_id, struct pt_regs * regs)
-{
- struct net_device *dev = dev_id;
-
- if ( dev == NULL ) {
- printk(KERN_WARNING "simeth: irq %d for unknown device\n", irq);
- return;
- }
-
- /*
- * very simple loop because we get interrupts only when receiving
- */
- while (simeth_rx(dev));
-}
-
-static struct net_device_stats *
-simeth_get_stats(struct net_device *dev)
-{
- struct simeth_local *local = (struct simeth_local *) dev->priv;
-
- return &local->stats;
-}
-
-/* fake multicast ability */
-static void
-set_multicast_list(struct net_device *dev)
-{
- printk(KERN_WARNING "%s: set_multicast_list called\n", dev->name);
-}
-
-#ifdef CONFIG_NET_FASTROUTE
-static int
-simeth_accept_fastpath(struct net_device *dev, struct dst_entry *dst)
-{
- printk(KERN_WARNING "%s: simeth_accept_fastpath called\n", dev->name);
- return -1;
-}
-#endif
-
-__initcall(simeth_probe);
diff -u -urN linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simscsi.c linux-ski/arch/ia64/hp/sim/simscsi.c
--- linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simscsi.c 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/arch/ia64/hp/sim/simscsi.c 1969-12-31 17:00:00.000000000 -0700
@@ -1,384 +0,0 @@
-/*
- * Simulated SCSI driver.
- *
- * Copyright (C) 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
- *
- * 99/12/18 David Mosberger Added support for READ10/WRITE10 needed by linux v2.3.33
- */
-#include <linux/config.h>
-#include <linux/blk.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/kernel.h>
-#include <linux/timer.h>
-
-#include <scsi/scsi.h>
-
-#include <asm/irq.h>
-
-#include "scsi.h"
-#include "sd.h"
-#include "hosts.h"
-#include "simscsi.h"
-
-#define DEBUG_SIMSCSI 1
-
-/* Simulator system calls: */
-
-#define SSC_OPEN 50
-#define SSC_CLOSE 51
-#define SSC_READ 52
-#define SSC_WRITE 53
-#define SSC_GET_COMPLETION 54
-#define SSC_WAIT_COMPLETION 55
-
-#define SSC_WRITE_ACCESS 2
-#define SSC_READ_ACCESS 1
-
-#ifdef DEBUG_SIMSCSI
- int simscsi_debug;
-# define DBG simscsi_debug
-#else
-# define DBG 0
-#endif
-
-#if 0
-struct timer_list disk_timer;
-#else
-static void simscsi_interrupt (unsigned long val);
-DECLARE_TASKLET(simscsi_tasklet, simscsi_interrupt, 0);
-#endif
-
-struct disk_req {
- unsigned long addr;
- unsigned len;
-};
-
-struct disk_stat {
- int fd;
- unsigned count;
-};
-
-extern long ia64_ssc (long arg0, long arg1, long arg2, long arg3, int nr);
-
-static int desc[8] = { -1, -1, -1, -1, -1, -1, -1, -1 };
-
-static struct queue_entry {
- Scsi_Cmnd *sc;
-} queue[SIMSCSI_REQ_QUEUE_LEN];
-
-static int rd, wr;
-static atomic_t num_reqs = ATOMIC_INIT(0);
-
-/* base name for default disks */
-static char *simscsi_root = DEFAULT_SIMSCSI_ROOT;
-
-#define MAX_ROOT_LEN 128
-
-/*
- * used to setup a new base for disk images
- * to use /foo/bar/disk[a-z] as disk images
- * you have to specify simscsi=/foo/bar/disk on the command line
- */
-static int __init
-simscsi_setup (char *s)
-{
- /* XXX Fix me we may need to strcpy() ? */
- if (strlen(s) > MAX_ROOT_LEN) {
- printk("simscsi_setup: prefix too long---using default %s\n", simscsi_root);
- }
- simscsi_root = s;
- return 1;
-}
-
-__setup("simscsi=", simscsi_setup);
-
-static void
-simscsi_interrupt (unsigned long val)
-{
- unsigned long flags;
- Scsi_Cmnd *sc;
-
- spin_lock_irqsave(&io_request_lock, flags);
- {
- while ((sc = queue[rd].sc) != 0) {
- atomic_dec(&num_reqs);
- queue[rd].sc = 0;
- if (DBG)
- printk("simscsi_interrupt: done with %ld\n", sc->serial_number);
- (*sc->scsi_done)(sc);
- rd = (rd + 1) % SIMSCSI_REQ_QUEUE_LEN;
- }
- }
- spin_unlock_irqrestore(&io_request_lock, flags);
-}
-
-int
-simscsi_detect (Scsi_Host_Template *templ)
-{
- templ->proc_name = "simscsi";
-#if 0
- init_timer(&disk_timer);
- disk_timer.function = simscsi_interrupt;
-#endif
- return 1; /* fake one SCSI host adapter */
-}
-
-int
-simscsi_release (struct Scsi_Host *host)
-{
- return 0; /* this is easy... */
-}
-
-const char *
-simscsi_info (struct Scsi_Host *host)
-{
- return "simulated SCSI host adapter";
-}
-
-int
-simscsi_abort (Scsi_Cmnd *cmd)
-{
- printk ("simscsi_abort: unimplemented\n");
- return SCSI_ABORT_SUCCESS;
-}
-
-int
-simscsi_reset (Scsi_Cmnd *cmd, unsigned int reset_flags)
-{
- printk ("simscsi_reset: unimplemented\n");
- return SCSI_RESET_SUCCESS;
-}
-
-int
-simscsi_biosparam (Disk *disk, kdev_t n, int ip[])
-{
- int size = disk->capacity;
-
- ip[0] = 64;
- ip[1] = 32;
- ip[2] = size >> 11;
- return 0;
-}
-
-static void
-simscsi_readwrite (Scsi_Cmnd *sc, int mode, unsigned long offset, unsigned long len)
-{
- struct disk_stat stat;
- struct disk_req req;
-
- req.addr = __pa(sc->request_buffer);
- req.len = len; /* # of bytes to transfer */
-
- if (sc->request_bufflen < req.len)
- return;
-
- stat.fd = desc[sc->target];
- if (DBG)
- printk("simscsi_%s @ %lx (off %lx, len %lu) ->",
- mode == SSC_READ ? "read":"write", req.addr, offset, len);
- ia64_ssc(stat.fd, 1, __pa(&req), offset, mode);
- ia64_ssc(__pa(&stat), 0, 0, 0, SSC_WAIT_COMPLETION);
-
- if (stat.count == req.len) {
- sc->result = GOOD;
- } else {
- sc->result = DID_ERROR << 16;
- }
- if (DBG)
- printk("%d\n", sc->result);
-}
-
-static void
-simscsi_sg_readwrite (Scsi_Cmnd *sc, int mode, unsigned long offset)
-{
- int list_len = sc->use_sg;
- struct scatterlist *sl = (struct scatterlist *)sc->buffer;
- struct disk_stat stat;
- struct disk_req req;
-
- stat.fd = desc[sc->target];
-
- while (list_len) {
- req.addr = __pa(sl->address);
- req.len = sl->length;
- if (DBG)
- printk("simscsi_sg_%s @ %lx (off %lx) use_sg=%d len=%d\n",
- mode == SSC_READ ? "read":"write", req.addr, offset,
- list_len, sl->length);
- ia64_ssc(stat.fd, 1, __pa(&req), offset, mode);
- ia64_ssc(__pa(&stat), 0, 0, 0, SSC_WAIT_COMPLETION);
-
- /* should not happen in our case */
- if (stat.count != req.len) {
- sc->result = DID_ERROR << 16;
- return;
- }
- offset += sl->length;
- sl++;
- list_len--;
- }
- sc->result = GOOD;
-}
-
-/*
- * function handling both READ_6/WRITE_6 (non-scatter/gather mode)
- * commands.
- * Added 02/26/99 S.Eranian
- */
-static void
-simscsi_readwrite6 (Scsi_Cmnd *sc, int mode)
-{
- unsigned long offset;
-
- offset = (((sc->cmnd[1] & 0x1f) << 16) | (sc->cmnd[2] << 8) | sc->cmnd[3])*512;
- if (sc->use_sg > 0)
- simscsi_sg_readwrite(sc, mode, offset);
- else
- simscsi_readwrite(sc, mode, offset, sc->cmnd[4]*512);
-}
-
-
-static void
-simscsi_readwrite10 (Scsi_Cmnd *sc, int mode)
-{
- unsigned long offset;
-
- offset = ( (sc->cmnd[2] << 24) | (sc->cmnd[3] << 16)
- | (sc->cmnd[4] << 8) | (sc->cmnd[5] << 0))*512;
- if (sc->use_sg > 0)
- simscsi_sg_readwrite(sc, mode, offset);
- else
- simscsi_readwrite(sc, mode, offset, ((sc->cmnd[7] << 8) | sc->cmnd[8])*512);
-}
-
-int
-simscsi_queuecommand (Scsi_Cmnd *sc, void (*done)(Scsi_Cmnd *))
-{
- char fname[MAX_ROOT_LEN+16];
- char *buf;
-#if DEBUG_SIMSCSI
- register long sp asm ("sp");
-
- if (DBG)
- printk("simscsi_queuecommand: target=%d,cmnd=%u,sc=%lu,sp=%lx,done=%p\n",
- sc->target, sc->cmnd[0], sc->serial_number, sp, done);
-#endif
-
- sc->result = DID_BAD_TARGET << 16;
- sc->scsi_done = done;
- if (sc->target <= 7 && sc->lun == 0) {
- switch (sc->cmnd[0]) {
- case INQUIRY:
- if (sc->request_bufflen < 35) {
- break;
- }
- sprintf (fname, "%s%c", simscsi_root, 'a' + sc->target);
- desc[sc->target] = ia64_ssc (__pa(fname), SSC_READ_ACCESS|SSC_WRITE_ACCESS,
- 0, 0, SSC_OPEN);
- if (desc[sc->target] < 0) {
- /* disk doesn't exist... */
- break;
- }
- buf = sc->request_buffer;
- buf[0] = 0; /* magnetic disk */
- buf[1] = 0; /* not a removable medium */
- buf[2] = 2; /* SCSI-2 compliant device */
- buf[3] = 2; /* SCSI-2 response data format */
- buf[4] = 31; /* additional length (bytes) */
- buf[5] = 0; /* reserved */
- buf[6] = 0; /* reserved */
- buf[7] = 0; /* various flags */
- memcpy(buf + 8, "HP SIMULATED DISK 0.00", 28);
- sc->result = GOOD;
- break;
-
- case TEST_UNIT_READY:
- sc->result = GOOD;
- break;
-
- case READ_6:
- if (desc[sc->target] < 0 )
- break;
- simscsi_readwrite6(sc, SSC_READ);
- break;
-
- case READ_10:
- if (desc[sc->target] < 0 )
- break;
- simscsi_readwrite10(sc, SSC_READ);
- break;
-
- case WRITE_6:
- if (desc[sc->target] < 0)
- break;
- simscsi_readwrite6(sc, SSC_WRITE);
- break;
-
- case WRITE_10:
- if (desc[sc->target] < 0)
- break;
- simscsi_readwrite10(sc, SSC_WRITE);
- break;
-
-
- case READ_CAPACITY:
- if (desc[sc->target] < 0 || sc->request_bufflen < 8) {
- break;
- }
- buf = sc->request_buffer;
-
- /* pretend to be a 1GB disk (partition table contains real stuff): */
- buf[0] = 0x00;
- buf[1] = 0x1f;
- buf[2] = 0xff;
- buf[3] = 0xff;
- /* set block size of 512 bytes: */
- buf[4] = 0;
- buf[5] = 0;
- buf[6] = 2;
- buf[7] = 0;
- sc->result = GOOD;
- break;
-
- case MODE_SENSE:
- printk("MODE_SENSE\n");
- break;
-
- case START_STOP:
- printk("START_STOP\n");
- break;
-
- default:
- panic("simscsi: unknown SCSI command %u\n", sc->cmnd[0]);
- }
- }
- if (sc->result == DID_BAD_TARGET) {
- sc->result |= DRIVER_SENSE << 24;
- sc->sense_buffer[0] = 0x70;
- sc->sense_buffer[2] = 0x00;
- }
- if (atomic_read(&num_reqs) >= SIMSCSI_REQ_QUEUE_LEN) {
- panic("Attempt to queue command while command is pending!!");
- }
- atomic_inc(&num_reqs);
- queue[wr].sc = sc;
- wr = (wr + 1) % SIMSCSI_REQ_QUEUE_LEN;
-
-#if 0
- if (!timer_pending(&disk_timer)) {
- disk_timer.expires = jiffies;
- add_timer(&disk_timer);
- }
-#else
- tasklet_schedule(&simscsi_tasklet);
-#endif
- return 0;
-}
-
-
-static Scsi_Host_Template driver_template = SIMSCSI;
-
-#include "scsi_module.c"
diff -u -urN linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simscsi.h linux-ski/arch/ia64/hp/sim/simscsi.h
--- linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simscsi.h 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/arch/ia64/hp/sim/simscsi.h 1969-12-31 17:00:00.000000000 -0700
@@ -1,39 +0,0 @@
-/*
- * Simulated SCSI driver.
- *
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
- */
-#ifndef SIMSCSI_H
-#define SIMSCSI_H
-
-#define SIMSCSI_REQ_QUEUE_LEN 64
-
-#define DEFAULT_SIMSCSI_ROOT "/var/ski-disks/sd"
-
-extern int simscsi_detect (Scsi_Host_Template *);
-extern int simscsi_release (struct Scsi_Host *);
-extern const char *simscsi_info (struct Scsi_Host *);
-extern int simscsi_queuecommand (Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
-extern int simscsi_abort (Scsi_Cmnd *);
-extern int simscsi_reset (Scsi_Cmnd *, unsigned int);
-extern int simscsi_biosparam (Disk *, kdev_t, int[]);
-
-#define SIMSCSI { \
- detect: simscsi_detect, \
- release: simscsi_release, \
- info: simscsi_info, \
- queuecommand: simscsi_queuecommand, \
- abort: simscsi_abort, \
- reset: simscsi_reset, \
- bios_param: simscsi_biosparam, \
- can_queue: SIMSCSI_REQ_QUEUE_LEN, \
- this_id: -1, \
- sg_tablesize: SG_ALL, \
- cmd_per_lun: SIMSCSI_REQ_QUEUE_LEN, \
- present: 0, \
- unchecked_isa_dma: 0, \
- use_clustering: DISABLE_CLUSTERING \
-}
-
-#endif /* SIMSCSI_H */
diff -u -urN linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simserial.c linux-ski/arch/ia64/hp/sim/simserial.c
--- linux-2.4.20-ia64-021210/arch/ia64/hp/sim/simserial.c 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/arch/ia64/hp/sim/simserial.c 1969-12-31 17:00:00.000000000 -0700
@@ -1,1095 +0,0 @@
-/*
- * Simulated Serial Driver (fake serial)
- *
- * This driver is mostly used for bringup purposes and will go away.
- * It has a strong dependency on the system console. All outputs
- * are rerouted to the same facility as the one used by printk which, in our
- * case means sys_sim.c console (goes via the simulator). The code hereafter
- * is completely leveraged from the serial.c driver.
- *
- * Copyright (C) 1999-2000, 2002 Hewlett-Packard Co
- * Stephane Eranian <eranian@hpl.hp.com>
- * David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 02/04/00 D. Mosberger Merged in serial.c bug fixes in rs_close().
- * 02/25/00 D. Mosberger Synced up with 2.3.99pre-5 version of serial.c.
- */
-
-#include <linux/config.h>
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/sched.h>
-#include <linux/tty.h>
-#include <linux/tty_flip.h>
-#include <linux/major.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/console.h>
-#include <linux/module.h>
-#include <linux/serial.h>
-#include <linux/serialP.h>
-#include <linux/slab.h>
-
-#include <asm/irq.h>
-#include <asm/uaccess.h>
-
-#undef SIMSERIAL_DEBUG /* define this to get some debug information */
-
-#define KEYBOARD_INTR 3 /* must match with simulator! */
-
-#define NR_PORTS 1 /* only one port for now */
-#define SERIAL_INLINE 1
-
-#ifdef SERIAL_INLINE
-#define _INLINE_ inline
-#endif
-
-#ifndef MIN
-#define MIN(a,b) ((a) < (b) ? (a) : (b))
-#endif
-
-#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? SA_SHIRQ : SA_INTERRUPT)
-
-#define SSC_GETCHAR 21
-
-extern long ia64_ssc (long, long, long, long, int);
-extern void ia64_ssc_connect_irq (long intr, long irq);
-
-static char *serial_name = "SimSerial driver";
-static char *serial_version = "0.6";
-
-/*
- * This has been extracted from asm/serial.h. We need one eventually but
- * I don't know exactly what we're going to put in it so just fake one
- * for now.
- */
-#define BASE_BAUD ( 1843200 / 16 )
-
-#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
-
-/*
- * Most of the values here are meaningless to this particular driver.
- * However some values must be preserved for the code (leveraged from
- * serial.c) to work correctly.
- * port must not be 0
- * type must not be UNKNOWN
- * So I picked arbitrary (guess from where?) values instead
- */
-static struct serial_state rs_table[NR_PORTS]={
- /* UART CLK PORT IRQ FLAGS */
- { 0, BASE_BAUD, 0x3F8, 0, STD_COM_FLAGS,0,PORT_16550 } /* ttyS0 */
-};
-
-/*
- * Just for the fun of it !
- */
-static struct serial_uart_config uart_config[] = {
- { "unknown", 1, 0 },
- { "8250", 1, 0 },
- { "16450", 1, 0 },
- { "16550", 1, 0 },
- { "16550A", 16, UART_CLEAR_FIFO | UART_USE_FIFO },
- { "cirrus", 1, 0 },
- { "ST16650", 1, UART_CLEAR_FIFO | UART_STARTECH },
- { "ST16650V2", 32, UART_CLEAR_FIFO | UART_USE_FIFO |
- UART_STARTECH },
- { "TI16750", 64, UART_CLEAR_FIFO | UART_USE_FIFO},
- { 0, 0}
-};
-
-static struct tty_driver serial_driver, callout_driver;
-static int serial_refcount;
-
-static struct async_struct *IRQ_ports[NR_IRQS];
-static struct tty_struct *serial_table[NR_PORTS];
-static struct termios *serial_termios[NR_PORTS];
-static struct termios *serial_termios_locked[NR_PORTS];
-
-static struct console *console;
-
-static unsigned char *tmp_buf;
-static DECLARE_MUTEX(tmp_buf_sem);
-
-extern struct console *console_drivers; /* from kernel/printk.c */
-
-/*
- * ------------------------------------------------------------
- * rs_stop() and rs_start()
- *
- * These routines are called before setting or resetting tty->stopped.
- * They enable or disable transmitter interrupts, as necessary.
- * ------------------------------------------------------------
- */
-static void rs_stop(struct tty_struct *tty)
-{
-#ifdef SIMSERIAL_DEBUG
- printk("rs_stop: tty->stopped=%d tty->hw_stopped=%d tty->flow_stopped=%d\n",
- tty->stopped, tty->hw_stopped, tty->flow_stopped);
-#endif
-
-}
-
-static void rs_start(struct tty_struct *tty)
-{
-#if SIMSERIAL_DEBUG
- printk("rs_start: tty->stopped=%d tty->hw_stopped=%d tty->flow_stopped=%d\n",
- tty->stopped, tty->hw_stopped, tty->flow_stopped);
-#endif
-}
-
-static void receive_chars(struct tty_struct *tty, struct pt_regs *regs)
-{
- unsigned char ch;
- static unsigned char seen_esc = 0;
-
- while ( (ch = ia64_ssc(0, 0, 0, 0, SSC_GETCHAR)) ) {
- if ( ch == 27 && seen_esc == 0 ) {
- seen_esc = 1;
- continue;
- } else {
- if ( seen_esc == 1 && ch == 'O' ) {
- seen_esc = 2;
- continue;
- } else if ( seen_esc == 2 ) {
- if ( ch == 'P' ) show_state(); /* F1 key */
- if ( ch == 'Q' ) show_buffers(); /* F2 key */
- seen_esc = 0;
- continue;
- }
- }
- seen_esc = 0;
- if (tty->flip.count >= TTY_FLIPBUF_SIZE) break;
-
- *tty->flip.char_buf_ptr = ch;
-
- *tty->flip.flag_buf_ptr = 0;
-
- tty->flip.flag_buf_ptr++;
- tty->flip.char_buf_ptr++;
- tty->flip.count++;
- }
- tty_flip_buffer_push(tty);
-}
-
-/*
- * This is the serial driver's interrupt routine for a single port
- */
-static void rs_interrupt_single(int irq, void *dev_id, struct pt_regs * regs)
-{
- struct async_struct * info;
-
- /*
- * I don't know exactly why they don't use the dev_id opaque data
- * pointer instead of this extra lookup table
- */
- info = IRQ_ports[irq];
- if (!info || !info->tty) {
- printk("simrs_interrupt_single: info|tty=0 info=%p problem\n", info);
- return;
- }
- /*
- * pretty simple in our case, because we only get interrupts
- * on inbound traffic
- */
- receive_chars(info->tty, regs);
-}
-
-/*
- * -------------------------------------------------------------------
- * Here ends the serial interrupt routines.
- * -------------------------------------------------------------------
- */
-
-#if 0
-/*
- * not really used in our situation so keep them commented out for now
- */
-static DECLARE_TASK_QUEUE(tq_serial); /* used to be at the top of the file */
-static void do_serial_bh(void)
-{
- run_task_queue(&tq_serial);
- printk("do_serial_bh: called\n");
-}
-#endif
-
-static void do_softint(void *private_)
-{
- printk("simserial: do_softint called\n");
-}
-
-static void rs_put_char(struct tty_struct *tty, unsigned char ch)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
- unsigned long flags;
-
- if (!tty || !info->xmit.buf) return;
-
- save_flags(flags); cli();
- if (CIRC_SPACE(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE) == 0) {
- restore_flags(flags);
- return;
- }
- info->xmit.buf[info->xmit.head] = ch;
- info->xmit.head = (info->xmit.head + 1) & (SERIAL_XMIT_SIZE-1);
- restore_flags(flags);
-}
-
-static _INLINE_ void transmit_chars(struct async_struct *info, int *intr_done)
-{
- int count;
- unsigned long flags;
-
- save_flags(flags); cli();
-
- if (info->x_char) {
- char c = info->x_char;
-
- console->write(console, &c, 1);
-
- info->state->icount.tx++;
- info->x_char = 0;
-
- goto out;
- }
-
- if (info->xmit.head == info->xmit.tail || info->tty->stopped || info->tty->hw_stopped) {
-#ifdef SIMSERIAL_DEBUG
- printk("transmit_chars: head=%d, tail=%d, stopped=%d\n",
- info->xmit.head, info->xmit.tail, info->tty->stopped);
-#endif
- goto out;
- }
- /*
- * We removed the loop and try to do it in two chunks. We need
- * 2 operations maximum because it's a ring buffer.
- *
- * First from current to tail if possible.
- * Then from the beginning of the buffer until necessary
- */
-
- count = MIN(CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE),
- SERIAL_XMIT_SIZE - info->xmit.tail);
- console->write(console, info->xmit.buf+info->xmit.tail, count);
-
- info->xmit.tail = (info->xmit.tail+count) & (SERIAL_XMIT_SIZE-1);
-
- /*
- * We have more at the beginning of the buffer
- */
- count = CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
- if (count) {
- console->write(console, info->xmit.buf, count);
- info->xmit.tail += count;
- }
-out:
- restore_flags(flags);
-}
-
-static void rs_flush_chars(struct tty_struct *tty)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
-
- if (info->xmit.head == info->xmit.tail || tty->stopped || tty->hw_stopped ||
- !info->xmit.buf)
- return;
-
- transmit_chars(info, NULL);
-}
-
-
-static int rs_write(struct tty_struct * tty, int from_user,
- const unsigned char *buf, int count)
-{
- int c, ret = 0;
- struct async_struct *info = (struct async_struct *)tty->driver_data;
- unsigned long flags;
-
- if (!tty || !info->xmit.buf || !tmp_buf) return 0;
-
- save_flags(flags);
- if (from_user) {
- down(&tmp_buf_sem);
- while (1) {
- int c1;
- c = CIRC_SPACE_TO_END(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
- if (count < c)
- c = count;
- if (c <= 0)
- break;
-
- c -= copy_from_user(tmp_buf, buf, c);
- if (!c) {
- if (!ret)
- ret = -EFAULT;
- break;
- }
- cli();
- c1 = CIRC_SPACE_TO_END(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
- if (c1 < c)
- c = c1;
- memcpy(info->xmit.buf + info->xmit.head, tmp_buf, c);
- info->xmit.head = ((info->xmit.head + c) &
- (SERIAL_XMIT_SIZE-1));
- restore_flags(flags);
- buf += c;
- count -= c;
- ret += c;
- }
- up(&tmp_buf_sem);
- } else {
- cli();
- while (1) {
- c = CIRC_SPACE_TO_END(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
- if (count < c)
- c = count;
- if (c <= 0) {
- break;
- }
- memcpy(info->xmit.buf + info->xmit.head, buf, c);
- info->xmit.head = ((info->xmit.head + c) &
- (SERIAL_XMIT_SIZE-1));
- buf += c;
- count -= c;
- ret += c;
- }
- restore_flags(flags);
- }
- /*
- * Hey, we transmit directly from here in our case
- */
- if (CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE)
- && !tty->stopped && !tty->hw_stopped) {
- transmit_chars(info, NULL);
- }
- return ret;
-}
-
-static int rs_write_room(struct tty_struct *tty)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
-
- return CIRC_SPACE(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
-}
-
-static int rs_chars_in_buffer(struct tty_struct *tty)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
-
- return CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
-}
-
-static void rs_flush_buffer(struct tty_struct *tty)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
- unsigned long flags;
-
- save_flags(flags); cli();
- info->xmit.head = info->xmit.tail = 0;
- restore_flags(flags);
-
- wake_up_interruptible(&tty->write_wait);
-
- if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) &&
- tty->ldisc.write_wakeup)
- (tty->ldisc.write_wakeup)(tty);
-}
-
-/*
- * This function is used to send a high-priority XON/XOFF character to
- * the device
- */
-static void rs_send_xchar(struct tty_struct *tty, char ch)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
-
- info->x_char = ch;
- if (ch) {
- /*
- * I guess we could call console->write() directly but
- * let's do that for now.
- */
- transmit_chars(info, NULL);
- }
-}
-
-/*
- * ------------------------------------------------------------
- * rs_throttle()
- *
- * This routine is called by the upper-layer tty layer to signal that
- * incoming characters should be throttled.
- * ------------------------------------------------------------
- */
-static void rs_throttle(struct tty_struct * tty)
-{
- if (I_IXOFF(tty)) rs_send_xchar(tty, STOP_CHAR(tty));
-
- printk("simrs_throttle called\n");
-}
-
-static void rs_unthrottle(struct tty_struct * tty)
-{
- struct async_struct *info = (struct async_struct *)tty->driver_data;
-
- if (I_IXOFF(tty)) {
- if (info->x_char)
- info->x_char = 0;
- else
- rs_send_xchar(tty, START_CHAR(tty));
- }
- printk("simrs_unthrottle called\n");
-}
-
-/*
- * rs_break() --- routine which turns the break handling on or off
- */
-static void rs_break(struct tty_struct *tty, int break_state)
-{
-}
-
-static int rs_ioctl(struct tty_struct *tty, struct file * file,
- unsigned int cmd, unsigned long arg)
-{
- if ((cmd != TIOCGSERIAL) && (cmd != TIOCSSERIAL) &&
- (cmd != TIOCSERCONFIG) && (cmd != TIOCSERGSTRUCT) &&
- (cmd != TIOCMIWAIT) && (cmd != TIOCGICOUNT)) {
- if (tty->flags & (1 << TTY_IO_ERROR))
- return -EIO;
- }
-
- switch (cmd) {
- case TIOCMGET:
- printk("rs_ioctl: TIOCMGET called\n");
- return -EINVAL;
- case TIOCMBIS:
- case TIOCMBIC:
- case TIOCMSET:
- printk("rs_ioctl: TIOCMBIS/BIC/SET called\n");
- return -EINVAL;
- case TIOCGSERIAL:
- printk("simrs_ioctl TIOCGSERIAL called\n");
- return 0;
- case TIOCSSERIAL:
- printk("simrs_ioctl TIOCSSERIAL called\n");
- return 0;
- case TIOCSERCONFIG:
- printk("rs_ioctl: TIOCSERCONFIG called\n");
- return -EINVAL;
-
- case TIOCSERGETLSR: /* Get line status register */
- printk("rs_ioctl: TIOCSERGETLSR called\n");
- return -EINVAL;
-
- case TIOCSERGSTRUCT:
- printk("rs_ioctl: TIOCSERGSTRUCT called\n");
-#if 0
- if (copy_to_user((struct async_struct *) arg,
- info, sizeof(struct async_struct)))
- return -EFAULT;
-#endif
- return 0;
-
- /*
- * Wait for any of the 4 modem inputs (DCD,RI,DSR,CTS) to change
- * - mask passed in arg for lines of interest
- * (use |'ed TIOCM_RNG/DSR/CD/CTS for masking)
- * Caller should use TIOCGICOUNT to see which one it was
- */
- case TIOCMIWAIT:
- printk("rs_ioctl: TIOCMIWAIT: called\n");
- return 0;
- /*
- * Get counter of input serial line interrupts (DCD,RI,DSR,CTS)
- * Return: write counters to the user passed counter struct
- * NB: both 1->0 and 0->1 transitions are counted except for
- * RI where only 0->1 is counted.
- */
- case TIOCGICOUNT:
- printk("rs_ioctl: TIOCGICOUNT called\n");
- return 0;
-
- case TIOCSERGWILD:
- case TIOCSERSWILD:
- /* "setserial -W" is called in Debian boot */
- printk ("TIOCSER?WILD ioctl obsolete, ignored.\n");
- return 0;
-
- default:
- return -ENOIOCTLCMD;
- }
- return 0;
-}
-
-#define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK))
-
-static void rs_set_termios(struct tty_struct *tty, struct termios *old_termios)
-{
- unsigned int cflag = tty->termios->c_cflag;
-
- if ( (cflag == old_termios->c_cflag)
- && ( RELEVANT_IFLAG(tty->termios->c_iflag)
- == RELEVANT_IFLAG(old_termios->c_iflag)))
- return;
-
-
- /* Handle turning off CRTSCTS */
- if ((old_termios->c_cflag & CRTSCTS) &&
- !(tty->termios->c_cflag & CRTSCTS)) {
- tty->hw_stopped = 0;
- rs_start(tty);
- }
-}
-/*
- * This routine will shutdown a serial port; interrupts are disabled, and
- * DTR is dropped if the hangup on close termio flag is on.
- */
-static void shutdown(struct async_struct * info)
-{
- unsigned long flags;
- struct serial_state *state;
- int retval;
-
- if (!(info->flags & ASYNC_INITIALIZED)) return;
-
- state = info->state;
-
-#ifdef SIMSERIAL_DEBUG
- printk("Shutting down serial port %d (irq %d)....", info->line,
- state->irq);
-#endif
-
- save_flags(flags); cli(); /* Disable interrupts */
-
- /*
- * First unlink the serial port from the IRQ chain...
- */
- if (info->next_port)
- info->next_port->prev_port = info->prev_port;
- if (info->prev_port)
- info->prev_port->next_port = info->next_port;
- else
- IRQ_ports[state->irq] = info->next_port;
-
- /*
- * Free the IRQ, if necessary
- */
- if (state->irq && (!IRQ_ports[state->irq] ||
- !IRQ_ports[state->irq]->next_port)) {
- if (IRQ_ports[state->irq]) {
- free_irq(state->irq, NULL);
- retval = request_irq(state->irq, rs_interrupt_single,
- IRQ_T(info), "serial", NULL);
-
- if (retval)
- printk("serial shutdown: request_irq: error %d"
- " Couldn't reacquire IRQ.\n", retval);
- } else
- free_irq(state->irq, NULL);
- }
-
- if (info->xmit.buf) {
- free_page((unsigned long) info->xmit.buf);
- info->xmit.buf = 0;
- }
-
- if (info->tty) set_bit(TTY_IO_ERROR, &info->tty->flags);
-
- info->flags &= ~ASYNC_INITIALIZED;
- restore_flags(flags);
-}
-
-/*
- * ------------------------------------------------------------
- * rs_close()
- *
- * This routine is called when the serial port gets closed. First, we
- * wait for the last remaining data to be sent. Then, we unlink its
- * async structure from the interrupt chain if necessary, and we free
- * that IRQ if nothing is left in the chain.
- * ------------------------------------------------------------
- */
-static void rs_close(struct tty_struct *tty, struct file * filp)
-{
- struct async_struct * info = (struct async_struct *)tty->driver_data;
- struct serial_state *state;
- unsigned long flags;
-
- if (!info ) return;
-
- state = info->state;
-
- save_flags(flags); cli();
-
- if (tty_hung_up_p(filp)) {
-#ifdef SIMSERIAL_DEBUG
- printk("rs_close: hung_up\n");
-#endif
- MOD_DEC_USE_COUNT;
- restore_flags(flags);
- return;
- }
-#ifdef SIMSERIAL_DEBUG
- printk("rs_close ttys%d, count = %d\n", info->line, state->count);
-#endif
- if ((tty->count == 1) && (state->count != 1)) {
- /*
- * Uh, oh. tty->count is 1, which means that the tty
- * structure will be freed. state->count should always
- * be one in these conditions. If it's greater than
- * one, we've got real problems, since it means the
- * serial port won't be shutdown.
- */
- printk("rs_close: bad serial port count; tty->count is 1, "
- "state->count is %d\n", state->count);
- state->count = 1;
- }
- if (--state->count < 0) {
- printk("rs_close: bad serial port count for ttys%d: %d\n",
- info->line, state->count);
- state->count = 0;
- }
- if (state->count) {
- MOD_DEC_USE_COUNT;
- restore_flags(flags);
- return;
- }
- info->flags |= ASYNC_CLOSING;
- restore_flags(flags);
-
- /*
- * Now we wait for the transmit buffer to clear; and we notify
- * the line discipline to only process XON/XOFF characters.
- */
- shutdown(info);
- if (tty->driver.flush_buffer) tty->driver.flush_buffer(tty);
- if (tty->ldisc.flush_buffer) tty->ldisc.flush_buffer(tty);
- info->event = 0;
- info->tty = 0;
- if (info->blocked_open) {
- if (info->close_delay) {
- current->state = TASK_INTERRUPTIBLE;
- schedule_timeout(info->close_delay);
- }
- wake_up_interruptible(&info->open_wait);
- }
- info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CALLOUT_ACTIVE|ASYNC_CLOSING);
- wake_up_interruptible(&info->close_wait);
- MOD_DEC_USE_COUNT;
-}
-
-/*
- * rs_wait_until_sent() --- wait until the transmitter is empty
- */
-static void rs_wait_until_sent(struct tty_struct *tty, int timeout)
-{
-}
-
-
-/*
- * rs_hangup() --- called by tty_hangup() when a hangup is signaled.
- */
-static void rs_hangup(struct tty_struct *tty)
-{
- struct async_struct * info = (struct async_struct *)tty->driver_data;
- struct serial_state *state = info->state;
-
-#ifdef SIMSERIAL_DEBUG
- printk("rs_hangup: called\n");
-#endif
-
- state = info->state;
-
- rs_flush_buffer(tty);
- if (info->flags & ASYNC_CLOSING)
- return;
- shutdown(info);
-
- info->event = 0;
- state->count = 0;
- info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CALLOUT_ACTIVE);
- info->tty = 0;
- wake_up_interruptible(&info->open_wait);
-}
-
-
-static int get_async_struct(int line, struct async_struct **ret_info)
-{
- struct async_struct *info;
- struct serial_state *sstate;
-
- sstate = rs_table + line;
- sstate->count++;
- if (sstate->info) {
- *ret_info = sstate->info;
- return 0;
- }
- info = kmalloc(sizeof(struct async_struct), GFP_KERNEL);
- if (!info) {
- sstate->count--;
- return -ENOMEM;
- }
- memset(info, 0, sizeof(struct async_struct));
- init_waitqueue_head(&info->open_wait);
- init_waitqueue_head(&info->close_wait);
- init_waitqueue_head(&info->delta_msr_wait);
- info->magic = SERIAL_MAGIC;
- info->port = sstate->port;
- info->flags = sstate->flags;
- info->xmit_fifo_size = sstate->xmit_fifo_size;
- info->line = line;
- info->tqueue.routine = do_softint;
- info->tqueue.data = info;
- info->state = sstate;
- if (sstate->info) {
- kfree(info);
- *ret_info = sstate->info;
- return 0;
- }
- *ret_info = sstate->info = info;
- return 0;
-}
-
-static int
-startup(struct async_struct *info)
-{
- unsigned long flags;
- int retval=0;
- void (*handler)(int, void *, struct pt_regs *);
- struct serial_state *state= info->state;
- unsigned long page;
-
- page = get_free_page(GFP_KERNEL);
- if (!page)
- return -ENOMEM;
-
- save_flags(flags); cli();
-
- if (info->flags & ASYNC_INITIALIZED) {
- free_page(page);
- goto errout;
- }
-
- if (!state->port || !state->type) {
- if (info->tty) set_bit(TTY_IO_ERROR, &info->tty->flags);
- free_page(page);
- goto errout;
- }
- if (info->xmit.buf)
- free_page(page);
- else
- info->xmit.buf = (unsigned char *) page;
-
-#ifdef SIMSERIAL_DEBUG
- printk("startup: ttys%d (irq %d)...", info->line, state->irq);
-#endif
-
- /*
- * Allocate the IRQ if necessary
- */
- if (state->irq && (!IRQ_ports[state->irq] ||
- !IRQ_ports[state->irq]->next_port)) {
- if (IRQ_ports[state->irq]) {
- retval = -EBUSY;
- goto errout;
- } else
- handler = rs_interrupt_single;
-
- retval = request_irq(state->irq, handler, IRQ_T(info),
- "simserial", NULL);
- if (retval) {
- if (capable(CAP_SYS_ADMIN)) {
- if (info->tty)
- set_bit(TTY_IO_ERROR,
- &info->tty->flags);
- retval = 0;
- }
- goto errout;
- }
- }
-
- /*
- * Insert serial port into IRQ chain.
- */
- info->prev_port = 0;
- info->next_port = IRQ_ports[state->irq];
- if (info->next_port)
- info->next_port->prev_port = info;
- IRQ_ports[state->irq] = info;
-
- if (info->tty) clear_bit(TTY_IO_ERROR, &info->tty->flags);
-
- info->xmit.head = info->xmit.tail = 0;
-
-#if 0
- /*
- * Set up serial timers...
- */
- timer_table[RS_TIMER].expires = jiffies + 2*HZ/100;
- timer_active |= 1 << RS_TIMER;
-#endif
-
- /*
- * Set up the tty->alt_speed kludge
- */
- if (info->tty) {
- if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_HI)
- info->tty->alt_speed = 57600;
- if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_VHI)
- info->tty->alt_speed = 115200;
- if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_SHI)
- info->tty->alt_speed = 230400;
- if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_WARP)
- info->tty->alt_speed = 460800;
- }
-
- info->flags |= ASYNC_INITIALIZED;
- restore_flags(flags);
- return 0;
-
-errout:
- restore_flags(flags);
- return retval;
-}
-
-
-/*
- * This routine is called whenever a serial port is opened. It
- * enables interrupts for a serial port, linking in its async structure into
- * the IRQ chain. It also performs the serial-specific
- * initialization for the tty structure.
- */
-static int rs_open(struct tty_struct *tty, struct file * filp)
-{
- struct async_struct *info;
- int retval, line;
- unsigned long page;
-
- MOD_INC_USE_COUNT;
- line = MINOR(tty->device) - tty->driver.minor_start;
- if ((line < 0) || (line >= NR_PORTS)) {
- MOD_DEC_USE_COUNT;
- return -ENODEV;
- }
- retval = get_async_struct(line, &info);
- if (retval) {
- MOD_DEC_USE_COUNT;
- return retval;
- }
- tty->driver_data = info;
- info->tty = tty;
-
-#ifdef SIMSERIAL_DEBUG
- printk("rs_open %s%d, count = %d\n", tty->driver.name, info->line,
- info->state->count);
-#endif
- info->tty->low_latency = (info->flags & ASYNC_LOW_LATENCY) ? 1 : 0;
-
- if (!tmp_buf) {
- page = get_free_page(GFP_KERNEL);
- if (!page) {
- /* MOD_DEC_USE_COUNT; "info->tty" will cause this? */
- return -ENOMEM;
- }
- if (tmp_buf)
- free_page(page);
- else
- tmp_buf = (unsigned char *) page;
- }
-
- /*
- * If the port is the middle of closing, bail out now
- */
- if (tty_hung_up_p(filp) ||
- (info->flags & ASYNC_CLOSING)) {
- if (info->flags & ASYNC_CLOSING)
- interruptible_sleep_on(&info->close_wait);
- /* MOD_DEC_USE_COUNT; "info->tty" will cause this? */
-#ifdef SERIAL_DO_RESTART
- return ((info->flags & ASYNC_HUP_NOTIFY) ?
- -EAGAIN : -ERESTARTSYS);
-#else
- return -EAGAIN;
-#endif
- }
-
- /*
- * Start up serial port
- */
- retval = startup(info);
- if (retval) {
- /* MOD_DEC_USE_COUNT; "info->tty" will cause this? */
- return retval;
- }
-
- if ((info->state->count == 1) &&
- (info->flags & ASYNC_SPLIT_TERMIOS)) {
- if (tty->driver.subtype == SERIAL_TYPE_NORMAL)
- *tty->termios = info->state->normal_termios;
- else
- *tty->termios = info->state->callout_termios;
- }
-
- /*
- * figure out which console to use (should be one already)
- */
- console = console_drivers;
- while (console) {
- if ((console->flags & CON_ENABLED) && console->write) break;
- console = console->next;
- }
-
- info->session = current->session;
- info->pgrp = current->pgrp;
-
-#ifdef SIMSERIAL_DEBUG
- printk("rs_open ttys%d successful\n", info->line);
-#endif
- return 0;
-}
-
-/*
- * /proc fs routines....
- */
-
-static inline int line_info(char *buf, struct serial_state *state)
-{
- return sprintf(buf, "%d: uart:%s port:%lX irq:%d\n",
- state->line, uart_config[state->type].name,
- state->port, state->irq);
-}
-
-static int rs_read_proc(char *page, char **start, off_t off, int count,
- int *eof, void *data)
-{
- int i, len = 0, l;
- off_t begin = 0;
-
- len += sprintf(page, "simserinfo:1.0 driver:%s\n", serial_version);
- for (i = 0; i < NR_PORTS && len < 4000; i++) {
- l = line_info(page + len, &rs_table[i]);
- len += l;
- if (len+begin > off+count)
- goto done;
- if (len+begin < off) {
- begin += len;
- len = 0;
- }
- }
- *eof = 1;
-done:
- if (off >= len+begin)
- return 0;
- *start = page + (begin-off);
- return ((count < begin+len-off) ? count : begin+len-off);
-}
-
-/*
- * ---------------------------------------------------------------------
- * rs_init() and friends
- *
- * rs_init() is called at boot-time to initialize the serial driver.
- * ---------------------------------------------------------------------
- */
-
-/*
- * This routine prints out the appropriate serial driver version
- * number, and identifies which options were configured into this
- * driver.
- */
-static inline void show_serial_version(void)
-{
- printk(KERN_INFO "%s version %s with", serial_name, serial_version);
- printk(" no serial options enabled\n");
-}
-
-/*
- * The serial driver boot-time initialization code!
- */
-static int __init
-simrs_init (void)
-{
- int i;
- struct serial_state *state;
-
- show_serial_version();
-
- /* Initialize the tty_driver structure */
-
- memset(&serial_driver, 0, sizeof(struct tty_driver));
- serial_driver.magic = TTY_DRIVER_MAGIC;
- serial_driver.driver_name = "simserial";
- serial_driver.name = "ttyS";
- serial_driver.major = TTY_MAJOR;
- serial_driver.minor_start = 64;
- serial_driver.num = 1;
- serial_driver.type = TTY_DRIVER_TYPE_SERIAL;
- serial_driver.subtype = SERIAL_TYPE_NORMAL;
- serial_driver.init_termios = tty_std_termios;
- serial_driver.init_termios.c_cflag = B9600 | CS8 | CREAD | HUPCL | CLOCAL;
- serial_driver.flags = TTY_DRIVER_REAL_RAW;
- serial_driver.refcount = &serial_refcount;
- serial_driver.table = serial_table;
- serial_driver.termios = serial_termios;
- serial_driver.termios_locked = serial_termios_locked;
-
- serial_driver.open = rs_open;
- serial_driver.close = rs_close;
- serial_driver.write = rs_write;
- serial_driver.put_char = rs_put_char;
- serial_driver.flush_chars = rs_flush_chars;
- serial_driver.write_room = rs_write_room;
- serial_driver.chars_in_buffer = rs_chars_in_buffer;
- serial_driver.flush_buffer = rs_flush_buffer;
- serial_driver.ioctl = rs_ioctl;
- serial_driver.throttle = rs_throttle;
- serial_driver.unthrottle = rs_unthrottle;
- serial_driver.send_xchar = rs_send_xchar;
- serial_driver.set_termios = rs_set_termios;
- serial_driver.stop = rs_stop;
- serial_driver.start = rs_start;
- serial_driver.hangup = rs_hangup;
- serial_driver.break_ctl = rs_break;
- serial_driver.wait_until_sent = rs_wait_until_sent;
- serial_driver.read_proc = rs_read_proc;
-
- /*
- * Let's have a little bit of fun !
- */
- for (i = 0, state = rs_table; i < NR_PORTS; i++,state++) {
-
- if (state->type == PORT_UNKNOWN) continue;
-
- if (!state->irq) {
- state->irq = ia64_alloc_vector();
- ia64_ssc_connect_irq(KEYBOARD_INTR, state->irq);
- }
-
- printk(KERN_INFO "ttyS%02d at 0x%04lx (irq = %d) is a %s\n",
- state->line,
- state->port, state->irq,
- uart_config[state->type].name);
- }
- /*
- * The callout device is just like normal device except for
- * major number and the subtype code.
- */
- callout_driver = serial_driver;
- callout_driver.name = "cua";
- callout_driver.major = TTYAUX_MAJOR;
- callout_driver.subtype = SERIAL_TYPE_CALLOUT;
- callout_driver.read_proc = 0;
- callout_driver.proc_entry = 0;
-
- if (tty_register_driver(&serial_driver))
- panic("Couldn't register simserial driver\n");
-
- if (tty_register_driver(&callout_driver))
- panic("Couldn't register callout driver\n");
-
- return 0;
-}
-
-#ifndef MODULE
-__initcall(simrs_init);
-#endif
diff -u -urN linux-2.4.20-ia64-021210/drivers/char/Makefile linux-ski/drivers/char/Makefile
--- linux-2.4.20-ia64-021210/drivers/char/Makefile 2002-11-28 16:53:12.000000000 -0700
+++ linux-ski/drivers/char/Makefile 2002-12-13 10:04:07.000000000 -0700
@@ -168,6 +168,7 @@
obj-$(CONFIG_HIL) += hp_keyb.o
obj-$(CONFIG_MAGIC_SYSRQ) += sysrq.o
obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
+obj-$(CONFIG_HP_SIMSERIAL) += simserial.o
obj-$(CONFIG_ROCKETPORT) += rocket.o
obj-$(CONFIG_MOXA_SMARTIO) += mxser.o
obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
diff -u -urN linux-2.4.20-ia64-021210/drivers/char/simserial.c linux-ski/drivers/char/simserial.c
--- linux-2.4.20-ia64-021210/drivers/char/simserial.c 1969-12-31 17:00:00.000000000 -0700
+++ linux-ski/drivers/char/simserial.c 2002-12-13 10:04:07.000000000 -0700
@@ -0,0 +1,1095 @@
+/*
+ * Simulated Serial Driver (fake serial)
+ *
+ * This driver is mostly used for bringup purposes and will go away.
+ * It has a strong dependency on the system console. All outputs
+ * are rerouted to the same facility as the one used by printk which, in our
+ * case means sys_sim.c console (goes via the simulator). The code hereafter
+ * is completely leveraged from the serial.c driver.
+ *
+ * Copyright (C) 1999-2000, 2002 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 02/04/00 D. Mosberger Merged in serial.c bug fixes in rs_close().
+ * 02/25/00 D. Mosberger Synced up with 2.3.99pre-5 version of serial.c.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/major.h>
+#include <linux/fcntl.h>
+#include <linux/mm.h>
+#include <linux/console.h>
+#include <linux/module.h>
+#include <linux/serial.h>
+#include <linux/serialP.h>
+#include <linux/slab.h>
+
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+
+#undef SIMSERIAL_DEBUG /* define this to get some debug information */
+
+#define KEYBOARD_INTR 3 /* must match with simulator! */
+
+#define NR_PORTS 1 /* only one port for now */
+#define SERIAL_INLINE 1
+
+#ifdef SERIAL_INLINE
+#define _INLINE_ inline
+#endif
+
+#ifndef MIN
+#define MIN(a,b) ((a) < (b) ? (a) : (b))
+#endif
+
+#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? SA_SHIRQ : SA_INTERRUPT)
+
+#define SSC_GETCHAR 21
+
+extern long ia64_ssc (long, long, long, long, int);
+extern void ia64_ssc_connect_irq (long intr, long irq);
+
+static char *serial_name = "SimSerial driver";
+static char *serial_version = "0.6";
+
+/*
+ * This has been extracted from asm/serial.h. We need one eventually but
+ * I don't know exactly what we're going to put in it so just fake one
+ * for now.
+ */
+#define BASE_BAUD ( 1843200 / 16 )
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+/*
+ * Most of the values here are meaningless to this particular driver.
+ * However some values must be preserved for the code (leveraged from serial.c
+ * to work correctly).
+ * port must not be 0
+ * type must not be UNKNOWN
+ * So I picked arbitrary (guess from where?) values instead
+ */
+static struct serial_state rs_table[NR_PORTS]={
+ /* UART CLK PORT IRQ FLAGS */
+ { 0, BASE_BAUD, 0x3F8, 0, STD_COM_FLAGS,0,PORT_16550 } /* ttyS0 */
+};
+
+/*
+ * Just for the fun of it !
+ */
+static struct serial_uart_config uart_config[] = {
+ { "unknown", 1, 0 },
+ { "8250", 1, 0 },
+ { "16450", 1, 0 },
+ { "16550", 1, 0 },
+ { "16550A", 16, UART_CLEAR_FIFO | UART_USE_FIFO },
+ { "cirrus", 1, 0 },
+ { "ST16650", 1, UART_CLEAR_FIFO | UART_STARTECH },
+ { "ST16650V2", 32, UART_CLEAR_FIFO | UART_USE_FIFO |
+ UART_STARTECH },
+ { "TI16750", 64, UART_CLEAR_FIFO | UART_USE_FIFO},
+ { 0, 0}
+};
+
+static struct tty_driver serial_driver, callout_driver;
+static int serial_refcount;
+
+static struct async_struct *IRQ_ports[NR_IRQS];
+static struct tty_struct *serial_table[NR_PORTS];
+static struct termios *serial_termios[NR_PORTS];
+static struct termios *serial_termios_locked[NR_PORTS];
+
+static struct console *console;
+
+static unsigned char *tmp_buf;
+static DECLARE_MUTEX(tmp_buf_sem);
+
+extern struct console *console_drivers; /* from kernel/printk.c */
+
+/*
+ * ------------------------------------------------------------
+ * rs_stop() and rs_start()
+ *
+ * This routines are called before setting or resetting tty->stopped.
+ * They enable or disable transmitter interrupts, as necessary.
+ * ------------------------------------------------------------
+ */
+static void rs_stop(struct tty_struct *tty)
+{
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_stop: tty->stopped=%d tty->hw_stopped=%d tty->flow_stopped=%d\n",
+ tty->stopped, tty->hw_stopped, tty->flow_stopped);
+#endif
+
+}
+
+static void rs_start(struct tty_struct *tty)
+{
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_start: tty->stopped=%d tty->hw_stopped=%d tty->flow_stopped=%d\n",
+ tty->stopped, tty->hw_stopped, tty->flow_stopped);
+#endif
+}
+
+static void receive_chars(struct tty_struct *tty, struct pt_regs *regs)
+{
+ unsigned char ch;
+ static unsigned char seen_esc = 0;
+
+ while ( (ch = ia64_ssc(0, 0, 0, 0, SSC_GETCHAR)) ) {
+ if ( ch == 27 && seen_esc == 0 ) {
+ seen_esc = 1;
+ continue;
+ } else {
+ if ( seen_esc == 1 && ch == 'O' ) {
+ seen_esc = 2;
+ continue;
+ } else if ( seen_esc == 2 ) {
+ if ( ch == 'P' ) show_state(); /* F1 key */
+ if ( ch == 'Q' ) show_buffers(); /* F2 key */
+ seen_esc = 0;
+ continue;
+ }
+ }
+ seen_esc = 0;
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE) break;
+
+ *tty->flip.char_buf_ptr = ch;
+
+ *tty->flip.flag_buf_ptr = 0;
+
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ }
+ tty_flip_buffer_push(tty);
+}
+
+/*
+ * This is the serial driver's interrupt routine for a single port
+ */
+static void rs_interrupt_single(int irq, void *dev_id, struct pt_regs * regs)
+{
+ struct async_struct * info;
+
+ /*
+ * I don't know exactly why they don't use the dev_id opaque data
+ * pointer instead of this extra lookup table
+ */
+ info = IRQ_ports[irq];
+ if (!info || !info->tty) {
+ printk("simrs_interrupt_single: info|tty=0 info=%p problem\n", info);
+ return;
+ }
+ /*
+ * pretty simple in our case, because we only get interrupts
+ * on inbound traffic
+ */
+ receive_chars(info->tty, regs);
+}
+
+/*
+ * -------------------------------------------------------------------
+ * Here ends the serial interrupt routines.
+ * -------------------------------------------------------------------
+ */
+
+#if 0
+/*
+ * not really used in our situation so keep them commented out for now
+ */
+static DECLARE_TASK_QUEUE(tq_serial); /* used to be at the top of the file */
+static void do_serial_bh(void)
+{
+ run_task_queue(&tq_serial);
+ printk("do_serial_bh: called\n");
+}
+#endif
+
+static void do_softint(void *private_)
+{
+ printk("simserial: do_softint called\n");
+}
+
+static void rs_put_char(struct tty_struct *tty, unsigned char ch)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+ unsigned long flags;
+
+ if (!tty || !info->xmit.buf) return;
+
+ save_flags(flags); cli();
+ if (CIRC_SPACE(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE) == 0) {
+ restore_flags(flags);
+ return;
+ }
+ info->xmit.buf[info->xmit.head] = ch;
+ info->xmit.head = (info->xmit.head + 1) & (SERIAL_XMIT_SIZE-1);
+ restore_flags(flags);
+}
+
+static _INLINE_ void transmit_chars(struct async_struct *info, int *intr_done)
+{
+ int count;
+ unsigned long flags;
+
+ save_flags(flags); cli();
+
+ if (info->x_char) {
+ char c = info->x_char;
+
+ console->write(console, &c, 1);
+
+ info->state->icount.tx++;
+ info->x_char = 0;
+
+ goto out;
+ }
+
+ if (info->xmit.head == info->xmit.tail || info->tty->stopped || info->tty->hw_stopped) {
+#ifdef SIMSERIAL_DEBUG
+ printk("transmit_chars: head=%d, tail=%d, stopped=%d\n",
+ info->xmit.head, info->xmit.tail, info->tty->stopped);
+#endif
+ goto out;
+ }
+ /*
+ * We removed the loop and try to do it in two chunks. We need
+ * 2 operations maximum because it's a ring buffer.
+ *
+ * First from current to tail if possible.
+ * Then from the beginning of the buffer until necessary
+ */
+
+ count = MIN(CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE),
+ SERIAL_XMIT_SIZE - info->xmit.tail);
+ console->write(console, info->xmit.buf+info->xmit.tail, count);
+
+ info->xmit.tail = (info->xmit.tail+count) & (SERIAL_XMIT_SIZE-1);
+
+ /*
+ * We have more at the beginning of the buffer
+ */
+ count = CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
+ if (count) {
+ console->write(console, info->xmit.buf, count);
+ info->xmit.tail += count;
+ }
+out:
+ restore_flags(flags);
+}
+
+static void rs_flush_chars(struct tty_struct *tty)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+
+ if (info->xmit.head == info->xmit.tail || tty->stopped || tty->hw_stopped ||
+ !info->xmit.buf)
+ return;
+
+ transmit_chars(info, NULL);
+}
+
+
+static int rs_write(struct tty_struct * tty, int from_user,
+ const unsigned char *buf, int count)
+{
+ int c, ret = 0;
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+ unsigned long flags;
+
+ if (!tty || !info->xmit.buf || !tmp_buf) return 0;
+
+ save_flags(flags);
+ if (from_user) {
+ down(&tmp_buf_sem);
+ while (1) {
+ int c1;
+ c = CIRC_SPACE_TO_END(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
+ if (count < c)
+ c = count;
+ if (c <= 0)
+ break;
+
+ c -= copy_from_user(tmp_buf, buf, c);
+ if (!c) {
+ if (!ret)
+ ret = -EFAULT;
+ break;
+ }
+ cli();
+ c1 = CIRC_SPACE_TO_END(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
+ if (c1 < c)
+ c = c1;
+ memcpy(info->xmit.buf + info->xmit.head, tmp_buf, c);
+ info->xmit.head = ((info->xmit.head + c) &
+ (SERIAL_XMIT_SIZE-1));
+ restore_flags(flags);
+ buf += c;
+ count -= c;
+ ret += c;
+ }
+ up(&tmp_buf_sem);
+ } else {
+ cli();
+ while (1) {
+ c = CIRC_SPACE_TO_END(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
+ if (count < c)
+ c = count;
+ if (c <= 0) {
+ break;
+ }
+ memcpy(info->xmit.buf + info->xmit.head, buf, c);
+ info->xmit.head = ((info->xmit.head + c) &
+ (SERIAL_XMIT_SIZE-1));
+ buf += c;
+ count -= c;
+ ret += c;
+ }
+ restore_flags(flags);
+ }
+ /*
+ * Hey, we transmit directly from here in our case
+ */
+ if (CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE)
+ && !tty->stopped && !tty->hw_stopped) {
+ transmit_chars(info, NULL);
+ }
+ return ret;
+}
+
+static int rs_write_room(struct tty_struct *tty)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+
+ return CIRC_SPACE(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
+}
+
+static int rs_chars_in_buffer(struct tty_struct *tty)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+
+ return CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
+}
+
+static void rs_flush_buffer(struct tty_struct *tty)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+ unsigned long flags;
+
+ save_flags(flags); cli();
+ info->xmit.head = info->xmit.tail = 0;
+ restore_flags(flags);
+
+ wake_up_interruptible(&tty->write_wait);
+
+ if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) &&
+ tty->ldisc.write_wakeup)
+ (tty->ldisc.write_wakeup)(tty);
+}
+
+/*
+ * This function is used to send a high-priority XON/XOFF character to
+ * the device
+ */
+static void rs_send_xchar(struct tty_struct *tty, char ch)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+
+ info->x_char = ch;
+ if (ch) {
+ /*
+ * I guess we could call console->write() directly but
+ * let's do that for now.
+ */
+ transmit_chars(info, NULL);
+ }
+}
+
+/*
+ * ------------------------------------------------------------
+ * rs_throttle()
+ *
+ * This routine is called by the upper-layer tty layer to signal that
+ * incoming characters should be throttled.
+ * ------------------------------------------------------------
+ */
+static void rs_throttle(struct tty_struct * tty)
+{
+ if (I_IXOFF(tty)) rs_send_xchar(tty, STOP_CHAR(tty));
+
+ printk("simrs_throttle called\n");
+}
+
+static void rs_unthrottle(struct tty_struct * tty)
+{
+ struct async_struct *info = (struct async_struct *)tty->driver_data;
+
+ if (I_IXOFF(tty)) {
+ if (info->x_char)
+ info->x_char = 0;
+ else
+ rs_send_xchar(tty, START_CHAR(tty));
+ }
+ printk("simrs_unthrottle called\n");
+}
+
+/*
+ * rs_break() --- routine which turns the break handling on or off
+ */
+static void rs_break(struct tty_struct *tty, int break_state)
+{
+}
+
+static int rs_ioctl(struct tty_struct *tty, struct file * file,
+ unsigned int cmd, unsigned long arg)
+{
+ if ((cmd != TIOCGSERIAL) && (cmd != TIOCSSERIAL) &&
+ (cmd != TIOCSERCONFIG) && (cmd != TIOCSERGSTRUCT) &&
+ (cmd != TIOCMIWAIT) && (cmd != TIOCGICOUNT)) {
+ if (tty->flags & (1 << TTY_IO_ERROR))
+ return -EIO;
+ }
+
+ switch (cmd) {
+ case TIOCMGET:
+ printk("rs_ioctl: TIOCMGET called\n");
+ return -EINVAL;
+ case TIOCMBIS:
+ case TIOCMBIC:
+ case TIOCMSET:
+ printk("rs_ioctl: TIOCMBIS/BIC/SET called\n");
+ return -EINVAL;
+ case TIOCGSERIAL:
+ printk("simrs_ioctl TIOCGSERIAL called\n");
+ return 0;
+ case TIOCSSERIAL:
+ printk("simrs_ioctl TIOCSSERIAL called\n");
+ return 0;
+ case TIOCSERCONFIG:
+ printk("rs_ioctl: TIOCSERCONFIG called\n");
+ return -EINVAL;
+
+ case TIOCSERGETLSR: /* Get line status register */
+ printk("rs_ioctl: TIOCSERGETLSR called\n");
+ return -EINVAL;
+
+ case TIOCSERGSTRUCT:
+ printk("rs_ioctl: TIOCSERGSTRUCT called\n");
+#if 0
+ if (copy_to_user((struct async_struct *) arg,
+ info, sizeof(struct async_struct)))
+ return -EFAULT;
+#endif
+ return 0;
+
+ /*
+ * Wait for any of the 4 modem inputs (DCD,RI,DSR,CTS) to change
+ * - mask passed in arg for lines of interest
+ * (use |'ed TIOCM_RNG/DSR/CD/CTS for masking)
+ * Caller should use TIOCGICOUNT to see which one it was
+ */
+ case TIOCMIWAIT:
+ printk("rs_ioctl: TIOCMIWAIT: called\n");
+ return 0;
+ /*
+ * Get counter of input serial line interrupts (DCD,RI,DSR,CTS)
+ * Return: write counters to the user passed counter struct
+ * NB: both 1->0 and 0->1 transitions are counted except for
+ * RI where only 0->1 is counted.
+ */
+ case TIOCGICOUNT:
+ printk("rs_ioctl: TIOCGICOUNT called\n");
+ return 0;
+
+ case TIOCSERGWILD:
+ case TIOCSERSWILD:
+ /* "setserial -W" is called in Debian boot */
+ printk ("TIOCSER?WILD ioctl obsolete, ignored.\n");
+ return 0;
+
+ default:
+ return -ENOIOCTLCMD;
+ }
+ return 0;
+}
+
+#define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK))
+
+static void rs_set_termios(struct tty_struct *tty, struct termios *old_termios)
+{
+ unsigned int cflag = tty->termios->c_cflag;
+
+ if ( (cflag == old_termios->c_cflag)
+ && ( RELEVANT_IFLAG(tty->termios->c_iflag)
+ == RELEVANT_IFLAG(old_termios->c_iflag)))
+ return;
+
+
+ /* Handle turning off CRTSCTS */
+ if ((old_termios->c_cflag & CRTSCTS) &&
+ !(tty->termios->c_cflag & CRTSCTS)) {
+ tty->hw_stopped = 0;
+ rs_start(tty);
+ }
+}
+/*
+ * This routine will shutdown a serial port; interrupts are disabled, and
+ * DTR is dropped if the hangup on close termio flag is on.
+ */
+static void shutdown(struct async_struct * info)
+{
+ unsigned long flags;
+ struct serial_state *state;
+ int retval;
+
+ if (!(info->flags & ASYNC_INITIALIZED)) return;
+
+ state = info->state;
+
+#ifdef SIMSERIAL_DEBUG
+ printk("Shutting down serial port %d (irq %d)....", info->line,
+ state->irq);
+#endif
+
+ save_flags(flags); cli(); /* Disable interrupts */
+
+ /*
+ * First unlink the serial port from the IRQ chain...
+ */
+ if (info->next_port)
+ info->next_port->prev_port = info->prev_port;
+ if (info->prev_port)
+ info->prev_port->next_port = info->next_port;
+ else
+ IRQ_ports[state->irq] = info->next_port;
+
+ /*
+ * Free the IRQ, if necessary
+ */
+ if (state->irq && (!IRQ_ports[state->irq] ||
+ !IRQ_ports[state->irq]->next_port)) {
+ if (IRQ_ports[state->irq]) {
+ free_irq(state->irq, NULL);
+ retval = request_irq(state->irq, rs_interrupt_single,
+ IRQ_T(info), "serial", NULL);
+
+ if (retval)
+ printk("serial shutdown: request_irq: error %d"
+ " Couldn't reacquire IRQ.\n", retval);
+ } else
+ free_irq(state->irq, NULL);
+ }
+
+ if (info->xmit.buf) {
+ free_page((unsigned long) info->xmit.buf);
+ info->xmit.buf = 0;
+ }
+
+ if (info->tty) set_bit(TTY_IO_ERROR, &info->tty->flags);
+
+ info->flags &= ~ASYNC_INITIALIZED;
+ restore_flags(flags);
+}
+
+/*
+ * ------------------------------------------------------------
+ * rs_close()
+ *
+ * This routine is called when the serial port gets closed. First, we
+ * wait for the last remaining data to be sent. Then, we unlink its
+ * async structure from the interrupt chain if necessary, and we free
+ * that IRQ if nothing is left in the chain.
+ * ------------------------------------------------------------
+ */
+static void rs_close(struct tty_struct *tty, struct file * filp)
+{
+ struct async_struct * info = (struct async_struct *)tty->driver_data;
+ struct serial_state *state;
+ unsigned long flags;
+
+ if (!info ) return;
+
+ state = info->state;
+
+ save_flags(flags); cli();
+
+ if (tty_hung_up_p(filp)) {
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_close: hung_up\n");
+#endif
+ MOD_DEC_USE_COUNT;
+ restore_flags(flags);
+ return;
+ }
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_close ttys%d, count = %d\n", info->line, state->count);
+#endif
+ if ((tty->count == 1) && (state->count != 1)) {
+ /*
+ * Uh, oh. tty->count is 1, which means that the tty
+ * structure will be freed. state->count should always
+ * be one in these conditions. If it's greater than
+ * one, we've got real problems, since it means the
+ * serial port won't be shutdown.
+ */
+ printk("rs_close: bad serial port count; tty->count is 1, "
+ "state->count is %d\n", state->count);
+ state->count = 1;
+ }
+ if (--state->count < 0) {
+ printk("rs_close: bad serial port count for ttys%d: %d\n",
+ info->line, state->count);
+ state->count = 0;
+ }
+ if (state->count) {
+ MOD_DEC_USE_COUNT;
+ restore_flags(flags);
+ return;
+ }
+ info->flags |= ASYNC_CLOSING;
+ restore_flags(flags);
+
+ /*
+ * Now we wait for the transmit buffer to clear; and we notify
+ * the line discipline to only process XON/XOFF characters.
+ */
+ shutdown(info);
+ if (tty->driver.flush_buffer) tty->driver.flush_buffer(tty);
+ if (tty->ldisc.flush_buffer) tty->ldisc.flush_buffer(tty);
+ info->event = 0;
+ info->tty = 0;
+ if (info->blocked_open) {
+ if (info->close_delay) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(info->close_delay);
+ }
+ wake_up_interruptible(&info->open_wait);
+ }
+ info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CALLOUT_ACTIVE|ASYNC_CLOSING);
+ wake_up_interruptible(&info->close_wait);
+ MOD_DEC_USE_COUNT;
+}
+
+/*
+ * rs_wait_until_sent() --- wait until the transmitter is empty
+ */
+static void rs_wait_until_sent(struct tty_struct *tty, int timeout)
+{
+}
+
+
+/*
+ * rs_hangup() --- called by tty_hangup() when a hangup is signaled.
+ */
+static void rs_hangup(struct tty_struct *tty)
+{
+ struct async_struct * info = (struct async_struct *)tty->driver_data;
+ struct serial_state *state = info->state;
+
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_hangup: called\n");
+#endif
+
+ state = info->state;
+
+ rs_flush_buffer(tty);
+ if (info->flags & ASYNC_CLOSING)
+ return;
+ shutdown(info);
+
+ info->event = 0;
+ state->count = 0;
+ info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CALLOUT_ACTIVE);
+ info->tty = 0;
+ wake_up_interruptible(&info->open_wait);
+}
+
+
+static int get_async_struct(int line, struct async_struct **ret_info)
+{
+ struct async_struct *info;
+ struct serial_state *sstate;
+
+ sstate = rs_table + line;
+ sstate->count++;
+ if (sstate->info) {
+ *ret_info = sstate->info;
+ return 0;
+ }
+ info = kmalloc(sizeof(struct async_struct), GFP_KERNEL);
+ if (!info) {
+ sstate->count--;
+ return -ENOMEM;
+ }
+ memset(info, 0, sizeof(struct async_struct));
+ init_waitqueue_head(&info->open_wait);
+ init_waitqueue_head(&info->close_wait);
+ init_waitqueue_head(&info->delta_msr_wait);
+ info->magic = SERIAL_MAGIC;
+ info->port = sstate->port;
+ info->flags = sstate->flags;
+ info->xmit_fifo_size = sstate->xmit_fifo_size;
+ info->line = line;
+ info->tqueue.routine = do_softint;
+ info->tqueue.data = info;
+ info->state = sstate;
+ if (sstate->info) {
+ kfree(info);
+ *ret_info = sstate->info;
+ return 0;
+ }
+ *ret_info = sstate->info = info;
+ return 0;
+}
+
+static int
+startup(struct async_struct *info)
+{
+ unsigned long flags;
+ int retval=0;
+ void (*handler)(int, void *, struct pt_regs *);
+ struct serial_state *state= info->state;
+ unsigned long page;
+
+ page = get_free_page(GFP_KERNEL);
+ if (!page)
+ return -ENOMEM;
+
+ save_flags(flags); cli();
+
+ if (info->flags & ASYNC_INITIALIZED) {
+ free_page(page);
+ goto errout;
+ }
+
+ if (!state->port || !state->type) {
+ if (info->tty) set_bit(TTY_IO_ERROR, &info->tty->flags);
+ free_page(page);
+ goto errout;
+ }
+ if (info->xmit.buf)
+ free_page(page);
+ else
+ info->xmit.buf = (unsigned char *) page;
+
+#ifdef SIMSERIAL_DEBUG
+ printk("startup: ttys%d (irq %d)...", info->line, state->irq);
+#endif
+
+ /*
+ * Allocate the IRQ if necessary
+ */
+ if (state->irq && (!IRQ_ports[state->irq] ||
+ !IRQ_ports[state->irq]->next_port)) {
+ if (IRQ_ports[state->irq]) {
+ retval = -EBUSY;
+ goto errout;
+ } else
+ handler = rs_interrupt_single;
+
+ retval = request_irq(state->irq, handler, IRQ_T(info),
+ "simserial", NULL);
+ if (retval) {
+ if (capable(CAP_SYS_ADMIN)) {
+ if (info->tty)
+ set_bit(TTY_IO_ERROR,
+ &info->tty->flags);
+ retval = 0;
+ }
+ goto errout;
+ }
+ }
+
+ /*
+ * Insert serial port into IRQ chain.
+ */
+ info->prev_port = 0;
+ info->next_port = IRQ_ports[state->irq];
+ if (info->next_port)
+ info->next_port->prev_port = info;
+ IRQ_ports[state->irq] = info;
+
+ if (info->tty) clear_bit(TTY_IO_ERROR, &info->tty->flags);
+
+ info->xmit.head = info->xmit.tail = 0;
+
+#if 0
+ /*
+ * Set up serial timers...
+ */
+ timer_table[RS_TIMER].expires = jiffies + 2*HZ/100;
+ timer_active |= 1 << RS_TIMER;
+#endif
+
+ /*
+ * Set up the tty->alt_speed kludge
+ */
+ if (info->tty) {
+ if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_HI)
+ info->tty->alt_speed = 57600;
+ if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_VHI)
+ info->tty->alt_speed = 115200;
+ if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_SHI)
+ info->tty->alt_speed = 230400;
+ if ((info->flags & ASYNC_SPD_MASK) == ASYNC_SPD_WARP)
+ info->tty->alt_speed = 460800;
+ }
+
+ info->flags |= ASYNC_INITIALIZED;
+ restore_flags(flags);
+ return 0;
+
+errout:
+ restore_flags(flags);
+ return retval;
+}
+
+
+/*
+ * This routine is called whenever a serial port is opened. It
+ * enables interrupts for a serial port, linking its async structure into
+ * the IRQ chain. It also performs the serial-specific
+ * initialization for the tty structure.
+ */
+static int rs_open(struct tty_struct *tty, struct file * filp)
+{
+ struct async_struct *info;
+ int retval, line;
+ unsigned long page;
+
+ MOD_INC_USE_COUNT;
+ line = MINOR(tty->device) - tty->driver.minor_start;
+ if ((line < 0) || (line >= NR_PORTS)) {
+ MOD_DEC_USE_COUNT;
+ return -ENODEV;
+ }
+ retval = get_async_struct(line, &info);
+ if (retval) {
+ MOD_DEC_USE_COUNT;
+ return retval;
+ }
+ tty->driver_data = info;
+ info->tty = tty;
+
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_open %s%d, count = %d\n", tty->driver.name, info->line,
+ info->state->count);
+#endif
+ info->tty->low_latency = (info->flags & ASYNC_LOW_LATENCY) ? 1 : 0;
+
+ if (!tmp_buf) {
+ page = get_free_page(GFP_KERNEL);
+ if (!page) {
+ /* MOD_DEC_USE_COUNT; "info->tty" will cause this? */
+ return -ENOMEM;
+ }
+ if (tmp_buf)
+ free_page(page);
+ else
+ tmp_buf = (unsigned char *) page;
+ }
+
+ /*
+ * If the port is in the middle of closing, bail out now
+ */
+ if (tty_hung_up_p(filp) ||
+ (info->flags & ASYNC_CLOSING)) {
+ if (info->flags & ASYNC_CLOSING)
+ interruptible_sleep_on(&info->close_wait);
+ /* MOD_DEC_USE_COUNT; "info->tty" will cause this? */
+#ifdef SERIAL_DO_RESTART
+ return ((info->flags & ASYNC_HUP_NOTIFY) ?
+ -EAGAIN : -ERESTARTSYS);
+#else
+ return -EAGAIN;
+#endif
+ }
+
+ /*
+ * Start up serial port
+ */
+ retval = startup(info);
+ if (retval) {
+ /* MOD_DEC_USE_COUNT; "info->tty" will cause this? */
+ return retval;
+ }
+
+ if ((info->state->count == 1) &&
+ (info->flags & ASYNC_SPLIT_TERMIOS)) {
+ if (tty->driver.subtype == SERIAL_TYPE_NORMAL)
+ *tty->termios = info->state->normal_termios;
+ else
+ *tty->termios = info->state->callout_termios;
+ }
+
+ /*
+ * figure out which console to use (should be one already)
+ */
+ console = console_drivers;
+ while (console) {
+ if ((console->flags & CON_ENABLED) && console->write) break;
+ console = console->next;
+ }
+
+ info->session = current->session;
+ info->pgrp = current->pgrp;
+
+#ifdef SIMSERIAL_DEBUG
+ printk("rs_open ttys%d successful\n", info->line);
+#endif
+ return 0;
+}
+
+/*
+ * /proc fs routines....
+ */
+
+static inline int line_info(char *buf, struct serial_state *state)
+{
+ return sprintf(buf, "%d: uart:%s port:%lX irq:%d\n",
+ state->line, uart_config[state->type].name,
+ state->port, state->irq);
+}
+
+static int rs_read_proc(char *page, char **start, off_t off, int count,
+ int *eof, void *data)
+{
+ int i, len = 0, l;
+ off_t begin = 0;
+
+ len += sprintf(page, "simserinfo:1.0 driver:%s\n", serial_version);
+ for (i = 0; i < NR_PORTS && len < 4000; i++) {
+ l = line_info(page + len, &rs_table[i]);
+ len += l;
+ if (len+begin > off+count)
+ goto done;
+ if (len+begin < off) {
+ begin += len;
+ len = 0;
+ }
+ }
+ *eof = 1;
+done:
+ if (off >= len+begin)
+ return 0;
+ *start = page + (off-begin);
+ return ((count < begin+len-off) ? count : begin+len-off);
+}
+
+/*
+ * ---------------------------------------------------------------------
+ * rs_init() and friends
+ *
+ * rs_init() is called at boot-time to initialize the serial driver.
+ * ---------------------------------------------------------------------
+ */
+
+/*
+ * This routine prints out the appropriate serial driver version
+ * number, and identifies which options were configured into this
+ * driver.
+ */
+static inline void show_serial_version(void)
+{
+ printk(KERN_INFO "%s version %s with", serial_name, serial_version);
+ printk(" no serial options enabled\n");
+}
+
+/*
+ * The serial driver boot-time initialization code!
+ */
+static int __init
+simrs_init (void)
+{
+ int i;
+ struct serial_state *state;
+
+ show_serial_version();
+
+ /* Initialize the tty_driver structure */
+
+ memset(&serial_driver, 0, sizeof(struct tty_driver));
+ serial_driver.magic = TTY_DRIVER_MAGIC;
+ serial_driver.driver_name = "simserial";
+ serial_driver.name = "ttyS";
+ serial_driver.major = TTY_MAJOR;
+ serial_driver.minor_start = 64;
+ serial_driver.num = 1;
+ serial_driver.type = TTY_DRIVER_TYPE_SERIAL;
+ serial_driver.subtype = SERIAL_TYPE_NORMAL;
+ serial_driver.init_termios = tty_std_termios;
+ serial_driver.init_termios.c_cflag = B9600 | CS8 | CREAD | HUPCL | CLOCAL;
+ serial_driver.flags = TTY_DRIVER_REAL_RAW;
+ serial_driver.refcount = &serial_refcount;
+ serial_driver.table = serial_table;
+ serial_driver.termios = serial_termios;
+ serial_driver.termios_locked = serial_termios_locked;
+
+ serial_driver.open = rs_open;
+ serial_driver.close = rs_close;
+ serial_driver.write = rs_write;
+ serial_driver.put_char = rs_put_char;
+ serial_driver.flush_chars = rs_flush_chars;
+ serial_driver.write_room = rs_write_room;
+ serial_driver.chars_in_buffer = rs_chars_in_buffer;
+ serial_driver.flush_buffer = rs_flush_buffer;
+ serial_driver.ioctl = rs_ioctl;
+ serial_driver.throttle = rs_throttle;
+ serial_driver.unthrottle = rs_unthrottle;
+ serial_driver.send_xchar = rs_send_xchar;
+ serial_driver.set_termios = rs_set_termios;
+ serial_driver.stop = rs_stop;
+ serial_driver.start = rs_start;
+ serial_driver.hangup = rs_hangup;
+ serial_driver.break_ctl = rs_break;
+ serial_driver.wait_until_sent = rs_wait_until_sent;
+ serial_driver.read_proc = rs_read_proc;
+
+ /*
+ * Let's have a little bit of fun !
+ */
+ for (i = 0, state = rs_table; i < NR_PORTS; i++,state++) {
+
+ if (state->type == PORT_UNKNOWN) continue;
+
+ if (!state->irq) {
+ state->irq = ia64_alloc_vector();
+ ia64_ssc_connect_irq(KEYBOARD_INTR, state->irq);
+ }
+
+ printk(KERN_INFO "ttyS%02d at 0x%04lx (irq = %d) is a %s\n",
+ state->line,
+ state->port, state->irq,
+ uart_config[state->type].name);
+ }
+ /*
+ * The callout device is just like normal device except for
+ * major number and the subtype code.
+ */
+ callout_driver = serial_driver;
+ callout_driver.name = "cua";
+ callout_driver.major = TTYAUX_MAJOR;
+ callout_driver.subtype = SERIAL_TYPE_CALLOUT;
+ callout_driver.read_proc = 0;
+ callout_driver.proc_entry = 0;
+
+ if (tty_register_driver(&serial_driver))
+ panic("Couldn't register simserial driver\n");
+
+ if (tty_register_driver(&callout_driver))
+ panic("Couldn't register callout driver\n");
+
+ return 0;
+}
+
+#ifndef MODULE
+__initcall(simrs_init);
+#endif
diff -u -urN linux-2.4.20-ia64-021210/drivers/net/Makefile linux-ski/drivers/net/Makefile
--- linux-2.4.20-ia64-021210/drivers/net/Makefile 2002-11-28 16:53:13.000000000 -0700
+++ linux-ski/drivers/net/Makefile 2002-12-13 10:04:07.000000000 -0700
@@ -142,6 +142,7 @@
obj-$(CONFIG_LNE390) += lne390.o 8390.o
obj-$(CONFIG_NE3210) += ne3210.o 8390.o
obj-$(CONFIG_NET_SB1250_MAC) += sb1250-mac.o
+obj-$(CONFIG_HP_SIMETH) += simeth.o
obj-$(CONFIG_PPP) += ppp_generic.o slhc.o
obj-$(CONFIG_PPP_ASYNC) += ppp_async.o
diff -u -urN linux-2.4.20-ia64-021210/drivers/net/simeth.c linux-ski/drivers/net/simeth.c
--- linux-2.4.20-ia64-021210/drivers/net/simeth.c 1969-12-31 17:00:00.000000000 -0700
+++ linux-ski/drivers/net/simeth.c 2002-12-13 10:04:07.000000000 -0700
@@ -0,0 +1,533 @@
+/*
+ * Simulated Ethernet Driver
+ *
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 Stephane Eranian <eranian@hpl.hp.com>
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/types.h>
+#include <linux/in.h>
+#include <linux/string.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/inetdevice.h>
+#include <linux/if_ether.h>
+#include <linux/if_arp.h>
+#include <linux/skbuff.h>
+#include <linux/notifier.h>
+#include <asm/bitops.h>
+#include <asm/system.h>
+#include <asm/irq.h>
+
+#define SIMETH_RECV_MAX 10
+
+/*
+ * Maximum possible received frame for Ethernet.
+ * We preallocate an sk_buff of that size to avoid a costly
+ * memcpy from a temporary buffer into the sk_buff. We do basically
+ * what's done in other drivers, like eepro with a ring.
+ * The difference is, of course, that we don't have real DMA !!!
+ */
+#define SIMETH_FRAME_SIZE ETH_FRAME_LEN
+
+
+#define SSC_NETDEV_PROBE 100
+#define SSC_NETDEV_SEND 101
+#define SSC_NETDEV_RECV 102
+#define SSC_NETDEV_ATTACH 103
+#define SSC_NETDEV_DETACH 104
+
+#define NETWORK_INTR 8
+
+struct simeth_local {
+ struct net_device_stats stats;
+ int simfd; /* descriptor in the simulator */
+};
+
+static int simeth_probe1(void);
+static int simeth_open(struct net_device *dev);
+static int simeth_close(struct net_device *dev);
+static int simeth_tx(struct sk_buff *skb, struct net_device *dev);
+static int simeth_rx(struct net_device *dev);
+static struct net_device_stats *simeth_get_stats(struct net_device *dev);
+static void simeth_interrupt(int irq, void *dev_id, struct pt_regs * regs);
+static void set_multicast_list(struct net_device *dev);
+static int simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr);
+
+static char *simeth_version="0.3";
+
+/*
+ * This variable is used to establish a mapping between the Linux/ia64 kernel
+ * and the host linux kernel.
+ *
+ * As of today, we support only one card, even though most of the code
+ * is ready for many more. The mapping is then:
+ * linux/ia64 -> linux/x86
+ * eth0 -> eth1
+ *
+ * In the future, with some string operations, we could easily support up
+ * to 10 cards (0-9).
+ *
+ * The default mapping can be changed on the kernel command line by
+ * specifying simeth=ethX (or whatever string you want).
+ */
+static char *simeth_device="eth0"; /* default host interface to use */
+
+
+
+static volatile unsigned int card_count; /* how many cards "found" so far */
+static int simeth_debug; /* set to 1 to get debug information */
+
+/*
+ * Used to catch IFF_UP & IFF_DOWN events
+ */
+static struct notifier_block simeth_dev_notifier = {
+ simeth_device_event,
+ 0
+};
+
+
+/*
+ * Function used when using a kernel command line option.
+ *
+ * Format: simeth=interface_name (like eth0)
+ */
+static int __init
+simeth_setup(char *str)
+{
+ simeth_device = str;
+ return 1;
+}
+
+__setup("simeth=", simeth_setup);
+
+/*
+ * Function used to probe for simeth devices when not installed
+ * as a loadable module
+ */
+
+int __init
+simeth_probe (void)
+{
+ int r;
+
+ printk("simeth: v%s\n", simeth_version);
+
+ r = simeth_probe1();
+
+ if (r == 0) register_netdevice_notifier(&simeth_dev_notifier);
+
+ return r;
+}
+
+extern long ia64_ssc (long, long, long, long, int);
+extern void ia64_ssc_connect_irq (long intr, long irq);
+
+static inline int
+netdev_probe(char *name, unsigned char *ether)
+{
+ return ia64_ssc(__pa(name), __pa(ether), 0,0, SSC_NETDEV_PROBE);
+}
+
+
+static inline int
+netdev_connect(int irq)
+{
+ /* XXX Fix me
+ * this does not support multiple cards
+ * also no return value
+ */
+ ia64_ssc_connect_irq(NETWORK_INTR, irq);
+ return 0;
+}
+
+static inline int
+netdev_attach(int fd, int irq, unsigned int ipaddr)
+{
+ /* this puts the host interface in the right mode (start interrupting) */
+ return ia64_ssc(fd, ipaddr, 0,0, SSC_NETDEV_ATTACH);
+}
+
+
+static inline int
+netdev_detach(int fd)
+{
+ /*
+ * inactivate the host interface (don't interrupt anymore) */
+ return ia64_ssc(fd, 0,0,0, SSC_NETDEV_DETACH);
+}
+
+static inline int
+netdev_send(int fd, unsigned char *buf, unsigned int len)
+{
+ return ia64_ssc(fd, __pa(buf), len, 0, SSC_NETDEV_SEND);
+}
+
+static inline int
+netdev_read(int fd, unsigned char *buf, unsigned int len)
+{
+ return ia64_ssc(fd, __pa(buf), len, 0, SSC_NETDEV_RECV);
+}
+
+/*
+ * Function shared with module code, so cannot be in init section
+ *
+ * So far this function "detects" only one card (test_&_set) but could
+ * be extended easily.
+ *
+ * Return:
+ * - -ENODEV if no device found
+ * - -ENOMEM if out of memory
+ * - 0 otherwise
+ */
+static int
+simeth_probe1(void)
+{
+ unsigned char mac_addr[ETH_ALEN];
+ struct simeth_local *local;
+ struct net_device *dev;
+ int fd, i;
+
+ /*
+ * XXX Fix me
+ * let's support just one card for now
+ */
+ if (test_and_set_bit(0, &card_count))
+ return -ENODEV;
+
+ /*
+ * check with the simulator for the device
+ */
+ fd = netdev_probe(simeth_device, mac_addr);
+ if (fd == -1)
+ return -ENODEV;
+
+ dev = init_etherdev(NULL, sizeof(struct simeth_local));
+ if (!dev)
+ return -ENOMEM;
+
+ memcpy(dev->dev_addr, mac_addr, sizeof(mac_addr));
+
+ dev->irq = ia64_alloc_vector();
+
+ /*
+ * attach the interrupt in the simulator; this does not enable interrupts
+ * until a netdev_attach() is called
+ */
+ netdev_connect(dev->irq);
+
+ memset(dev->priv, 0, sizeof(struct simeth_local));
+
+ local = dev->priv;
+ local->simfd = fd; /* keep track of underlying file descriptor */
+
+ dev->open = simeth_open;
+ dev->stop = simeth_close;
+ dev->hard_start_xmit = simeth_tx;
+ dev->get_stats = simeth_get_stats;
+ dev->set_multicast_list = set_multicast_list; /* not yet used */
+
+ /* Fill in the fields of the device structure with ethernet-generic values. */
+ ether_setup(dev);
+
+ printk("%s: hosteth=%s simfd=%d, HwAddr", dev->name, simeth_device, local->simfd);
+ for(i = 0; i < ETH_ALEN; i++) {
+ printk(" %2.2x", dev->dev_addr[i]);
+ }
+ printk(", IRQ %d\n", dev->irq);
+
+ return 0;
+}
+
+/*
+ * actually binds the device to an interrupt vector
+ */
+static int
+simeth_open(struct net_device *dev)
+{
+ if (request_irq(dev->irq, simeth_interrupt, 0, "simeth", dev)) {
+ printk ("simeth: unable to get IRQ %d.\n", dev->irq);
+ return -EAGAIN;
+ }
+
+ netif_start_queue(dev);
+
+ return 0;
+}
+
+/* copied from lapbether.c */
+static __inline__ int dev_is_ethdev(struct net_device *dev)
+{
+ return ( dev->type == ARPHRD_ETHER && strncmp(dev->name, "dummy", 5));
+}
+
+
+/*
+ * Handler for IFF_UP or IFF_DOWN
+ *
+ * The reason for that is that we don't want to be interrupted when the
+ * interface is down. There is no way to unconnect in the simulator. Instead
+ * we use this function to shutdown packet processing in the frame filter
+ * in the simulator. Thus no interrupts are generated
+ *
+ *
+ * That's also the place where we pass the IP address of this device to the
+ * simulator so that we can start filtering packets for it
+ *
+ * There may be a better way of doing this, but I don't know which yet.
+ */
+static int
+simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr)
+{
+ struct net_device *dev = (struct net_device *)ptr;
+ struct simeth_local *local;
+ struct in_device *in_dev;
+ struct in_ifaddr **ifap = NULL;
+ struct in_ifaddr *ifa = NULL;
+ int r;
+
+
+ if ( ! dev ) {
+ printk(KERN_WARNING "simeth_device_event dev=0\n");
+ return NOTIFY_DONE;
+ }
+
+ if ( event != NETDEV_UP && event != NETDEV_DOWN ) return NOTIFY_DONE;
+
+ /*
+ * Check whether or not it's for an ethernet device
+ *
+ * XXX Fixme: This works only as long as we support one
+ * type of ethernet device.
+ */
+ if ( !dev_is_ethdev(dev) ) return NOTIFY_DONE;
+
+ if ((in_dev = dev->ip_ptr) != NULL) {
+ for (ifap=&in_dev->ifa_list; (ifa=*ifap) != NULL; ifap=&ifa->ifa_next)
+ if (strcmp(dev->name, ifa->ifa_label) == 0) break;
+ }
+ if ( ifa == NULL ) {
+ printk("simeth_open: can't find device %s's ifa\n", dev->name);
+ return NOTIFY_DONE;
+ }
+
+ printk("simeth_device_event: %s ipaddr=0x%x\n", dev->name, htonl(ifa->ifa_local));
+
+ /*
+ * XXX Fix me
+ * if the device was up, and we're simply reconfiguring it, not sure
+ * we get DOWN then UP.
+ */
+
+ local = dev->priv;
+ /* now do it for real */
+ r = event == NETDEV_UP ?
+ netdev_attach(local->simfd, dev->irq, htonl(ifa->ifa_local)):
+ netdev_detach(local->simfd);
+
+ printk("simeth: netdev_attach/detach: event=%s ->%d\n", event == NETDEV_UP ? "attach":"detach", r);
+
+ return NOTIFY_DONE;
+}
+
+static int
+simeth_close(struct net_device *dev)
+{
+ netif_stop_queue(dev);
+
+ free_irq(dev->irq, dev);
+
+ return 0;
+}
+
+/*
+ * Only used for debug
+ */
+static void
+frame_print(unsigned char *from, unsigned char *frame, int len)
+{
+ int i;
+
+ printk("%s: (%d) %02x", from, len, frame[0] & 0xff);
+ for(i=1; i < 6; i++ ) {
+ printk(":%02x", frame[i] &0xff);
+ }
+ printk(" %2x", frame[6] &0xff);
+ for(i=7; i < 12; i++ ) {
+ printk(":%02x", frame[i] &0xff);
+ }
+ printk(" [%02x%02x]\n", frame[12], frame[13]);
+
+ for(i=14; i < len; i++ ) {
+ printk("%02x ", frame[i] &0xff);
+ if ( (i%10)==0) printk("\n");
+ }
+ printk("\n");
+}
+
+
+/*
+ * Function used to transmit a frame; the very last one on the path before
+ * going to the simulator.
+ */
+static int
+simeth_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct simeth_local *local = (struct simeth_local *)dev->priv;
+
+#if 0
+ /* ensure we have at least ETH_ZLEN bytes (min frame size) */
+ unsigned int length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
+ /* Where do the extra padding bytes come from in the skbuff? */
+#else
+ /* the real driver in the host system is going to take care of that
+ * or maybe it's the NIC itself.
+ */
+ unsigned int length = skb->len;
+#endif
+
+ local->stats.tx_bytes += skb->len;
+ local->stats.tx_packets++;
+
+
+ if (simeth_debug > 5) frame_print("simeth_tx", skb->data, length);
+
+ netdev_send(local->simfd, skb->data, length);
+
+ /*
+ * we are synchronous on write, so we don't simulate a
+ * transmit complete interrupt; thus we don't need to arm a tx timer.
+ */
+
+ dev_kfree_skb(skb);
+ return 0;
+}
+
+static inline struct sk_buff *
+make_new_skb(struct net_device *dev)
+{
+ struct sk_buff *nskb;
+
+ /*
+ * The +2 is used to make sure that the IP header is nicely
+ * aligned (on 4byte boundary I assume 14+2=16)
+ */
+ nskb = dev_alloc_skb(SIMETH_FRAME_SIZE + 2);
+ if ( nskb == NULL ) {
+ printk(KERN_NOTICE "%s: memory squeeze. dropping packet.\n", dev->name);
+ return NULL;
+ }
+ nskb->dev = dev;
+
+ skb_reserve(nskb, 2); /* Align IP on 16 byte boundaries */
+
+ skb_put(nskb,SIMETH_FRAME_SIZE);
+
+ return nskb;
+}
+
+/*
+ * called from interrupt handler to process a received frame
+ */
+static int
+simeth_rx(struct net_device *dev)
+{
+ struct simeth_local *local;
+ struct sk_buff *skb;
+ int len;
+ int rcv_count = SIMETH_RECV_MAX;
+
+ local = (struct simeth_local *)dev->priv;
+ /*
+ * the loop concept has been borrowed from other drivers
+ * looks to me like it's a throttling thing to avoid pushing too many
+ * packets at one time into the stack. Making sure we can process them
+ * upstream and make forward progress overall
+ */
+ do {
+ if ( (skb=make_new_skb(dev)) == NULL ) {
+ printk(KERN_NOTICE "%s: memory squeeze. dropping packet.\n", dev->name);
+ local->stats.rx_dropped++;
+ return 0;
+ }
+ /*
+ * Read only one frame at a time
+ */
+ len = netdev_read(local->simfd, skb->data, SIMETH_FRAME_SIZE);
+ if ( len == 0 ) {
+ if ( simeth_debug > 0 ) printk(KERN_WARNING "%s: count=%d netdev_read=0\n", dev->name, SIMETH_RECV_MAX-rcv_count);
+ break;
+ }
+#if 0
+ /*
+ * XXX Fix me
+ * Should really do a csum+copy here
+ */
+ memcpy(skb->data, frame, len);
+#endif
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if ( simeth_debug > 6 ) frame_print("simeth_rx", skb->data, len);
+
+ /*
+ * push the packet up & trigger software interrupt
+ */
+ netif_rx(skb);
+
+ local->stats.rx_packets++;
+ local->stats.rx_bytes += len;
+
+ } while ( --rcv_count );
+
+ return len; /* 0 = nothing left to read, otherwise, we can try again */
+}
+
+/*
+ * Interrupt handler (Yes, we can do it too !!!)
+ */
+static void
+simeth_interrupt(int irq, void *dev_id, struct pt_regs * regs)
+{
+ struct net_device *dev = dev_id;
+
+ if ( dev == NULL ) {
+ printk(KERN_WARNING "simeth: irq %d for unknown device\n", irq);
+ return;
+ }
+
+ /*
+ * very simple loop because we get interrupts only when receiving
+ */
+ while (simeth_rx(dev));
+}
+
+static struct net_device_stats *
+simeth_get_stats(struct net_device *dev)
+{
+ struct simeth_local *local = (struct simeth_local *) dev->priv;
+
+ return &local->stats;
+}
+
+/* fake multicast ability */
+static void
+set_multicast_list(struct net_device *dev)
+{
+ printk(KERN_WARNING "%s: set_multicast_list called\n", dev->name);
+}
+
+#ifdef CONFIG_NET_FASTROUTE
+static int
+simeth_accept_fastpath(struct net_device *dev, struct dst_entry *dst)
+{
+ printk(KERN_WARNING "%s: simeth_accept_fastpath called\n", dev->name);
+ return -1;
+}
+#endif
+
+__initcall(simeth_probe);
diff -u -urN linux-2.4.20-ia64-021210/drivers/scsi/Makefile linux-ski/drivers/scsi/Makefile
--- linux-2.4.20-ia64-021210/drivers/scsi/Makefile 2002-12-10 14:23:20.000000000 -0700
+++ linux-ski/drivers/scsi/Makefile 2002-12-13 10:04:07.000000000 -0700
@@ -53,6 +53,7 @@
obj-$(CONFIG_SUN3_SCSI) += sun3_scsi.o
obj-$(CONFIG_MVME16x_SCSI) += mvme16x.o 53c7xx.o
obj-$(CONFIG_BVME6000_SCSI) += bvme6000.o 53c7xx.o
+obj-$(CONFIG_HP_SIMSCSI) += simscsi.o
obj-$(CONFIG_SCSI_SIM710) += sim710.o
obj-$(CONFIG_SCSI_ADVANSYS) += advansys.o
obj-$(CONFIG_SCSI_PCI2000) += pci2000.o
diff -u -urN linux-2.4.20-ia64-021210/drivers/scsi/simscsi.c linux-ski/drivers/scsi/simscsi.c
--- linux-2.4.20-ia64-021210/drivers/scsi/simscsi.c 1969-12-31 17:00:00.000000000 -0700
+++ linux-ski/drivers/scsi/simscsi.c 2002-12-13 10:04:07.000000000 -0700
@@ -0,0 +1,395 @@
+/*
+ * Simulated SCSI driver.
+ *
+ * Copyright (C) 1999, 2001-2002 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * 02/01/15 David Mosberger Updated for v2.5.1
+ * 99/12/18 David Mosberger Added support for READ10/WRITE10 needed by linux v2.3.33
+ */
+#include <linux/config.h>
+#include <linux/blk.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/timer.h>
+
+#include <scsi/scsi.h>
+
+#include <asm/irq.h>
+
+#include "scsi.h"
+#include "sd.h"
+#include "hosts.h"
+#include "simscsi.h"
+
+#define DEBUG_SIMSCSI 1
+
+/* Simulator system calls: */
+
+#define SSC_OPEN 50
+#define SSC_CLOSE 51
+#define SSC_READ 52
+#define SSC_WRITE 53
+#define SSC_GET_COMPLETION 54
+#define SSC_WAIT_COMPLETION 55
+
+#define SSC_WRITE_ACCESS 2
+#define SSC_READ_ACCESS 1
+
+#if DEBUG_SIMSCSI
+ int simscsi_debug;
+# define DBG simscsi_debug
+#else
+# define DBG 0
+#endif
+
+static void simscsi_interrupt (unsigned long val);
+DECLARE_TASKLET(simscsi_tasklet, simscsi_interrupt, 0);
+
+struct disk_req {
+ unsigned long addr;
+ unsigned len;
+};
+
+struct disk_stat {
+ int fd;
+ unsigned count;
+};
+
+extern long ia64_ssc (long arg0, long arg1, long arg2, long arg3, int nr);
+
+static int desc[16] = {
+ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
+};
+
+static struct queue_entry {
+ Scsi_Cmnd *sc;
+} queue[SIMSCSI_REQ_QUEUE_LEN];
+
+static int rd, wr;
+static atomic_t num_reqs = ATOMIC_INIT(0);
+
+/* base name for default disks */
+static char *simscsi_root = DEFAULT_SIMSCSI_ROOT;
+
+#define MAX_ROOT_LEN 128
+
+/*
+ * used to setup a new base for disk images
+ * to use /foo/bar/disk[a-z] as disk images
+ * you have to specify simscsi=/foo/bar/disk on the command line
+ */
+static int __init
+simscsi_setup (char *s)
+{
+ /* XXX Fix me we may need to strcpy() ? */
+ if (strlen(s) > MAX_ROOT_LEN) {
+ printk("simscsi_setup: prefix too long---using default %s\n", simscsi_root);
+ }
+ simscsi_root = s;
+ return 1;
+}
+
+__setup("simscsi=", simscsi_setup);
+
+static void
+simscsi_interrupt (unsigned long val)
+{
+ unsigned long flags;
+ Scsi_Cmnd *sc;
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ {
+ while ((sc = queue[rd].sc) != 0) {
+ atomic_dec(&num_reqs);
+ queue[rd].sc = 0;
+ if (DBG)
+ printk("simscsi_interrupt: done with %ld\n", sc->serial_number);
+ (*sc->scsi_done)(sc);
+ rd = (rd + 1) % SIMSCSI_REQ_QUEUE_LEN;
+ }
+ }
+ spin_unlock_irqrestore(&io_request_lock, flags);
+}
+
+int
+simscsi_detect (Scsi_Host_Template *templ)
+{
+ templ->proc_name = "simscsi";
+ return 1; /* fake one SCSI host adapter */
+}
+
+int
+simscsi_release (struct Scsi_Host *host)
+{
+ return 0; /* this is easy... */
+}
+
+const char *
+simscsi_info (struct Scsi_Host *host)
+{
+ return "simulated SCSI host adapter";
+}
+
+int
+simscsi_biosparam (Disk *disk, kdev_t n, int ip[])
+{
+ unsigned capacity = disk->capacity;
+
+ ip[0] = 64; /* heads */
+ ip[1] = 32; /* sectors */
+ ip[2] = capacity >> 11; /* cylinders */
+ return 0;
+}
+
+static void
+simscsi_readwrite (Scsi_Cmnd *sc, int mode, unsigned long offset, unsigned long len)
+{
+ struct disk_stat stat;
+ struct disk_req req;
+
+ req.addr = __pa(sc->request_buffer);
+ req.len = len; /* # of bytes to transfer */
+
+ if (sc->request_bufflen < req.len)
+ return;
+
+ stat.fd = desc[sc->target];
+ if (DBG)
+ printk("simscsi_%s @ %lx (off %lx)\n",
+ mode == SSC_READ ? "read":"write", req.addr, offset);
+ ia64_ssc(stat.fd, 1, __pa(&req), offset, mode);
+ ia64_ssc(__pa(&stat), 0, 0, 0, SSC_WAIT_COMPLETION);
+
+ if (stat.count == req.len) {
+ sc->result = GOOD;
+ } else {
+ sc->result = DID_ERROR << 16;
+ }
+}
+
+static void
+simscsi_sg_readwrite (Scsi_Cmnd *sc, int mode, unsigned long offset)
+{
+ int list_len = sc->use_sg;
+ struct scatterlist *sl = (struct scatterlist *)sc->buffer;
+ struct disk_stat stat;
+ struct disk_req req;
+
+ stat.fd = desc[sc->target];
+
+ while (list_len) {
+ req.addr = __pa(sl->address);
+ req.len = sl->length;
+ if (DBG)
+ printk("simscsi_sg_%s @ %lx (off %lx) use_sg=%d len=%d\n",
+ mode == SSC_READ ? "read":"write", req.addr, offset,
+ list_len, sl->length);
+ ia64_ssc(stat.fd, 1, __pa(&req), offset, mode);
+ ia64_ssc(__pa(&stat), 0, 0, 0, SSC_WAIT_COMPLETION);
+
+ /* should not happen in our case */
+ if (stat.count != req.len) {
+ sc->result = DID_ERROR << 16;
+ return;
+ }
+ offset += sl->length;
+ sl++;
+ list_len--;
+ }
+ sc->result = GOOD;
+}
+
+/*
+ * function handling both READ_6/WRITE_6 (non-scatter/gather mode)
+ * commands.
+ * Added 02/26/99 S.Eranian
+ */
+static void
+simscsi_readwrite6 (Scsi_Cmnd *sc, int mode)
+{
+ unsigned long offset;
+
+ offset = (((sc->cmnd[1] & 0x1f) << 16) | (sc->cmnd[2] << 8) | sc->cmnd[3])*512;
+ if (sc->use_sg > 0)
+ simscsi_sg_readwrite(sc, mode, offset);
+ else
+ simscsi_readwrite(sc, mode, offset, sc->cmnd[4]*512);
+}
+
+static size_t
+simscsi_get_disk_size (int fd)
+{
+ struct disk_stat stat;
+ size_t bit, sectors = 0;
+ struct disk_req req;
+ char buf[512];
+
+ /*
+ * This is a bit kludgey: the simulator doesn't provide a direct way of determining
+ * the disk size, so we do a binary search, assuming a maximum disk size of 4GB.
+ */
+ for (bit = (4UL << 30)/512; bit != 0; bit >>= 1) {
+ req.addr = __pa(&buf);
+ req.len = sizeof(buf);
+ ia64_ssc(fd, 1, __pa(&req), ((sectors | bit) - 1)*512, SSC_READ);
+ stat.fd = fd;
+ ia64_ssc(__pa(&stat), 0, 0, 0, SSC_WAIT_COMPLETION);
+ if (stat.count == sizeof(buf))
+ sectors |= bit;
+ }
+ return sectors - 1; /* return last valid sector number */
+}
+
+static void
+simscsi_readwrite10 (Scsi_Cmnd *sc, int mode)
+{
+ unsigned long offset;
+
+ offset = ( (sc->cmnd[2] << 24) | (sc->cmnd[3] << 16)
+ | (sc->cmnd[4] << 8) | (sc->cmnd[5] << 0))*512;
+ if (sc->use_sg > 0)
+ simscsi_sg_readwrite(sc, mode, offset);
+ else
+ simscsi_readwrite(sc, mode, offset, ((sc->cmnd[7] << 8) | sc->cmnd[8])*512);
+}
+
+int
+simscsi_queuecommand (Scsi_Cmnd *sc, void (*done)(Scsi_Cmnd *))
+{
+ char fname[MAX_ROOT_LEN+16];
+ size_t disk_size;
+ char *buf;
+#if DEBUG_SIMSCSI
+ register long sp asm ("sp");
+
+ if (DBG)
+ printk("simscsi_queuecommand: target=%d,cmnd=%u,sc=%lu,sp=%lx,done=%p\n",
+ sc->target, sc->cmnd[0], sc->serial_number, sp, done);
+#endif
+
+ sc->result = DID_BAD_TARGET << 16;
+ sc->scsi_done = done;
+ if (sc->target <= 15 && sc->lun == 0) {
+ switch (sc->cmnd[0]) {
+ case INQUIRY:
+ if (sc->request_bufflen < 35) {
+ break;
+ }
+ sprintf (fname, "%s%c", simscsi_root, 'a' + sc->target);
+ desc[sc->target] = ia64_ssc(__pa(fname), SSC_READ_ACCESS|SSC_WRITE_ACCESS,
+ 0, 0, SSC_OPEN);
+ if (desc[sc->target] < 0) {
+ /* disk doesn't exist... */
+ break;
+ }
+ buf = sc->request_buffer;
+ buf[0] = 0; /* magnetic disk */
+ buf[1] = 0; /* not a removable medium */
+ buf[2] = 2; /* SCSI-2 compliant device */
+ buf[3] = 2; /* SCSI-2 response data format */
+ buf[4] = 31; /* additional length (bytes) */
+ buf[5] = 0; /* reserved */
+ buf[6] = 0; /* reserved */
+ buf[7] = 0; /* various flags */
+ memcpy(buf + 8, "HP SIMULATED DISK 0.00", 28);
+ sc->result = GOOD;
+ break;
+
+ case TEST_UNIT_READY:
+ sc->result = GOOD;
+ break;
+
+ case READ_6:
+ if (desc[sc->target] < 0 )
+ break;
+ simscsi_readwrite6(sc, SSC_READ);
+ break;
+
+ case READ_10:
+ if (desc[sc->target] < 0 )
+ break;
+ simscsi_readwrite10(sc, SSC_READ);
+ break;
+
+ case WRITE_6:
+ if (desc[sc->target] < 0)
+ break;
+ simscsi_readwrite6(sc, SSC_WRITE);
+ break;
+
+ case WRITE_10:
+ if (desc[sc->target] < 0)
+ break;
+ simscsi_readwrite10(sc, SSC_WRITE);
+ break;
+
+
+ case READ_CAPACITY:
+ if (desc[sc->target] < 0 || sc->request_bufflen < 8) {
+ break;
+ }
+ buf = sc->request_buffer;
+
+ disk_size = simscsi_get_disk_size(desc[sc->target]);
+ buf[0] = (disk_size >> 24) & 0xff;
+ buf[1] = (disk_size >> 16) & 0xff;
+ buf[2] = (disk_size >> 8) & 0xff;
+ buf[3] = (disk_size >> 0) & 0xff;
+ /* set block size of 512 bytes: */
+ buf[4] = 0;
+ buf[5] = 0;
+ buf[6] = 2;
+ buf[7] = 0;
+ sc->result = GOOD;
+ break;
+
+ case MODE_SENSE:
+ /* sd.c uses this to determine whether disk does write-caching. */
+ memset(sc->request_buffer, 0, 128);
+ sc->result = GOOD;
+ break;
+
+ case START_STOP:
+ printk("START_STOP\n");
+ break;
+
+ default:
+ panic("simscsi: unknown SCSI command %u\n", sc->cmnd[0]);
+ }
+ }
+ if (sc->result == (DID_BAD_TARGET << 16)) {
+ sc->result |= DRIVER_SENSE << 24;
+ sc->sense_buffer[0] = 0x70;
+ sc->sense_buffer[2] = 0x00;
+ }
+ if (atomic_read(&num_reqs) >= SIMSCSI_REQ_QUEUE_LEN) {
+ panic("Attempt to queue command while command is pending!!");
+ }
+ atomic_inc(&num_reqs);
+ queue[wr].sc = sc;
+ wr = (wr + 1) % SIMSCSI_REQ_QUEUE_LEN;
+
+ tasklet_schedule(&simscsi_tasklet);
+ return 0;
+}
+
+int
+simscsi_reset (Scsi_Cmnd *cmd, unsigned int reset_flags)
+{
+ printk ("simscsi_reset: unimplemented\n");
+ return SCSI_RESET_SUCCESS;
+}
+
+int
+simscsi_abort (Scsi_Cmnd *cmd)
+{
+ printk ("simscsi_abort: unimplemented\n");
+ return SCSI_ABORT_SUCCESS;
+}
+
+static Scsi_Host_Template driver_template = SIMSCSI;
+
+#include "scsi_module.c"
diff -u -urN linux-2.4.20-ia64-021210/drivers/scsi/simscsi.h linux-ski/drivers/scsi/simscsi.h
--- linux-2.4.20-ia64-021210/drivers/scsi/simscsi.h 1969-12-31 17:00:00.000000000 -0700
+++ linux-ski/drivers/scsi/simscsi.h 2002-12-13 10:04:07.000000000 -0700
@@ -0,0 +1,39 @@
+/*
+ * Simulated SCSI driver.
+ *
+ * Copyright (C) 1999, 2002 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#ifndef SIMSCSI_H
+#define SIMSCSI_H
+
+#define SIMSCSI_REQ_QUEUE_LEN 64
+
+#define DEFAULT_SIMSCSI_ROOT "/var/ski-disks/sd"
+
+extern int simscsi_detect (Scsi_Host_Template *);
+extern int simscsi_release (struct Scsi_Host *);
+extern const char *simscsi_info (struct Scsi_Host *);
+extern int simscsi_queuecommand (Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
+extern int simscsi_abort (Scsi_Cmnd *);
+extern int simscsi_reset (Scsi_Cmnd *, unsigned int);
+extern int simscsi_biosparam (Disk *, kdev_t, int[]);
+
+#define SIMSCSI { \
+ .detect = simscsi_detect, \
+ .release = simscsi_release, \
+ .info = simscsi_info, \
+ .queuecommand = simscsi_queuecommand, \
+ .abort = simscsi_abort, \
+ .reset = simscsi_reset, \
+ .bios_param = simscsi_biosparam, \
+ .can_queue = SIMSCSI_REQ_QUEUE_LEN, \
+ .this_id = -1, \
+ .sg_tablesize = SG_ALL, \
+ .cmd_per_lun = SIMSCSI_REQ_QUEUE_LEN, \
+ .present = 0, \
+ .unchecked_isa_dma = 0, \
+ .use_clustering = DISABLE_CLUSTERING \
+}
+
+#endif /* SIMSCSI_H */
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.5.52)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (180 preceding siblings ...)
2002-12-13 17:36 ` Bjorn Helgaas
@ 2002-12-21 9:00 ` David Mosberger
2002-12-26 6:07 ` Kimio Suganuma
` (33 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2002-12-21 9:00 UTC (permalink / raw)
To: linux-ia64
A new ia64 patch is available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5
in file linux-2.5.52-ia64-021221.diff.gz.
I wasted a fair amount of time figuring out why the kernel would
suddenly segv all over the place, just to find that, in the end, it
was due to a line that got left out of the main Makefile. Oh, well, I
have had better weeks...
The good news is that this kernel actually seems to work pretty well
for me (modulo known issues: no kernel module loader yet, etc).
However, the MPT Fusion SCSI driver broke pretty badly in 2.5.50: it
just freezes the machine while it's probing for SCSI devices. Someone
who actually knows something about this driver needs to take a good
look at this.
Oh, and I noticed one bug in the current patch already: in
/proc/pal/cpuN/vm_info, the memory-attributes are currently printed
with an extra newline. Not a biggie, but for those who care, the
attached (untested) patch should fix the problem.
Enjoy & Happy Holidays,
--david
PS: Pretty much all of HP will be closed next week. I'll check in on
mail from time to time, but I'm also looking forward to spending some
quality R&R time with my family. ;-)
# This is a BitKeeper generated patch for the following project:
# Project Name: Linux kernel tree
# This patch format is intended for GNU patch command version 2.5 or higher.
# This patch includes the following deltas:
# ChangeSet 1.892 -> 1.893
# arch/ia64/kernel/palinfo.c 1.8 -> 1.9
#
# The following is the BitKeeper ChangeSet Log
# --------------------------------------------
# 02/12/21 davidm@tiger.hpl.hp.com 1.893
# ia64: Fix printing of memory attributes.
# --------------------------------------------
#
diff -Nru a/arch/ia64/kernel/palinfo.c b/arch/ia64/kernel/palinfo.c
--- a/arch/ia64/kernel/palinfo.c Sat Dec 21 00:58:08 2002
+++ b/arch/ia64/kernel/palinfo.c Sat Dec 21 00:58:08 2002
@@ -333,10 +333,11 @@
sep = "";
for (i = 0; i < 8; i++) {
if (attrib & (1 << i)) {
- p += sprintf(p, "%s%s\n", sep, mem_attrib[i]);
+ p += sprintf(p, "%s%s", sep, mem_attrib[i]);
sep = ", ";
}
}
+ p += sprintf(p, "\n");
if ((status=ia64_pal_vm_page_size(&tr_pages, &vw_pages)) !=0) {
printk("ia64_pal_vm_page_size=%ld\n", status);
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.52)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (181 preceding siblings ...)
2002-12-21 9:00 ` [Linux-ia64] kernel update (relative to 2.5.52) David Mosberger
@ 2002-12-26 6:07 ` Kimio Suganuma
2003-01-02 21:27 ` David Mosberger
` (32 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Kimio Suganuma @ 2002-12-26 6:07 UTC (permalink / raw)
To: linux-ia64
Hi David,
Please apply this patch for discontigmem.
I made sure the NUMA kernel worked fine with the patch
on a 16-way Itanium machine.
Thanks,
Kimi
*** arch/ia64/mm/init.c.org Thu Dec 26 11:46:44 2002
--- arch/ia64/mm/init.c Thu Dec 26 14:52:57 2002
***************
*** 501,508 ****
extern void discontig_paging_init(void);
discontig_paging_init();
-
- num_dma_physpages = 0;
efi_memmap_walk(count_pages, &num_physpages);
}
#else /* !CONFIG_DISCONTIGMEM */
--- 501,506 ----
On Sat, 21 Dec 2002 01:00:09 -0800
David Mosberger <davidm@napali.hpl.hp.com> wrote:
> A new ia64 patch is available at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5
>
> in file linux-2.5.52-ia64-021221.diff.gz.
--
Kimio Suganuma <k-suganuma@mvj.biglobe.ne.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.52)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (182 preceding siblings ...)
2002-12-26 6:07 ` Kimio Suganuma
@ 2003-01-02 21:27 ` David Mosberger
2003-01-25 5:02 ` [Linux-ia64] kernel update (relative to 2.5.59) David Mosberger
` (31 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-02 21:27 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 26 Dec 2002 15:07:11 +0900, Kimio Suganuma <k-suganuma@mvj.biglobe.ne.jp> said:
Kimio> Hi David,
Kimio> Please apply this patch for discontigmem.
Kimio> I made sure NUMA kernel worked fine with the patch
Kimio> on 16way Itanium machine.
The patch didn't apply because of whitespace issues and because it
wasn't relative to the parent of the top-level directory. I applied
it by hand now, but in the future, please try to submit proper patches
(if your mailer does the munging, you can send patches as a MIME
attachment).
Thanks,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (183 preceding siblings ...)
2003-01-02 21:27 ` David Mosberger
@ 2003-01-25 5:02 ` David Mosberger
2003-01-25 20:19 ` Sam Ravnborg
` (30 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-25 5:02 UTC (permalink / raw)
To: linux-ia64
I just uploaded the latest ia64 patch to the usual location(s). You
can get it from ftp.kernel.org/pub/linux/ia64/ports/v2.5/ in
file:
linux-2.5.59-ia64-030124.diff.gz
This is mostly a sync-up with all the changes that happened between
2.5.52 and 2.5.59. I also added one more light-weight system call:
set_tid_address(). I added this one because it makes for a great
example that shows how to deal with system call arguments
(explicit testing for NaT is required). It might also make a (small)
difference in startup overheads for NPTL thread creation.
Stephane, I just realized that I forgot to apply your perfmon patch.
Sorry about tat---I'll fix that in the next patch.
Peter (Chubb): if you have an updated preemption-support patch, I'd be
interested in merging it in (I wanted to do that for a while, but just
never got around to it).
This patch works well for me on the platforms I tested (zx6000 and Ski
simulator). However, there seems to be a problem with running shared
x86 apps. If someone could look into that, that would be great.
Oh, most importantly: you'll need a new assembler in order to use this
patch. There was a nasty bug up until Dec 18 last year which
basically made certain place-relative expressions generate bad data.
Fortunately, HJ Lu has fixed that bug and I put a ready-to-use, static
binary of a fixed assembler at:
ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz
As a measure of safety, I added a sanity check which will cause "make"
to refuse to build a kernel with a buggy assembler.
As usual, you can get detailed changelogs at:
http://lia64.bkbits.net:8080/to-linus-2.5
Enjoy,
--david
diff -Nru a/Documentation/ia64/README b/Documentation/ia64/README
--- a/Documentation/ia64/README Fri Jan 24 20:41:05 2003
+++ b/Documentation/ia64/README Fri Jan 24 20:41:05 2003
@@ -4,40 +4,40 @@
platform. This document provides information specific to IA-64
ONLY, to get additional information about the Linux kernel also
read the original Linux README provided with the kernel.
-
+
INSTALLING the kernel:
- IA-64 kernel installation is the same as the other platforms, see
original README for details.
-
-
+
+
SOFTWARE REQUIREMENTS
Compiling and running this kernel requires an IA-64 compliant GCC
compiler. And various software packages also compiled with an
IA-64 compliant GCC compiler.
-
+
CONFIGURING the kernel:
Configuration is the same, see original README for details.
-
-
+
+
COMPILING the kernel:
- Compiling this kernel doesn't differ from other platform so read
the original README for details BUT make sure you have an IA-64
compliant GCC compiler.
-
+
IA-64 SPECIFICS
- General issues:
-
+
o Hardly any performance tuning has been done. Obvious targets
include the library routines (IP checksum, etc.). Less
obvious targets include making sure we don't flush the TLB
needlessly, etc.
-
+
o SMP locks cleanup/optimization
-
+
o IA32 support. Currently experimental. It mostly works.
diff -Nru a/Documentation/ia64/fsys.txt b/Documentation/ia64/fsys.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/ia64/fsys.txt Fri Jan 24 20:41:06 2003
@@ -0,0 +1,231 @@
+-*-Mode: outline-*-
+
+ Light-weight System Calls for IA-64
+ -----------------------------------
+
+ Started: 13-Jan-2002
+ Last update: 24-Jan-2002
+
+ David Mosberger-Tang
+ <davidm@hpl.hp.com>
+
+Using the "epc" instruction effectively introduces a new mode of
+execution to the ia64 linux kernel. We call this mode the
+"fsys-mode". To recap, the normal states of execution are:
+
+ - kernel mode:
+ Both the register stack and the memory stack have been
+ switched over to kernel memory. The user-level state is saved
+ in a pt-regs structure at the top of the kernel memory stack.
+
+ - user mode:
+ Both the register stack and the kernel stack are in
+ user memory. The user-level state is contained in the
+ CPU registers.
+
+ - bank 0 interruption-handling mode:
+ This is the non-interruptible state which all
+ interruption-handlers start execution in. The user-level
+ state remains in the CPU registers and some kernel state may
+ be stored in bank 0 of registers r16-r31.
+
+In contrast, fsys-mode has the following special properties:
+
+ - execution is at privilege level 0 (most-privileged)
+
+ - CPU registers may contain a mixture of user-level and kernel-level
+ state (it is the responsibility of the kernel to ensure that no
+ security-sensitive kernel-level state is leaked back to
+ user-level)
+
+ - execution is interruptible and preemptible (an fsys-mode handler
+ can disable interrupts and avoid all other interruption-sources
+ to avoid preemption)
+
+ - neither the memory nor the register stack can be trusted while
+ in fsys-mode (they point to the user-level stacks, which may
+ be invalid)
+
+In summary, fsys-mode is much more similar to running in user-mode
+than it is to running in kernel-mode. Of course, given that the
+privilege level is at level 0, this means that fsys-mode requires some
+care (see below).
+
+
+* How to tell fsys-mode
+
+Linux operates in fsys-mode when (a) the privilege level is 0 (most
+privileged) and (b) the stacks have NOT been switched to kernel memory
+yet. For convenience, the header file <asm-ia64/ptrace.h> provides
+three macros:
+
+ user_mode(regs)
+ user_stack(task,regs)
+ fsys_mode(task,regs)
+
+The "regs" argument is a pointer to a pt_regs structure. The "task"
+argument is a pointer to the task structure to which the "regs"
+pointer belongs. user_mode() returns TRUE if the CPU state pointed
+to by "regs" was executing in user mode (privilege level 3).
+user_stack() returns TRUE if the state pointed to by "regs" was
+executing on the user-level stack(s). Finally, fsys_mode() returns
+TRUE if the CPU state pointed to by "regs" was executing in fsys-mode.
+The fsys_mode() macro is equivalent to the expression:
+
+ !user_mode(regs) && user_stack(task,regs)
+
+* How to write an fsyscall handler
+
+The file arch/ia64/kernel/fsys.S contains a table of fsyscall-handlers
+(fsyscall_table). This table contains one entry for each system call.
+By default, a system call is handled by fsys_fallback_syscall(). This
+routine takes care of entering (full) kernel mode and calling the
+normal Linux system call handler. For performance-critical system
+calls, it is possible to write a hand-tuned fsyscall_handler. For
+example, fsys.S contains fsys_getpid(), which is a hand-tuned version
+of the getpid() system call.
+
+The entry and exit-state of an fsyscall handler is as follows:
+
+** Machine state on entry to fsyscall handler:
+
+ - r10 = 0
+ - r11 = saved ar.pfs (a user-level value)
+ - r15 = system call number
+ - r16 = "current" task pointer (in normal kernel-mode, this is in r13)
+ - r32-r39 = system call arguments
+ - b6 = return address (a user-level value)
+ - ar.pfs = previous frame-state (a user-level value)
+ - PSR.be = cleared to zero (i.e., little-endian byte order is in effect)
+ - all other registers may contain values passed in from user-mode
+
+** Required machine state on exit to fsyscall handler:
+
+ - r11 = saved ar.pfs (as passed into the fsyscall handler)
+ - r15 = system call number (as passed into the fsyscall handler)
+ - r32-r39 = system call arguments (as passed into the fsyscall handler)
+ - b6 = return address (as passed into the fsyscall handler)
+ - ar.pfs = previous frame-state (as passed into the fsyscall handler)
+
+Fsyscall handlers can execute with very little overhead, but with that
+speed comes a set of restrictions:
+
+ o Fsyscall-handlers MUST check for any pending work in the flags
+ member of the thread-info structure and if any of the
+ TIF_ALLWORK_MASK flags are set, the handler needs to fall back on
+ doing a full system call (by calling fsys_fallback_syscall).
+
+ o Fsyscall-handlers MUST preserve incoming arguments (r32-r39, r11,
+ r15, b6, and ar.pfs) because they will be needed in case of a
+ system call restart. Of course, all "preserved" registers also
+ must be preserved, in accordance to the normal calling conventions.
+
+ o Fsyscall-handlers MUST check argument registers for containing a
+ NaT value before using them in any way that could trigger a
+ NaT-consumption fault. If a system call argument is found to
+ contain a NaT value, an fsyscall-handler may return immediately
+ with r8=EINVAL, r10=-1.
+
+ o Fsyscall-handlers MUST NOT use the "alloc" instruction or perform
+ any other operation that would trigger mandatory RSE
+ (register-stack engine) traffic.
+
+ o Fsyscall-handlers MUST NOT write to any stacked registers because
+ it is not safe to assume that user-level called a handler with the
+ proper number of arguments.
+
+ o Fsyscall-handlers need to be careful when accessing per-CPU variables:
+ unless proper safe-guards are taken (e.g., interruptions are avoided),
+ execution may be pre-empted and resumed on another CPU at any given
+ time.
+
+ o Fsyscall-handlers must be careful not to leak sensitive kernel
+ information back to user-level. In particular, before returning to
+ user-level, care needs to be taken to clear any scratch registers
+ that could contain sensitive information (note that the current
+ task pointer is not considered sensitive: it's already exposed
+ through ar.k6).
+
+The above restrictions may seem draconian, but remember that it's
+possible to trade off some of the restrictions by paying a slightly
+higher overhead. For example, if an fsyscall-handler could benefit
+from the shadow register bank, it could temporarily disable PSR.i and
+PSR.ic, switch to bank 0 (bsw.0) and then use the shadow registers as
+needed. In other words, following the above rules yields extremely
+fast system call execution (while fully preserving system call
+semantics), but there is also a lot of flexibility in handling more
+complicated cases.
+
+* Signal handling
+
+The delivery of (asynchronous) signals must be delayed until fsys-mode
+is exited. This is accomplished with the help of the lower-privilege
+transfer trap: arch/ia64/kernel/process.c:do_notify_resume_user()
+checks whether the interrupted task was in fsys-mode and, if so, sets
+PSR.lp and returns immediately. When fsys-mode is exited via the
+"br.ret" instruction that lowers the privilege level, a trap will
+occur. The trap handler clears PSR.lp again and returns immediately.
+The kernel exit path then checks for and delivers any pending signals.
+
+* PSR Handling
+
+The "epc" instruction doesn't change the contents of PSR at all. This
+is in contrast to a regular interruption, which clears almost all
+bits. Because of that, some care needs to be taken to ensure things
+work as expected. The following discussion describes how each PSR bit
+is handled.
+
+PSR.be Cleared when entering fsys-mode. A srlz.d instruction is used
+ to ensure the CPU is in little-endian mode before the first
+ load/store instruction is executed. PSR.be is normally NOT
+ restored upon return from an fsys-mode handler. In other
+ words, user-level code must not rely on PSR.be being preserved
+ across a system call.
+PSR.up Unchanged.
+PSR.ac Unchanged.
+PSR.mfl Unchanged. Note: fsys-mode handlers must not write-registers!
+PSR.mfh Unchanged. Note: fsys-mode handlers must not write-registers!
+PSR.ic Unchanged. Note: fsys-mode handlers can clear the bit, if needed.
+PSR.i Unchanged. Note: fsys-mode handlers can clear the bit, if needed.
+PSR.pk Unchanged.
+PSR.dt Unchanged.
+PSR.dfl Unchanged. Note: fsys-mode handlers must not write-registers!
+PSR.dfh Unchanged. Note: fsys-mode handlers must not write-registers!
+PSR.sp Unchanged.
+PSR.pp Unchanged.
+PSR.di Unchanged.
+PSR.si Unchanged.
+PSR.db Unchanged. The kernel prevents user-level from setting a hardware
+ breakpoint that triggers at any privilege level other than 3 (user-mode).
+PSR.lp Unchanged.
+PSR.tb Lazy redirect. If a taken-branch trap occurs while in
+ fsys-mode, the trap-handler modifies the saved machine state
+ such that execution resumes in the gate page at
+ syscall_via_break(), with privilege level 3. Note: the
+ taken branch would occur on the branch invoking the
+ fsyscall-handler, at which point, by definition, a syscall
+ restart is still safe. If the system call number is invalid,
+ the fsys-mode handler will return directly to user-level. This
+ return will trigger a taken-branch trap, but since the trap is
+ taken _after_ restoring the privilege level, the CPU has already
+ left fsys-mode, so no special treatment is needed.
+PSR.rt Unchanged.
+PSR.cpl Cleared to 0.
+PSR.is Unchanged (guaranteed to be 0 on entry to the gate page).
+PSR.mc Unchanged.
+PSR.it Unchanged (guaranteed to be 1).
+PSR.id Unchanged. Note: the ia64 linux kernel never sets this bit.
+PSR.da Unchanged. Note: the ia64 linux kernel never sets this bit.
+PSR.dd Unchanged. Note: the ia64 linux kernel never sets this bit.
+PSR.ss Lazy redirect. If set, "epc" will cause a Single Step Trap to
+ be taken. The trap handler then modifies the saved machine
+ state such that execution resumes in the gate page at
+ syscall_via_break(), with privilege level 3.
+PSR.ri Unchanged.
+PSR.ed Unchanged. Note: This bit could only have an effect if an fsys-mode
+ handler performed a speculative load that gets NaTted. If so, this
+ would be the normal & expected behavior, so no special treatment is
+ needed.
+PSR.bn Unchanged. Note: fsys-mode handlers may clear the bit, if needed.
+ Doing so requires clearing PSR.i and PSR.ic as well.
+PSR.ia Unchanged. Note: the ia64 linux kernel never sets this bit.
diff -Nru a/Documentation/mmio_barrier.txt b/Documentation/mmio_barrier.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/mmio_barrier.txt Fri Jan 24 20:41:06 2003
@@ -0,0 +1,15 @@
+On some platforms, so-called memory-mapped I/O is weakly ordered. For
+example, the following might occur:
+
+CPU A writes 0x1 to Device #1
+CPU B writes 0x2 to Device #1
+Device #1 sees 0x2
+Device #1 sees 0x1
+
+On such platforms, driver writers are responsible for ensuring that I/O
+writes to memory-mapped addresses on their device arrive in the order
+intended. The mmiob() macro is provided for this purpose. A typical use
+of this macro might be immediately prior to the exit of a critical
+section of code protected by spinlocks. This would ensure that subsequent
+writes to I/O space arrived only after all prior writes (much like a
+typical memory barrier op, mb(), only with respect to I/O).
diff -Nru a/Makefile b/Makefile
--- a/Makefile Fri Jan 24 20:41:05 2003
+++ b/Makefile Fri Jan 24 20:41:05 2003
@@ -170,7 +170,7 @@
NOSTDINC_FLAGS = -nostdinc -iwithprefix include
CPPFLAGS := -D__KERNEL__ -Iinclude
-CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -O2 \
+CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 \
-fno-strict-aliasing -fno-common
AFLAGS := -D__ASSEMBLY__ $(CPPFLAGS)
diff -Nru a/arch/ia64/Kconfig b/arch/ia64/Kconfig
--- a/arch/ia64/Kconfig Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/Kconfig Fri Jan 24 20:41:05 2003
@@ -768,6 +768,9 @@
menu "Kernel hacking"
+config FSYS
+ bool "Light-weight system-call support (via epc)"
+
choice
prompt "Physical memory granularity"
default IA64_GRANULE_64MB
diff -Nru a/arch/ia64/Makefile b/arch/ia64/Makefile
--- a/arch/ia64/Makefile Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/Makefile Fri Jan 24 20:41:05 2003
@@ -5,7 +5,7 @@
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
-# Copyright (C) 1998-2002 by David Mosberger-Tang <davidm@hpl.hp.com>
+# Copyright (C) 1998-2003 by David Mosberger-Tang <davidm@hpl.hp.com>
#
NM := $(CROSS_COMPILE)nm -B
@@ -23,6 +23,16 @@
GCC_VERSION=$(shell $(CC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
+GAS_STATUS=$(shell arch/ia64/scripts/check-gas $(CC))
+
+ifeq ($(GAS_STATUS),buggy)
+$(error Sorry, you need a newer version of the assembler, one that is built from \
+ a source-tree that post-dates 18-Dec-2002. You can find a pre-compiled \
+ static binary of such an assembler at: \
+ \
+ ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz)
+endif
+
ifneq ($(GCC_VERSION),2)
cflags-y += -frename-registers --param max-inline-insns=5000
endif
@@ -48,26 +58,37 @@
drivers-$(CONFIG_IA64_HP_ZX1) += arch/ia64/hp/common/ arch/ia64/hp/zx1/
drivers-$(CONFIG_IA64_SGI_SN) += arch/ia64/sn/fakeprom/
-makeboot =$(Q)$(MAKE) -f scripts/Makefile.build obj=arch/ia64/boot $(1)
-maketool =$(Q)$(MAKE) -f scripts/Makefile.build obj=arch/ia64/tools $(1)
+boot := arch/ia64/boot
+tools := arch/ia64/tools
.PHONY: boot compressed archclean archmrproper include/asm-ia64/offsets.h
-all compressed: vmlinux.gz
+all: vmlinux
+
+compressed: vmlinux.gz
vmlinux.gz: vmlinux
- $(call makeboot,vmlinux.gz)
+ $(Q)$(MAKE) $(build)=$(boot) vmlinux.gz
+
+check: vmlinux
+ arch/ia64/scripts/unwcheck.sh vmlinux
archmrproper:
archclean:
- $(Q)$(MAKE) -f scripts/Makefile.clean obj=arch/ia64/boot
+ $(Q)$(MAKE) $(clean)=$(boot)
+ $(Q)$(MAKE) $(clean)=$(tools)
CLEAN_FILES += include/asm-ia64/offsets.h vmlinux.gz bootloader
prepare: include/asm-ia64/offsets.h
boot: lib/lib.a vmlinux
- $(call makeboot,$@)
+ $(Q)$(MAKE) $(build)=$(boot) $@
include/asm-ia64/offsets.h: include/asm include/linux/version.h include/config/MARKER
- $(call maketool,$@)
+ $(Q)$(MAKE) $(build)=$(tools) $@
+
+define archhelp
+ echo ' compressed - Build compressed kernel image'
+ echo ' boot - Build vmlinux and bootloader for Ski simulator'
+endef
diff -Nru a/arch/ia64/hp/zx1/hpzx1_misc.c b/arch/ia64/hp/zx1/hpzx1_misc.c
--- a/arch/ia64/hp/zx1/hpzx1_misc.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/hp/zx1/hpzx1_misc.c Fri Jan 24 20:41:05 2003
@@ -1,9 +1,9 @@
/*
* Misc. support for HP zx1 chipset support
*
- * Copyright (C) 2002 Hewlett-Packard Co
- * Copyright (C) 2002 Alex Williamson <alex_williamson@hp.com>
- * Copyright (C) 2002 Bjorn Helgaas <bjorn_helgaas@hp.com>
+ * Copyright (C) 2002-2003 Hewlett-Packard Co
+ * Alex Williamson <alex_williamson@hp.com>
+ * Bjorn Helgaas <bjorn_helgaas@hp.com>
*/
@@ -17,7 +17,7 @@
#include <asm/dma.h>
#include <asm/iosapic.h>
-extern acpi_status acpi_evaluate_integer (acpi_handle, acpi_string, acpi_object_list *,
+extern acpi_status acpi_evaluate_integer (acpi_handle, acpi_string, struct acpi_object_list *,
unsigned long *);
#define PFX "hpzx1: "
@@ -190,31 +190,31 @@
hpzx1_devices++;
}
-typedef struct {
+struct acpi_hp_vendor_long {
u8 guid_id;
u8 guid[16];
u8 csr_base[8];
u8 csr_length[8];
-} acpi_hp_vendor_long;
+};
#define HP_CCSR_LENGTH 0x21
#define HP_CCSR_TYPE 0x2
#define HP_CCSR_GUID EFI_GUID(0x69e9adf9, 0x924f, 0xab5f, \
0xf6, 0x4a, 0x24, 0xd2, 0x01, 0x37, 0x0e, 0xad)
-extern acpi_status acpi_get_crs(acpi_handle, acpi_buffer *);
-extern acpi_resource *acpi_get_crs_next(acpi_buffer *, int *);
-extern acpi_resource_data *acpi_get_crs_type(acpi_buffer *, int *, int);
-extern void acpi_dispose_crs(acpi_buffer *);
+extern acpi_status acpi_get_crs(acpi_handle, struct acpi_buffer *);
+extern struct acpi_resource *acpi_get_crs_next(struct acpi_buffer *, int *);
+extern union acpi_resource_data *acpi_get_crs_type(struct acpi_buffer *, int *, int);
+extern void acpi_dispose_crs(struct acpi_buffer *);
static acpi_status
hp_csr_space(acpi_handle obj, u64 *csr_base, u64 *csr_length)
{
int i, offset = 0;
acpi_status status;
- acpi_buffer buf;
- acpi_resource_vendor *res;
- acpi_hp_vendor_long *hp_res;
+ struct acpi_buffer buf;
+ struct acpi_resource_vendor *res;
+ struct acpi_hp_vendor_long *hp_res;
efi_guid_t vendor_guid;
*csr_base = 0;
@@ -226,14 +226,14 @@
return status;
}
- res = (acpi_resource_vendor *)acpi_get_crs_type(&buf, &offset, ACPI_RSTYPE_VENDOR);
+ res = (struct acpi_resource_vendor *)acpi_get_crs_type(&buf, &offset, ACPI_RSTYPE_VENDOR);
if (!res) {
printk(KERN_ERR PFX "Failed to find config space for device\n");
acpi_dispose_crs(&buf);
return AE_NOT_FOUND;
}
- hp_res = (acpi_hp_vendor_long *)(res->reserved);
+ hp_res = (struct acpi_hp_vendor_long *)(res->reserved);
if (res->length != HP_CCSR_LENGTH || hp_res->guid_id != HP_CCSR_TYPE) {
printk(KERN_ERR PFX "Unknown Vendor data\n");
@@ -288,7 +288,7 @@
{
u64 csr_base = 0, csr_length = 0;
acpi_status status;
- NATIVE_UINT busnum;
+ acpi_native_uint busnum;
char *name = context;
char fullname[32];
diff -Nru a/arch/ia64/ia32/binfmt_elf32.c b/arch/ia64/ia32/binfmt_elf32.c
--- a/arch/ia64/ia32/binfmt_elf32.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/ia32/binfmt_elf32.c Fri Jan 24 20:41:05 2003
@@ -44,7 +44,6 @@
static void elf32_set_personality (void);
-#define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
#define setup_arg_pages(bprm) ia32_setup_arg_pages(bprm)
#define elf_map elf32_map
diff -Nru a/arch/ia64/ia32/ia32_entry.S b/arch/ia64/ia32/ia32_entry.S
--- a/arch/ia64/ia32/ia32_entry.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/ia32/ia32_entry.S Fri Jan 24 20:41:05 2003
@@ -95,12 +95,19 @@
GLOBAL_ENTRY(ia32_ret_from_clone)
PT_REGS_UNWIND_INFO(0)
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+{ /*
+ * Some versions of gas generate bad unwind info if the first instruction of a
+ * procedure doesn't go into the first slot of a bundle. This is a workaround.
+ */
+ nop.m 0
+ nop.i 0
/*
* We need to call schedule_tail() to complete the scheduling process.
* Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
* address of the previously executing task.
*/
br.call.sptk.many rp=ia64_invoke_schedule_tail
+}
.ret1:
#endif
adds r2=TI_FLAGS+IA64_TASK_SIZE,r13
@@ -264,7 +271,7 @@
data8 sys_setreuid /* 16-bit version */ /* 70 */
data8 sys_setregid /* 16-bit version */
data8 sys32_sigsuspend
- data8 sys32_sigpending
+ data8 compat_sys_sigpending
data8 sys_sethostname
data8 sys32_setrlimit /* 75 */
data8 sys32_old_getrlimit
@@ -290,8 +297,8 @@
data8 sys_getpriority
data8 sys_setpriority
data8 sys32_ni_syscall /* old profil syscall holder */
- data8 sys32_statfs
- data8 sys32_fstatfs /* 100 */
+ data8 compat_sys_statfs
+ data8 compat_sys_fstatfs /* 100 */
data8 sys32_ioperm
data8 sys32_socketcall
data8 sys_syslog
@@ -317,7 +324,7 @@
data8 sys32_modify_ldt
data8 sys32_ni_syscall /* adjtimex */
data8 sys32_mprotect /* 125 */
- data8 sys32_sigprocmask
+ data8 compat_sys_sigprocmask
data8 sys32_ni_syscall /* create_module */
data8 sys32_ni_syscall /* init_module */
data8 sys32_ni_syscall /* delete_module */
diff -Nru a/arch/ia64/ia32/ia32_signal.c b/arch/ia64/ia32/ia32_signal.c
--- a/arch/ia64/ia32/ia32_signal.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/ia32/ia32_signal.c Fri Jan 24 20:41:05 2003
@@ -56,7 +56,7 @@
int sig;
struct sigcontext_ia32 sc;
struct _fpstate_ia32 fpstate;
- unsigned int extramask[_IA32_NSIG_WORDS-1];
+ unsigned int extramask[_COMPAT_NSIG_WORDS-1];
char retcode[8];
};
@@ -463,7 +463,7 @@
}
asmlinkage long
-ia32_rt_sigsuspend (sigset32_t *uset, unsigned int sigsetsize, struct sigscratch *scr)
+ia32_rt_sigsuspend (compat_sigset_t *uset, unsigned int sigsetsize, struct sigscratch *scr)
{
extern long ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall);
sigset_t oldset, set;
@@ -504,7 +504,7 @@
asmlinkage long
ia32_sigsuspend (unsigned int mask, struct sigscratch *scr)
{
- return ia32_rt_sigsuspend((sigset32_t *)&mask, sizeof(mask), scr);
+ return ia32_rt_sigsuspend((compat_sigset_t *)&mask, sizeof(mask), scr);
}
asmlinkage long
@@ -530,14 +530,14 @@
int ret;
/* XXX: Don't preclude handling different sized sigset_t's. */
- if (sigsetsize != sizeof(sigset32_t))
+ if (sigsetsize != sizeof(compat_sigset_t))
return -EINVAL;
if (act) {
ret = get_user(handler, &act->sa_handler);
ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
ret |= get_user(restorer, &act->sa_restorer);
- ret |= copy_from_user(&new_ka.sa.sa_mask, &act->sa_mask, sizeof(sigset32_t));
+ ret |= copy_from_user(&new_ka.sa.sa_mask, &act->sa_mask, sizeof(compat_sigset_t));
if (ret)
return -EFAULT;
@@ -550,7 +550,7 @@
ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
- ret |= copy_to_user(&oact->sa_mask, &old_ka.sa.sa_mask, sizeof(sigset32_t));
+ ret |= copy_to_user(&oact->sa_mask, &old_ka.sa.sa_mask, sizeof(compat_sigset_t));
}
return ret;
}
@@ -560,7 +560,7 @@
size_t sigsetsize);
asmlinkage long
-sys32_rt_sigprocmask (int how, sigset32_t *set, sigset32_t *oset, unsigned int sigsetsize)
+sys32_rt_sigprocmask (int how, compat_sigset_t *set, compat_sigset_t *oset, unsigned int sigsetsize)
{
mm_segment_t old_fs = get_fs();
sigset_t s;
@@ -587,13 +587,7 @@
}
asmlinkage long
-sys32_sigprocmask (int how, unsigned int *set, unsigned int *oset)
-{
- return sys32_rt_sigprocmask(how, (sigset32_t *) set, (sigset32_t *) oset, sizeof(*set));
-}
-
-asmlinkage long
-sys32_rt_sigtimedwait (sigset32_t *uthese, siginfo_t32 *uinfo,
+sys32_rt_sigtimedwait (compat_sigset_t *uthese, siginfo_t32 *uinfo,
struct compat_timespec *uts, unsigned int sigsetsize)
{
extern asmlinkage long sys_rt_sigtimedwait (const sigset_t *, siginfo_t *,
@@ -605,16 +599,13 @@
sigset_t s;
int ret;
- if (copy_from_user(&s.sig, uthese, sizeof(sigset32_t)))
+ if (copy_from_user(&s.sig, uthese, sizeof(compat_sigset_t)))
+ return -EFAULT;
+ if (uts && get_compat_timespec(&t, uts))
return -EFAULT;
- if (uts) {
- ret = get_user(t.tv_sec, &uts->tv_sec);
- ret |= get_user(t.tv_nsec, &uts->tv_nsec);
- if (ret)
- return -EFAULT;
- }
set_fs(KERNEL_DS);
- ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
+ ret = sys_rt_sigtimedwait(&s, uinfo ? &info : NULL, uts ? &t : NULL,
+ sigsetsize);
set_fs(old_fs);
if (ret >= 0 && uinfo) {
if (copy_siginfo_to_user32(uinfo, &info))
@@ -648,7 +639,7 @@
int ret;
if (act) {
- old_sigset32_t mask;
+ compat_old_sigset_t mask;
ret = get_user(handler, &act->sa_handler);
ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
@@ -866,7 +857,7 @@
err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
- if (_IA32_NSIG_WORDS > 1)
+ if (_COMPAT_NSIG_WORDS > 1)
err |= __copy_to_user(frame->extramask, (char *) &set->sig + 4,
sizeof(frame->extramask));
@@ -1011,7 +1002,7 @@
goto badframe;
if (__get_user(set.sig[0], &frame->sc.oldmask)
- || (_IA32_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
+ || (_COMPAT_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
sizeof(frame->extramask))))
goto badframe;
diff -Nru a/arch/ia64/ia32/ia32_support.c b/arch/ia64/ia32/ia32_support.c
--- a/arch/ia64/ia32/ia32_support.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/ia32/ia32_support.c Fri Jan 24 20:41:05 2003
@@ -95,8 +95,6 @@
struct pt_regs *regs = ia64_task_regs(t);
int nr = smp_processor_id(); /* LDT and TSS depend on CPU number: */
- nr = smp_processor_id();
-
eflag = t->thread.eflag;
fsr = t->thread.fsr;
fcr = t->thread.fcr;
diff -Nru a/arch/ia64/ia32/sys_ia32.c b/arch/ia64/ia32/sys_ia32.c
--- a/arch/ia64/ia32/sys_ia32.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/ia32/sys_ia32.c Fri Jan 24 20:41:05 2003
@@ -6,7 +6,7 @@
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
- * Copyright (C) 2000-2002 Hewlett-Packard Co
+ * Copyright (C) 2000-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* These routines maintain argument size conversion between 32bit and 64bit
@@ -609,61 +609,6 @@
return retval;
}
-static inline int
-put_statfs (struct statfs32 *ubuf, struct statfs *kbuf)
-{
- int err;
-
- if (!access_ok(VERIFY_WRITE, ubuf, sizeof(*ubuf)))
- return -EFAULT;
-
- err = __put_user(kbuf->f_type, &ubuf->f_type);
- err |= __put_user(kbuf->f_bsize, &ubuf->f_bsize);
- err |= __put_user(kbuf->f_blocks, &ubuf->f_blocks);
- err |= __put_user(kbuf->f_bfree, &ubuf->f_bfree);
- err |= __put_user(kbuf->f_bavail, &ubuf->f_bavail);
- err |= __put_user(kbuf->f_files, &ubuf->f_files);
- err |= __put_user(kbuf->f_ffree, &ubuf->f_ffree);
- err |= __put_user(kbuf->f_namelen, &ubuf->f_namelen);
- err |= __put_user(kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
- err |= __put_user(kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
- return err;
-}
-
-extern asmlinkage long sys_statfs(const char * path, struct statfs * buf);
-
-asmlinkage long
-sys32_statfs (const char *path, struct statfs32 *buf)
-{
- int ret;
- struct statfs s;
- mm_segment_t old_fs = get_fs();
-
- set_fs(KERNEL_DS);
- ret = sys_statfs(path, &s);
- set_fs(old_fs);
- if (put_statfs(buf, &s))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_fstatfs(unsigned int fd, struct statfs * buf);
-
-asmlinkage long
-sys32_fstatfs (unsigned int fd, struct statfs32 *buf)
-{
- int ret;
- struct statfs s;
- mm_segment_t old_fs = get_fs();
-
- set_fs(KERNEL_DS);
- ret = sys_fstatfs(fd, &s);
- set_fs(old_fs);
- if (put_statfs(buf, &s))
- return -EFAULT;
- return ret;
-}
-
static inline long
get_tv32 (struct timeval *o, struct compat_timeval *i)
{
@@ -1849,10 +1794,10 @@
struct ipc64_perm32 {
key_t key;
- __kernel_uid32_t32 uid;
- __kernel_gid32_t32 gid;
- __kernel_uid32_t32 cuid;
- __kernel_gid32_t32 cgid;
+ compat_uid32_t uid;
+ compat_gid32_t gid;
+ compat_uid32_t cuid;
+ compat_gid32_t cgid;
compat_mode_t mode;
unsigned short __pad1;
unsigned short seq;
@@ -1895,8 +1840,8 @@
unsigned short msg_cbytes;
unsigned short msg_qnum;
unsigned short msg_qbytes;
- __kernel_ipc_pid_t32 msg_lspid;
- __kernel_ipc_pid_t32 msg_lrpid;
+ compat_ipc_pid_t msg_lspid;
+ compat_ipc_pid_t msg_lrpid;
};
struct msqid64_ds32 {
@@ -1922,8 +1867,8 @@
compat_time_t shm_atime;
compat_time_t shm_dtime;
compat_time_t shm_ctime;
- __kernel_ipc_pid_t32 shm_cpid;
- __kernel_ipc_pid_t32 shm_lpid;
+ compat_ipc_pid_t shm_cpid;
+ compat_ipc_pid_t shm_lpid;
unsigned short shm_nattch;
};
@@ -2011,6 +1956,10 @@
else
fourth.__pad = (void *)A(pad);
switch (third) {
+ default:
+ err = -EINVAL;
+ break;
+
case IPC_INFO:
case IPC_RMID:
case IPC_SET:
@@ -2399,7 +2348,7 @@
static long
semtimedop32(int semid, struct sembuf *tsems, int nsems,
- const struct timespec32 *timeout32)
+ const struct compat_timespec *timeout32)
{
struct timespec t;
if (get_user (t.tv_sec, &timeout32->tv_sec) ||
@@ -2422,7 +2371,7 @@
return sys_semtimedop(first, (struct sembuf *)AA(ptr), second, NULL);
case SEMTIMEDOP:
return semtimedop32(first, (struct sembuf *)AA(ptr), second,
- (const struct timespec32 *)AA(fifth));
+ (const struct compat_timespec *)AA(fifth));
case SEMGET:
return sys_semget(first, second, third);
case SEMCTL:
@@ -3475,12 +3424,6 @@
return ret;
}
-asmlinkage long
-sys32_sigpending (unsigned int *set)
-{
- return do_sigpending(set, sizeof(*set));
-}
-
struct sysinfo32 {
s32 uptime;
u32 loads[3];
@@ -3536,7 +3479,7 @@
set_fs(KERNEL_DS);
ret = sys_sched_rr_get_interval(pid, &t);
set_fs(old_fs);
- if (put_user (t.tv_sec, &interval->tv_sec) || put_user (t.tv_nsec, &interval->tv_nsec))
+ if (put_compat_timespec(&t, interval))
return -EFAULT;
return ret;
}
diff -Nru a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile
--- a/arch/ia64/kernel/Makefile Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/Makefile Fri Jan 24 20:41:05 2003
@@ -12,6 +12,7 @@
semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
+obj-$(CONFIG_FSYS) += fsys.o
obj-$(CONFIG_IOSAPIC) += iosapic.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_EFI_VARS) += efivars.o
diff -Nru a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
--- a/arch/ia64/kernel/acpi.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/acpi.c Fri Jan 24 20:41:05 2003
@@ -128,7 +128,7 @@
* with a list of acpi_resource structures.
*/
acpi_status
-acpi_get_crs (acpi_handle obj, acpi_buffer *buf)
+acpi_get_crs (acpi_handle obj, struct acpi_buffer *buf)
{
acpi_status result;
buf->length = 0;
@@ -144,10 +144,10 @@
return acpi_get_current_resources(obj, buf);
}
-acpi_resource *
-acpi_get_crs_next (acpi_buffer *buf, int *offset)
+struct acpi_resource *
+acpi_get_crs_next (struct acpi_buffer *buf, int *offset)
{
- acpi_resource *res;
+ struct acpi_resource *res;
if (*offset >= buf->length)
return NULL;
@@ -157,11 +157,11 @@
return res;
}
-acpi_resource_data *
-acpi_get_crs_type (acpi_buffer *buf, int *offset, int type)
+union acpi_resource_data *
+acpi_get_crs_type (struct acpi_buffer *buf, int *offset, int type)
{
for (;;) {
- acpi_resource *res = acpi_get_crs_next(buf, offset);
+ struct acpi_resource *res = acpi_get_crs_next(buf, offset);
if (!res)
return NULL;
if (res->id == type)
@@ -170,7 +170,7 @@
}
void
-acpi_dispose_crs (acpi_buffer *buf)
+acpi_dispose_crs (struct acpi_buffer *buf)
{
kfree(buf->pointer);
}
@@ -638,7 +638,7 @@
acpi_parse_fadt (unsigned long phys_addr, unsigned long size)
{
struct acpi_table_header *fadt_header;
- fadt_descriptor_rev2 *fadt;
+ struct fadt_descriptor_rev2 *fadt;
u32 sci_irq, gsi_base;
char *iosapic_address;
@@ -649,7 +649,7 @@
if (fadt_header->revision != 3)
return -ENODEV; /* Only deal with ACPI 2.0 FADT */
- fadt = (fadt_descriptor_rev2 *) fadt_header;
+ fadt = (struct fadt_descriptor_rev2 *) fadt_header;
if (!(fadt->iapc_boot_arch & BAF_8042_KEYBOARD_CONTROLLER))
acpi_kbd_controller_present = 0;
@@ -886,6 +886,28 @@
return isa_irq_to_vector(irq);
return gsi_to_vector(irq);
+}
+
+int __init
+acpi_register_irq (u32 gsi, u32 polarity, u32 trigger)
+{
+ int vector = 0;
+ u32 irq_base;
+ char *iosapic_address;
+
+ if (acpi_madt->flags.pcat_compat && (gsi < 16))
+ return isa_irq_to_vector(gsi);
+
+ if (!iosapic_register_intr)
+ return 0;
+
+ /* Find the IOSAPIC */
+ if (!acpi_find_iosapic(gsi, &irq_base, &iosapic_address)) {
+ /* Turn it on */
+ vector = iosapic_register_intr (gsi, polarity, trigger,
+ irq_base, iosapic_address);
+ }
+ return vector;
}
#endif /* CONFIG_ACPI_BOOT */
diff -Nru a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
--- a/arch/ia64/kernel/efi.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/efi.c Fri Jan 24 20:41:05 2003
@@ -33,15 +33,6 @@
#define EFI_DEBUG 0
-#ifdef CONFIG_HUGETLB_PAGE
-
-/* By default at total of 512MB is reserved huge pages. */
-#define HTLBZONE_SIZE_DEFAULT 0x20000000
-
-unsigned long htlbzone_pages = (HTLBZONE_SIZE_DEFAULT >> HPAGE_SHIFT);
-
-#endif
-
extern efi_status_t efi_call_phys (void *, ...);
struct efi efi;
@@ -497,25 +488,6 @@
++cp;
}
}
-#ifdef CONFIG_HUGETLB_PAGE
- /* Just duplicating the above algo for lpzone start */
- for (cp = saved_command_line; *cp; ) {
- if (memcmp(cp, "lpmem=", 6) == 0) {
- cp += 6;
- htlbzone_pages = memparse(cp, &end);
- htlbzone_pages = (htlbzone_pages >> HPAGE_SHIFT);
- if (end != cp)
- break;
- cp = end;
- } else {
- while (*cp != ' ' && *cp)
- ++cp;
- while (*cp == ' ')
- ++cp;
- }
- }
- printk("Total HugeTLB_Page memory pages requested 0x%lx \n", htlbzone_pages);
-#endif
if (mem_limit != ~0UL)
printk("Ignoring memory above %luMB\n", mem_limit >> 20);
diff -Nru a/arch/ia64/kernel/entry.S b/arch/ia64/kernel/entry.S
--- a/arch/ia64/kernel/entry.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/entry.S Fri Jan 24 20:41:05 2003
@@ -3,7 +3,7 @@
*
* Kernel entry points.
*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
@@ -22,8 +22,8 @@
/*
* Global (preserved) predicate usage on syscall entry/exit path:
*
- * pKern: See entry.h.
- * pUser: See entry.h.
+ * pKStk: See entry.h.
+ * pUStk: See entry.h.
* pSys: See entry.h.
* pNonSys: !pSys
*/
@@ -63,7 +63,7 @@
sxt4 r8=r8 // return 64-bit result
;;
stf.spill [sp]=f0
-(p6) cmp.ne pKern,pUser=r0,r0 // a successful execve() lands us in user-mode...
+(p6) cmp.ne pKStk,pUStk=r0,r0 // a successful execve() lands us in user-mode...
mov rp=loc0
(p6) mov ar.pfs=r0 // clear ar.pfs on success
(p7) br.ret.sptk.many rp
@@ -193,7 +193,7 @@
;;
(p6) srlz.d
ld8 sp=[r21] // load kernel stack pointer of new task
- mov IA64_KR(CURRENT)=r20 // update "current" application register
+ mov IA64_KR(CURRENT)=in0 // update "current" application register
mov r8=r13 // return pointer to previously running task
mov r13=in0 // set "current" pointer
;;
@@ -507,7 +507,14 @@
GLOBAL_ENTRY(ia64_trace_syscall)
PT_REGS_UNWIND_INFO(0)
+{ /*
+ * Some versions of gas generate bad unwind info if the first instruction of a
+ * procedure doesn't go into the first slot of a bundle. This is a workaround.
+ */
+ nop.m 0
+ nop.i 0
br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch syscall args
+}
.ret6: br.call.sptk.many rp=b6 // do the syscall
strace_check_retval:
cmp.lt p6,p0=r8,r0 // syscall failed?
@@ -537,12 +544,19 @@
GLOBAL_ENTRY(ia64_ret_from_clone)
PT_REGS_UNWIND_INFO(0)
+{ /*
+ * Some versions of gas generate bad unwind info if the first instruction of a
+ * procedure doesn't go into the first slot of a bundle. This is a workaround.
+ */
+ nop.m 0
+ nop.i 0
/*
* We need to call schedule_tail() to complete the scheduling process.
* Called by ia64_switch_to() after do_fork()->copy_thread(). r8 contains the
* address of the previously executing task.
*/
br.call.sptk.many rp=ia64_invoke_schedule_tail
+}
.ret8:
adds r2=TI_FLAGS+IA64_TASK_SIZE,r13
;;
@@ -569,11 +583,12 @@
// fall through
GLOBAL_ENTRY(ia64_leave_kernel)
PT_REGS_UNWIND_INFO(0)
- // work.need_resched etc. mustn't get changed by this CPU before it returns to userspace:
-(pUser) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUser
-(pUser) rsm psr.i
+ // work.need_resched etc. mustn't get changed by this CPU before it returns to
+ // user- or fsys-mode:
+(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
+(pUStk) rsm psr.i
;;
-(pUser) adds r17=TI_FLAGS+IA64_TASK_SIZE,r13
+(pUStk) adds r17=TI_FLAGS+IA64_TASK_SIZE,r13
;;
.work_processed:
(p6) ld4 r18=[r17] // load current_thread_info()->flags
@@ -635,9 +650,9 @@
;;
srlz.i // ensure interruption collection is off
mov b7=r15
+ bsw.0 // switch back to bank 0 (no stop bit required beforehand...)
;;
- bsw.0 // switch back to bank 0
- ;;
+(pUStk) mov r18=IA64_KR(CURRENT) // Itanium 2: 12 cycle read latency
adds r16=16,r12
adds r17=24,r12
;;
@@ -665,16 +680,21 @@
;;
ld8.fill r12=[r16],16
ld8.fill r13=[r17],16
+(pUStk) adds r18=IA64_TASK_THREAD_ON_USTACK_OFFSET,r18
;;
ld8.fill r14=[r16]
ld8.fill r15=[r17]
+(pUStk) mov r17=1
+ ;;
+(pUStk) st1 [r18]=r17 // restore current->thread.on_ustack
shr.u r18=r19,16 // get byte size of existing "dirty" partition
;;
mov r16=ar.bsp // get existing backing store pointer
movl r17=THIS_CPU(ia64_phys_stacked_size_p8)
;;
ld4 r17=[r17] // r17 = cpu_data->phys_stacked_size_p8
-(pKern) br.cond.dpnt skip_rbs_switch
+(pKStk) br.cond.dpnt skip_rbs_switch
+
/*
* Restore user backing store.
*
@@ -710,21 +730,9 @@
shr.u loc1=r18,9 // RNaTslots <= dirtySize / (64*8) + 1
sub r17=r17,r18 // r17 = (physStackedSize + 8) - dirtySize
;;
-#if 1
- .align 32 // see comment below about gas bug...
-#endif
mov ar.rsc=r19 // load ar.rsc to be used for "loadrs"
shladd in0=loc1,3,r17
mov in1=0
-#if 0
- // gas-2.12.90 is unable to generate a stop bit after .align, which is bad,
- // because alloc must be at the beginning of an insn-group.
- .align 32
-#else
- nop 0
- nop 0
- nop 0
-#endif
;;
rse_clear_invalid:
#ifdef CONFIG_ITANIUM
@@ -788,12 +796,12 @@
skip_rbs_switch:
mov b6=rB6
mov ar.pfs=rARPFS
-(pUser) mov ar.bspstore=rARBSPSTORE
+(pUStk) mov ar.bspstore=rARBSPSTORE
(p9) mov cr.ifs=rCRIFS
mov cr.ipsr=rCRIPSR
mov cr.iip=rCRIIP
;;
-(pUser) mov ar.rnat=rARRNAT // must happen with RSE in lazy mode
+(pUStk) mov ar.rnat=rARRNAT // must happen with RSE in lazy mode
mov ar.rsc=rARRSC
mov ar.unat=rARUNAT
mov pr=rARPR,-1
@@ -963,17 +971,16 @@
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
- //
- // r16 = fake ar.pfs, we simply need to make sure
- // privilege is still 0
- //
- mov r16=r0
.prologue
+ /*
+ * r16 = fake ar.pfs, we simply need to make sure privilege is still 0
+ */
+ mov r16=r0
DO_SAVE_SWITCH_STACK
- br.call.sptk.many rp=ia64_handle_unaligned // stack frame setup in ivt
+ br.call.sptk.many rp=ia64_handle_unaligned // stack frame setup in ivt
.ret21: .body
DO_LOAD_SWITCH_STACK
- br.cond.sptk.many rp // goes to ia64_leave_kernel
+ br.cond.sptk.many rp // goes to ia64_leave_kernel
END(ia64_prepare_handle_unaligned)
//
@@ -1235,8 +1242,8 @@
data8 sys_sched_setaffinity
data8 sys_sched_getaffinity
data8 sys_set_tid_address
- data8 ia64_ni_syscall // available. (was sys_alloc_hugepages)
- data8 ia64_ni_syscall // available (was sys_free_hugepages)
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1235
data8 sys_exit_group
data8 sys_lookup_dcookie
data8 sys_io_setup
diff -Nru a/arch/ia64/kernel/entry.h b/arch/ia64/kernel/entry.h
--- a/arch/ia64/kernel/entry.h Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/entry.h Fri Jan 24 20:41:05 2003
@@ -4,8 +4,8 @@
* Preserved registers that are shared between code in ivt.S and entry.S. Be
* careful not to step on these!
*/
-#define pKern p2 /* will leave_kernel return to kernel-mode? */
-#define pUser p3 /* will leave_kernel return to user-mode? */
+#define pKStk p2 /* will leave_kernel return to kernel-stacks? */
+#define pUStk p3 /* will leave_kernel return to user-stacks? */
#define pSys p4 /* are we processing a (synchronous) system call? */
#define pNonSys p5 /* complement of pSys */
diff -Nru a/arch/ia64/kernel/fsys.S b/arch/ia64/kernel/fsys.S
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/kernel/fsys.S Fri Jan 24 20:41:06 2003
@@ -0,0 +1,339 @@
+/*
+ * This file contains the light-weight system call handlers (fsyscall-handlers).
+ *
+ * Copyright (C) 2003 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/asmmacro.h>
+#include <asm/errno.h>
+#include <asm/offsets.h>
+#include <asm/thread_info.h>
+
+/*
+ * See Documentation/ia64/fsys.txt for details on fsyscalls.
+ *
+ * On entry to an fsyscall handler:
+ * r10 = 0 (i.e., defaults to "successful syscall return")
+ * r11 = saved ar.pfs (a user-level value)
+ * r15 = system call number
+ * r16 = "current" task pointer (in normal kernel-mode, this is in r13)
+ * r32-r39 = system call arguments
+ * b6 = return address (a user-level value)
+ * ar.pfs = previous frame-state (a user-level value)
+ * PSR.be = cleared to zero (i.e., little-endian byte order is in effect)
+ * all other registers may contain values passed in from user-mode
+ *
+ * On return from an fsyscall handler:
+ * r11 = saved ar.pfs (as passed into the fsyscall handler)
+ * r15 = system call number (as passed into the fsyscall handler)
+ * r32-r39 = system call arguments (as passed into the fsyscall handler)
+ * b6 = return address (as passed into the fsyscall handler)
+ * ar.pfs = previous frame-state (as passed into the fsyscall handler)
+ */
+
+ENTRY(fsys_ni_syscall)
+ mov r8=ENOSYS
+ mov r10=-1
+ MCKINLEY_E9_WORKAROUND
+ br.ret.sptk.many b6
+END(fsys_ni_syscall)
+
+ENTRY(fsys_getpid)
+ add r9=TI_FLAGS+IA64_TASK_SIZE,r16
+ ;;
+ ld4 r9=[r9]
+ add r8=IA64_TASK_TGID_OFFSET,r16
+ ;;
+ and r9=TIF_ALLWORK_MASK,r9
+ ld4 r8=[r8]
+ ;;
+ cmp.ne p8,p0=0,r9
+(p8) br.spnt.many fsys_fallback_syscall
+ MCKINLEY_E9_WORKAROUND
+ br.ret.sptk.many b6
+END(fsys_getpid)
+
+ENTRY(fsys_set_tid_address)
+ add r9=TI_FLAGS+IA64_TASK_SIZE,r16
+ ;;
+ ld4 r9=[r9]
+ tnat.z p6,p7=r32 // check argument register for being NaT
+ ;;
+ and r9=TIF_ALLWORK_MASK,r9
+ add r8=IA64_TASK_PID_OFFSET,r16
+ add r18=IA64_TASK_CLEAR_CHILD_TID_OFFSET,r16
+ ;;
+ ld4 r8=[r8]
+ cmp.ne p8,p0=0,r9
+ mov r17=-1
+ ;;
+(p6) st8 [r18]=r32
+(p7) st8 [r18]=r17
+(p8) br.spnt.many fsys_fallback_syscall
+ ;;
+ mov r17=0 // don't leak kernel bits...
+ mov r18=0 // don't leak kernel bits...
+ MCKINLEY_E9_WORKAROUND
+ br.ret.sptk.many b6
+END(fsys_set_tid_address)
+
+ .rodata
+ .align 8
+ .globl fsyscall_table
+fsyscall_table:
+ data8 fsys_ni_syscall
+ data8 fsys_fallback_syscall // exit // 1025
+ data8 fsys_fallback_syscall // read
+ data8 fsys_fallback_syscall // write
+ data8 fsys_fallback_syscall // open
+ data8 fsys_fallback_syscall // close
+ data8 fsys_fallback_syscall // creat // 1030
+ data8 fsys_fallback_syscall // link
+ data8 fsys_fallback_syscall // unlink
+ data8 fsys_fallback_syscall // execve
+ data8 fsys_fallback_syscall // chdir
+ data8 fsys_fallback_syscall // fchdir // 1035
+ data8 fsys_fallback_syscall // utimes
+ data8 fsys_fallback_syscall // mknod
+ data8 fsys_fallback_syscall // chmod
+ data8 fsys_fallback_syscall // chown
+ data8 fsys_fallback_syscall // lseek // 1040
+ data8 fsys_getpid
+ data8 fsys_fallback_syscall // getppid
+ data8 fsys_fallback_syscall // mount
+ data8 fsys_fallback_syscall // umount
+ data8 fsys_fallback_syscall // setuid // 1045
+ data8 fsys_fallback_syscall // getuid
+ data8 fsys_fallback_syscall // geteuid
+ data8 fsys_fallback_syscall // ptrace
+ data8 fsys_fallback_syscall // access
+ data8 fsys_fallback_syscall // sync // 1050
+ data8 fsys_fallback_syscall // fsync
+ data8 fsys_fallback_syscall // fdatasync
+ data8 fsys_fallback_syscall // kill
+ data8 fsys_fallback_syscall // rename
+ data8 fsys_fallback_syscall // mkdir // 1055
+ data8 fsys_fallback_syscall // rmdir
+ data8 fsys_fallback_syscall // dup
+ data8 fsys_fallback_syscall // pipe
+ data8 fsys_fallback_syscall // times
+ data8 fsys_fallback_syscall // brk // 1060
+ data8 fsys_fallback_syscall // setgid
+ data8 fsys_fallback_syscall // getgid
+ data8 fsys_fallback_syscall // getegid
+ data8 fsys_fallback_syscall // acct
+ data8 fsys_fallback_syscall // ioctl // 1065
+ data8 fsys_fallback_syscall // fcntl
+ data8 fsys_fallback_syscall // umask
+ data8 fsys_fallback_syscall // chroot
+ data8 fsys_fallback_syscall // ustat
+ data8 fsys_fallback_syscall // dup2 // 1070
+ data8 fsys_fallback_syscall // setreuid
+ data8 fsys_fallback_syscall // setregid
+ data8 fsys_fallback_syscall // getresuid
+ data8 fsys_fallback_syscall // setresuid
+ data8 fsys_fallback_syscall // getresgid // 1075
+ data8 fsys_fallback_syscall // setresgid
+ data8 fsys_fallback_syscall // getgroups
+ data8 fsys_fallback_syscall // setgroups
+ data8 fsys_fallback_syscall // getpgid
+ data8 fsys_fallback_syscall // setpgid // 1080
+ data8 fsys_fallback_syscall // setsid
+ data8 fsys_fallback_syscall // getsid
+ data8 fsys_fallback_syscall // sethostname
+ data8 fsys_fallback_syscall // setrlimit
+ data8 fsys_fallback_syscall // getrlimit // 1085
+ data8 fsys_fallback_syscall // getrusage
+ data8 fsys_fallback_syscall // gettimeofday
+ data8 fsys_fallback_syscall // settimeofday
+ data8 fsys_fallback_syscall // select
+ data8 fsys_fallback_syscall // poll // 1090
+ data8 fsys_fallback_syscall // symlink
+ data8 fsys_fallback_syscall // readlink
+ data8 fsys_fallback_syscall // uselib
+ data8 fsys_fallback_syscall // swapon
+ data8 fsys_fallback_syscall // swapoff // 1095
+ data8 fsys_fallback_syscall // reboot
+ data8 fsys_fallback_syscall // truncate
+ data8 fsys_fallback_syscall // ftruncate
+ data8 fsys_fallback_syscall // fchmod
+ data8 fsys_fallback_syscall // fchown // 1100
+ data8 fsys_fallback_syscall // getpriority
+ data8 fsys_fallback_syscall // setpriority
+ data8 fsys_fallback_syscall // statfs
+ data8 fsys_fallback_syscall // fstatfs
+ data8 fsys_fallback_syscall // gettid // 1105
+ data8 fsys_fallback_syscall // semget
+ data8 fsys_fallback_syscall // semop
+ data8 fsys_fallback_syscall // semctl
+ data8 fsys_fallback_syscall // msgget
+ data8 fsys_fallback_syscall // msgsnd // 1110
+ data8 fsys_fallback_syscall // msgrcv
+ data8 fsys_fallback_syscall // msgctl
+ data8 fsys_fallback_syscall // shmget
+ data8 fsys_fallback_syscall // shmat
+ data8 fsys_fallback_syscall // shmdt // 1115
+ data8 fsys_fallback_syscall // shmctl
+ data8 fsys_fallback_syscall // syslog
+ data8 fsys_fallback_syscall // setitimer
+ data8 fsys_fallback_syscall // getitimer
+ data8 fsys_fallback_syscall // 1120
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // vhangup
+ data8 fsys_fallback_syscall // lchown
+ data8 fsys_fallback_syscall // remap_file_pages // 1125
+ data8 fsys_fallback_syscall // wait4
+ data8 fsys_fallback_syscall // sysinfo
+ data8 fsys_fallback_syscall // clone
+ data8 fsys_fallback_syscall // setdomainname
+ data8 fsys_fallback_syscall // newuname // 1130
+ data8 fsys_fallback_syscall // adjtimex
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // init_module
+ data8 fsys_fallback_syscall // delete_module
+ data8 fsys_fallback_syscall // 1135
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // quotactl
+ data8 fsys_fallback_syscall // bdflush
+ data8 fsys_fallback_syscall // sysfs
+ data8 fsys_fallback_syscall // personality // 1140
+ data8 fsys_fallback_syscall // afs_syscall
+ data8 fsys_fallback_syscall // setfsuid
+ data8 fsys_fallback_syscall // setfsgid
+ data8 fsys_fallback_syscall // getdents
+ data8 fsys_fallback_syscall // flock // 1145
+ data8 fsys_fallback_syscall // readv
+ data8 fsys_fallback_syscall // writev
+ data8 fsys_fallback_syscall // pread64
+ data8 fsys_fallback_syscall // pwrite64
+ data8 fsys_fallback_syscall // sysctl // 1150
+ data8 fsys_fallback_syscall // mmap
+ data8 fsys_fallback_syscall // munmap
+ data8 fsys_fallback_syscall // mlock
+ data8 fsys_fallback_syscall // mlockall
+ data8 fsys_fallback_syscall // mprotect // 1155
+ data8 fsys_fallback_syscall // mremap
+ data8 fsys_fallback_syscall // msync
+ data8 fsys_fallback_syscall // munlock
+ data8 fsys_fallback_syscall // munlockall
+ data8 fsys_fallback_syscall // sched_getparam // 1160
+ data8 fsys_fallback_syscall // sched_setparam
+ data8 fsys_fallback_syscall // sched_getscheduler
+ data8 fsys_fallback_syscall // sched_setscheduler
+ data8 fsys_fallback_syscall // sched_yield
+ data8 fsys_fallback_syscall // sched_get_priority_max // 1165
+ data8 fsys_fallback_syscall // sched_get_priority_min
+ data8 fsys_fallback_syscall // sched_rr_get_interval
+ data8 fsys_fallback_syscall // nanosleep
+ data8 fsys_fallback_syscall // nfsservctl
+ data8 fsys_fallback_syscall // prctl // 1170
+ data8 fsys_fallback_syscall // getpagesize
+ data8 fsys_fallback_syscall // mmap2
+ data8 fsys_fallback_syscall // pciconfig_read
+ data8 fsys_fallback_syscall // pciconfig_write
+ data8 fsys_fallback_syscall // perfmonctl // 1175
+ data8 fsys_fallback_syscall // sigaltstack
+ data8 fsys_fallback_syscall // rt_sigaction
+ data8 fsys_fallback_syscall // rt_sigpending
+ data8 fsys_fallback_syscall // rt_sigprocmask
+ data8 fsys_fallback_syscall // rt_sigqueueinfo // 1180
+ data8 fsys_fallback_syscall // rt_sigreturn
+ data8 fsys_fallback_syscall // rt_sigsuspend
+ data8 fsys_fallback_syscall // rt_sigtimedwait
+ data8 fsys_fallback_syscall // getcwd
+ data8 fsys_fallback_syscall // capget // 1185
+ data8 fsys_fallback_syscall // capset
+ data8 fsys_fallback_syscall // sendfile
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // socket // 1190
+ data8 fsys_fallback_syscall // bind
+ data8 fsys_fallback_syscall // connect
+ data8 fsys_fallback_syscall // listen
+ data8 fsys_fallback_syscall // accept
+ data8 fsys_fallback_syscall // getsockname // 1195
+ data8 fsys_fallback_syscall // getpeername
+ data8 fsys_fallback_syscall // socketpair
+ data8 fsys_fallback_syscall // send
+ data8 fsys_fallback_syscall // sendto
+ data8 fsys_fallback_syscall // recv // 1200
+ data8 fsys_fallback_syscall // recvfrom
+ data8 fsys_fallback_syscall // shutdown
+ data8 fsys_fallback_syscall // setsockopt
+ data8 fsys_fallback_syscall // getsockopt
+ data8 fsys_fallback_syscall // sendmsg // 1205
+ data8 fsys_fallback_syscall // recvmsg
+ data8 fsys_fallback_syscall // pivot_root
+ data8 fsys_fallback_syscall // mincore
+ data8 fsys_fallback_syscall // madvise
+ data8 fsys_fallback_syscall // newstat // 1210
+ data8 fsys_fallback_syscall // newlstat
+ data8 fsys_fallback_syscall // newfstat
+ data8 fsys_fallback_syscall // clone2
+ data8 fsys_fallback_syscall // getdents64
+ data8 fsys_fallback_syscall // getunwind // 1215
+ data8 fsys_fallback_syscall // readahead
+ data8 fsys_fallback_syscall // setxattr
+ data8 fsys_fallback_syscall // lsetxattr
+ data8 fsys_fallback_syscall // fsetxattr
+ data8 fsys_fallback_syscall // getxattr // 1220
+ data8 fsys_fallback_syscall // lgetxattr
+ data8 fsys_fallback_syscall // fgetxattr
+ data8 fsys_fallback_syscall // listxattr
+ data8 fsys_fallback_syscall // llistxattr
+ data8 fsys_fallback_syscall // flistxattr // 1225
+ data8 fsys_fallback_syscall // removexattr
+ data8 fsys_fallback_syscall // lremovexattr
+ data8 fsys_fallback_syscall // fremovexattr
+ data8 fsys_fallback_syscall // tkill
+ data8 fsys_fallback_syscall // futex // 1230
+ data8 fsys_fallback_syscall // sched_setaffinity
+ data8 fsys_fallback_syscall // sched_getaffinity
+ data8 fsys_set_tid_address // set_tid_address
+ data8 fsys_fallback_syscall // unused
+ data8 fsys_fallback_syscall // unused // 1235
+ data8 fsys_fallback_syscall // exit_group
+ data8 fsys_fallback_syscall // lookup_dcookie
+ data8 fsys_fallback_syscall // io_setup
+ data8 fsys_fallback_syscall // io_destroy
+ data8 fsys_fallback_syscall // io_getevents // 1240
+ data8 fsys_fallback_syscall // io_submit
+ data8 fsys_fallback_syscall // io_cancel
+ data8 fsys_fallback_syscall // epoll_create
+ data8 fsys_fallback_syscall // epoll_ctl
+ data8 fsys_fallback_syscall // epoll_wait // 1245
+ data8 fsys_fallback_syscall // restart_syscall
+ data8 fsys_fallback_syscall // semtimedop
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // 1250
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // 1255
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // 1260
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // 1265
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // 1270
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall // 1275
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
+ data8 fsys_fallback_syscall
diff -Nru a/arch/ia64/kernel/gate.S b/arch/ia64/kernel/gate.S
--- a/arch/ia64/kernel/gate.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/gate.S Fri Jan 24 20:41:05 2003
@@ -2,7 +2,7 @@
* This file contains the code that gets mapped at the upper end of each task's text
* region. For now, it contains the signal trampoline code only.
*
- * Copyright (C) 1999-2002 Hewlett-Packard Co
+ * Copyright (C) 1999-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
@@ -14,6 +14,87 @@
#include <asm/page.h>
.section .text.gate, "ax"
+.start_gate:
+
+
+#if CONFIG_FSYS
+
+#include <asm/errno.h>
+
+/*
+ * On entry:
+ * r11 = saved ar.pfs
+ * r15 = system call #
+ * b0 = saved return address
+ * b6 = return address
+ * On exit:
+ * r11 = saved ar.pfs
+ * r15 = system call #
+ * b0 = saved return address
+ * all other "scratch" registers: undefined
+ * all "preserved" registers: same as on entry
+ */
+GLOBAL_ENTRY(syscall_via_epc)
+ .prologue
+ .altrp b6
+ .body
+{
+ /*
+ * Note: the kernel cannot assume that the first two instructions in this
+ * bundle get executed. The remaining code must be safe even if
+ * they do not get executed.
+ */
+ adds r17=-1024,r15
+ mov r10=0 // default to successful syscall execution
+ epc
+}
+ ;;
+ rsm psr.be
+ movl r18=fsyscall_table
+
+ mov r16=IA64_KR(CURRENT)
+ mov r19=255
+ ;;
+ shladd r18=r17,3,r18
+ cmp.geu p6,p0=r19,r17 // (syscall > 0 && syscall <= 1024+255)?
+ ;;
+ srlz.d // ensure little-endian byteorder is in effect
+(p6) ld8 r18=[r18]
+ ;;
+(p6) mov b7=r18
+(p6) br.sptk.many b7
+
+ mov r10=-1
+ mov r8=ENOSYS
+ MCKINLEY_E9_WORKAROUND
+ br.ret.sptk.many b6
+END(syscall_via_epc)
+
+GLOBAL_ENTRY(syscall_via_break)
+ .prologue
+ .altrp b6
+ .body
+ break 0x100000
+ br.ret.sptk.many b6
+END(syscall_via_break)
+
+GLOBAL_ENTRY(fsys_fallback_syscall)
+ /*
+ * It would be better/faster to do the SAVE_MIN magic directly here, but for now
+ * we simply fall back on doing a system-call via break. Good enough
+ * to get started. (Note: we have to do this through the gate page again, since
+ * the br.ret will switch us back to user-level privilege.)
+ *
+ * XXX Move this back to fsys.S after changing it over to avoid break 0x100000.
+ */
+ movl r2=(syscall_via_break - .start_gate) + GATE_ADDR
+ ;;
+ MCKINLEY_E9_WORKAROUND
+ mov b7=r2
+ br.ret.sptk.many b7
+END(fsys_fallback_syscall)
+
+#endif /* CONFIG_FSYS */
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
# define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET)
@@ -63,15 +144,18 @@
* call stack.
*/
+#define SIGTRAMP_SAVES \
+ .unwabi @svr4, 's' // mark this as a sigtramp handler (saves scratch regs) \
+ .savesp ar.unat, UNAT_OFF+SIGCONTEXT_OFF \
+ .savesp ar.fpsr, FPSR_OFF+SIGCONTEXT_OFF \
+ .savesp pr, PR_OFF+SIGCONTEXT_OFF \
+ .savesp rp, RP_OFF+SIGCONTEXT_OFF \
+ .vframesp SP_OFF+SIGCONTEXT_OFF
+
GLOBAL_ENTRY(ia64_sigtramp)
// describe the state that is active when we get here:
.prologue
- .unwabi @svr4, 's' // mark this as a sigtramp handler (saves scratch regs)
- .savesp ar.unat, UNAT_OFF+SIGCONTEXT_OFF
- .savesp ar.fpsr, FPSR_OFF+SIGCONTEXT_OFF
- .savesp pr, PR_OFF+SIGCONTEXT_OFF
- .savesp rp, RP_OFF+SIGCONTEXT_OFF
- .vframesp SP_OFF+SIGCONTEXT_OFF
+ SIGTRAMP_SAVES
.body
.label_state 1
@@ -156,10 +240,11 @@
ldf.fill f14=[base0],32
ldf.fill f15=[base1],32
mov r15=__NR_rt_sigreturn
+ .restore sp // pop .prologue
break __BREAK_SYSCALL
- .body
- .copy_state 1
+ .prologue
+ SIGTRAMP_SAVES
setup_rbs:
mov ar.rsc=0 // put RSE into enforced lazy mode
;;
@@ -171,6 +256,7 @@
;;
.spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
st8 [r14]=r16 // save sc_ar_rnat
+ .body
adds r14=(LOADRS_OFF+SIGCONTEXT_OFF),sp
mov.m r16=ar.bsp // sc_loadrs <- (new bsp - new bspstore) << 16
@@ -182,10 +268,11 @@
;;
st8 [r14]=r15 // save sc_loadrs
mov ar.rsc=0xf // set RSE into eager mode, pl 3
+ .restore sp // pop .prologue
br.cond.sptk back_from_setup_rbs
.prologue
- .copy_state 1
+ SIGTRAMP_SAVES
.spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
.body
restore_rbs:
diff -Nru a/arch/ia64/kernel/head.S b/arch/ia64/kernel/head.S
--- a/arch/ia64/kernel/head.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/head.S Fri Jan 24 20:41:05 2003
@@ -5,7 +5,7 @@
* to set up the kernel's global pointer and jump to the kernel
* entry point.
*
- * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001, 2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
@@ -143,17 +143,14 @@
movl r2=init_thread_union
cmp.eq isBP,isAP=r0,r0
#endif
- ;;
- extr r3=r2,0,61 // r3 = phys addr of task struct
mov r16=KERNEL_TR_PAGE_NUM
;;
// load the "current" pointer (r13) and ar.k6 with the current task
- mov r13=r2
- mov IA64_KR(CURRENT)=r3 // Physical address
-
+ mov IA64_KR(CURRENT)=r2 // virtual address
// initialize k4 to a safe value (64-128MB is mapped by TR_KERNEL)
mov IA64_KR(CURRENT_STACK)=r16
+ mov r13=r2
/*
* Reserve space at the top of the stack for "struct pt_regs". Kernel threads
* don't store interesting values in that structure, but the space still needs
diff -Nru a/arch/ia64/kernel/ia64_ksyms.c b/arch/ia64/kernel/ia64_ksyms.c
--- a/arch/ia64/kernel/ia64_ksyms.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/ia64_ksyms.c Fri Jan 24 20:41:05 2003
@@ -56,6 +56,12 @@
#include <asm/page.h>
EXPORT_SYMBOL(clear_page);
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+#include <asm/pgtable.h>
+EXPORT_SYMBOL(vmalloc_end);
+EXPORT_SYMBOL(ia64_pfn_valid);
+#endif
+
#include <asm/processor.h>
# ifndef CONFIG_NUMA
EXPORT_SYMBOL(cpu_info__per_cpu);
@@ -142,4 +148,8 @@
EXPORT_SYMBOL(ia64_mv);
#endif
EXPORT_SYMBOL(machvec_noop);
-
+#ifdef CONFIG_PERFMON
+#include <asm/perfmon.h>
+EXPORT_SYMBOL(pfm_install_alternate_syswide_subsystem);
+EXPORT_SYMBOL(pfm_remove_alternate_syswide_subsystem);
+#endif
diff -Nru a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
--- a/arch/ia64/kernel/iosapic.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/iosapic.c Fri Jan 24 20:41:05 2003
@@ -752,7 +752,7 @@
if (index < 0) {
printk(KERN_WARNING"IOSAPIC: GSI 0x%x has no IOSAPIC!\n", gsi);
- return;
+ continue;
}
addr = iosapic_lists[index].addr;
gsi_base = iosapic_lists[index].gsi_base;
diff -Nru a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
--- a/arch/ia64/kernel/irq_ia64.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/irq_ia64.c Fri Jan 24 20:41:05 2003
@@ -178,7 +178,7 @@
register_percpu_irq(IA64_IPI_VECTOR, &ipi_irqaction);
#endif
#ifdef CONFIG_PERFMON
- perfmon_init_percpu();
+ pfm_init_percpu();
#endif
platform_irq_init();
}
diff -Nru a/arch/ia64/kernel/ivt.S b/arch/ia64/kernel/ivt.S
--- a/arch/ia64/kernel/ivt.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/ivt.S Fri Jan 24 20:41:05 2003
@@ -192,7 +192,7 @@
rfi
END(vhpt_miss)
- .align 1024
+ .org ia64_ivt+0x400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0400 Entry 1 (size 64 bundles) ITLB (21)
ENTRY(itlb_miss)
@@ -206,7 +206,7 @@
mov r16=cr.ifa // get virtual address
mov r29° // save b0
mov r31=pr // save predicates
-itlb_fault:
+.itlb_fault:
mov r17=cr.iha // get virtual address of L3 PTE
movl r30=1f // load nested fault continuation point
;;
@@ -230,7 +230,7 @@
rfi
END(itlb_miss)
- .align 1024
+ .org ia64_ivt+0x0800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0800 Entry 2 (size 64 bundles) DTLB (9,48)
ENTRY(dtlb_miss)
@@ -268,7 +268,7 @@
rfi
END(dtlb_miss)
- .align 1024
+ .org ia64_ivt+0x0c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
ENTRY(alt_itlb_miss)
@@ -288,7 +288,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk itlb_fault
+(p8) br.cond.dptk .itlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
and r19=r19,r16 // clear ed, reserved bits, and PTE control bits
@@ -306,7 +306,7 @@
rfi
END(alt_itlb_miss)
- .align 1024
+ .org ia64_ivt+0x1000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1000 Entry 4 (size 64 bundles) Alt DTLB (7,46)
ENTRY(alt_dtlb_miss)
@@ -379,7 +379,7 @@
br.call.sptk.many b6=ia64_do_page_fault // ignore return address
END(page_fault)
- .align 1024
+ .org ia64_ivt+0x1400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1400 Entry 5 (size 64 bundles) Data nested TLB (6,45)
ENTRY(nested_dtlb_miss)
@@ -440,7 +440,7 @@
br.sptk.many b0 // return to continuation point
END(nested_dtlb_miss)
- .align 1024
+ .org ia64_ivt+0x1800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1800 Entry 6 (size 64 bundles) Instruction Key Miss (24)
ENTRY(ikey_miss)
@@ -448,7 +448,7 @@
FAULT(6)
END(ikey_miss)
- .align 1024
+ .org ia64_ivt+0x1c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1c00 Entry 7 (size 64 bundles) Data Key Miss (12,51)
ENTRY(dkey_miss)
@@ -456,7 +456,7 @@
FAULT(7)
END(dkey_miss)
- .align 1024
+ .org ia64_ivt+0x2000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2000 Entry 8 (size 64 bundles) Dirty-bit (54)
ENTRY(dirty_bit)
@@ -512,7 +512,7 @@
rfi
END(idirty_bit)
- .align 1024
+ .org ia64_ivt+0x2400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2400 Entry 9 (size 64 bundles) Instruction Access-bit (27)
ENTRY(iaccess_bit)
@@ -571,7 +571,7 @@
rfi
END(iaccess_bit)
- .align 1024
+ .org ia64_ivt+0x2800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2800 Entry 10 (size 64 bundles) Data Access-bit (15,55)
ENTRY(daccess_bit)
@@ -618,7 +618,7 @@
rfi
END(daccess_bit)
- .align 1024
+ .org ia64_ivt+0x2c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2c00 Entry 11 (size 64 bundles) Break instruction (33)
ENTRY(break_fault)
@@ -690,7 +690,7 @@
// NOT REACHED
END(break_fault)
-ENTRY(demine_args)
+ENTRY_MIN_ALIGN(demine_args)
alloc r2=ar.pfs,8,0,0,0
tnat.nz p8,p0=in0
tnat.nz p9,p0=in1
@@ -719,7 +719,7 @@
br.ret.sptk.many rp
END(demine_args)
- .align 1024
+ .org ia64_ivt+0x3000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
ENTRY(interrupt)
@@ -746,19 +746,19 @@
br.call.sptk.many b6=ia64_handle_irq
END(interrupt)
- .align 1024
+ .org ia64_ivt+0x3400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3400 Entry 13 (size 64 bundles) Reserved
DBG_FAULT(13)
FAULT(13)
- .align 1024
+ .org ia64_ivt+0x3800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3800 Entry 14 (size 64 bundles) Reserved
DBG_FAULT(14)
FAULT(14)
- .align 1024
+ .org ia64_ivt+0x3c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3c00 Entry 15 (size 64 bundles) Reserved
DBG_FAULT(15)
@@ -803,7 +803,7 @@
br.sptk.many ia64_leave_kernel
END(dispatch_illegal_op_fault)
- .align 1024
+ .org ia64_ivt+0x4000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4000 Entry 16 (size 64 bundles) Reserved
DBG_FAULT(16)
@@ -893,7 +893,7 @@
#endif /* CONFIG_IA32_SUPPORT */
- .align 1024
+ .org ia64_ivt+0x4400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4400 Entry 17 (size 64 bundles) Reserved
DBG_FAULT(17)
@@ -925,7 +925,7 @@
br.call.sptk.many b6=ia64_bad_break // avoid WAW on CFM and ignore return addr
END(non_syscall)
- .align 1024
+ .org ia64_ivt+0x4800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4800 Entry 18 (size 64 bundles) Reserved
DBG_FAULT(18)
@@ -959,7 +959,7 @@
br.sptk.many ia64_prepare_handle_unaligned
END(dispatch_unaligned_handler)
- .align 1024
+ .org ia64_ivt+0x4c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4c00 Entry 19 (size 64 bundles) Reserved
DBG_FAULT(19)
@@ -1005,7 +1005,7 @@
// --- End of long entries, Beginning of short entries
//
- .align 1024
+ .org ia64_ivt+0x5000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5000 Entry 20 (size 16 bundles) Page Not Present (10,22,49)
ENTRY(page_not_present)
@@ -1025,7 +1025,7 @@
br.sptk.many page_fault
END(page_not_present)
- .align 256
+ .org ia64_ivt+0x5100
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5100 Entry 21 (size 16 bundles) Key Permission (13,25,52)
ENTRY(key_permission)
@@ -1038,7 +1038,7 @@
br.sptk.many page_fault
END(key_permission)
- .align 256
+ .org ia64_ivt+0x5200
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5200 Entry 22 (size 16 bundles) Instruction Access Rights (26)
ENTRY(iaccess_rights)
@@ -1051,7 +1051,7 @@
br.sptk.many page_fault
END(iaccess_rights)
- .align 256
+ .org ia64_ivt+0x5300
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5300 Entry 23 (size 16 bundles) Data Access Rights (14,53)
ENTRY(daccess_rights)
@@ -1064,7 +1064,7 @@
br.sptk.many page_fault
END(daccess_rights)
- .align 256
+ .org ia64_ivt+0x5400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5400 Entry 24 (size 16 bundles) General Exception (5,32,34,36,38,39)
ENTRY(general_exception)
@@ -1079,7 +1079,7 @@
br.sptk.many dispatch_to_fault_handler
END(general_exception)
- .align 256
+ .org ia64_ivt+0x5500
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5500 Entry 25 (size 16 bundles) Disabled FP-Register (35)
ENTRY(disabled_fp_reg)
@@ -1092,7 +1092,7 @@
br.sptk.many dispatch_to_fault_handler
END(disabled_fp_reg)
- .align 256
+ .org ia64_ivt+0x5600
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5600 Entry 26 (size 16 bundles) Nat Consumption (11,23,37,50)
ENTRY(nat_consumption)
@@ -1100,7 +1100,7 @@
FAULT(26)
END(nat_consumption)
- .align 256
+ .org ia64_ivt+0x5700
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5700 Entry 27 (size 16 bundles) Speculation (40)
ENTRY(speculation_vector)
@@ -1137,13 +1137,13 @@
rfi // and go back
END(speculation_vector)
- .align 256
+ .org ia64_ivt+0x5800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5800 Entry 28 (size 16 bundles) Reserved
DBG_FAULT(28)
FAULT(28)
- .align 256
+ .org ia64_ivt+0x5900
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5900 Entry 29 (size 16 bundles) Debug (16,28,56)
ENTRY(debug_vector)
@@ -1151,7 +1151,7 @@
FAULT(29)
END(debug_vector)
- .align 256
+ .org ia64_ivt+0x5a00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5a00 Entry 30 (size 16 bundles) Unaligned Reference (57)
ENTRY(unaligned_access)
@@ -1162,91 +1162,103 @@
br.sptk.many dispatch_unaligned_handler
END(unaligned_access)
- .align 256
+ .org ia64_ivt+0x5b00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5b00 Entry 31 (size 16 bundles) Unsupported Data Reference (57)
+ENTRY(unsupported_data_reference)
DBG_FAULT(31)
FAULT(31)
+END(unsupported_data_reference)
- .align 256
+ .org ia64_ivt+0x5c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5c00 Entry 32 (size 16 bundles) Floating-Point Fault (64)
+ENTRY(floating_point_fault)
DBG_FAULT(32)
FAULT(32)
+END(floating_point_fault)
- .align 256
+ .org ia64_ivt+0x5d00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5d00 Entry 33 (size 16 bundles) Floating Point Trap (66)
+ENTRY(floating_point_trap)
DBG_FAULT(33)
FAULT(33)
+END(floating_point_trap)
- .align 256
+ .org ia64_ivt+0x5e00
/////////////////////////////////////////////////////////////////////////////////////////
-// 0x5e00 Entry 34 (size 16 bundles) Lower Privilege Tranfer Trap (66)
+// 0x5e00 Entry 34 (size 16 bundles) Lower Privilege Transfer Trap (66)
+ENTRY(lower_privilege_trap)
DBG_FAULT(34)
FAULT(34)
+END(lower_privilege_trap)
- .align 256
+ .org ia64_ivt+0x5f00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5f00 Entry 35 (size 16 bundles) Taken Branch Trap (68)
+ENTRY(taken_branch_trap)
DBG_FAULT(35)
FAULT(35)
+END(taken_branch_trap)
- .align 256
+ .org ia64_ivt+0x6000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6000 Entry 36 (size 16 bundles) Single Step Trap (69)
+ENTRY(single_step_trap)
DBG_FAULT(36)
FAULT(36)
+END(single_step_trap)
- .align 256
+ .org ia64_ivt+0x6100
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6100 Entry 37 (size 16 bundles) Reserved
DBG_FAULT(37)
FAULT(37)
- .align 256
+ .org ia64_ivt+0x6200
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6200 Entry 38 (size 16 bundles) Reserved
DBG_FAULT(38)
FAULT(38)
- .align 256
+ .org ia64_ivt+0x6300
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6300 Entry 39 (size 16 bundles) Reserved
DBG_FAULT(39)
FAULT(39)
- .align 256
+ .org ia64_ivt+0x6400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6400 Entry 40 (size 16 bundles) Reserved
DBG_FAULT(40)
FAULT(40)
- .align 256
+ .org ia64_ivt+0x6500
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6500 Entry 41 (size 16 bundles) Reserved
DBG_FAULT(41)
FAULT(41)
- .align 256
+ .org ia64_ivt+0x6600
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6600 Entry 42 (size 16 bundles) Reserved
DBG_FAULT(42)
FAULT(42)
- .align 256
+ .org ia64_ivt+0x6700
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6700 Entry 43 (size 16 bundles) Reserved
DBG_FAULT(43)
FAULT(43)
- .align 256
+ .org ia64_ivt+0x6800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6800 Entry 44 (size 16 bundles) Reserved
DBG_FAULT(44)
FAULT(44)
- .align 256
+ .org ia64_ivt+0x6900
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6900 Entry 45 (size 16 bundles) IA-32 Exeception (17,18,29,41,42,43,44,58,60,61,62,72,73,75,76,77)
ENTRY(ia32_exception)
@@ -1254,7 +1266,7 @@
FAULT(45)
END(ia32_exception)
- .align 256
+ .org ia64_ivt+0x6a00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6a00 Entry 46 (size 16 bundles) IA-32 Intercept (30,31,59,70,71)
ENTRY(ia32_intercept)
@@ -1284,7 +1296,7 @@
FAULT(46)
END(ia32_intercept)
- .align 256
+ .org ia64_ivt+0x6b00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6b00 Entry 47 (size 16 bundles) IA-32 Interrupt (74)
ENTRY(ia32_interrupt)
@@ -1297,121 +1309,121 @@
#endif
END(ia32_interrupt)
- .align 256
+ .org ia64_ivt+0x6c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6c00 Entry 48 (size 16 bundles) Reserved
DBG_FAULT(48)
FAULT(48)
- .align 256
+ .org ia64_ivt+0x6d00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6d00 Entry 49 (size 16 bundles) Reserved
DBG_FAULT(49)
FAULT(49)
- .align 256
+ .org ia64_ivt+0x6e00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6e00 Entry 50 (size 16 bundles) Reserved
DBG_FAULT(50)
FAULT(50)
- .align 256
+ .org ia64_ivt+0x6f00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6f00 Entry 51 (size 16 bundles) Reserved
DBG_FAULT(51)
FAULT(51)
- .align 256
+ .org ia64_ivt+0x7000
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7000 Entry 52 (size 16 bundles) Reserved
DBG_FAULT(52)
FAULT(52)
- .align 256
+ .org ia64_ivt+0x7100
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7100 Entry 53 (size 16 bundles) Reserved
DBG_FAULT(53)
FAULT(53)
- .align 256
+ .org ia64_ivt+0x7200
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7200 Entry 54 (size 16 bundles) Reserved
DBG_FAULT(54)
FAULT(54)
- .align 256
+ .org ia64_ivt+0x7300
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7300 Entry 55 (size 16 bundles) Reserved
DBG_FAULT(55)
FAULT(55)
- .align 256
+ .org ia64_ivt+0x7400
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7400 Entry 56 (size 16 bundles) Reserved
DBG_FAULT(56)
FAULT(56)
- .align 256
+ .org ia64_ivt+0x7500
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7500 Entry 57 (size 16 bundles) Reserved
DBG_FAULT(57)
FAULT(57)
- .align 256
+ .org ia64_ivt+0x7600
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7600 Entry 58 (size 16 bundles) Reserved
DBG_FAULT(58)
FAULT(58)
- .align 256
+ .org ia64_ivt+0x7700
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7700 Entry 59 (size 16 bundles) Reserved
DBG_FAULT(59)
FAULT(59)
- .align 256
+ .org ia64_ivt+0x7800
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7800 Entry 60 (size 16 bundles) Reserved
DBG_FAULT(60)
FAULT(60)
- .align 256
+ .org ia64_ivt+0x7900
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7900 Entry 61 (size 16 bundles) Reserved
DBG_FAULT(61)
FAULT(61)
- .align 256
+ .org ia64_ivt+0x7a00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7a00 Entry 62 (size 16 bundles) Reserved
DBG_FAULT(62)
FAULT(62)
- .align 256
+ .org ia64_ivt+0x7b00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7b00 Entry 63 (size 16 bundles) Reserved
DBG_FAULT(63)
FAULT(63)
- .align 256
+ .org ia64_ivt+0x7c00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7c00 Entry 64 (size 16 bundles) Reserved
DBG_FAULT(64)
FAULT(64)
- .align 256
+ .org ia64_ivt+0x7d00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7d00 Entry 65 (size 16 bundles) Reserved
DBG_FAULT(65)
FAULT(65)
- .align 256
+ .org ia64_ivt+0x7e00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7e00 Entry 66 (size 16 bundles) Reserved
DBG_FAULT(66)
FAULT(66)
- .align 256
+ .org ia64_ivt+0x7f00
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7f00 Entry 67 (size 16 bundles) Reserved
DBG_FAULT(67)
diff -Nru a/arch/ia64/kernel/minstate.h b/arch/ia64/kernel/minstate.h
--- a/arch/ia64/kernel/minstate.h Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/minstate.h Fri Jan 24 20:41:05 2003
@@ -30,25 +30,23 @@
* on interrupts.
*/
#define MINSTATE_START_SAVE_MIN_VIRT \
-(pUser) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
- dep r1=-1,r1,61,3; /* r1 = current (virtual) */ \
+(pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
;; \
-(pUser) mov.m rARRNAT=ar.rnat; \
-(pUser) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of RBS */ \
-(pKern) mov r1=sp; /* get sp */ \
- ;; \
-(pUser) lfetch.fault.excl.nt1 [rKRBS]; \
-(pUser) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
-(pUser) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
+(pUStk) mov.m rARRNAT=ar.rnat; \
+(pUStk) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of RBS */ \
+(pKStk) mov r1=sp; /* get sp */ \
;; \
-(pUser) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
-(pKern) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
+(pUStk) lfetch.fault.excl.nt1 [rKRBS]; \
+(pUStk) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
+(pUStk) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
;; \
-(pUser) mov r18=ar.bsp; \
-(pUser) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
+(pUStk) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
+(pKStk) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
+ ;; \
+(pUStk) mov r18=ar.bsp; \
+(pUStk) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
#define MINSTATE_END_SAVE_MIN_VIRT \
- or r13=r13,r14; /* make `current' a kernel virtual address */ \
bsw.1; /* switch back to bank 1 (must be last in insn group) */ \
;;
@@ -57,21 +55,21 @@
* go virtual and dont want to destroy the iip or ipsr.
*/
#define MINSTATE_START_SAVE_MIN_PHYS \
-(pKern) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
-(pUser) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
-(pUser) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
- ;; \
-(pUser) mov rARRNAT=ar.rnat; \
-(pKern) dep r1=0,sp,61,3; /* compute physical addr of sp */ \
-(pUser) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
-(pUser) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
-(pUser) dep rKRBS=-1,rKRBS,61,3; /* compute kernel virtual addr of RBS */\
+(pKStk) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
+(pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
+(pUStk) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
+ ;; \
+(pUStk) mov rARRNAT=ar.rnat; \
+(pKStk) dep r1=0,sp,61,3; /* compute physical addr of sp */ \
+(pUStk) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
+(pUStk) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
+(pUStk) dep rKRBS=-1,rKRBS,61,3; /* compute kernel virtual addr of RBS */\
;; \
-(pKern) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
-(pUser) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
+(pKStk) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
+(pUStk) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
;; \
-(pUser) mov r18=ar.bsp; \
-(pUser) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
+(pUStk) mov r18=ar.bsp; \
+(pUStk) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
#define MINSTATE_END_SAVE_MIN_PHYS \
or r12=r12,r14; /* make sp a kernel virtual address */ \
@@ -79,11 +77,13 @@
;;
#ifdef MINSTATE_VIRT
+# define MINSTATE_GET_CURRENT(reg) mov reg=IA64_KR(CURRENT)
# define MINSTATE_START_SAVE_MIN MINSTATE_START_SAVE_MIN_VIRT
# define MINSTATE_END_SAVE_MIN MINSTATE_END_SAVE_MIN_VIRT
#endif
#ifdef MINSTATE_PHYS
+# define MINSTATE_GET_CURRENT(reg) mov reg=IA64_KR(CURRENT);; dep reg=0,reg,61,3
# define MINSTATE_START_SAVE_MIN MINSTATE_START_SAVE_MIN_PHYS
# define MINSTATE_END_SAVE_MIN MINSTATE_END_SAVE_MIN_PHYS
#endif
@@ -110,23 +110,26 @@
* we can pass interruption state as arguments to a handler.
*/
#define DO_SAVE_MIN(COVER,SAVE_IFS,EXTRA) \
- mov rARRSC=ar.rsc; \
- mov rARPFS=ar.pfs; \
- mov rR1=r1; \
- mov rARUNAT=ar.unat; \
- mov rCRIPSR=cr.ipsr; \
- mov rB6=b6; /* rB6 = branch reg 6 */ \
- mov rCRIIP=cr.iip; \
- mov r1=IA64_KR(CURRENT); /* r1 = current (physical) */ \
- COVER; \
+ mov rARRSC=ar.rsc; /* M */ \
+ mov rARUNAT=ar.unat; /* M */ \
+ mov rR1=r1; /* A */ \
+ MINSTATE_GET_CURRENT(r1); /* M (or M;;I) */ \
+ mov rCRIPSR=cr.ipsr; /* M */ \
+ mov rARPFS=ar.pfs; /* I */ \
+ mov rCRIIP=cr.iip; /* M */ \
+ mov rB6=b6; /* I */ /* rB6 = branch reg 6 */ \
+ COVER; /* B;; (or nothing) */ \
;; \
- invala; \
- extr.u r16=rCRIPSR,32,2; /* extract psr.cpl */ \
+ adds r16=IA64_TASK_THREAD_ON_USTACK_OFFSET,r1; \
;; \
- cmp.eq pKern,pUser=r0,r16; /* are we in kernel mode already? (psr.cpl=0) */ \
+ ld1 r17=[r16]; /* load current->thread.on_ustack flag */ \
+ st1 [r16]=r0; /* clear current->thread.on_ustack flag */ \
/* switch from user to kernel RBS: */ \
;; \
+ invala; /* M */ \
SAVE_IFS; \
+ cmp.eq pKStk,pUStk=r0,r17; /* are we in kernel mode already? (psr.cpl=0) */ \
+ ;; \
MINSTATE_START_SAVE_MIN \
add r17=L1_CACHE_BYTES,r1 /* really: biggest cache-line size */ \
;; \
@@ -138,23 +141,23 @@
;; \
lfetch.fault.excl.nt1 [r17]; \
adds r17=8,r1; /* initialize second base pointer */ \
-(pKern) mov r18=r0; /* make sure r18 isn't NaT */ \
+(pKStk) mov r18=r0; /* make sure r18 isn't NaT */ \
;; \
st8 [r17]=rCRIIP,16; /* save cr.iip */ \
st8 [r16]=rCRIFS,16; /* save cr.ifs */ \
-(pUser) sub r18=r18,rKRBS; /* r18=RSE.ndirty*8 */ \
+(pUStk) sub r18=r18,rKRBS; /* r18=RSE.ndirty*8 */ \
;; \
st8 [r17]=rARUNAT,16; /* save ar.unat */ \
st8 [r16]=rARPFS,16; /* save ar.pfs */ \
shl r18=r18,16; /* compute ar.rsc to be used for "loadrs" */ \
;; \
st8 [r17]=rARRSC,16; /* save ar.rsc */ \
-(pUser) st8 [r16]=rARRNAT,16; /* save ar.rnat */ \
-(pKern) adds r16=16,r16; /* skip over ar_rnat field */ \
+(pUStk) st8 [r16]=rARRNAT,16; /* save ar.rnat */ \
+(pKStk) adds r16=16,r16; /* skip over ar_rnat field */ \
;; /* avoid RAW on r16 & r17 */ \
-(pUser) st8 [r17]=rARBSPSTORE,16; /* save ar.bspstore */ \
+(pUStk) st8 [r17]=rARBSPSTORE,16; /* save ar.bspstore */ \
st8 [r16]=rARPR,16; /* save predicates */ \
-(pKern) adds r17=16,r17; /* skip over ar_bspstore field */ \
+(pKStk) adds r17=16,r17; /* skip over ar_bspstore field */ \
;; \
st8 [r17]=rB6,16; /* save b6 */ \
st8 [r16]=r18,16; /* save ar.rsc value for "loadrs" */ \
diff -Nru a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S
--- a/arch/ia64/kernel/pal.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/pal.S Fri Jan 24 20:41:05 2003
@@ -4,7 +4,7 @@
*
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * Copyright (C) 1999-2001, 2003 Hewlett-Packard Co
* David Mosberger <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
*
@@ -114,7 +114,7 @@
;;
rsm psr.i
mov b7 = loc2
- ;;
+ ;;
br.call.sptk.many rp=b7 // now make the call
.ret0: mov psr.l = loc3
mov ar.pfs = loc1
@@ -131,15 +131,15 @@
* in0 Index of PAL service
* in2 - in3 Remaning PAL arguments
*
- * PSR_DB, PSR_LP, PSR_TB, PSR_ID, PSR_DA are never set by the kernel.
+ * PSR_LP, PSR_TB, PSR_ID, PSR_DA are never set by the kernel.
* So we don't need to clear them.
*/
-#define PAL_PSR_BITS_TO_CLEAR \
- (IA64_PSR_I | IA64_PSR_IT | IA64_PSR_DT | IA64_PSR_RT | \
- IA64_PSR_DD | IA64_PSR_SS | IA64_PSR_RI | IA64_PSR_ED | \
+#define PAL_PSR_BITS_TO_CLEAR \
+ (IA64_PSR_I | IA64_PSR_IT | IA64_PSR_DT | IA64_PSR_DB | IA64_PSR_RT | \
+ IA64_PSR_DD | IA64_PSR_SS | IA64_PSR_RI | IA64_PSR_ED | \
IA64_PSR_DFL | IA64_PSR_DFH)
-#define PAL_PSR_BITS_TO_SET \
+#define PAL_PSR_BITS_TO_SET \
(IA64_PSR_BN)
@@ -161,7 +161,7 @@
;;
mov loc3 = psr // save psr
adds r8 = 1f-1b,r8 // calculate return address for call
- ;;
+ ;;
mov loc4=ar.rsc // save RSE configuration
dep.z loc2=loc2,0,61 // convert pal entry point to physical
dep.z r8=r8,0,61 // convert rp to physical
@@ -275,7 +275,6 @@
* Inputs:
* in0 Address of stack storage for fp regs
*/
-
GLOBAL_ENTRY(ia64_load_scratch_fpregs)
alloc r3=ar.pfs,1,0,0,0
add r2=16,in0
diff -Nru a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
--- a/arch/ia64/kernel/perfmon.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/perfmon.c Fri Jan 24 20:41:05 2003
@@ -28,7 +28,6 @@
#include <asm/bitops.h>
#include <asm/errno.h>
#include <asm/page.h>
-#include <asm/pal.h>
#include <asm/perfmon.h>
#include <asm/processor.h>
#include <asm/signal.h>
@@ -56,8 +55,8 @@
/*
* Reset register flags
*/
-#define PFM_RELOAD_LONG_RESET 1
-#define PFM_RELOAD_SHORT_RESET 2
+#define PFM_PMD_LONG_RESET 1
+#define PFM_PMD_SHORT_RESET 2
/*
* Misc macros and definitions
@@ -83,8 +82,10 @@
#define PFM_REG_CONFIG (0x4<<4|PFM_REG_IMPL) /* refine configuration */
#define PFM_REG_BUFFER (0x5<<4|PFM_REG_IMPL) /* PMD used as buffer */
+#define PMC_IS_LAST(i) (pmu_conf.pmc_desc[i].type & PFM_REG_END)
+#define PMD_IS_LAST(i) (pmu_conf.pmd_desc[i].type & PFM_REG_END)
-#define PFM_IS_DISABLED() pmu_conf.pfm_is_disabled
+#define PFM_IS_DISABLED() pmu_conf.disabled
#define PMC_OVFL_NOTIFY(ctx, i) ((ctx)->ctx_soft_pmds[i].flags & PFM_REGFL_OVFL_NOTIFY)
#define PFM_FL_INHERIT_MASK (PFM_FL_INHERIT_NONE|PFM_FL_INHERIT_ONCE|PFM_FL_INHERIT_ALL)
@@ -102,7 +103,6 @@
#define PMD_PMD_DEP(i) pmu_conf.pmd_desc[i].dep_pmd[0]
#define PMC_PMD_DEP(i) pmu_conf.pmc_desc[i].dep_pmd[0]
-
/* k assume unsigned */
#define IBR_IS_IMPL(k) (k<pmu_conf.num_ibrs)
#define DBR_IS_IMPL(k) (k<pmu_conf.num_dbrs)
@@ -131,6 +131,9 @@
#define PFM_REG_RETFLAG_SET(flags, val) do { flags &= ~PFM_REG_RETFL_MASK; flags |= (val); } while(0)
+#define PFM_CPUINFO_CLEAR(v) __get_cpu_var(pfm_syst_info) &= ~(v)
+#define PFM_CPUINFO_SET(v) __get_cpu_var(pfm_syst_info) |= (v)
+
#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
#else
@@ -211,7 +214,7 @@
u64 reset_pmds[4]; /* which other pmds to reset when this counter overflows */
u64 seed; /* seed for random-number generator */
u64 mask; /* mask for random-number generator */
- int flags; /* notify/do not notify */
+ unsigned int flags; /* notify/do not notify */
} pfm_counter_t;
/*
@@ -226,7 +229,8 @@
unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */
unsigned int protected:1; /* allow access to creator of context only */
unsigned int using_dbreg:1; /* using range restrictions (debug registers) */
- unsigned int reserved:24;
+ unsigned int excl_idle:1; /* exclude idle task in system wide session */
+ unsigned int reserved:23;
} pfm_context_flags_t;
/*
@@ -261,7 +265,7 @@
u64 ctx_saved_psr; /* copy of psr used for lazy ctxsw */
unsigned long ctx_saved_cpus_allowed; /* copy of the task cpus_allowed (system wide) */
- unsigned long ctx_cpu; /* cpu to which perfmon is applied (system wide) */
+ unsigned int ctx_cpu; /* CPU used by system wide session */
atomic_t ctx_saving_in_progress; /* flag indicating actual save in progress */
atomic_t ctx_is_busy; /* context accessed by overflow handler */
@@ -274,6 +278,7 @@
#define ctx_fl_frozen ctx_flags.frozen
#define ctx_fl_protected ctx_flags.protected
#define ctx_fl_using_dbreg ctx_flags.using_dbreg
+#define ctx_fl_excl_idle ctx_flags.excl_idle
/*
* global information about all sessions
@@ -282,10 +287,10 @@
typedef struct {
spinlock_t pfs_lock; /* lock the structure */
- unsigned long pfs_task_sessions; /* number of per task sessions */
- unsigned long pfs_sys_sessions; /* number of per system wide sessions */
- unsigned long pfs_sys_use_dbregs; /* incremented when a system wide session uses debug regs */
- unsigned long pfs_ptrace_use_dbregs; /* incremented when a process uses debug regs */
+ unsigned int pfs_task_sessions; /* number of per task sessions */
+ unsigned int pfs_sys_sessions; /* number of per system wide sessions */
+ unsigned int pfs_sys_use_dbregs; /* incremented when a system wide session uses debug regs */
+ unsigned int pfs_ptrace_use_dbregs; /* incremented when a process uses debug regs */
struct task_struct *pfs_sys_session[NR_CPUS]; /* point to task owning a system-wide session */
} pfm_session_t;
@@ -313,23 +318,22 @@
/*
* This structure is initialized at boot time and contains
- * a description of the PMU main characteristic as indicated
- * by PAL along with a list of inter-registers dependencies and configurations.
+ * a description of the PMU main characteristics.
*/
typedef struct {
- unsigned long pfm_is_disabled; /* indicates if perfmon is working properly */
- unsigned long perf_ovfl_val; /* overflow value for generic counters */
- unsigned long max_counters; /* upper limit on counter pair (PMC/PMD) */
- unsigned long num_pmcs ; /* highest PMC implemented (may have holes) */
- unsigned long num_pmds; /* highest PMD implemented (may have holes) */
- unsigned long impl_regs[16]; /* buffer used to hold implememted PMC/PMD mask */
- unsigned long num_ibrs; /* number of instruction debug registers */
- unsigned long num_dbrs; /* number of data debug registers */
- pfm_reg_desc_t *pmc_desc; /* detailed PMC register descriptions */
- pfm_reg_desc_t *pmd_desc; /* detailed PMD register descriptions */
+ unsigned int disabled; /* indicates if perfmon is working properly */
+ unsigned long ovfl_val; /* overflow value for generic counters */
+ unsigned long impl_pmcs[4]; /* bitmask of implemented PMCS */
+ unsigned long impl_pmds[4]; /* bitmask of implemented PMDS */
+ unsigned int num_pmcs; /* number of implemented PMCS */
+ unsigned int num_pmds; /* number of implemented PMDS */
+ unsigned int num_ibrs; /* number of implemented IBRS */
+ unsigned int num_dbrs; /* number of implemented DBRS */
+ unsigned int num_counters; /* number of PMD/PMC counters */
+ pfm_reg_desc_t *pmc_desc; /* detailed PMC register dependencies descriptions */
+ pfm_reg_desc_t *pmd_desc; /* detailed PMD register dependencies descriptions */
} pmu_config_t;
-
/*
* structure used to pass argument to/from remote CPU
* using IPI to check and possibly save the PMU context on SMP systems.
@@ -389,13 +393,12 @@
/*
* perfmon internal variables
*/
-static pmu_config_t pmu_conf; /* PMU configuration */
static pfm_session_t pfm_sessions; /* global sessions information */
static struct proc_dir_entry *perfmon_dir; /* for debug only */
static pfm_stats_t pfm_stats[NR_CPUS];
+static pfm_intr_handler_desc_t *pfm_alternate_intr_handler;
-DEFINE_PER_CPU(int, pfm_syst_wide);
-static DEFINE_PER_CPU(int, pfm_dcr_pp);
+DEFINE_PER_CPU(unsigned long, pfm_syst_info);
/* sysctl() controls */
static pfm_sysctl_t pfm_sysctl;
@@ -449,42 +452,62 @@
#include "perfmon_generic.h"
#endif
+static inline void
+pfm_clear_psr_pp(void)
+{
+ __asm__ __volatile__ ("rsm psr.pp;; srlz.i;;"::: "memory");
+}
+
+static inline void
+pfm_set_psr_pp(void)
+{
+ __asm__ __volatile__ ("ssm psr.pp;; srlz.i;;"::: "memory");
+}
+
+static inline void
+pfm_clear_psr_up(void)
+{
+ __asm__ __volatile__ ("rum psr.up;; srlz.i;;"::: "memory");
+}
+
+static inline void
+pfm_set_psr_up(void)
+{
+ __asm__ __volatile__ ("sum psr.up;; srlz.i;;"::: "memory");
+}
+
+static inline unsigned long
+pfm_get_psr(void)
+{
+ unsigned long tmp;
+ __asm__ __volatile__ ("mov %0=psr;;": "=r"(tmp) :: "memory");
+ return tmp;
+}
+
+static inline void
+pfm_set_psr_l(unsigned long val)
+{
+ __asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(val): "memory");
+}
+
+
static inline unsigned long
pfm_read_soft_counter(pfm_context_t *ctx, int i)
{
- return ctx->ctx_soft_pmds[i].val + (ia64_get_pmd(i) & pmu_conf.perf_ovfl_val);
+ return ctx->ctx_soft_pmds[i].val + (ia64_get_pmd(i) & pmu_conf.ovfl_val);
}
static inline void
pfm_write_soft_counter(pfm_context_t *ctx, int i, unsigned long val)
{
- ctx->ctx_soft_pmds[i].val = val & ~pmu_conf.perf_ovfl_val;
+ ctx->ctx_soft_pmds[i].val = val & ~pmu_conf.ovfl_val;
/*
* writing to unimplemented part is ignored, so we do not need to
* mask off top part
*/
- ia64_set_pmd(i, val & pmu_conf.perf_ovfl_val);
-}
-
-/*
- * finds the number of PM(C|D) registers given
- * the bitvector returned by PAL
- */
-static unsigned long __init
-find_num_pm_regs(long *buffer)
-{
- int i=3; /* 4 words/per bitvector */
-
- /* start from the most significant word */
- while (i>=0 && buffer[i] == 0 ) i--;
- if (i< 0) {
- printk(KERN_ERR "perfmon: No bit set in pm_buffer\n");
- return 0;
- }
- return 1+ ia64_fls(buffer[i]) + 64 * i;
+ ia64_set_pmd(i, val & pmu_conf.ovfl_val);
}
-
/*
* Generates a unique (per CPU) timestamp
*/
@@ -875,6 +898,120 @@
return -ENOMEM;
}
+static int
+pfm_reserve_session(struct task_struct *task, int is_syswide, unsigned long cpu_mask)
+{
+ unsigned long m, undo_mask;
+ unsigned int n, i;
+
+ /*
+ * validity checks on cpu_mask have been done upstream
+ */
+ LOCK_PFS();
+
+ if (is_syswide) {
+ /*
+ * cannot mix system wide and per-task sessions
+ */
+ if (pfm_sessions.pfs_task_sessions > 0UL) {
+ DBprintk(("system wide not possible, %u conflicting task_sessions\n",
+ pfm_sessions.pfs_task_sessions));
+ goto abort;
+ }
+
+ m = cpu_mask; undo_mask = 0UL; n = 0;
+ DBprintk(("cpu_mask=0x%lx\n", cpu_mask));
+ for(i=0; m; i++, m>>=1) {
+
+ if ((m & 0x1) == 0UL) continue;
+
+ if (pfm_sessions.pfs_sys_session[i]) goto undo;
+
+ DBprintk(("reserving CPU%d currently on CPU%d\n", i, smp_processor_id()));
+
+ pfm_sessions.pfs_sys_session[i] = task;
+ undo_mask |= 1UL << i;
+ n++;
+ }
+ pfm_sessions.pfs_sys_sessions += n;
+ } else {
+ if (pfm_sessions.pfs_sys_sessions) goto abort;
+ pfm_sessions.pfs_task_sessions++;
+ }
+ DBprintk(("task_sessions=%u sys_session[%d]=%d",
+ pfm_sessions.pfs_task_sessions,
+ smp_processor_id(), pfm_sessions.pfs_sys_session[smp_processor_id()] ? 1 : 0));
+ UNLOCK_PFS();
+ return 0;
+undo:
+ DBprintk(("system wide not possible, conflicting session [%d] on CPU%d\n",
+ pfm_sessions.pfs_sys_session[i]->pid, i));
+
+ for(i=0; undo_mask; i++, undo_mask >>=1) {
+ pfm_sessions.pfs_sys_session[i] = NULL;
+ }
+abort:
+ UNLOCK_PFS();
+
+ return -EBUSY;
+
+}
+
+static int
+pfm_unreserve_session(struct task_struct *task, int is_syswide, unsigned long cpu_mask)
+{
+ pfm_context_t *ctx;
+ unsigned long m;
+ unsigned int n, i;
+
+ ctx = task ? task->thread.pfm_context : NULL;
+
+ /*
+ * validity checks on cpu_mask have been done upstream
+ */
+ LOCK_PFS();
+
+ DBprintk(("[%d] sys_sessions=%u task_sessions=%u dbregs=%u syswide=%d cpu_mask=0x%lx\n",
+ task->pid,
+ pfm_sessions.pfs_sys_sessions,
+ pfm_sessions.pfs_task_sessions,
+ pfm_sessions.pfs_sys_use_dbregs,
+ is_syswide,
+ cpu_mask));
+
+
+ if (is_syswide) {
+ m = cpu_mask; n = 0;
+ for(i=0; m; i++, m>>=1) {
+ if ((m & 0x1) == 0UL) continue;
+ pfm_sessions.pfs_sys_session[i] = NULL;
+ n++;
+ }
+ /*
+ * would not work with perfmon+more than one bit in cpu_mask
+ */
+ if (ctx && ctx->ctx_fl_using_dbreg) {
+ if (pfm_sessions.pfs_sys_use_dbregs == 0) {
+ printk("perfmon: invalid release for [%d] sys_use_dbregs=0\n", task->pid);
+ } else {
+ pfm_sessions.pfs_sys_use_dbregs--;
+ }
+ }
+ pfm_sessions.pfs_sys_sessions -= n;
+
+ DBprintk(("CPU%d sys_sessions=%u\n",
+ smp_processor_id(), pfm_sessions.pfs_sys_sessions));
+ } else {
+ pfm_sessions.pfs_task_sessions--;
+ DBprintk(("[%d] task_sessions=%u\n",
+ task->pid, pfm_sessions.pfs_task_sessions));
+ }
+
+ UNLOCK_PFS();
+
+ return 0;
+}
+
/*
* XXX: do something better here
*/
@@ -891,6 +1028,7 @@
static int
pfx_is_sane(struct task_struct *task, pfarg_context_t *pfx)
{
+ unsigned long smpl_pmds = pfx->ctx_smpl_regs[0];
int ctx_flags;
int cpu;
@@ -957,6 +1095,11 @@
}
#endif
}
+ /* verify validity of smpl_regs */
+ if ((smpl_pmds & pmu_conf.impl_pmds[0]) != smpl_pmds) {
+ DBprintk(("invalid smpl_regs 0x%lx\n", smpl_pmds));
+ return -EINVAL;
+ }
/* probably more to add here */
return 0;
@@ -968,7 +1111,7 @@
{
pfarg_context_t tmp;
void *uaddr = NULL;
- int ret, cpu = 0;
+ int ret;
int ctx_flags;
pid_t notify_pid;
@@ -987,40 +1130,8 @@
ctx_flags = tmp.ctx_flags;
- ret = -EBUSY;
-
- LOCK_PFS();
-
- if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
-
- /* at this point, we know there is at least one bit set */
- cpu = ffz(~tmp.ctx_cpu_mask);
-
- DBprintk(("requesting CPU%d currently on CPU%d\n",cpu, smp_processor_id()));
-
- if (pfm_sessions.pfs_task_sessions > 0) {
- DBprintk(("system wide not possible, task_sessions=%ld\n", pfm_sessions.pfs_task_sessions));
- goto abort;
- }
-
- if (pfm_sessions.pfs_sys_session[cpu]) {
- DBprintk(("system wide not possible, conflicting session [%d] on CPU%d\n",pfm_sessions.pfs_sys_session[cpu]->pid, cpu));
- goto abort;
- }
- pfm_sessions.pfs_sys_session[cpu] = task;
- /*
- * count the number of system wide sessions
- */
- pfm_sessions.pfs_sys_sessions++;
-
- } else if (pfm_sessions.pfs_sys_sessions == 0) {
- pfm_sessions.pfs_task_sessions++;
- } else {
- /* no per-process monitoring while there is a system wide session */
- goto abort;
- }
-
- UNLOCK_PFS();
+ ret = pfm_reserve_session(task, ctx_flags & PFM_FL_SYSTEM_WIDE, tmp.ctx_cpu_mask);
+ if (ret) goto abort;
ret = -ENOMEM;
@@ -1103,6 +1214,7 @@
ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
ctx->ctx_fl_block = (ctx_flags & PFM_FL_NOTIFY_BLOCK) ? 1 : 0;
ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0;
+ ctx->ctx_fl_excl_idle = (ctx_flags & PFM_FL_EXCL_IDLE) ? 1: 0;
ctx->ctx_fl_frozen = 0;
/*
* setting this flag to 0 here means, that the creator or the task that the
@@ -1113,7 +1225,7 @@
ctx->ctx_fl_protected = 0;
/* for system wide mode only (only 1 bit set) */
- ctx->ctx_cpu = cpu;
+ ctx->ctx_cpu = ffz(~tmp.ctx_cpu_mask);
atomic_set(&ctx->ctx_last_cpu,-1); /* SMP only, means no CPU */
@@ -1131,9 +1243,9 @@
DBprintk(("context=%p, pid=%d notify_task=%p\n",
(void *)ctx, task->pid, ctx->ctx_notify_task));
- DBprintk(("context=%p, pid=%d flags=0x%x inherit=%d block=%d system=%d\n",
+ DBprintk(("context=%p, pid=%d flags=0x%x inherit=%d block=%d system=%d excl_idle=%d\n",
(void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit,
- ctx->ctx_fl_block, ctx->ctx_fl_system));
+ ctx->ctx_fl_block, ctx->ctx_fl_system, ctx->ctx_fl_excl_idle));
/*
* when no notification is required, we can make this visible at the last moment
@@ -1146,8 +1258,8 @@
*/
if (ctx->ctx_fl_system) {
ctx->ctx_saved_cpus_allowed = task->cpus_allowed;
- set_cpus_allowed(task, 1UL << cpu);
- DBprintk(("[%d] rescheduled allowed=0x%lx\n", task->pid,task->cpus_allowed));
+ set_cpus_allowed(task, tmp.ctx_cpu_mask);
+ DBprintk(("[%d] rescheduled allowed=0x%lx\n", task->pid, task->cpus_allowed));
}
return 0;
@@ -1155,20 +1267,8 @@
buffer_error:
pfm_context_free(ctx);
error:
- /*
- * undo session reservation
- */
- LOCK_PFS();
-
- if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
- pfm_sessions.pfs_sys_session[cpu] = NULL;
- pfm_sessions.pfs_sys_sessions--;
- } else {
- pfm_sessions.pfs_task_sessions--;
- }
+ pfm_unreserve_session(task, ctx_flags & PFM_FL_SYSTEM_WIDE , tmp.ctx_cpu_mask);
abort:
- UNLOCK_PFS();
-
/* make sure we don't leave anything behind */
task->thread.pfm_context = NULL;
@@ -1200,9 +1300,7 @@
unsigned long mask = ovfl_regs[0];
unsigned long reset_others = 0UL;
unsigned long val;
- int i, is_long_reset = (flag & PFM_RELOAD_LONG_RESET);
-
- DBprintk(("masks=0x%lx\n", mask));
+ int i, is_long_reset = (flag == PFM_PMD_LONG_RESET);
/*
* now restore reset value on sampling overflowed counters
@@ -1213,7 +1311,7 @@
val = pfm_new_counter_value(ctx->ctx_soft_pmds + i, is_long_reset);
reset_others |= ctx->ctx_soft_pmds[i].reset_pmds[0];
- DBprintk(("[%d] %s reset soft_pmd[%d]=%lx\n", current->pid,
+ DBprintk_ovfl(("[%d] %s reset soft_pmd[%d]=%lx\n", current->pid,
is_long_reset ? "long" : "short", i, val));
/* upper part is ignored on rval */
@@ -1235,7 +1333,7 @@
} else {
ia64_set_pmd(i, val);
}
- DBprintk(("[%d] %s reset_others pmd[%d]=%lx\n", current->pid,
+ DBprintk_ovfl(("[%d] %s reset_others pmd[%d]=%lx\n", current->pid,
is_long_reset ? "long" : "short", i, val));
}
ia64_srlz_d();
@@ -1246,7 +1344,7 @@
{
struct thread_struct *th = &task->thread;
pfarg_reg_t tmp, *req = (pfarg_reg_t *)arg;
- unsigned long value;
+ unsigned long value, reset_pmds;
unsigned int cnum, reg_flags, flags;
int i;
int ret = -EINVAL;
@@ -1262,10 +1360,11 @@
if (__copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
- cnum = tmp.reg_num;
- reg_flags = tmp.reg_flags;
- value = tmp.reg_value;
- flags = 0;
+ cnum = tmp.reg_num;
+ reg_flags = tmp.reg_flags;
+ value = tmp.reg_value;
+ reset_pmds = tmp.reg_reset_pmds[0];
+ flags = 0;
/*
* we reject all non implemented PMC as well
@@ -1283,6 +1382,8 @@
* any other configuration is rejected.
*/
if (PMC_IS_MONITOR(cnum) || PMC_IS_COUNTING(cnum)) {
+ DBprintk(("pmc[%u].pm=%ld\n", cnum, PMC_PM(cnum, value)));
+
if (ctx->ctx_fl_system ^ PMC_PM(cnum, value)) {
DBprintk(("pmc_pm=%ld fl_system=%d\n", PMC_PM(cnum, value), ctx->ctx_fl_system));
goto error;
@@ -1310,6 +1411,11 @@
if (reg_flags & PFM_REGFL_RANDOM) flags |= PFM_REGFL_RANDOM;
+ /* verify validity of reset_pmds */
+ if ((reset_pmds & pmu_conf.impl_pmds[0]) != reset_pmds) {
+ DBprintk(("invalid reset_pmds 0x%lx for pmc%u\n", reset_pmds, cnum));
+ goto error;
+ }
} else if (reg_flags & (PFM_REGFL_OVFL_NOTIFY|PFM_REGFL_RANDOM)) {
DBprintk(("cannot set ovfl_notify or random on pmc%u\n", cnum));
goto error;
@@ -1348,13 +1454,10 @@
ctx->ctx_soft_pmds[cnum].flags = flags;
if (PMC_IS_COUNTING(cnum)) {
- /*
- * copy reset vector
- */
- ctx->ctx_soft_pmds[cnum].reset_pmds[0] = tmp.reg_reset_pmds[0];
- ctx->ctx_soft_pmds[cnum].reset_pmds[1] = tmp.reg_reset_pmds[1];
- ctx->ctx_soft_pmds[cnum].reset_pmds[2] = tmp.reg_reset_pmds[2];
- ctx->ctx_soft_pmds[cnum].reset_pmds[3] = tmp.reg_reset_pmds[3];
+ ctx->ctx_soft_pmds[cnum].reset_pmds[0] = reset_pmds;
+
+ /* mark all PMDS to be accessed as used */
+ CTX_USED_PMD(ctx, reset_pmds);
}
/*
@@ -1397,7 +1500,7 @@
unsigned long value, hw_value;
unsigned int cnum;
int i;
- int ret;
+ int ret = 0;
/* we don't quite support this right now */
if (task != current) return -EINVAL;
@@ -1448,9 +1551,9 @@
/* update virtualized (64bits) counter */
if (PMD_IS_COUNTING(cnum)) {
ctx->ctx_soft_pmds[cnum].lval = value;
- ctx->ctx_soft_pmds[cnum].val = value & ~pmu_conf.perf_ovfl_val;
+ ctx->ctx_soft_pmds[cnum].val = value & ~pmu_conf.ovfl_val;
- hw_value = value & pmu_conf.perf_ovfl_val;
+ hw_value = value & pmu_conf.ovfl_val;
ctx->ctx_soft_pmds[cnum].long_reset = tmp.reg_long_reset;
ctx->ctx_soft_pmds[cnum].short_reset = tmp.reg_short_reset;
@@ -1478,7 +1581,7 @@
ctx->ctx_soft_pmds[cnum].val,
ctx->ctx_soft_pmds[cnum].short_reset,
ctx->ctx_soft_pmds[cnum].long_reset,
- ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val,
+ ia64_get_pmd(cnum) & pmu_conf.ovfl_val,
PMC_OVFL_NOTIFY(ctx, cnum) ? 'Y':'N',
ctx->ctx_used_pmds[0],
ctx->ctx_soft_pmds[cnum].reset_pmds[0]));
@@ -1504,15 +1607,18 @@
return ret;
}
-
static int
pfm_read_pmds(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs)
{
struct thread_struct *th = &task->thread;
- unsigned long val = 0UL;
+ unsigned long val, lval;
pfarg_reg_t *req = (pfarg_reg_t *)arg;
unsigned int cnum, reg_flags = 0;
- int i, ret = -EINVAL;
+ int i, ret = 0;
+
+#if __GNUC__ < 3
+ int foo;
+#endif
if (!CTX_IS_ENABLED(ctx)) return -EINVAL;
@@ -1528,9 +1634,16 @@
DBprintk(("ctx_last_cpu=%d for [%d]\n", atomic_read(&ctx->ctx_last_cpu), task->pid));
for (i = 0; i < count; i++, req++) {
-
+#if __GNUC__ < 3
+ foo = __get_user(cnum, &req->reg_num);
+ if (foo) return -EFAULT;
+ foo = __get_user(reg_flags, &req->reg_flags);
+ if (foo) return -EFAULT;
+#else
if (__get_user(cnum, &req->reg_num)) return -EFAULT;
if (__get_user(reg_flags, &req->reg_flags)) return -EFAULT;
+#endif
+ lval = 0UL;
if (!PMD_IS_IMPL(cnum)) goto abort_mission;
/*
@@ -1578,9 +1691,10 @@
/*
* XXX: need to check for overflow
*/
-
- val &= pmu_conf.perf_ovfl_val;
+ val &= pmu_conf.ovfl_val;
val += ctx->ctx_soft_pmds[cnum].val;
+
+ lval = ctx->ctx_soft_pmds[cnum].lval;
}
/*
@@ -1592,10 +1706,11 @@
val = v;
}
- PFM_REG_RETFLAG_SET(reg_flags, 0);
+ PFM_REG_RETFLAG_SET(reg_flags, ret);
DBprintk(("read pmd[%u] ret=%d value=0x%lx pmc=0x%lx\n",
- cnum, ret, val, ia64_get_pmc(cnum)));
+ cnum, ret, val, ia64_get_pmc(cnum)));
+
/*
* update register return value, abort all if problem during copy.
* we only modify the reg_flags field. no check mode is fine because
@@ -1604,16 +1719,19 @@
if (__put_user(cnum, &req->reg_num)) return -EFAULT;
if (__put_user(val, &req->reg_value)) return -EFAULT;
if (__put_user(reg_flags, &req->reg_flags)) return -EFAULT;
+ if (__put_user(lval, &req->reg_last_reset_value)) return -EFAULT;
}
return 0;
abort_mission:
PFM_REG_RETFLAG_SET(reg_flags, PFM_REG_RETFL_EINVAL);
+ /*
+ * XXX: if this fails, we stick with the original failure, flag not updated!
+ */
+ __put_user(reg_flags, &req->reg_flags);
- if (__put_user(reg_flags, &req->reg_flags)) ret = -EFAULT;
-
- return ret;
+ return -EINVAL;
}
#ifdef PFM_PMU_USES_DBR
@@ -1655,7 +1773,7 @@
else
pfm_sessions.pfs_ptrace_use_dbregs++;
- DBprintk(("ptrace_use_dbregs=%lu sys_use_dbregs=%lu by [%d] ret = %d\n",
+ DBprintk(("ptrace_use_dbregs=%u sys_use_dbregs=%u by [%d] ret = %d\n",
pfm_sessions.pfs_ptrace_use_dbregs,
pfm_sessions.pfs_sys_use_dbregs,
task->pid, ret));
@@ -1673,7 +1791,6 @@
* performance monitoring, so we only decrement the number
* of "ptraced" debug register users to keep the count up to date
*/
-
int
pfm_release_debug_registers(struct task_struct *task)
{
@@ -1702,6 +1819,7 @@
{
return 0;
}
+
int
pfm_release_debug_registers(struct task_struct *task)
{
@@ -1721,9 +1839,12 @@
if (!CTX_IS_ENABLED(ctx)) return -EINVAL;
if (task == current) {
- DBprintk(("restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
+ DBprintk(("restarting self %d frozen=%d ovfl_regs=0x%lx\n",
+ task->pid,
+ ctx->ctx_fl_frozen,
+ ctx->ctx_ovfl_regs[0]));
- pfm_reset_regs(ctx, ctx->ctx_ovfl_regs, PFM_RELOAD_LONG_RESET);
+ pfm_reset_regs(ctx, ctx->ctx_ovfl_regs, PFM_PMD_LONG_RESET);
ctx->ctx_ovfl_regs[0] = 0UL;
@@ -1806,18 +1927,18 @@
ia64_set_dcr(ia64_get_dcr() & ~IA64_DCR_PP);
/* stop monitoring */
- __asm__ __volatile__ ("rsm psr.pp;;"::: "memory");
+ pfm_clear_psr_pp();
ia64_srlz_i();
- __get_cpu_var(pfm_dcr_pp) = 0;
+ PFM_CPUINFO_CLEAR(PFM_CPUINFO_DCR_PP);
ia64_psr(regs)->pp = 0;
} else {
/* stop monitoring */
- __asm__ __volatile__ ("rum psr.up;;"::: "memory");
+ pfm_clear_psr_up();
ia64_srlz_i();
@@ -1979,14 +2100,9 @@
int i, ret = 0;
/*
- * for range restriction: psr.db must be cleared or the
- * the PMU will ignore the debug registers.
- *
- * XXX: may need more in system wide mode,
- * no task can have this bit set?
+ * we do not need to check for ipsr.db because we do clear ibr.x, dbr.r, and dbr.w
+ * ensuring that no real breakpoint can be installed via this call.
*/
- if (ia64_psr(regs)->db == 1) return -EINVAL;
-
first_time = ctx->ctx_fl_using_dbreg == 0;
@@ -2055,7 +2171,6 @@
* Now install the values into the registers
*/
for (i = 0; i < count; i++, req++) {
-
if (__copy_from_user(&tmp, req, sizeof(tmp))) goto abort_mission;
@@ -2145,7 +2260,7 @@
* XXX: for now we can only come here on EINVAL
*/
PFM_REG_RETFLAG_SET(tmp.dbreg_flags, PFM_REG_RETFL_EINVAL);
- __put_user(tmp.dbreg_flags, &req->dbreg_flags);
+ if (__put_user(tmp.dbreg_flags, &req->dbreg_flags)) ret = -EFAULT;
}
return ret;
}
@@ -2215,13 +2330,13 @@
if (ctx->ctx_fl_system) {
- __get_cpu_var(pfm_dcr_pp) = 1;
+ PFM_CPUINFO_SET(PFM_CPUINFO_DCR_PP);
/* set user level psr.pp */
ia64_psr(regs)->pp = 1;
/* start monitoring at kernel level */
- __asm__ __volatile__ ("ssm psr.pp;;"::: "memory");
+ pfm_set_psr_pp();
/* enable dcr pp */
ia64_set_dcr(ia64_get_dcr()|IA64_DCR_PP);
@@ -2237,7 +2352,7 @@
ia64_psr(regs)->up = 1;
/* start monitoring at kernel level */
- __asm__ __volatile__ ("sum psr.up;;"::: "memory");
+ pfm_set_psr_up();
ia64_srlz_i();
}
@@ -2264,11 +2379,12 @@
ia64_psr(regs)->up = 0; /* just to make sure! */
/* make sure monitoring is stopped */
- __asm__ __volatile__ ("rsm psr.pp;;"::: "memory");
+ pfm_clear_psr_pp();
ia64_srlz_i();
- __get_cpu_var(pfm_dcr_pp) = 0;
- __get_cpu_var(pfm_syst_wide) = 1;
+ PFM_CPUINFO_CLEAR(PFM_CPUINFO_DCR_PP);
+ PFM_CPUINFO_SET(PFM_CPUINFO_SYST_WIDE);
+ if (ctx->ctx_fl_excl_idle) PFM_CPUINFO_SET(PFM_CPUINFO_EXCL_IDLE);
} else {
/*
* needed in case the task was a passive task during
@@ -2279,7 +2395,7 @@
ia64_psr(regs)->up = 0;
/* make sure monitoring is stopped */
- __asm__ __volatile__ ("rum psr.up;;"::: "memory");
+ pfm_clear_psr_up();
ia64_srlz_i();
DBprintk(("clearing psr.sp for [%d]\n", current->pid));
@@ -2331,6 +2447,7 @@
abort_mission:
PFM_REG_RETFLAG_SET(tmp.reg_flags, PFM_REG_RETFL_EINVAL);
if (__copy_to_user(req, &tmp, sizeof(tmp))) ret = -EFAULT;
+
return ret;
}
@@ -2532,7 +2649,7 @@
* use the local reference
*/
- pfm_reset_regs(ctx, ctx->ctx_ovfl_regs, PFM_RELOAD_LONG_RESET);
+ pfm_reset_regs(ctx, ctx->ctx_ovfl_regs, PFM_PMD_LONG_RESET);
ctx->ctx_ovfl_regs[0] = 0UL;
@@ -2591,19 +2708,11 @@
h->pid = current->pid;
h->cpu = smp_processor_id();
h->last_reset_value = ovfl_mask ? ctx->ctx_soft_pmds[ffz(~ovfl_mask)].lval : 0UL;
- /*
- * where did the fault happen
- */
- h->ip = regs ? regs->cr_iip | ((regs->cr_ipsr >> 41) & 0x3): 0x0UL;
-
- /*
- * which registers overflowed
- */
- h->regs = ovfl_mask;
+ h->ip = regs ? regs->cr_iip | ((regs->cr_ipsr >> 41) & 0x3): 0x0UL;
+ h->regs = ovfl_mask; /* which registers overflowed */
/* guaranteed to monotonically increase on each cpu */
h->stamp = pfm_get_stamp();
- h->period = 0UL; /* not yet used */
/* position for first pmd */
e = (unsigned long *)(h+1);
@@ -2724,7 +2833,7 @@
* pfm_read_pmds().
*/
old_val = ctx->ctx_soft_pmds[i].val;
- ctx->ctx_soft_pmds[i].val += 1 + pmu_conf.perf_ovfl_val;
+ ctx->ctx_soft_pmds[i].val += 1 + pmu_conf.ovfl_val;
/*
* check for overflow condition
@@ -2739,9 +2848,7 @@
}
DBprintk_ovfl(("soft_pmd[%d].val=0x%lx old_val=0x%lx pmd=0x%lx ovfl_pmds=0x%lx ovfl_notify=0x%lx\n",
i, ctx->ctx_soft_pmds[i].val, old_val,
- ia64_get_pmd(i) & pmu_conf.perf_ovfl_val, ovfl_pmds, ovfl_notify));
-
-
+ ia64_get_pmd(i) & pmu_conf.ovfl_val, ovfl_pmds, ovfl_notify));
}
/*
@@ -2776,7 +2883,7 @@
*/
if (ovfl_notify == 0UL) {
if (ovfl_pmds)
- pfm_reset_regs(ctx, &ovfl_pmds, PFM_RELOAD_SHORT_RESET);
+ pfm_reset_regs(ctx, &ovfl_pmds, PFM_PMD_SHORT_RESET);
return 0x0;
}
@@ -2924,7 +3031,7 @@
}
static void
-perfmon_interrupt (int irq, void *arg, struct pt_regs *regs)
+pfm_interrupt_handler(int irq, void *arg, struct pt_regs *regs)
{
u64 pmc0;
struct task_struct *task;
@@ -2932,6 +3039,14 @@
pfm_stats[smp_processor_id()].pfm_ovfl_intr_count++;
+ /*
+ * if an alternate handler is registered, just bypass the default one
+ */
+ if (pfm_alternate_intr_handler) {
+ (*pfm_alternate_intr_handler->handler)(irq, arg, regs);
+ return;
+ }
+
/*
* srlz.d done before arriving here
*
@@ -2994,14 +3109,13 @@
/* for debug only */
static int
-perfmon_proc_info(char *page)
+pfm_proc_info(char *page)
{
char *p = page;
int i;
- p += sprintf(p, "enabled : %s\n", pmu_conf.pfm_is_disabled ? "No": "Yes");
p += sprintf(p, "fastctxsw : %s\n", pfm_sysctl.fastctxsw > 0 ? "Yes": "No");
- p += sprintf(p, "ovfl_mask : 0x%lx\n", pmu_conf.perf_ovfl_val);
+ p += sprintf(p, "ovfl_mask : 0x%lx\n", pmu_conf.ovfl_val);
for(i=0; i < NR_CPUS; i++) {
if (cpu_is_online(i) == 0) continue;
@@ -3009,16 +3123,18 @@
p += sprintf(p, "CPU%-2d spurious intrs : %lu\n", i, pfm_stats[i].pfm_spurious_ovfl_intr_count);
p += sprintf(p, "CPU%-2d recorded samples : %lu\n", i, pfm_stats[i].pfm_recorded_samples_count);
p += sprintf(p, "CPU%-2d smpl buffer full : %lu\n", i, pfm_stats[i].pfm_full_smpl_buffer_count);
+ p += sprintf(p, "CPU%-2d syst_wide : %d\n", i, per_cpu(pfm_syst_info, i) & PFM_CPUINFO_SYST_WIDE ? 1 : 0);
+ p += sprintf(p, "CPU%-2d dcr_pp : %d\n", i, per_cpu(pfm_syst_info, i) & PFM_CPUINFO_DCR_PP ? 1 : 0);
+ p += sprintf(p, "CPU%-2d exclude idle : %d\n", i, per_cpu(pfm_syst_info, i) & PFM_CPUINFO_EXCL_IDLE ? 1 : 0);
p += sprintf(p, "CPU%-2d owner : %d\n", i, pmu_owners[i].owner ? pmu_owners[i].owner->pid: -1);
- p += sprintf(p, "CPU%-2d syst_wide : %d\n", i, per_cpu(pfm_syst_wide, i));
- p += sprintf(p, "CPU%-2d dcr_pp : %d\n", i, per_cpu(pfm_dcr_pp, i));
}
LOCK_PFS();
- p += sprintf(p, "proc_sessions : %lu\n"
- "sys_sessions : %lu\n"
- "sys_use_dbregs : %lu\n"
- "ptrace_use_dbregs : %lu\n",
+
+ p += sprintf(p, "proc_sessions : %u\n"
+ "sys_sessions : %u\n"
+ "sys_use_dbregs : %u\n"
+ "ptrace_use_dbregs : %u\n",
pfm_sessions.pfs_task_sessions,
pfm_sessions.pfs_sys_sessions,
pfm_sessions.pfs_sys_use_dbregs,
@@ -3033,7 +3149,7 @@
static int
perfmon_read_entry(char *page, char **start, off_t off, int count, int *eof, void *data)
{
- int len = perfmon_proc_info(page);
+ int len = pfm_proc_info(page);
if (len <= off+count) *eof = 1;
@@ -3046,17 +3162,57 @@
return len;
}
+/*
+ * we come here as soon as PFM_CPUINFO_SYST_WIDE is set. This happens
+ * during pfm_enable() hence before pfm_start(). We cannot assume monitoring
+ * is active or inactive based on mode. We must rely on the value in
+ * cpu_data(i)->pfm_syst_info
+ */
void
-pfm_syst_wide_update_task(struct task_struct *task, int mode)
+pfm_syst_wide_update_task(struct task_struct *task, unsigned long info, int is_ctxswin)
{
- struct pt_regs *regs = (struct pt_regs *)((unsigned long) task + IA64_STK_OFFSET);
+ struct pt_regs *regs;
+ unsigned long dcr;
+ unsigned long dcr_pp;
- regs--;
+ dcr_pp = info & PFM_CPUINFO_DCR_PP ? 1 : 0;
/*
- * propagate the value of the dcr_pp bit to the psr
+ * pid 0 is guaranteed to be the idle task. There is one such task with pid 0
+ * on every CPU, so we can rely on the pid to identify the idle task.
+ */
+ if ((info & PFM_CPUINFO_EXCL_IDLE) == 0 || task->pid) {
+ regs = (struct pt_regs *)((unsigned long) task + IA64_STK_OFFSET);
+ regs--;
+ ia64_psr(regs)->pp = is_ctxswin ? dcr_pp : 0;
+ return;
+ }
+ /*
+ * if monitoring has started
*/
- ia64_psr(regs)->pp = mode ? __get_cpu_var(pfm_dcr_pp) : 0;
+ if (dcr_pp) {
+ dcr = ia64_get_dcr();
+ /*
+ * context switching in?
+ */
+ if (is_ctxswin) {
+ /* mask monitoring for the idle task */
+ ia64_set_dcr(dcr & ~IA64_DCR_PP);
+ pfm_clear_psr_pp();
+ ia64_srlz_i();
+ return;
+ }
+ /*
+ * context switching out
+ * restore monitoring for next task
+ *
+ * Due to inlining this odd if-then-else construction generates
+ * better code.
+ */
+ ia64_set_dcr(dcr |IA64_DCR_PP);
+ pfm_set_psr_pp();
+ ia64_srlz_i();
+ }
}
void
@@ -3067,11 +3223,10 @@
ctx = task->thread.pfm_context;
-
/*
* save current PSR: needed because we modify it
*/
- __asm__ __volatile__ ("mov %0=psr;;": "=r"(psr) :: "memory");
+ psr = pfm_get_psr();
/*
* stop monitoring:
@@ -3369,7 +3524,7 @@
*/
mask = pfm_sysctl.fastctxsw || ctx->ctx_fl_protected ? ctx->ctx_used_pmds[0] : ctx->ctx_reload_pmds[0];
for (i=0; mask; i++, mask>>=1) {
- if (mask & 0x1) ia64_set_pmd(i, t->pmd[i] & pmu_conf.perf_ovfl_val);
+ if (mask & 0x1) ia64_set_pmd(i, t->pmd[i] & pmu_conf.ovfl_val);
}
/*
@@ -3419,7 +3574,7 @@
int i;
if (task != current) {
- printk("perfmon: invalid task in ia64_reset_pmu()\n");
+ printk("perfmon: invalid task in pfm_reset_pmu()\n");
return;
}
@@ -3428,6 +3583,7 @@
/*
* install reset values for PMC. We skip PMC0 (done above)
* XXX: good up to 64 PMCS
*/
for (i=1; (pmu_conf.pmc_desc[i].type & PFM_REG_END) == 0; i++) {
if ((pmu_conf.pmc_desc[i].type & PFM_REG_IMPL) == 0) continue;
@@ -3444,7 +3600,7 @@
/*
* clear reset values for PMD.
- * XXX: good up to 64 PMDS. Suppose that zero is a valid value.
+ * XXX: good up to 64 PMDS.
*/
for (i=0; (pmu_conf.pmd_desc[i].type & PFM_REG_END) == 0; i++) {
if ((pmu_conf.pmd_desc[i].type & PFM_REG_IMPL) == 0) continue;
@@ -3477,13 +3633,13 @@
*
* We never directly restore PMC0 so we do not include it in the mask.
*/
- ctx->ctx_reload_pmcs[0] = pmu_conf.impl_regs[0] & ~0x1;
+ ctx->ctx_reload_pmcs[0] = pmu_conf.impl_pmcs[0] & ~0x1;
/*
* We must include all the PMD in this mask to avoid picking
* up stale value and leak information, especially directly
* at the user level when psr.sp=0
*/
- ctx->ctx_reload_pmds[0] = pmu_conf.impl_regs[4];
+ ctx->ctx_reload_pmds[0] = pmu_conf.impl_pmds[0];
/*
* Keep track of the pmds we want to sample
@@ -3493,7 +3649,7 @@
*
* We ignore the unimplemented pmds specified by the user
*/
- ctx->ctx_used_pmds[0] = ctx->ctx_smpl_regs[0] & pmu_conf.impl_regs[4];
+ ctx->ctx_used_pmds[0] = ctx->ctx_smpl_regs[0];
ctx->ctx_used_pmcs[0] = 1; /* always save/restore PMC[0] */
/*
@@ -3547,16 +3703,17 @@
ia64_set_dcr(ia64_get_dcr() & ~IA64_DCR_PP);
/* stop monitoring */
- __asm__ __volatile__ ("rsm psr.pp;;"::: "memory");
+ pfm_clear_psr_pp();
ia64_srlz_i();
- __get_cpu_var(pfm_syst_wide) = 0;
- __get_cpu_var(pfm_dcr_pp) = 0;
+ PFM_CPUINFO_CLEAR(PFM_CPUINFO_SYST_WIDE);
+ PFM_CPUINFO_CLEAR(PFM_CPUINFO_DCR_PP);
+ PFM_CPUINFO_CLEAR(PFM_CPUINFO_EXCL_IDLE);
} else {
/* stop monitoring */
- __asm__ __volatile__ ("rum psr.up;;"::: "memory");
+ pfm_clear_psr_up();
ia64_srlz_i();
@@ -3622,10 +3779,14 @@
val = ia64_get_pmd(i);
if (PMD_IS_COUNTING(i)) {
- DBprintk(("[%d] pmd[%d] soft_pmd=0x%lx hw_pmd=0x%lx\n", task->pid, i, ctx->ctx_soft_pmds[i].val, val & pmu_conf.perf_ovfl_val));
+ DBprintk(("[%d] pmd[%d] soft_pmd=0x%lx hw_pmd=0x%lx\n",
+ task->pid,
+ i,
+ ctx->ctx_soft_pmds[i].val,
+ val & pmu_conf.ovfl_val));
/* collect latest results */
- ctx->ctx_soft_pmds[i].val += val & pmu_conf.perf_ovfl_val;
+ ctx->ctx_soft_pmds[i].val += val & pmu_conf.ovfl_val;
/*
* now everything is in ctx_soft_pmds[] and we need
@@ -3638,7 +3799,7 @@
* take care of overflow inline
*/
if (pmc0 & (1UL << i)) {
- ctx->ctx_soft_pmds[i].val += 1 + pmu_conf.perf_ovfl_val;
+ ctx->ctx_soft_pmds[i].val += 1 + pmu_conf.ovfl_val;
DBprintk(("[%d] pmd[%d] overflowed soft_pmd=0x%lx\n",
task->pid, i, ctx->ctx_soft_pmds[i].val));
}
@@ -3771,8 +3932,8 @@
m = nctx->ctx_used_pmds[0] >> PMU_FIRST_COUNTER;
for(i = PMU_FIRST_COUNTER ; m ; m>>=1, i++) {
if ((m & 0x1) && pmu_conf.pmd_desc[i].type == PFM_REG_COUNTING) {
- nctx->ctx_soft_pmds[i].val = nctx->ctx_soft_pmds[i].lval & ~pmu_conf.perf_ovfl_val;
- thread->pmd[i] = nctx->ctx_soft_pmds[i].lval & pmu_conf.perf_ovfl_val;
+ nctx->ctx_soft_pmds[i].val = nctx->ctx_soft_pmds[i].lval & ~pmu_conf.ovfl_val;
+ thread->pmd[i] = nctx->ctx_soft_pmds[i].lval & pmu_conf.ovfl_val;
} else {
thread->pmd[i] = 0UL; /* reset to initial state */
}
@@ -3939,30 +4100,14 @@
UNLOCK_CTX(ctx);
- LOCK_PFS();
+ pfm_unreserve_session(task, ctx->ctx_fl_system, 1UL << ctx->ctx_cpu);
if (ctx->ctx_fl_system) {
-
- pfm_sessions.pfs_sys_session[ctx->ctx_cpu] = NULL;
- pfm_sessions.pfs_sys_sessions--;
- DBprintk(("freeing syswide session on CPU%ld\n", ctx->ctx_cpu));
-
- /* update perfmon debug register usage counter */
- if (ctx->ctx_fl_using_dbreg) {
- if (pfm_sessions.pfs_sys_use_dbregs == 0) {
- printk("perfmon: invalid release for [%d] sys_use_dbregs=0\n", task->pid);
- } else
- pfm_sessions.pfs_sys_use_dbregs--;
- }
-
/*
* remove any CPU pinning
*/
set_cpus_allowed(task, ctx->ctx_saved_cpus_allowed);
- } else {
- pfm_sessions.pfs_task_sessions--;
- }
- UNLOCK_PFS();
+ }
pfm_context_free(ctx);
/*
@@ -3990,8 +4135,7 @@
* Walk through the list and free the sampling buffer and psb
*/
while (psb) {
- DBprintk(("[%d] freeing smpl @%p size %ld\n",
- current->pid, psb->psb_hdr, psb->psb_size));
+ DBprintk(("[%d] freeing smpl @%p size %ld\n", current->pid, psb->psb_hdr, psb->psb_size));
pfm_rvfree(psb->psb_hdr, psb->psb_size);
tmp = psb->psb_next;
@@ -4095,16 +4239,16 @@
if (ctx && ctx->ctx_notify_task == task) {
DBprintk(("trying for notifier [%d] in [%d]\n", task->pid, p->pid));
/*
- * the spinlock is required to take care of a race condition with
- * the send_sig_info() call. We must make sure that either the
- * send_sig_info() completes using a valid task, or the
- * notify_task is cleared before the send_sig_info() can pick up a
- * stale value. Note that by the time this function is executed
- * the 'task' is already detached from the tasklist. The problem
- * is that the notifiers have a direct pointer to it. It is okay
- * to send a signal to a task in this stage, it simply will have
- * no effect. But it is better than sending to a completely
- * destroyed task or worse to a new task using the same
+ * the spinlock is required to take care of a race condition
+ * with the send_sig_info() call. We must make sure that
+ * either the send_sig_info() completes using a valid task,
+ * or the notify_task is cleared before the send_sig_info()
+ * can pick up a stale value. Note that by the time this
+ * function is executed the 'task' is already detached from the
+ * tasklist. The problem is that the notifiers have a direct
+ * pointer to it. It is okay to send a signal to a task in this
+ * stage, it simply will have no effect. But it is better than sending
+ * to a completely destroyed task or worse to a new task using the same
* task_struct address.
*/
LOCK_CTX(ctx);
@@ -4123,87 +4267,131 @@
}
static struct irqaction perfmon_irqaction = {
- .handler = perfmon_interrupt,
+ .handler = pfm_interrupt_handler,
.flags = SA_INTERRUPT,
.name = "perfmon"
};
+int
+pfm_install_alternate_syswide_subsystem(pfm_intr_handler_desc_t *hdl)
+{
+ int ret;
+
+ /* some sanity checks */
+ if (hdl == NULL || hdl->handler == NULL) return -EINVAL;
+
+ /* do the easy test first */
+ if (pfm_alternate_intr_handler) return -EBUSY;
+
+ /* reserve our session */
+ ret = pfm_reserve_session(NULL, 1, cpu_online_map);
+ if (ret) return ret;
+
+ if (pfm_alternate_intr_handler) {
+ printk("perfmon: install_alternate, intr_handler not NULL after reserve\n");
+ return -EINVAL;
+ }
+
+ pfm_alternate_intr_handler = hdl;
+
+ return 0;
+}
+
+int
+pfm_remove_alternate_syswide_subsystem(pfm_intr_handler_desc_t *hdl)
+{
+ if (hdl == NULL) return -EINVAL;
+
+ /* cannot remove someone else's handler! */
+ if (pfm_alternate_intr_handler != hdl) return -EINVAL;
+
+ pfm_alternate_intr_handler = NULL;
+
+ /*
+ * XXX: assume cpu_online_map has not changed since reservation
+ */
+ pfm_unreserve_session(NULL, 1, cpu_online_map);
+
+ return 0;
+}
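The install/remove pair above implements a simple "claim a single global slot" protocol: reserve the system-wide session first, then publish the handler pointer, and refuse removal unless the caller passes the same descriptor it installed. The following is a minimal user-space sketch of that pattern under stated assumptions: `intr_handler_desc_t`, `install_alternate_handler`, and `reserve_session` are illustrative stand-ins for the kernel's `pfm_intr_handler_desc_t`, `pfm_install_alternate_syswide_subsystem`, and `pfm_reserve_session`, and the reservation always succeeds here.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical mirror of the perfmon alternate-handler slot: one global
 * pointer, claimed exclusively, released only by its owner. */
typedef struct {
	int (*handler)(void);
} intr_handler_desc_t;

static intr_handler_desc_t *alternate_intr_handler;

/* Stand-in for pfm_reserve_session()/pfm_unreserve_session(): in the
 * kernel these arbitrate against per-task monitoring sessions; here
 * reservation trivially succeeds. */
static int reserve_session(void) { return 0; }
static void unreserve_session(void) { }

int install_alternate_handler(intr_handler_desc_t *hdl)
{
	/* sanity checks, as in the patch */
	if (hdl == NULL || hdl->handler == NULL)
		return -EINVAL;
	/* do the easy test first: slot already claimed? */
	if (alternate_intr_handler)
		return -EBUSY;
	if (reserve_session())
		return -EBUSY;
	alternate_intr_handler = hdl;
	return 0;
}

int remove_alternate_handler(intr_handler_desc_t *hdl)
{
	/* cannot remove a NULL or someone else's handler */
	if (hdl == NULL || alternate_intr_handler != hdl)
		return -EINVAL;
	alternate_intr_handler = NULL;
	unreserve_session();
	return 0;
}

/* trivial handler used for demonstration */
static int demo_handler(void) { return 0; }
```

A second subsystem trying to install while the slot is held gets `-EBUSY`, and only the descriptor that was installed can be removed; that is the whole ownership contract.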
/*
* perfmon initialization routine, called from the initcall() table
*/
int __init
-perfmon_init (void)
+pfm_init(void)
{
- pal_perf_mon_info_u_t pm_info;
- s64 status;
+ unsigned int n, n_counters, i;
- pmu_conf.pfm_is_disabled = 1;
+ pmu_conf.disabled = 1;
- printk("perfmon: version %u.%u (sampling format v%u.%u) IRQ %u\n",
+ printk("perfmon: version %u.%u IRQ %u\n",
PFM_VERSION_MAJ,
PFM_VERSION_MIN,
- PFM_SMPL_VERSION_MAJ,
- PFM_SMPL_VERSION_MIN,
IA64_PERFMON_VECTOR);
- if ((status=ia64_pal_perf_mon_info(pmu_conf.impl_regs, &pm_info)) != 0) {
- printk("perfmon: PAL call failed (%ld), perfmon disabled\n", status);
- return -1;
- }
-
- pmu_conf.perf_ovfl_val = (1UL << pm_info.pal_perf_mon_info_s.width) - 1;
/*
- * XXX: use the pfm_*_desc tables instead and simply verify with PAL
+ * compute the number of implemented PMD/PMC from the
+ * description tables
*/
- pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
- pmu_conf.num_pmcs = find_num_pm_regs(pmu_conf.impl_regs);
- pmu_conf.num_pmds = find_num_pm_regs(&pmu_conf.impl_regs[4]);
-
- printk("perfmon: %u bits counters\n", pm_info.pal_perf_mon_info_s.width);
-
- printk("perfmon: %lu PMC/PMD pairs, %lu PMCs, %lu PMDs\n",
- pmu_conf.max_counters, pmu_conf.num_pmcs, pmu_conf.num_pmds);
+ n = 0;
+ for (i=0; PMC_IS_LAST(i) == 0; i++) {
+ if (PMC_IS_IMPL(i) == 0) continue;
+ pmu_conf.impl_pmcs[i>>6] |= 1UL << (i&63);
+ n++;
+ }
+ pmu_conf.num_pmcs = n;
+
+ n = 0; n_counters = 0;
+ for (i=0; PMD_IS_LAST(i) == 0; i++) {
+ if (PMD_IS_IMPL(i) == 0) continue;
+ pmu_conf.impl_pmds[i>>6] |= 1UL << (i&63);
+ n++;
+ if (PMD_IS_COUNTING(i)) n_counters++;
+ }
+ pmu_conf.num_pmds = n;
+ pmu_conf.num_counters = n_counters;
+
+ printk("perfmon: %u PMCs, %u PMDs, %u counters (%lu bits)\n",
+ pmu_conf.num_pmcs,
+ pmu_conf.num_pmds,
+ pmu_conf.num_counters,
+ ffz(pmu_conf.ovfl_val));
/* sanity check */
if (pmu_conf.num_pmds >= IA64_NUM_PMD_REGS || pmu_conf.num_pmcs >= IA64_NUM_PMC_REGS) {
- printk(KERN_ERR "perfmon: not enough pmc/pmd, perfmon is DISABLED\n");
- return -1; /* no need to continue anyway */
- }
-
- if (ia64_pal_debug_info(&pmu_conf.num_ibrs, &pmu_conf.num_dbrs)) {
- printk(KERN_WARNING "perfmon: unable to get number of debug registers\n");
- pmu_conf.num_ibrs = pmu_conf.num_dbrs = 0;
+ printk(KERN_ERR "perfmon: not enough pmc/pmd, perfmon disabled\n");
+ return -1;
}
- /* PAL reports the number of pairs */
- pmu_conf.num_ibrs <<=1;
- pmu_conf.num_dbrs <<=1;
-
- /*
- * setup the register configuration descriptions for the CPU
- */
- pmu_conf.pmc_desc = pfm_pmc_desc;
- pmu_conf.pmd_desc = pfm_pmd_desc;
-
- /* we are all set */
- pmu_conf.pfm_is_disabled = 0;
/*
* for now here for debug purposes
*/
perfmon_dir = create_proc_read_entry ("perfmon", 0, 0, perfmon_read_entry, NULL);
+ if (perfmon_dir == NULL) {
+ printk(KERN_ERR "perfmon: cannot create /proc entry, perfmon disabled\n");
+ return -1;
+ }
+ /*
+ * create /proc/perfmon
+ */
pfm_sysctl_header = register_sysctl_table(pfm_sysctl_root, 0);
+ /*
+ * initialize all our spinlocks
+ */
spin_lock_init(&pfm_sessions.pfs_lock);
+ /* we are all set */
+ pmu_conf.disabled = 0;
+
return 0;
}
-
-__initcall(perfmon_init);
+__initcall(pfm_init);
void
-perfmon_init_percpu (void)
+pfm_init_percpu(void)
{
int i;
@@ -4222,17 +4410,17 @@
*
* On McKinley, this code is ineffective until PMC4 is initialized.
*/
- for (i=1; (pfm_pmc_desc[i].type & PFM_REG_END) == 0; i++) {
- if ((pfm_pmc_desc[i].type & PFM_REG_IMPL) == 0) continue;
- ia64_set_pmc(i, pfm_pmc_desc[i].default_value);
+ for (i=1; PMC_IS_LAST(i) == 0; i++) {
+ if (PMC_IS_IMPL(i) == 0) continue;
+ ia64_set_pmc(i, PMC_DFL_VAL(i));
}
- for (i=0; (pfm_pmd_desc[i].type & PFM_REG_END) == 0; i++) {
- if ((pfm_pmd_desc[i].type & PFM_REG_IMPL) == 0) continue;
+
+ for (i=0; PMD_IS_LAST(i) == 0; i++) {
+ if (PMD_IS_IMPL(i) == 0) continue;
ia64_set_pmd(i, 0UL);
}
ia64_set_pmc(0,1UL);
ia64_srlz_d();
-
}
#else /* !CONFIG_PERFMON */
diff -Nru a/arch/ia64/kernel/perfmon_generic.h b/arch/ia64/kernel/perfmon_generic.h
--- a/arch/ia64/kernel/perfmon_generic.h Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/perfmon_generic.h Fri Jan 24 20:41:05 2003
@@ -1,10 +1,17 @@
+/*
+ * This file contains the architected PMU register description tables
+ * and pmc checker used by perfmon.c.
+ *
+ * Copyright (C) 2002 Hewlett Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ */
#define RDEP(x) (1UL<<(x))
-#if defined(CONFIG_ITANIUM) || defined(CONFIG_MCKINLEY)
-#error "This file should only be used when CONFIG_ITANIUM and CONFIG_MCKINLEY are not defined"
+#if defined(CONFIG_ITANIUM) || defined (CONFIG_MCKINLEY)
+#error "This file should not be used when CONFIG_ITANIUM or CONFIG_MCKINLEY is defined"
#endif
-static pfm_reg_desc_t pmc_desc[PMU_MAX_PMCS]={
+static pfm_reg_desc_t pfm_gen_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x1UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
@@ -13,10 +20,10 @@
/* pmc5 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(5),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc6 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(6),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc7 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(7),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
- { PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
+ { PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
-static pfm_reg_desc_t pmd_desc[PMU_MAX_PMDS]={
+static pfm_reg_desc_t pfm_gen_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
/* pmd1 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
/* pmd2 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
@@ -25,5 +32,17 @@
/* pmd5 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(5),0UL, 0UL, 0UL}},
/* pmd6 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(6),0UL, 0UL, 0UL}},
/* pmd7 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(7),0UL, 0UL, 0UL}},
- { PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
+ { PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
+};
+
+/*
+ * impl_pmcs, impl_pmds are computed at runtime to minimize errors!
+ */
+static pmu_config_t pmu_conf={
+ disabled: 1,
+ ovfl_val: (1UL << 32) - 1,
+ num_ibrs: 8,
+ num_dbrs: 8,
+ pmd_desc: pfm_gen_pmd_desc,
+ pmc_desc: pfm_gen_pmc_desc
};
diff -Nru a/arch/ia64/kernel/perfmon_itanium.h b/arch/ia64/kernel/perfmon_itanium.h
--- a/arch/ia64/kernel/perfmon_itanium.h Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/perfmon_itanium.h Fri Jan 24 20:41:05 2003
@@ -15,7 +15,7 @@
static int pfm_ita_pmc_check(struct task_struct *task, unsigned int cnum, unsigned long *val, struct pt_regs *regs);
static int pfm_write_ibr_dbr(int mode, struct task_struct *task, void *arg, int count, struct pt_regs *regs);
-static pfm_reg_desc_t pfm_pmc_desc[PMU_MAX_PMCS]={
+static pfm_reg_desc_t pfm_ita_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x1UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
@@ -33,7 +33,7 @@
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
-static pfm_reg_desc_t pfm_pmd_desc[PMU_MAX_PMDS]={
+static pfm_reg_desc_t pfm_ita_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_BUFFER , 0, 0UL, -1UL, NULL, NULL, {RDEP(1),0UL, 0UL, 0UL}, {RDEP(10),0UL, 0UL, 0UL}},
/* pmd1 */ { PFM_REG_BUFFER , 0, 0UL, -1UL, NULL, NULL, {RDEP(0),0UL, 0UL, 0UL}, {RDEP(10),0UL, 0UL, 0UL}},
/* pmd2 */ { PFM_REG_BUFFER , 0, 0UL, -1UL, NULL, NULL, {RDEP(3)|RDEP(17),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
@@ -54,6 +54,19 @@
/* pmd17 */ { PFM_REG_BUFFER , 0, 0UL, -1UL, NULL, NULL, {RDEP(2)|RDEP(3),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
{ PFM_REG_END , 0, 0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
+
+/*
+ * impl_pmcs, impl_pmds are computed at runtime to minimize errors!
+ */
+static pmu_config_t pmu_conf={
+ disabled: 1,
+ ovfl_val: (1UL << 32) - 1,
+ num_ibrs: 8,
+ num_dbrs: 8,
+ pmd_desc: pfm_ita_pmd_desc,
+ pmc_desc: pfm_ita_pmc_desc
+};
+
static int
pfm_ita_pmc_check(struct task_struct *task, unsigned int cnum, unsigned long *val, struct pt_regs *regs)
diff -Nru a/arch/ia64/kernel/perfmon_mckinley.h b/arch/ia64/kernel/perfmon_mckinley.h
--- a/arch/ia64/kernel/perfmon_mckinley.h Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/perfmon_mckinley.h Fri Jan 24 20:41:05 2003
@@ -16,7 +16,7 @@
static int pfm_mck_pmc_check(struct task_struct *task, unsigned int cnum, unsigned long *val, struct pt_regs *regs);
static int pfm_write_ibr_dbr(int mode, struct task_struct *task, void *arg, int count, struct pt_regs *regs);
-static pfm_reg_desc_t pfm_pmc_desc[PMU_MAX_PMCS]={
+static pfm_reg_desc_t pfm_mck_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x1UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
@@ -36,7 +36,7 @@
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
-static pfm_reg_desc_t pfm_pmd_desc[PMU_MAX_PMDS]={
+static pfm_reg_desc_t pfm_mck_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(1),0UL, 0UL, 0UL}, {RDEP(10),0UL, 0UL, 0UL}},
/* pmd1 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(0),0UL, 0UL, 0UL}, {RDEP(10),0UL, 0UL, 0UL}},
/* pmd2 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(3)|RDEP(17),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
@@ -57,6 +57,19 @@
/* pmd17 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(2)|RDEP(3),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
+
+/*
+ * impl_pmcs, impl_pmds are computed at runtime to minimize errors!
+ */
+static pmu_config_t pmu_conf={
+ disabled: 1,
+ ovfl_val: (1UL << 47) - 1,
+ num_ibrs: 8,
+ num_dbrs: 8,
+ pmd_desc: pfm_mck_pmd_desc,
+ pmc_desc: pfm_mck_pmc_desc
+};
+
/*
* PMC reserved fields must have their power-up values preserved
diff -Nru a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
--- a/arch/ia64/kernel/process.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/process.c Fri Jan 24 20:41:05 2003
@@ -1,7 +1,7 @@
/*
* Architecture-specific setup.
*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#define __KERNEL_SYSCALLS__ /* see <asm/unistd.h> */
@@ -96,7 +96,7 @@
{
unsigned long ip = regs->cr_iip + ia64_psr(regs)->ri;
- printk("\nPid: %d, comm: %20s\n", current->pid, current->comm);
+ printk("\nPid: %d, CPU %d, comm: %20s\n", current->pid, smp_processor_id(), current->comm);
printk("psr : %016lx ifs : %016lx ip : [<%016lx>] %s\n",
regs->cr_ipsr, regs->cr_ifs, ip, print_tainted());
print_symbol("ip is at %s\n", ip);
@@ -144,6 +144,15 @@
void
do_notify_resume_user (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
{
+#ifdef CONFIG_FSYS
+ if (fsys_mode(current, &scr->pt)) {
+ /* defer signal-handling etc. until we return to privilege-level 0. */
+ if (!ia64_psr(&scr->pt)->lp)
+ ia64_psr(&scr->pt)->lp = 1;
+ return;
+ }
+#endif
+
#ifdef CONFIG_PERFMON
if (current->thread.pfm_ovfl_block_reset)
pfm_ovfl_block_reset();
@@ -198,6 +207,10 @@
void
ia64_save_extra (struct task_struct *task)
{
+#ifdef CONFIG_PERFMON
+ unsigned long info;
+#endif
+
if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0)
ia64_save_debug_regs(&task->thread.dbr[0]);
@@ -205,8 +218,9 @@
if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
pfm_save_regs(task);
- if (__get_cpu_var(pfm_syst_wide))
- pfm_syst_wide_update_task(task, 0);
+ info = __get_cpu_var(pfm_syst_info);
+ if (info & PFM_CPUINFO_SYST_WIDE)
+ pfm_syst_wide_update_task(task, info, 0);
#endif
#ifdef CONFIG_IA32_SUPPORT
@@ -218,6 +232,10 @@
void
ia64_load_extra (struct task_struct *task)
{
+#ifdef CONFIG_PERFMON
+ unsigned long info;
+#endif
+
if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0)
ia64_load_debug_regs(&task->thread.dbr[0]);
@@ -225,8 +243,9 @@
if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0)
pfm_load_regs(task);
- if (__get_cpu_var(pfm_syst_wide))
- pfm_syst_wide_update_task(task, 1);
+ info = __get_cpu_var(pfm_syst_info);
+ if (info & PFM_CPUINFO_SYST_WIDE)
+ pfm_syst_wide_update_task(task, info, 1);
#endif
#ifdef CONFIG_IA32_SUPPORT
diff -Nru a/arch/ia64/kernel/ptrace.c b/arch/ia64/kernel/ptrace.c
--- a/arch/ia64/kernel/ptrace.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/ptrace.c Fri Jan 24 20:41:05 2003
@@ -833,21 +833,19 @@
return -1;
}
#ifdef CONFIG_PERFMON
- /*
- * Check if debug registers are used
- * by perfmon. This test must be done once we know that we can
- * do the operation, i.e. the arguments are all valid, but before
- * we start modifying the state.
+ /*
+ * Check if debug registers are used by perfmon. This test must be done
+ * once we know that we can do the operation, i.e. the arguments are all
+ * valid, but before we start modifying the state.
*
- * Perfmon needs to keep a count of how many processes are
- * trying to modify the debug registers for system wide monitoring
- * sessions.
+ * Perfmon needs to keep a count of how many processes are trying to
+ * modify the debug registers for system wide monitoring sessions.
*
- * We also include read access here, because they may cause
- * the PMU-installed debug register state (dbr[], ibr[]) to
- * be reset. The two arrays are also used by perfmon, but
- * we do not use IA64_THREAD_DBG_VALID. The registers are restored
- * by the PMU context switch code.
+ * We also include read access here, because they may cause the
+ * PMU-installed debug register state (dbr[], ibr[]) to be reset. The two
+ * arrays are also used by perfmon, but we do not use
+ * IA64_THREAD_DBG_VALID. The registers are restored by the PMU context
+ * switch code.
*/
if (pfm_use_debug_registers(child)) return -1;
#endif
diff -Nru a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
--- a/arch/ia64/kernel/setup.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/setup.c Fri Jan 24 20:41:05 2003
@@ -423,7 +423,7 @@
#ifdef CONFIG_ACPI_BOOT
acpi_boot_init(*cmdline_p);
#endif
-#ifdef CONFIG_SERIAL_HCDP
+#ifdef CONFIG_SERIAL_8250_HCDP
if (efi.hcdp) {
void setup_serial_hcdp(void *);
diff -Nru a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
--- a/arch/ia64/kernel/smpboot.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/smpboot.c Fri Jan 24 20:41:05 2003
@@ -265,7 +265,7 @@
extern void ia64_init_itm(void);
#ifdef CONFIG_PERFMON
- extern void perfmon_init_percpu(void);
+ extern void pfm_init_percpu(void);
#endif
cpuid = smp_processor_id();
@@ -300,7 +300,7 @@
#endif
#ifdef CONFIG_PERFMON
- perfmon_init_percpu();
+ pfm_init_percpu();
#endif
local_irq_enable();
diff -Nru a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c
--- a/arch/ia64/kernel/sys_ia64.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/sys_ia64.c Fri Jan 24 20:41:05 2003
@@ -20,7 +20,6 @@
#include <asm/shmparam.h>
#include <asm/uaccess.h>
-
unsigned long
arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
@@ -31,6 +30,20 @@
if (len > RGN_MAP_LIMIT)
return -ENOMEM;
+
+#ifdef CONFIG_HUGETLB_PAGE
+#define COLOR_HALIGN(addr) ((addr + HPAGE_SIZE - 1) & ~(HPAGE_SIZE - 1))
+#define TASK_HPAGE_BASE ((REGION_HPAGE << REGION_SHIFT) | HPAGE_SIZE)
+ if (filp && is_file_hugepages(filp)) {
+ if ((REGION_NUMBER(addr) != REGION_HPAGE) || (addr & (HPAGE_SIZE -1)))
+ addr = TASK_HPAGE_BASE;
+ addr = COLOR_HALIGN(addr);
+ }
+ else {
+ if (REGION_NUMBER(addr) == REGION_HPAGE)
+ addr = 0;
+ }
+#endif
if (!addr)
addr = TASK_UNMAPPED_BASE;
diff -Nru a/arch/ia64/kernel/traps.c b/arch/ia64/kernel/traps.c
--- a/arch/ia64/kernel/traps.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/traps.c Fri Jan 24 20:41:05 2003
@@ -1,7 +1,7 @@
/*
* Architecture-specific trap handling.
*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
@@ -142,7 +142,7 @@
switch (break_num) {
case 0: /* unknown error (used by GCC for __builtin_abort()) */
- die_if_kernel("bad break", regs, break_num);
+ die_if_kernel("bugcheck!", regs, break_num);
sig = SIGILL; code = ILL_ILLOPC;
break;
@@ -524,6 +524,25 @@
case 29: /* Debug */
case 35: /* Taken Branch Trap */
case 36: /* Single Step Trap */
+#ifdef CONFIG_FSYS
+ if (fsys_mode(current, regs)) {
+ extern char syscall_via_break[], __start_gate_section[];
+ /*
+ * Got a trap in fsys-mode: Taken Branch Trap and Single Step trap
+ * need special handling; Debug trap is not supposed to happen.
+ */
+ if (unlikely(vector == 29)) {
+ die("Got debug trap in fsys-mode---not supposed to happen!",
+ regs, 0);
+ return;
+ }
+ /* re-do the system call via break 0x100000: */
+ regs->cr_iip = GATE_ADDR + (syscall_via_break - __start_gate_section);
+ ia64_psr(regs)->ri = 0;
+ ia64_psr(regs)->cpl = 3;
+ return;
+ }
+#endif
switch (vector) {
case 29:
siginfo.si_code = TRAP_HWBKPT;
@@ -563,19 +582,31 @@
}
return;
- case 34: /* Unimplemented Instruction Address Trap */
- if (user_mode(regs)) {
- siginfo.si_signo = SIGILL;
- siginfo.si_code = ILL_BADIADDR;
- siginfo.si_errno = 0;
- siginfo.si_flags = 0;
- siginfo.si_isr = 0;
- siginfo.si_imm = 0;
- siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
- force_sig_info(SIGILL, &siginfo, current);
+ case 34:
+ if (isr & 0x2) {
+ /* Lower-Privilege Transfer Trap */
+ /*
+ * Just clear PSR.lp and then return immediately: all the
+ * interesting work (e.g., signal delivery is done in the kernel
+ * exit path).
+ */
+ ia64_psr(regs)->lp = 0;
return;
+ } else {
+ /* Unimplemented Instr. Address Trap */
+ if (user_mode(regs)) {
+ siginfo.si_signo = SIGILL;
+ siginfo.si_code = ILL_BADIADDR;
+ siginfo.si_errno = 0;
+ siginfo.si_flags = 0;
+ siginfo.si_isr = 0;
+ siginfo.si_imm = 0;
+ siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
+ force_sig_info(SIGILL, &siginfo, current);
+ return;
+ }
+ sprintf(buf, "Unimplemented Instruction Address fault");
}
- sprintf(buf, "Unimplemented Instruction Address fault");
break;
case 45:
diff -Nru a/arch/ia64/kernel/unaligned.c b/arch/ia64/kernel/unaligned.c
--- a/arch/ia64/kernel/unaligned.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/unaligned.c Fri Jan 24 20:41:05 2003
@@ -331,12 +331,8 @@
return;
}
- /*
- * Avoid using user_mode() here: with "epc", we cannot use the privilege level to
- * infer whether the interrupt task was running on the kernel backing store.
- */
- if (regs->r12 >= TASK_SIZE) {
- DPRINT("ignoring kernel write to r%lu; register isn't on the RBS!", r1);
+ if (!user_stack(current, regs)) {
+ DPRINT("ignoring kernel write to r%lu; register isn't on the kernel RBS!", r1);
return;
}
@@ -406,11 +402,7 @@
return;
}
- /*
- * Avoid using user_mode() here: with "epc", we cannot use the privilege level to
- * infer whether the interrupt task was running on the kernel backing store.
- */
- if (regs->r12 >= TASK_SIZE) {
+ if (!user_stack(current, regs)) {
DPRINT("ignoring kernel read of r%lu; register isn't on the RBS!", r1);
goto fail;
}
@@ -1302,12 +1294,12 @@
void
ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
{
- struct exception_fixup fix = { 0 };
struct ia64_psr *ipsr = ia64_psr(regs);
mm_segment_t old_fs = get_fs();
unsigned long bundle[2];
unsigned long opcode;
struct siginfo si;
+ const struct exception_table_entry *eh = NULL;
union {
unsigned long l;
load_store_t insn;
@@ -1325,10 +1317,9 @@
* user-level unaligned accesses. Otherwise, a clever program could trick this
* handler into reading an arbitrary kernel addresses...
*/
- if (!user_mode(regs)) {
- fix = SEARCH_EXCEPTION_TABLE(regs);
- }
- if (user_mode(regs) || fix.cont) {
+ if (!user_mode(regs))
+ eh = SEARCH_EXCEPTION_TABLE(regs);
+ if (user_mode(regs) || eh) {
if ((current->thread.flags & IA64_THREAD_UAC_SIGBUS) != 0)
goto force_sigbus;
@@ -1494,8 +1485,8 @@
failure:
/* something went wrong... */
if (!user_mode(regs)) {
- if (fix.cont) {
- handle_exception(regs, fix);
+ if (eh) {
+ handle_exception(regs, eh);
goto done;
}
die_if_kernel("error during unaligned kernel access\n", regs, ret);
diff -Nru a/arch/ia64/kernel/unwind.c b/arch/ia64/kernel/unwind.c
--- a/arch/ia64/kernel/unwind.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/kernel/unwind.c Fri Jan 24 20:41:06 2003
@@ -1997,16 +1997,18 @@
{
extern char __start_gate_section[], __stop_gate_section[];
unsigned long *lp, start, end, segbase = unw.kernel_table.segment_base;
- const struct unw_table_entry *entry, *first;
+ const struct unw_table_entry *entry, *first, *unw_table_end;
+ extern int ia64_unw_end;
size_t info_size, size;
char *info;
start = (unsigned long) __start_gate_section - segbase;
end = (unsigned long) __stop_gate_section - segbase;
+ unw_table_end = (struct unw_table_entry *) &ia64_unw_end;
size = 0;
first = lookup(&unw.kernel_table, start);
- for (entry = first; entry->start_offset < end; ++entry)
+ for (entry = first; entry < unw_table_end && entry->start_offset < end; ++entry)
size += 3*8 + 8 + 8*UNW_LENGTH(*(u64 *) (segbase + entry->info_offset));
size += 8; /* reserve space for "end of table" marker */
@@ -2021,7 +2023,7 @@
lp = unw.gate_table;
info = (char *) unw.gate_table + size;
- for (entry = first; entry->start_offset < end; ++entry, lp += 3) {
+ for (entry = first; entry < unw_table_end && entry->start_offset < end; ++entry, lp += 3) {
info_size = 8 + 8*UNW_LENGTH(*(u64 *) (segbase + entry->info_offset));
info -= info_size;
memcpy(info, (char *) segbase + entry->info_offset, info_size);
diff -Nru a/arch/ia64/lib/memcpy_mck.S b/arch/ia64/lib/memcpy_mck.S
--- a/arch/ia64/lib/memcpy_mck.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/lib/memcpy_mck.S Fri Jan 24 20:41:05 2003
@@ -159,7 +159,7 @@
mov ar.ec=2
(p10) br.dpnt.few .aligned_src_tail
;;
- .align 32
+// .align 32
1:
EX(.ex_handler, (p16) ld8 r34=[src0],16)
EK(.ex_handler, (p16) ld8 r38=[src1],16)
@@ -316,7 +316,7 @@
(p7) mov ar.lc = r21
(p8) mov ar.lc = r0
;;
- .align 32
+// .align 32
1: lfetch.fault [src_pre_mem], 128
lfetch.fault.excl [dst_pre_mem], 128
br.cloop.dptk.few 1b
@@ -522,7 +522,7 @@
shrp r21=r22,r38,shift; /* speculative work */ \
br.sptk.few .unaligned_src_tail /* branch out of jump table */ \
;;
- .align 32
+// .align 32
.jump_table:
COPYU(8) // unaligned cases
.jmp1:
diff -Nru a/arch/ia64/lib/memset.S b/arch/ia64/lib/memset.S
--- a/arch/ia64/lib/memset.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/lib/memset.S Fri Jan 24 20:41:05 2003
@@ -125,7 +125,7 @@
(p_zr) br.cond.dptk.many .l1b // Jump to use stf.spill
;; }
- .align 32 // -------------------------- // L1A: store ahead into cache lines; fill later
+// .align 32 // -------------------------- // L1A: store ahead into cache lines; fill later
{ .mmi
and tmp = -(LINE_SIZE), cnt // compute end of range
mov ptr9 = ptr1 // used for prefetching
@@ -194,7 +194,7 @@
br.cond.dpnt.many .move_bytes_from_alignment // Branch no. 3
;; }
- .align 32
+// .align 32
.l1b: // ------------------------------------ // L1B: store ahead into cache lines; fill later
{ .mmi
and tmp = -(LINE_SIZE), cnt // compute end of range
@@ -261,7 +261,7 @@
and cnt = 0x1f, cnt // compute the remaining cnt
mov.i ar.lc = loopcnt
;; }
- .align 32
+// .align 32
.l2: // ------------------------------------ // L2A: store 32B in 2 cycles
{ .mmb
stf8 [ptr1] = fvalue, 8
diff -Nru a/arch/ia64/mm/extable.c b/arch/ia64/mm/extable.c
--- a/arch/ia64/mm/extable.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/mm/extable.c Fri Jan 24 20:41:05 2003
@@ -10,20 +10,19 @@
#include <asm/uaccess.h>
#include <asm/module.h>
-extern const struct exception_table_entry __start___ex_table[];
-extern const struct exception_table_entry __stop___ex_table[];
-
-static inline const struct exception_table_entry *
-search_one_table (const struct exception_table_entry *first,
- const struct exception_table_entry *last,
- unsigned long ip, unsigned long gp)
+const struct exception_table_entry *
+search_extable (const struct exception_table_entry *first,
+ const struct exception_table_entry *last,
+ unsigned long ip)
{
- while (first <= last) {
- const struct exception_table_entry *mid;
- long diff;
+ const struct exception_table_entry *mid;
+ unsigned long mid_ip;
+ long diff;
+ while (first <= last) {
mid = &first[(last - first)/2];
- diff = (mid->addr + gp) - ip;
+ mid_ip = (u64) &mid->addr + mid->addr;
+ diff = mid_ip - ip;
if (diff == 0)
return mid;
else if (diff < 0)
@@ -34,50 +33,14 @@
return 0;
}
-#ifndef CONFIG_MODULES
-register unsigned long main_gp __asm__("gp");
-#endif
-
-struct exception_fixup
-search_exception_table (unsigned long addr)
-{
- const struct exception_table_entry *entry;
- struct exception_fixup fix = { 0 };
-
-#ifndef CONFIG_MODULES
- /* There is only the kernel to search. */
- entry = search_one_table(__start___ex_table, __stop___ex_table - 1, addr, main_gp);
- if (entry)
- fix.cont = entry->cont + main_gp;
- return fix;
-#else
- struct archdata *archdata;
- struct module *mp;
-
- /* The kernel is the last "module" -- no need to treat it special. */
- for (mp = module_list; mp; mp = mp->next) {
- if (!mp->ex_table_start)
- continue;
- archdata = (struct archdata *) mp->archdata_start;
- if (!archdata)
- continue;
- entry = search_one_table(mp->ex_table_start, mp->ex_table_end - 1,
- addr, (unsigned long) archdata->gp);
- if (entry) {
- fix.cont = entry->cont + (unsigned long) archdata->gp;
- return fix;
- }
- }
-#endif
- return fix;
-}
-
void
-handle_exception (struct pt_regs *regs, struct exception_fixup fix)
+handle_exception (struct pt_regs *regs, const struct exception_table_entry *e)
{
+ long fix = (u64) &e->cont + e->cont;
+
regs->r8 = -EFAULT;
- if (fix.cont & 4)
+ if (fix & 4)
regs->r9 = 0;
- regs->cr_iip = (long) fix.cont & ~0xf;
- ia64_psr(regs)->ri = fix.cont & 0x3; /* set continuation slot number */
+ regs->cr_iip = fix & ~0xf;
+ ia64_psr(regs)->ri = fix & 0x3; /* set continuation slot number */
}
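The extable.c change above replaces gp-relative fixup addresses with self-relative ones: each entry stores the offset from the field's own address to the target, so `mid_ip = (u64) &mid->addr + mid->addr` recovers the faulting instruction address with no per-module gp at all. A minimal user-space sketch of that encoding and the binary search, under stated assumptions: `struct ex_entry`, `ex_ip`, and the sorted-table setup are illustrative, not the kernel's actual types.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical self-relative exception-table entry: each field holds
 * the signed offset from its own address to the target, so the table
 * is position-independent and needs no relocation. */
struct ex_entry {
	int64_t addr;	/* target ip  = &addr + addr */
	int64_t cont;	/* fixup addr = &cont + cont */
};

/* Decode the instruction address an entry covers. */
static uint64_t ex_ip(const struct ex_entry *e)
{
	return (uint64_t) &e->addr + (uint64_t) e->addr;
}

/* Binary search over entries sorted by decoded ip, mirroring the
 * search_extable() loop in the patch. Returns NULL when ip is absent. */
static const struct ex_entry *
search_extable(const struct ex_entry *first, const struct ex_entry *last,
	       uint64_t ip)
{
	while (first <= last) {
		const struct ex_entry *mid = &first[(last - first) / 2];
		int64_t diff = (int64_t) (ex_ip(mid) - ip);

		if (diff == 0)
			return mid;
		else if (diff < 0)
			first = mid + 1;
		else
			last = mid - 1;
	}
	return NULL;
}
```

Because the stored value is an offset from the field itself, the comparison key must be recomputed per lookup (it depends on where the table sits in memory), which is exactly why the patch decodes `mid_ip` inside the loop rather than comparing raw entries.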
diff -Nru a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
--- a/arch/ia64/mm/fault.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/mm/fault.c Fri Jan 24 20:41:05 2003
@@ -58,6 +58,18 @@
if (in_interrupt() || !mm)
goto no_context;
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+ /*
+ * If fault is in region 5 and we are in the kernel, we may already
+ * have the mmap_sem (pfn_valid macro is called during mmap). There
+ * is no vma for region 5 addr's anyway, so skip getting the semaphore
+ * and go directly to the exception handling code.
+ */
+
+ if ((REGION_NUMBER(address) == 5) && !user_mode(regs))
+ goto bad_area_no_up;
+#endif
+
down_read(&mm->mmap_sem);
vma = find_vma_prev(mm, address, &prev_vma);
@@ -139,6 +151,9 @@
bad_area:
up_read(&mm->mmap_sem);
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+ bad_area_no_up:
+#endif
if ((isr & IA64_ISR_SP)
|| ((isr & IA64_ISR_NA) && (isr & IA64_ISR_CODE_MASK) == IA64_ISR_CODE_LFETCH))
{
diff -Nru a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
--- a/arch/ia64/mm/hugetlbpage.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/mm/hugetlbpage.c Fri Jan 24 20:41:05 2003
@@ -12,71 +12,42 @@
#include <linux/pagemap.h>
#include <linux/smp_lock.h>
#include <linux/slab.h>
-
#include <asm/mman.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
-static struct vm_operations_struct hugetlb_vm_ops;
-struct list_head htlbpage_freelist;
-spinlock_t htlbpage_lock = SPIN_LOCK_UNLOCKED;
-extern long htlbpagemem;
+#include <linux/sysctl.h>
+
+static long htlbpagemem;
+int htlbpage_max;
+static long htlbzone_pages;
-static void zap_hugetlb_resources (struct vm_area_struct *);
+struct vm_operations_struct hugetlb_vm_ops;
+static LIST_HEAD(htlbpage_freelist);
+static spinlock_t htlbpage_lock = SPIN_LOCK_UNLOCKED;
-static struct page *
-alloc_hugetlb_page (void)
+static struct page *alloc_hugetlb_page(void)
{
- struct list_head *curr, *head;
+ int i;
struct page *page;
spin_lock(&htlbpage_lock);
-
- head = &htlbpage_freelist;
- curr = head->next;
-
- if (curr == head) {
+ if (list_empty(&htlbpage_freelist)) {
spin_unlock(&htlbpage_lock);
return NULL;
}
- page = list_entry(curr, struct page, list);
- list_del(curr);
+
+ page = list_entry(htlbpage_freelist.next, struct page, list);
+ list_del(&page->list);
htlbpagemem--;
spin_unlock(&htlbpage_lock);
set_page_count(page, 1);
- memset(page_address(page), 0, HPAGE_SIZE);
+ for (i = 0; i < (HPAGE_SIZE/PAGE_SIZE); ++i)
+ clear_highpage(&page[i]);
return page;
}
-static void
-free_hugetlb_page (struct page *page)
-{
- spin_lock(&htlbpage_lock);
- if ((page->mapping != NULL) && (page_count(page) == 2)) {
- struct inode *inode = page->mapping->host;
- int i;
-
- ClearPageDirty(page);
- remove_from_page_cache(page);
- set_page_count(page, 1);
- if ((inode->i_size -= HPAGE_SIZE) == 0) {
- for (i = 0; i < MAX_ID; i++)
- if (htlbpagek[i].key == inode->i_ino) {
- htlbpagek[i].key = 0;
- htlbpagek[i].in = NULL;
- break;
- }
- kfree(inode);
- }
- }
- if (put_page_testzero(page)) {
- list_add(&page->list, &htlbpage_freelist);
- htlbpagemem++;
- }
- spin_unlock(&htlbpage_lock);
-}
-
static pte_t *
huge_pte_alloc (struct mm_struct *mm, unsigned long addr)
{
@@ -126,63 +97,8 @@
return;
}
-static int
-anon_get_hugetlb_page (struct mm_struct *mm, struct vm_area_struct *vma,
- int write_access, pte_t * page_table)
-{
- struct page *page;
-
- page = alloc_hugetlb_page();
- if (page == NULL)
- return -1;
- set_huge_pte(mm, vma, page, page_table, write_access);
- return 1;
-}
-
-static int
-make_hugetlb_pages_present (unsigned long addr, unsigned long end, int flags)
-{
- int write;
- struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
- pte_t *pte;
-
- vma = find_vma(mm, addr);
- if (!vma)
- goto out_error1;
-
- write = (vma->vm_flags & VM_WRITE) != 0;
- if ((vma->vm_end - vma->vm_start) & (HPAGE_SIZE - 1))
- goto out_error1;
- spin_lock(&mm->page_table_lock);
- do {
- pte = huge_pte_alloc(mm, addr);
- if ((pte) && (pte_none(*pte))) {
- if (anon_get_hugetlb_page(mm, vma, write ? VM_WRITE : VM_READ, pte) == -1)
- goto out_error;
- } else
- goto out_error;
- addr += HPAGE_SIZE;
- } while (addr < end);
- spin_unlock(&mm->page_table_lock);
- vma->vm_flags |= (VM_HUGETLB | VM_RESERVED);
- if (flags & MAP_PRIVATE)
- vma->vm_flags |= VM_DONTCOPY;
- vma->vm_ops = &hugetlb_vm_ops;
- return 0;
-out_error:
- if (addr > vma->vm_start) {
- vma->vm_end = addr;
- zap_hugetlb_resources(vma);
- vma->vm_end = end;
- }
- spin_unlock(&mm->page_table_lock);
-out_error1:
- return -1;
-}
-
-int
-copy_hugetlb_page_range (struct mm_struct *dst, struct mm_struct *src, struct vm_area_struct *vma)
+int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
+ struct vm_area_struct *vma)
{
pte_t *src_pte, *dst_pte, entry;
struct page *ptepage;
@@ -202,15 +118,14 @@
addr += HPAGE_SIZE;
}
return 0;
-
- nomem:
+nomem:
return -ENOMEM;
}
int
-follow_hugetlb_page (struct mm_struct *mm, struct vm_area_struct *vma,
- struct page **pages, struct vm_area_struct **vmas,
- unsigned long *st, int *length, int i)
+follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page **pages, struct vm_area_struct **vmas,
+ unsigned long *st, int *length, int i)
{
pte_t *ptep, pte;
unsigned long start = *st;
@@ -234,8 +149,8 @@
i++;
len--;
start += PAGE_SIZE;
- if (((start & HPAGE_MASK) == pstart) && len
- && (start < vma->vm_end))
+ if (((start & HPAGE_MASK) == pstart) && len &&
+ (start < vma->vm_end))
goto back1;
} while (len && start < vma->vm_end);
*length = len;
@@ -243,51 +158,149 @@
return i;
}
-static void
-zap_hugetlb_resources (struct vm_area_struct *mpnt)
+void free_huge_page(struct page *page)
+{
+ BUG_ON(page_count(page));
+ BUG_ON(page->mapping);
+
+ INIT_LIST_HEAD(&page->list);
+
+ spin_lock(&htlbpage_lock);
+ list_add(&page->list, &htlbpage_freelist);
+ htlbpagemem++;
+ spin_unlock(&htlbpage_lock);
+}
+
+void huge_page_release(struct page *page)
{
- struct mm_struct *mm = mpnt->vm_mm;
- unsigned long len, addr, end;
- pte_t *ptep;
+ if (!put_page_testzero(page))
+ return;
+
+ free_huge_page(page);
+}
+
+void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long address;
+ pte_t *pte;
struct page *page;
- addr = mpnt->vm_start;
- end = mpnt->vm_end;
- len = end - addr;
- do {
- ptep = huge_pte_offset(mm, addr);
- page = pte_page(*ptep);
- pte_clear(ptep);
- free_hugetlb_page(page);
- addr += HPAGE_SIZE;
- } while (addr < end);
- mm->rss -= (len >> PAGE_SHIFT);
- mpnt->vm_ops = NULL;
- flush_tlb_range(mpnt, end - len, end);
+ BUG_ON(start & (HPAGE_SIZE - 1));
+ BUG_ON(end & (HPAGE_SIZE - 1));
+
+ spin_lock(&htlbpage_lock);
+ spin_unlock(&htlbpage_lock);
+ for (address = start; address < end; address += HPAGE_SIZE) {
+ pte = huge_pte_offset(mm, address);
+ if (pte_none(*pte))
+ continue;
+ page = pte_page(*pte);
+ huge_page_release(page);
+ pte_clear(pte);
+ }
+ mm->rss -= (end - start) >> PAGE_SHIFT;
+ flush_tlb_range(vma, start, end);
}
-static void
-unlink_vma (struct vm_area_struct *mpnt)
+void zap_hugepage_range(struct vm_area_struct *vma, unsigned long start, unsigned long length)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ spin_lock(&mm->page_table_lock);
+ unmap_hugepage_range(vma, start, start + length);
+ spin_unlock(&mm->page_table_lock);
+}
+
+int hugetlb_prefault(struct address_space *mapping, struct vm_area_struct *vma)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ unsigned long addr;
+ int ret = 0;
+
+ BUG_ON(vma->vm_start & ~HPAGE_MASK);
+ BUG_ON(vma->vm_end & ~HPAGE_MASK);
- vma = mm->mmap;
- if (vma == mpnt) {
- mm->mmap = vma->vm_next;
- } else {
- while (vma->vm_next != mpnt) {
- vma = vma->vm_next;
+ spin_lock(&mm->page_table_lock);
+ for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
+ unsigned long idx;
+ pte_t *pte = huge_pte_alloc(mm, addr);
+ struct page *page;
+
+ if (!pte) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ if (!pte_none(*pte))
+ continue;
+
+ idx = ((addr - vma->vm_start) >> HPAGE_SHIFT)
+ + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT));
+ page = find_get_page(mapping, idx);
+ if (!page) {
+ page = alloc_hugetlb_page();
+ if (!page) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ add_to_page_cache(page, mapping, idx);
+ unlock_page(page);
}
- vma->vm_next = mpnt->vm_next;
+ set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE);
}
- rb_erase(&mpnt->vm_rb, &mm->mm_rb);
- mm->mmap_cache = NULL;
- mm->map_count--;
+out:
+ spin_unlock(&mm->page_table_lock);
+ return ret;
}
-int
-set_hugetlb_mem_size (int count)
+void update_and_free_page(struct page *page)
+{
+ int j;
+ struct page *map;
+
+ map = page;
+ htlbzone_pages--;
+ for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {
+ map->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced |
+ 1 << PG_dirty | 1 << PG_active | 1 << PG_reserved |
+ 1 << PG_private | 1<< PG_writeback);
+ set_page_count(map, 0);
+ map++;
+ }
+ set_page_count(page, 1);
+ __free_pages(page, HUGETLB_PAGE_ORDER);
+}
+
+int try_to_free_low(int count)
+{
+ struct list_head *p;
+ struct page *page, *map;
+
+ map = NULL;
+ spin_lock(&htlbpage_lock);
+ list_for_each(p, &htlbpage_freelist) {
+ if (map) {
+ list_del(&map->list);
+ update_and_free_page(map);
+ htlbpagemem--;
+ map = NULL;
+ if (++count == 0)
+ break;
+ }
+ page = list_entry(p, struct page, list);
+ if ((page_zone(page))->name[0] != 'H') // Look for non-Highmem
+ map = page;
+ }
+ if (map) {
+ list_del(&map->list);
+ update_and_free_page(map);
+ htlbpagemem--;
+ count++;
+ }
+ spin_unlock(&htlbpage_lock);
+ return count;
+}
+
+int set_hugetlb_mem_size(int count)
{
int j, lcount;
struct page *page, *map;
@@ -298,7 +311,10 @@
lcount = count;
else
lcount = count - htlbzone_pages;
- if (lcount > 0) { /*Increase the mem size. */
+
+ if (lcount == 0)
+ return (int)htlbzone_pages;
+ if (lcount > 0) { /* Increase the mem size. */
while (lcount--) {
page = alloc_pages(__GFP_HIGHMEM, HUGETLB_PAGE_ORDER);
if (page == NULL)
@@ -316,27 +332,79 @@
}
return (int) htlbzone_pages;
}
- /*Shrink the memory size. */
+ /* Shrink the memory size. */
+ lcount = try_to_free_low(lcount);
while (lcount++) {
page = alloc_hugetlb_page();
if (page == NULL)
break;
spin_lock(&htlbpage_lock);
- htlbzone_pages--;
+ update_and_free_page(page);
spin_unlock(&htlbpage_lock);
- map = page;
- for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {
- map->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced |
- 1 << PG_dirty | 1 << PG_active | 1 << PG_reserved |
- 1 << PG_private | 1<< PG_writeback);
- map++;
- }
- set_page_count(page, 1);
- __free_pages(page, HUGETLB_PAGE_ORDER);
}
return (int) htlbzone_pages;
}
-static struct vm_operations_struct hugetlb_vm_ops = {
- .close = zap_hugetlb_resources
+int hugetlb_sysctl_handler(ctl_table *table, int write, struct file *file, void *buffer, size_t *length)
+{
+ proc_dointvec(table, write, file, buffer, length);
+ htlbpage_max = set_hugetlb_mem_size(htlbpage_max);
+ return 0;
+}
+
+static int __init hugetlb_setup(char *s)
+{
+ if (sscanf(s, "%d", &htlbpage_max) <= 0)
+ htlbpage_max = 0;
+ return 1;
+}
+__setup("hugepages=", hugetlb_setup);
+
+static int __init hugetlb_init(void)
+{
+ int i, j;
+ struct page *page;
+
+ for (i = 0; i < htlbpage_max; ++i) {
+ page = alloc_pages(__GFP_HIGHMEM, HUGETLB_PAGE_ORDER);
+ if (!page)
+ break;
+ for (j = 0; j < HPAGE_SIZE/PAGE_SIZE; ++j)
+ SetPageReserved(&page[j]);
+ spin_lock(&htlbpage_lock);
+ list_add(&page->list, &htlbpage_freelist);
+ spin_unlock(&htlbpage_lock);
+ }
+ htlbpage_max = htlbpagemem = htlbzone_pages = i;
+ printk("Total HugeTLB memory allocated, %ld\n", htlbpagemem);
+ return 0;
+}
+module_init(hugetlb_init);
+
+int hugetlb_report_meminfo(char *buf)
+{
+ return sprintf(buf,
+ "HugePages_Total: %5lu\n"
+ "HugePages_Free: %5lu\n"
+ "Hugepagesize: %5lu kB\n",
+ htlbzone_pages,
+ htlbpagemem,
+ HPAGE_SIZE/1024);
+}
+
+int is_hugepage_mem_enough(size_t size)
+{
+ if (size > (htlbpagemem << HPAGE_SHIFT))
+ return 0;
+ return 1;
+}
+
+static struct page *hugetlb_nopage(struct vm_area_struct * area, unsigned long address, int unused)
+{
+ BUG();
+ return NULL;
+}
+
+struct vm_operations_struct hugetlb_vm_ops = {
+ .nopage = hugetlb_nopage,
};
diff -Nru a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
--- a/arch/ia64/mm/init.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/mm/init.c Fri Jan 24 20:41:05 2003
@@ -38,6 +38,13 @@
unsigned long MAX_DMA_ADDRESS = PAGE_OFFSET + 0x100000000UL;
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+# define LARGE_GAP 0x40000000 /* Use virtual mem map if hole is > than this */
+ unsigned long vmalloc_end = VMALLOC_END_INIT;
+ static struct page *vmem_map;
+ static unsigned long num_dma_physpages;
+#endif
+
static int pgt_cache_water[2] = { 25, 50 };
void
@@ -338,17 +345,148 @@
ia64_tlb_init();
}
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+
+static int
+create_mem_map_page_table (u64 start, u64 end, void *arg)
+{
+ unsigned long address, start_page, end_page;
+ struct page *map_start, *map_end;
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ map_start = vmem_map + (__pa(start) >> PAGE_SHIFT);
+ map_end = vmem_map + (__pa(end) >> PAGE_SHIFT);
+
+ start_page = (unsigned long) map_start & PAGE_MASK;
+ end_page = PAGE_ALIGN((unsigned long) map_end);
+
+ for (address = start_page; address < end_page; address += PAGE_SIZE) {
+ pgd = pgd_offset_k(address);
+ if (pgd_none(*pgd))
+ pgd_populate(&init_mm, pgd, alloc_bootmem_pages(PAGE_SIZE));
+ pmd = pmd_offset(pgd, address);
+
+ if (pmd_none(*pmd))
+ pmd_populate_kernel(&init_mm, pmd, alloc_bootmem_pages(PAGE_SIZE));
+ pte = pte_offset_kernel(pmd, address);
+
+ if (pte_none(*pte))
+ set_pte(pte, pfn_pte(__pa(alloc_bootmem_pages(PAGE_SIZE)) >> PAGE_SHIFT,
+ PAGE_KERNEL));
+ }
+ return 0;
+}
+
+struct memmap_init_callback_data {
+ memmap_init_callback_t *memmap_init;
+ struct page *start;
+ struct page *end;
+ int nid;
+ unsigned long zone;
+};
+
+static int
+virtual_memmap_init (u64 start, u64 end, void *arg)
+{
+ struct memmap_init_callback_data *args;
+ struct page *map_start, *map_end;
+
+ args = (struct memmap_init_callback_data *) arg;
+
+ map_start = vmem_map + (__pa(start) >> PAGE_SHIFT);
+ map_end = vmem_map + (__pa(end) >> PAGE_SHIFT);
+
+ if (map_start < args->start)
+ map_start = args->start;
+ if (map_end > args->end)
+ map_end = args->end;
+
+ /*
+ * We have to initialize "out of bounds" struct page elements
+ * that fit completely on the same pages that were allocated
+ * for the "in bounds" elements because they may be referenced
+ * later (and found to be "reserved").
+ */
+
+ map_start -= ((unsigned long) map_start & (PAGE_SIZE - 1)) / sizeof(struct page);
+ map_end += ((PAGE_ALIGN((unsigned long) map_end) - (unsigned long) map_end)
+ / sizeof(struct page));
+
+ if (map_start < map_end)
+ (*args->memmap_init)(map_start, (unsigned long)(map_end - map_start),
+ args->nid,args->zone,page_to_pfn(map_start));
+ return 0;
+}
+
+void
+arch_memmap_init (memmap_init_callback_t *memmap_init,
+ struct page *start, unsigned long size, int nid,
+ unsigned long zone, unsigned long start_pfn)
+{
+ if (!vmem_map)
+ memmap_init(start,size,nid,zone,start_pfn);
+ else {
+ struct memmap_init_callback_data args;
+
+ args.memmap_init = memmap_init;
+ args.start = start;
+ args.end = start + size;
+ args.nid = nid;
+ args.zone = zone;
+
+ efi_memmap_walk(virtual_memmap_init, &args);
+ }
+}
+
+int
+ia64_pfn_valid (unsigned long pfn)
+{
+ char byte;
+
+ return __get_user(byte, (char *) pfn_to_page(pfn)) == 0;
+}
+
+static int
+count_dma_pages (u64 start, u64 end, void *arg)
+{
+ unsigned long *count = arg;
+
+ if (end <= MAX_DMA_ADDRESS)
+ *count += (end - start) >> PAGE_SHIFT;
+ return 0;
+}
+
+static int
+find_largest_hole (u64 start, u64 end, void *arg)
+{
+ u64 *max_gap = arg;
+
+ static u64 last_end = PAGE_OFFSET;
+
+ /* NOTE: this algorithm assumes efi memmap table is ordered */
+
+ if (*max_gap < (start - last_end))
+ *max_gap = start - last_end;
+ last_end = end;
+ return 0;
+}
+#endif /* CONFIG_VIRTUAL_MEM_MAP */
+
+static int
+count_pages (u64 start, u64 end, void *arg)
+{
+ unsigned long *count = arg;
+
+ *count += (end - start) >> PAGE_SHIFT;
+ return 0;
+}
+
/*
* Set up the page tables.
*/
-#ifdef CONFIG_HUGETLB_PAGE
-long htlbpagemem;
-int htlbpage_max;
-extern long htlbzone_pages;
-extern struct list_head htlbpage_freelist;
-#endif
-
#ifdef CONFIG_DISCONTIGMEM
void
paging_init (void)
@@ -356,18 +494,71 @@
extern void discontig_paging_init(void);
discontig_paging_init();
+ efi_memmap_walk(count_pages, &num_physpages);
}
#else /* !CONFIG_DISCONTIGMEM */
void
paging_init (void)
{
- unsigned long max_dma, zones_size[MAX_NR_ZONES];
+ unsigned long max_dma;
+ unsigned long zones_size[MAX_NR_ZONES];
+# ifdef CONFIG_VIRTUAL_MEM_MAP
+ unsigned long zholes_size[MAX_NR_ZONES];
+ unsigned long max_gap;
+# endif
/* initialize mem_map[] */
memset(zones_size, 0, sizeof(zones_size));
+ num_physpages = 0;
+ efi_memmap_walk(count_pages, &num_physpages);
+
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+
+# ifdef CONFIG_VIRTUAL_MEM_MAP
+ memset(zholes_size, 0, sizeof(zholes_size));
+
+ num_dma_physpages = 0;
+ efi_memmap_walk(count_dma_pages, &num_dma_physpages);
+
+ if (max_low_pfn < max_dma) {
+ zones_size[ZONE_DMA] = max_low_pfn;
+ zholes_size[ZONE_DMA] = max_low_pfn - num_dma_physpages;
+ }
+ else {
+ zones_size[ZONE_DMA] = max_dma;
+ zholes_size[ZONE_DMA] = max_dma - num_dma_physpages;
+ if (num_physpages > num_dma_physpages) {
+ zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
+ zholes_size[ZONE_NORMAL] = ((max_low_pfn - max_dma)
+ - (num_physpages - num_dma_physpages));
+ }
+ }
+
+ max_gap = 0;
+ efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
+ if (max_gap < LARGE_GAP) {
+ vmem_map = (struct page *) 0;
+ free_area_init_node(0, &contig_page_data, NULL, zones_size, 0, zholes_size);
+ mem_map = contig_page_data.node_mem_map;
+ }
+ else {
+ unsigned long map_size;
+
+ /* allocate virtual_mem_map */
+
+ map_size = PAGE_ALIGN(max_low_pfn * sizeof(struct page));
+ vmalloc_end -= map_size;
+ vmem_map = (struct page *) vmalloc_end;
+ efi_memmap_walk(create_mem_map_page_table, 0);
+
+ free_area_init_node(0, &contig_page_data, vmem_map, zones_size, 0, zholes_size);
+
+ mem_map = contig_page_data.node_mem_map;
+ printk("Virtual mem_map starts at 0x%p\n", mem_map);
+ }
+# else /* !CONFIG_VIRTUAL_MEM_MAP */
if (max_low_pfn < max_dma)
zones_size[ZONE_DMA] = max_low_pfn;
else {
@@ -375,19 +566,11 @@
zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
}
free_area_init(zones_size);
+# endif /* !CONFIG_VIRTUAL_MEM_MAP */
}
#endif /* !CONFIG_DISCONTIGMEM */
static int
-count_pages (u64 start, u64 end, void *arg)
-{
- unsigned long *count = arg;
-
- *count += (end - start) >> PAGE_SHIFT;
- return 0;
-}
-
-static int
count_reserved_pages (u64 start, u64 end, void *arg)
{
unsigned long num_reserved = 0;
@@ -423,9 +606,6 @@
max_mapnr = max_low_pfn;
#endif
- num_physpages = 0;
- efi_memmap_walk(count_pages, &num_physpages);
-
high_memory = __va(max_low_pfn * PAGE_SIZE);
for_each_pgdat(pgdat)
@@ -461,30 +641,5 @@
#ifdef CONFIG_IA32_SUPPORT
ia32_gdt_init();
-#endif
-#ifdef CONFIG_HUGETLB_PAGE
- {
- long i;
- int j;
- struct page *page, *map;
-
- if ((htlbzone_pages << (HPAGE_SHIFT - PAGE_SHIFT)) >= max_low_pfn)
- htlbzone_pages = (max_low_pfn >> ((HPAGE_SHIFT - PAGE_SHIFT) + 1));
- INIT_LIST_HEAD(&htlbpage_freelist);
- for (i = 0; i < htlbzone_pages; i++) {
- page = alloc_pages(__GFP_HIGHMEM, HUGETLB_PAGE_ORDER);
- if (!page)
- break;
- map = page;
- for (j = 0; j < (HPAGE_SIZE/PAGE_SIZE); j++) {
- SetPageReserved(map);
- map++;
- }
- list_add(&page->list, &htlbpage_freelist);
- }
- printk("Total Huge_TLB_Page memory pages allocated %ld \n", i);
- htlbzone_pages = htlbpagemem = i;
- htlbpage_max = (int)i;
- }
#endif
}
diff -Nru a/arch/ia64/scripts/check-gas b/arch/ia64/scripts/check-gas
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/scripts/check-gas Fri Jan 24 20:41:06 2003
@@ -0,0 +1,11 @@
+#!/bin/sh
+dir=$(dirname $0)
+CC=$1
+$CC -c $dir/check-gas-asm.S
+res=$(objdump -r --section .data check-gas-asm.o | fgrep 00004 | tr -s ' ' |cut -f3 -d' ')
+if [ $res != ".text" ]; then
+ echo buggy
+else
+ echo good
+fi
+exit 0
diff -Nru a/arch/ia64/scripts/check-gas-asm.S b/arch/ia64/scripts/check-gas-asm.S
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/scripts/check-gas-asm.S Fri Jan 24 20:41:06 2003
@@ -0,0 +1,2 @@
+[1:] nop 0
+ .xdata4 ".data", 0, 1b-.
diff -Nru a/arch/ia64/scripts/unwcheck.sh b/arch/ia64/scripts/unwcheck.sh
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/scripts/unwcheck.sh Fri Jan 24 20:41:06 2003
@@ -0,0 +1,109 @@
+#!/bin/sh
+# Usage: unwcheck.sh <executable_file_name>
+# Pre-requisite: readelf [from Gnu binutils package]
+# Purpose: Check the following invariant
+# For each code range in the input binary:
+# Sum[ lengths of unwind regions] = Number of slots in code range.
+# Author : Harish Patil
+# First version: January 2002
+# Modified : 2/13/2002
+# Modified : 3/15/2002: duplicate detection
+readelf -u $1 | gawk '\
+ function todec(hexstr){
+ dec = 0;
+ l = length(hexstr);
+ for (i = 1; i <= l; i++)
+ {
+ c = substr(hexstr, i, 1);
if (c == "A")
dec = dec*16 + 10;
else if (c == "B")
dec = dec*16 + 11;
else if (c == "C")
dec = dec*16 + 12;
else if (c == "D")
dec = dec*16 + 13;
else if (c == "E")
dec = dec*16 + 14;
else if (c == "F")
dec = dec*16 + 15;
+ else
+ dec = dec*16 + c;
+ }
+ return dec;
+ }
+ BEGIN { first = 1; sum_rlen = 0; no_slots = 0; errors=0; no_code_ranges=0; }
+ {
if (NF == 5 && $3 == "info")
+ {
+ no_code_ranges += 1;
if (first == 0)
+ {
+ if (sum_rlen != no_slots)
+ {
+ print full_code_range;
+ print " ", "lo = ", lo, " hi =", hi;
+ print " ", "sum_rlen = ", sum_rlen, "no_slots = " no_slots;
+ print " "," ", "*******ERROR ***********";
+ print " "," ", "sum_rlen:", sum_rlen, " != no_slots:" no_slots;
+ errors += 1;
+ }
+ sum_rlen = 0;
+ }
+ full_code_range = $0;
+ code_range = $2;
+ gsub("..$", "", code_range);
+ gsub("^.", "", code_range);
+ split(code_range, addr, "-");
+ lo = toupper(addr[1]);
+
+ code_range_lo[no_code_ranges] = addr[1];
+ occurs[addr[1]] += 1;
+ full_range[addr[1]] = $0;
+
+ gsub("0X.[0]*", "", lo);
+ hi = toupper(addr[2]);
+ gsub("0X.[0]*", "", hi);
+ no_slots = (todec(hi) - todec(lo))/ 16*3
+ first = 0;
+ }
+ if (index($0,"rlen") > 0 )
+ {
+ rlen_str = substr($0, index($0,"rlen"));
+ rlen = rlen_str;
+ gsub("rlen=", "", rlen);
+ gsub(")", "", rlen);
+ sum_rlen = sum_rlen + rlen;
+ }
+ }
+ END {
if (first == 0)
+ {
+ if (sum_rlen != no_slots)
+ {
+ print "code_range=", code_range;
+ print " ", "lo = ", lo, " hi =", hi;
+ print " ", "sum_rlen = ", sum_rlen, "no_slots = " no_slots;
+ print " "," ", "*******ERROR ***********";
+ print " "," ", "sum_rlen:", sum_rlen, " != no_slots:" no_slots;
+ errors += 1;
+ }
+ }
+ no_duplicates = 0;
+ for (i=1; i<=no_code_ranges; i++)
+ {
+ cr = code_range_lo[i];
if (reported_cr[cr] == 1) continue;
+ if ( occurs[cr] > 1)
+ {
+ reported_cr[cr] = 1;
+ print "Code range low ", code_range_lo[i], ":", full_range[cr], " occurs: ", occurs[cr], " times.";
+ print " ";
+ no_duplicates++;
+ }
+ }
+ print "==================="
+ print "Total errors:", errors, "/", no_code_ranges, " duplicates:", no_duplicates;
+ print "==================="
+ }
+ '
diff -Nru a/arch/ia64/tools/Makefile b/arch/ia64/tools/Makefile
--- a/arch/ia64/tools/Makefile Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/tools/Makefile Fri Jan 24 20:41:05 2003
@@ -4,14 +4,7 @@
src = $(obj)
-all:
-
-fastdep:
-
-mrproper: clean
-
-clean:
- rm -f $(obj)/print_offsets.s $(obj)/print_offsets $(obj)/offsets.h
+clean-files := print_offsets.s print_offsets offsets.h
$(TARGET): $(obj)/offsets.h
@if ! cmp -s $(obj)/offsets.h ${TARGET}; then \
diff -Nru a/arch/ia64/tools/print_offsets.c b/arch/ia64/tools/print_offsets.c
--- a/arch/ia64/tools/print_offsets.c Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/tools/print_offsets.c Fri Jan 24 20:41:05 2003
@@ -1,7 +1,7 @@
/*
* Utility to generate asm-ia64/offsets.h.
*
- * Copyright (C) 1999-2002 Hewlett-Packard Co
+ * Copyright (C) 1999-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* Note that this file has dual use: when building the kernel
@@ -53,7 +53,10 @@
{ "UNW_FRAME_INFO_SIZE", sizeof (struct unw_frame_info) },
{ "", 0 }, /* spacer */
{ "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) },
+ { "IA64_TASK_THREAD_ON_USTACK_OFFSET", offsetof (struct task_struct, thread.on_ustack) },
{ "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
+ { "IA64_TASK_TGID_OFFSET", offsetof (struct task_struct, tgid) },
+ { "IA64_TASK_CLEAR_CHILD_TID_OFFSET",offsetof (struct task_struct, clear_child_tid) },
{ "IA64_PT_REGS_CR_IPSR_OFFSET", offsetof (struct pt_regs, cr_ipsr) },
{ "IA64_PT_REGS_CR_IIP_OFFSET", offsetof (struct pt_regs, cr_iip) },
{ "IA64_PT_REGS_CR_IFS_OFFSET", offsetof (struct pt_regs, cr_ifs) },
diff -Nru a/arch/ia64/vmlinux.lds.S b/arch/ia64/vmlinux.lds.S
--- a/arch/ia64/vmlinux.lds.S Fri Jan 24 20:41:05 2003
+++ b/arch/ia64/vmlinux.lds.S Fri Jan 24 20:41:05 2003
@@ -6,7 +6,7 @@
#define LOAD_OFFSET PAGE_OFFSET
#include <asm-generic/vmlinux.lds.h>
-
+
OUTPUT_FORMAT("elf64-ia64-little")
OUTPUT_ARCH(ia64)
ENTRY(phys_start)
@@ -29,6 +29,7 @@
_text = .;
_stext = .;
+
.text : AT(ADDR(.text) - PAGE_OFFSET)
{
*(.text.ivt)
@@ -44,33 +45,39 @@
/* Read-only data */
- /* Global data */
- _data = .;
-
/* Exception table */
. = ALIGN(16);
- __start___ex_table = .;
__ex_table : AT(ADDR(__ex_table) - PAGE_OFFSET)
- { *(__ex_table) }
- __stop___ex_table = .;
+ {
+ __start___ex_table = .;
+ *(__ex_table)
+ __stop___ex_table = .;
+ }
+
+ /* Global data */
+ _data = .;
#if defined(CONFIG_IA64_GENERIC)
/* Machine Vector */
. = ALIGN(16);
- machvec_start = .;
.machvec : AT(ADDR(.machvec) - PAGE_OFFSET)
- { *(.machvec) }
- machvec_end = .;
+ {
+ machvec_start = .;
+ *(.machvec)
+ machvec_end = .;
+ }
#endif
/* Unwind info & table: */
. = ALIGN(8);
.IA_64.unwind_info : AT(ADDR(.IA_64.unwind_info) - PAGE_OFFSET)
{ *(.IA_64.unwind_info*) }
- ia64_unw_start = .;
.IA_64.unwind : AT(ADDR(.IA_64.unwind) - PAGE_OFFSET)
- { *(.IA_64.unwind*) }
- ia64_unw_end = .;
+ {
+ ia64_unw_start = .;
+ *(.IA_64.unwind*)
+ ia64_unw_end = .;
+ }
RODATA
@@ -87,32 +94,38 @@
.init.data : AT(ADDR(.init.data) - PAGE_OFFSET)
{ *(.init.data) }
- __initramfs_start = .;
.init.ramfs : AT(ADDR(.init.ramfs) - PAGE_OFFSET)
- { *(.init.ramfs) }
- __initramfs_end = .;
+ {
+ __initramfs_start = .;
+ *(.init.ramfs)
+ __initramfs_end = .;
+ }
. = ALIGN(16);
- __setup_start = .;
.init.setup : AT(ADDR(.init.setup) - PAGE_OFFSET)
- { *(.init.setup) }
- __setup_end = .;
- __start___param = .;
+ {
+ __setup_start = .;
+ *(.init.setup)
+ __setup_end = .;
+ }
__param : AT(ADDR(__param) - PAGE_OFFSET)
- { *(__param) }
- __stop___param = .;
- __initcall_start = .;
+ {
+ __start___param = .;
+ *(__param)
+ __stop___param = .;
+ }
.initcall.init : AT(ADDR(.initcall.init) - PAGE_OFFSET)
{
- *(.initcall1.init)
- *(.initcall2.init)
- *(.initcall3.init)
- *(.initcall4.init)
- *(.initcall5.init)
- *(.initcall6.init)
- *(.initcall7.init)
+ __initcall_start = .;
+ *(.initcall1.init)
+ *(.initcall2.init)
+ *(.initcall3.init)
+ *(.initcall4.init)
+ *(.initcall5.init)
+ *(.initcall6.init)
+ *(.initcall7.init)
+ __initcall_end = .;
}
- __initcall_end = .;
. = ALIGN(PAGE_SIZE);
__init_end = .;
@@ -130,10 +143,6 @@
. = ALIGN(SMP_CACHE_BYTES);
.data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - PAGE_OFFSET)
{ *(.data.cacheline_aligned) }
-
- /* Kernel symbol names for modules: */
- .kstrtab : AT(ADDR(.kstrtab) - PAGE_OFFSET)
- { *(.kstrtab) }
/* Per-cpu data: */
. = ALIGN(PERCPU_PAGE_SIZE);
diff -Nru a/drivers/acpi/osl.c b/drivers/acpi/osl.c
--- a/drivers/acpi/osl.c Fri Jan 24 20:41:05 2003
+++ b/drivers/acpi/osl.c Fri Jan 24 20:41:05 2003
@@ -143,9 +143,9 @@
#ifdef CONFIG_ACPI_EFI
addr->pointer_type = ACPI_PHYSICAL_POINTER;
if (efi.acpi20)
- addr->pointer.physical = (ACPI_PHYSICAL_ADDRESS) virt_to_phys(efi.acpi20);
+ addr->pointer.physical = (acpi_physical_address) virt_to_phys(efi.acpi20);
else if (efi.acpi)
- addr->pointer.physical = (ACPI_PHYSICAL_ADDRESS) virt_to_phys(efi.acpi);
+ addr->pointer.physical = (acpi_physical_address) virt_to_phys(efi.acpi);
else {
printk(KERN_ERR PREFIX "System description tables not found\n");
return AE_NOT_FOUND;
@@ -224,7 +224,14 @@
acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
{
#ifdef CONFIG_IA64
- irq = gsi_to_vector(irq);
+ int vector;
+
+ vector = acpi_irq_to_vector(irq);
+ if (vector < 0) {
+ printk(KERN_ERR PREFIX "SCI (IRQ%d) not registered\n", irq);
+ return AE_OK;
+ }
+ irq = vector;
#endif
acpi_irq_irq = irq;
acpi_irq_handler = handler;
@@ -242,7 +249,7 @@
{
if (acpi_irq_handler) {
#ifdef CONFIG_IA64
- irq = gsi_to_vector(irq);
+ irq = acpi_irq_to_vector(irq);
#endif
free_irq(irq, acpi_irq);
acpi_irq_handler = NULL;
diff -Nru a/drivers/acpi/pci_irq.c b/drivers/acpi/pci_irq.c
--- a/drivers/acpi/pci_irq.c Fri Jan 24 20:41:05 2003
+++ b/drivers/acpi/pci_irq.c Fri Jan 24 20:41:05 2003
@@ -36,6 +36,9 @@
#ifdef CONFIG_X86_IO_APIC
#include <asm/mpspec.h>
#endif
+#ifdef CONFIG_IOSAPIC
+# include <asm/iosapic.h>
+#endif
#include "acpi_bus.h"
#include "acpi_drivers.h"
@@ -250,6 +253,8 @@
return_VALUE(0);
}
+ entry->irq = entry->link.index;
+
if (!entry->irq && entry->link.handle) {
entry->irq = acpi_pci_link_get_irq(entry->link.handle, entry->link.index);
if (!entry->irq) {
@@ -355,7 +360,7 @@
return_VALUE(0);
}
- dev->irq = irq;
+ dev->irq = gsi_to_irq(irq);
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device %s using IRQ %d\n", dev->slot_name, dev->irq));
diff -Nru a/drivers/char/agp/agp.h b/drivers/char/agp/agp.h
--- a/drivers/char/agp/agp.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/agp/agp.h Fri Jan 24 20:41:05 2003
@@ -47,7 +47,7 @@
flush_agp_cache();
}
#else
-static void global_cache_flush(void)
+static void __attribute__((unused)) global_cache_flush(void)
{
flush_agp_cache();
}
diff -Nru a/drivers/char/agp/backend.c b/drivers/char/agp/backend.c
--- a/drivers/char/agp/backend.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/agp/backend.c Fri Jan 24 20:41:05 2003
@@ -26,6 +26,7 @@
* TODO:
* - Allocate more than order 0 pages to avoid too much linear map splitting.
*/
+
#include <linux/config.h>
#include <linux/module.h>
#include <linux/pci.h>
diff -Nru a/drivers/char/agp/hp-agp.c b/drivers/char/agp/hp-agp.c
--- a/drivers/char/agp/hp-agp.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/agp/hp-agp.c Fri Jan 24 20:41:05 2003
@@ -369,7 +369,7 @@
}
static struct agp_driver hp_agp_driver = {
- .owner = THIS_MODULE;
+ .owner = THIS_MODULE,
};
static int __init agp_hp_probe (struct pci_dev *dev, const struct pci_device_id *ent)
diff -Nru a/drivers/char/drm/drmP.h b/drivers/char/drm/drmP.h
--- a/drivers/char/drm/drmP.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/drmP.h Fri Jan 24 20:41:05 2003
@@ -230,16 +230,16 @@
if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
/* Mapping helper macros */
-#define DRM_IOREMAP(map) \
- (map)->handle = DRM(ioremap)( (map)->offset, (map)->size )
+#define DRM_IOREMAP(map, dev) \
+ (map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
-#define DRM_IOREMAP_NOCACHE(map) \
- (map)->handle = DRM(ioremap_nocache)((map)->offset, (map)->size)
+#define DRM_IOREMAP_NOCACHE(map, dev) \
+ (map)->handle = DRM(ioremap_nocache)((map)->offset, (map)->size, (dev))
-#define DRM_IOREMAPFREE(map) \
- do { \
- if ( (map)->handle && (map)->size ) \
- DRM(ioremapfree)( (map)->handle, (map)->size ); \
+#define DRM_IOREMAPFREE(map, dev) \
+ do { \
+ if ( (map)->handle && (map)->size ) \
+ DRM(ioremapfree)( (map)->handle, (map)->size, (dev) ); \
} while (0)
#define DRM_FIND_MAP(_map, _o) \
@@ -693,9 +693,10 @@
extern unsigned long DRM(alloc_pages)(int order, int area);
extern void DRM(free_pages)(unsigned long address, int order,
int area);
-extern void *DRM(ioremap)(unsigned long offset, unsigned long size);
-extern void *DRM(ioremap_nocache)(unsigned long offset, unsigned long size);
-extern void DRM(ioremapfree)(void *pt, unsigned long size);
+extern void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev);
+extern void *DRM(ioremap_nocache)(unsigned long offset, unsigned long size,
+ drm_device_t *dev);
+extern void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev);
#if __REALLY_HAVE_AGP
extern agp_memory *DRM(alloc_agp)(int pages, u32 type);
diff -Nru a/drivers/char/drm/drm_bufs.h b/drivers/char/drm/drm_bufs.h
--- a/drivers/char/drm/drm_bufs.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/drm_bufs.h Fri Jan 24 20:41:05 2003
@@ -107,7 +107,7 @@
switch ( map->type ) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
-#if !defined(__sparc__) && !defined(__alpha__)
+#if !defined(__sparc__) && !defined(__alpha__) && !defined(__ia64__)
if ( map->offset + map->size < map->offset ||
map->offset < virt_to_phys(high_memory) ) {
DRM(free)( map, sizeof(*map), DRM_MEM_MAPS );
@@ -124,7 +124,7 @@
MTRR_TYPE_WRCOMB, 1 );
}
#endif
- map->handle = DRM(ioremap)( map->offset, map->size );
+ map->handle = DRM(ioremap)( map->offset, map->size, dev );
break;
case _DRM_SHM:
@@ -246,7 +246,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
diff -Nru a/drivers/char/drm/drm_drv.h b/drivers/char/drm/drm_drv.h
--- a/drivers/char/drm/drm_drv.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/drm_drv.h Fri Jan 24 20:41:05 2003
@@ -443,7 +443,7 @@
DRM_DEBUG( "mtrr_del=%d\n", retcode );
}
#endif
- DRM(ioremapfree)( map->handle, map->size );
+ DRM(ioremapfree)( map->handle, map->size, dev );
break;
case _DRM_SHM:
vfree(map->handle);
diff -Nru a/drivers/char/drm/drm_memory.h b/drivers/char/drm/drm_memory.h
--- a/drivers/char/drm/drm_memory.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/drm_memory.h Fri Jan 24 20:41:05 2003
@@ -33,6 +33,10 @@
#include <linux/config.h>
#include "drmP.h"
#include <linux/wrapper.h>
+#include <linux/vmalloc.h>
+
+#include <asm/agp.h>
+#include <asm/tlbflush.h>
typedef struct drm_mem_stats {
const char *name;
@@ -291,17 +295,122 @@
}
}
-void *DRM(ioremap)(unsigned long offset, unsigned long size)
+#if __REALLY_HAVE_AGP
+
+/*
+ * Find the drm_map that covers the range [offset, offset+size).
+ */
+static inline drm_map_t *
+drm_lookup_map (unsigned long offset, unsigned long size, drm_device_t *dev)
{
+ struct list_head *list;
+ drm_map_list_t *r_list;
+ drm_map_t *map;
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *) list;
+ map = r_list->map;
+ if (!map)
+ continue;
+ if (map->offset <= offset && (offset + size) <= (map->offset + map->size))
+ return map;
+ }
+ return NULL;
+}
+
+static inline void *
+agp_remap (unsigned long offset, unsigned long size, drm_device_t *dev)
+{
+ unsigned long *phys_addr_map, i, num_pages = PAGE_ALIGN(size) / PAGE_SIZE;
+ struct page **page_map, **page_map_ptr;
+ struct drm_agp_mem *agpmem;
+ struct vm_struct *area;
+
+
+ size = PAGE_ALIGN(size);
+
+ for (agpmem = dev->agp->memory; agpmem; agpmem = agpmem->next)
+ if (agpmem->bound <= offset
+ && (agpmem->bound + (agpmem->pages << PAGE_SHIFT)) >= (offset + size))
+ break;
+ if (!agpmem)
+ return NULL;
+
+ /*
+ * OK, we're mapping AGP space on a chipset/platform on which memory accesses by
+ * the CPU do not get remapped by the GART. We fix this by using the kernel's
+ * page-table instead (that's probably faster anyhow...).
+ */
+ area = get_vm_area(size, VM_IOREMAP);
+ if (!area)
+ return NULL;
+
+ flush_cache_all();
+
+ /* note: use vmalloc() because num_pages could be large... */
+ page_map = vmalloc(num_pages * sizeof(struct page *));
+ if (!page_map)
+ return NULL;
+
+ phys_addr_map = agpmem->memory->memory + (offset - agpmem->bound) / PAGE_SIZE;
+ for (i = 0; i < num_pages; ++i)
+ page_map[i] = pfn_to_page(phys_addr_map[i] >> PAGE_SHIFT);
+ page_map_ptr = page_map;
+ if (map_vm_area(area, PAGE_AGP, &page_map_ptr) < 0) {
+ vunmap(area->addr);
+ vfree(page_map);
+ return NULL;
+ }
+ vfree(page_map);
+
+ flush_tlb_kernel_range(area->addr, area->addr + size);
+ return area->addr;
+}
+
+static inline unsigned long
+drm_follow_page (void *vaddr)
+{
+	pgd_t *pgd = pgd_offset_k((unsigned long) vaddr);
+	pmd_t *pmd = pmd_offset(pgd, (unsigned long) vaddr);
+	pte_t *ptep = pte_offset_kernel(pmd, (unsigned long) vaddr);
+
+	return pte_pfn(*ptep) << PAGE_SHIFT;
+}
+
+#else /* !__REALLY_HAVE_AGP */
+
+static inline void *
+agp_remap (unsigned long offset, unsigned long size, drm_device_t *dev) { return NULL; }
+
+#endif /* !__REALLY_HAVE_AGP */
+
+void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
+{
+ int remap_aperture = 0;
void *pt;
if (!size) {
- DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
- "Mapping 0 bytes at 0x%08lx\n", offset);
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS, "Mapping 0 bytes at 0x%08lx\n", offset);
return NULL;
}
- if (!(pt = ioremap(offset, size))) {
+#if __REALLY_HAVE_AGP
+ if (dev->agp->cant_use_aperture) {
+ drm_map_t *map = drm_lookup_map(offset, size, dev);
+
+ if (map && map->type == _DRM_AGP)
+ remap_aperture = 1;
+ }
+#endif
+ if (remap_aperture)
+ pt = agp_remap(offset, size, dev);
+ else
+ pt = ioremap(offset, size);
+ if (!pt) {
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
@@ -314,8 +423,9 @@
return pt;
}
-void *DRM(ioremap_nocache)(unsigned long offset, unsigned long size)
+void *DRM(ioremap_nocache)(unsigned long offset, unsigned long size, drm_device_t *dev)
{
+ int remap_aperture = 0;
void *pt;
if (!size) {
@@ -324,7 +434,19 @@
return NULL;
}
- if (!(pt = ioremap_nocache(offset, size))) {
+#if __REALLY_HAVE_AGP
+ if (dev->agp->cant_use_aperture) {
+ drm_map_t *map = drm_lookup_map(offset, size, dev);
+
+ if (map && map->type == _DRM_AGP)
+ remap_aperture = 1;
+ }
+#endif
+ if (remap_aperture)
+ pt = agp_remap(offset, size, dev);
+ else
+ pt = ioremap_nocache(offset, size);
+ if (!pt) {
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
@@ -337,16 +459,40 @@
return pt;
}
-void DRM(ioremapfree)(void *pt, unsigned long size)
+void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev)
{
int alloc_count;
int free_count;
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
- else
- iounmap(pt);
+ else {
+ int unmap_aperture = 0;
+#if __REALLY_HAVE_AGP
+ /*
+ * This is rather ugly. It would be much cleaner if the DRM API would use
+ * separate routines for handling mappings in the AGP space. Hopefully this
+ * can be done in a future revision of the interface...
+ */
+ if (dev->agp->cant_use_aperture
+ && ((unsigned long) pt >= VMALLOC_START && (unsigned long) pt < VMALLOC_END))
+ {
+ unsigned long offset = (drm_follow_page(pt)
+ | ((unsigned long) pt & ~PAGE_MASK));
+ drm_map_t *map = drm_lookup_map(offset, size, dev);
+
+ if (map && map->type == _DRM_AGP)
+ unmap_aperture = 1;
+ }
+#endif
+ if (unmap_aperture)
+ vunmap(pt);
+ else
+ iounmap(pt);
+ }
spin_lock(&DRM(mem_lock));
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_freed += size;
diff -Nru a/drivers/char/drm/drm_vm.h b/drivers/char/drm/drm_vm.h
--- a/drivers/char/drm/drm_vm.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/drm_vm.h Fri Jan 24 20:41:05 2003
@@ -108,12 +108,12 @@
* Get the page, inc the use count, and return it
*/
offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
- agpmem->memory->memory[offset] &= dev->agp->page_mask;
page = virt_to_page(__va(agpmem->memory->memory[offset]));
get_page(page);
- DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx\n",
- baddr, __va(agpmem->memory->memory[offset]), offset);
+ DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx, count=%d\n",
+ baddr, __va(agpmem->memory->memory[offset]), offset,
+ atomic_read(&page->count));
return page;
}
@@ -207,7 +207,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
@@ -421,15 +421,16 @@
switch (map->type) {
case _DRM_AGP:
-#if defined(__alpha__)
+#if __REALLY_HAVE_AGP
+ if (dev->agp->cant_use_aperture) {
/*
- * On Alpha we can't talk to bus dma address from the
- * CPU, so for memory of type DRM_AGP, we'll deal with
- * sorting out the real physical pages and mappings
- * in nopage()
+ * On some platforms we can't talk to bus dma address from the CPU, so for
+ * memory of type DRM_AGP, we'll deal with sorting out the real physical
+ * pages and mappings in nopage()
*/
vma->vm_ops = &DRM(vm_ops);
break;
+ }
#endif
/* fall through to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
@@ -440,15 +441,15 @@
pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
}
-#elif defined(__ia64__)
- if (map->type != _DRM_AGP)
- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
#elif defined(__powerpc__)
pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED;
#endif
vma->vm_flags |= VM_IO; /* not in core dump */
}
+#if defined(__ia64__)
+ if (map->type != _DRM_AGP)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+#endif
offset = DRIVER_GET_REG_OFS();
#ifdef __sparc__
if (io_remap_page_range(DRM_RPR_ARG(vma) vma->vm_start,
diff -Nru a/drivers/char/drm/gamma_dma.c b/drivers/char/drm/gamma_dma.c
--- a/drivers/char/drm/gamma_dma.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/gamma_dma.c Fri Jan 24 20:41:05 2003
@@ -637,7 +637,7 @@
} else {
DRM_FIND_MAP( dev_priv->buffers, init->buffers_offset );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->buffers, dev );
buf = dma->buflist[GLINT_DRI_BUF_COUNT];
pgt = buf->address;
@@ -667,7 +667,7 @@
if ( dev->dev_private ) {
drm_gamma_private_t *dev_priv = dev->dev_private;
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
DRM(free)( dev->dev_private, sizeof(drm_gamma_private_t),
DRM_MEM_DRIVER );
diff -Nru a/drivers/char/drm/i810_dma.c b/drivers/char/drm/i810_dma.c
--- a/drivers/char/drm/i810_dma.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/i810_dma.c Fri Jan 24 20:41:05 2003
@@ -275,7 +275,7 @@
if(dev_priv->ring.virtual_start) {
DRM(ioremapfree)((void *) dev_priv->ring.virtual_start,
- dev_priv->ring.Size);
+ dev_priv->ring.Size, dev);
}
if(dev_priv->hw_status_page != 0UL) {
pci_free_consistent(dev->pdev, PAGE_SIZE,
@@ -291,7 +291,7 @@
for (i = 0; i < dma->buf_count; i++) {
drm_buf_t *buf = dma->buflist[ i ];
drm_i810_buf_priv_t *buf_priv = buf->dev_private;
- DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total);
+ DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total, dev);
}
}
return 0;
@@ -361,7 +361,7 @@
*buf_priv->in_use = I810_BUF_FREE;
buf_priv->kernel_virtual = DRM(ioremap)(buf->bus_address,
- buf->total);
+ buf->total, dev);
}
return 0;
}
@@ -414,7 +414,7 @@
dev_priv->ring.virtual_start = DRM(ioremap)(dev->agp->base +
init->ring_start,
- init->ring_size);
+ init->ring_size, dev);
if (dev_priv->ring.virtual_start == NULL) {
dev->dev_private = (void *) dev_priv;
diff -Nru a/drivers/char/drm/i830_dma.c b/drivers/char/drm/i830_dma.c
--- a/drivers/char/drm/i830_dma.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/i830_dma.c Fri Jan 24 20:41:05 2003
@@ -283,7 +283,7 @@
if(dev_priv->ring.virtual_start) {
DRM(ioremapfree)((void *) dev_priv->ring.virtual_start,
- dev_priv->ring.Size);
+ dev_priv->ring.Size, dev);
}
if(dev_priv->hw_status_page != 0UL) {
pci_free_consistent(dev->pdev, PAGE_SIZE,
@@ -299,7 +299,7 @@
for (i = 0; i < dma->buf_count; i++) {
drm_buf_t *buf = dma->buflist[ i ];
drm_i830_buf_priv_t *buf_priv = buf->dev_private;
- DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total);
+ DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total, dev);
}
}
return 0;
@@ -371,7 +371,7 @@
*buf_priv->in_use = I830_BUF_FREE;
buf_priv->kernel_virtual = DRM(ioremap)(buf->bus_address,
- buf->total);
+ buf->total, dev);
}
return 0;
}
@@ -425,7 +425,7 @@
dev_priv->ring.virtual_start = DRM(ioremap)(dev->agp->base +
init->ring_start,
- init->ring_size);
+ init->ring_size, dev);
if (dev_priv->ring.virtual_start == NULL) {
dev->dev_private = (void *) dev_priv;
diff -Nru a/drivers/char/drm/mga_dma.c b/drivers/char/drm/mga_dma.c
--- a/drivers/char/drm/mga_dma.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/mga_dma.c Fri Jan 24 20:41:05 2003
@@ -554,9 +554,9 @@
(drm_mga_sarea_t *)((u8 *)dev_priv->sarea->handle +
init->sarea_priv_offset);
- DRM_IOREMAP( dev_priv->warp );
- DRM_IOREMAP( dev_priv->primary );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->warp, dev );
+ DRM_IOREMAP( dev_priv->primary, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->warp->handle ||
!dev_priv->primary->handle ||
@@ -642,9 +642,9 @@
if ( dev->dev_private ) {
drm_mga_private_t *dev_priv = dev->dev_private;
- DRM_IOREMAPFREE( dev_priv->warp );
- DRM_IOREMAPFREE( dev_priv->primary );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->warp, dev );
+ DRM_IOREMAPFREE( dev_priv->primary, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
if ( dev_priv->head != NULL ) {
mga_freelist_cleanup( dev );
diff -Nru a/drivers/char/drm/mga_drv.h b/drivers/char/drm/mga_drv.h
--- a/drivers/char/drm/mga_drv.h Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/mga_drv.h Fri Jan 24 20:41:05 2003
@@ -238,7 +238,7 @@
if ( MGA_VERBOSE ) { \
DRM_INFO( "BEGIN_DMA( %d ) in %s\n", \
(n), __FUNCTION__ ); \
- DRM_INFO( " space=0x%x req=0x%x\n", \
+ DRM_INFO( " space=0x%x req=0x%Zx\n", \
dev_priv->prim.space, (n) * DMA_BLOCK_SIZE ); \
} \
prim = dev_priv->prim.start; \
@@ -288,7 +288,7 @@
#define DMA_WRITE( offset, val ) \
do { \
if ( MGA_VERBOSE ) { \
- DRM_INFO( " DMA_WRITE( 0x%08x ) at 0x%04x\n", \
+ DRM_INFO( " DMA_WRITE( 0x%08x ) at 0x%04Zx\n", \
(u32)(val), write + (offset) * sizeof(u32) ); \
} \
*(volatile u32 *)(prim + write + (offset) * sizeof(u32)) = val; \
diff -Nru a/drivers/char/drm/r128_cce.c b/drivers/char/drm/r128_cce.c
--- a/drivers/char/drm/r128_cce.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/r128_cce.c Fri Jan 24 20:41:05 2003
@@ -350,8 +350,8 @@
R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
entry->busaddr[page_ofs]);
- DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
- entry->busaddr[page_ofs],
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ (unsigned long) entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
}
@@ -540,9 +540,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cce_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cce_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cce_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -618,9 +618,9 @@
#if __REALLY_HAVE_SG
if ( !dev_priv->is_pci ) {
#endif
- DRM_IOREMAPFREE( dev_priv->cce_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cce_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
#if __REALLY_HAVE_SG
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
diff -Nru a/drivers/char/drm/radeon_cp.c b/drivers/char/drm/radeon_cp.c
--- a/drivers/char/drm/radeon_cp.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/drm/radeon_cp.c Fri Jan 24 20:41:05 2003
@@ -904,8 +904,8 @@
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
entry->busaddr[page_ofs]);
- DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
- entry->busaddr[page_ofs],
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ (unsigned long) entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
}
@@ -1157,9 +1157,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cp_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cp_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cp_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -1278,9 +1278,9 @@
drm_radeon_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cp_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cp_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
#if __REALLY_HAVE_SG
if (!DRM(ati_pcigart_cleanup)( dev,
diff -Nru a/drivers/char/mem.c b/drivers/char/mem.c
--- a/drivers/char/mem.c Fri Jan 24 20:41:05 2003
+++ b/drivers/char/mem.c Fri Jan 24 20:41:05 2003
@@ -528,10 +528,12 @@
case 0:
file->f_pos = offset;
ret = file->f_pos;
+ force_successful_syscall_return();
break;
case 1:
file->f_pos += offset;
ret = file->f_pos;
+ force_successful_syscall_return();
break;
default:
ret = -EINVAL;
diff -Nru a/drivers/media/radio/Makefile b/drivers/media/radio/Makefile
--- a/drivers/media/radio/Makefile Fri Jan 24 20:41:05 2003
+++ b/drivers/media/radio/Makefile Fri Jan 24 20:41:05 2003
@@ -5,6 +5,8 @@
# All of the (potential) objects that export symbols.
# This list comes from 'grep -l EXPORT_SYMBOL *.[hc]'.
+obj-y := dummy.o
+
export-objs := miropcm20-rds-core.o
miropcm20-objs := miropcm20-rds-core.o miropcm20-radio.o
diff -Nru a/drivers/media/radio/dummy.c b/drivers/media/radio/dummy.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/media/radio/dummy.c Fri Jan 24 20:41:06 2003
@@ -0,0 +1 @@
+/* just so the linker knows what kind of object files it's dealing with... */
diff -Nru a/drivers/media/video/Makefile b/drivers/media/video/Makefile
--- a/drivers/media/video/Makefile Fri Jan 24 20:41:05 2003
+++ b/drivers/media/video/Makefile Fri Jan 24 20:41:05 2003
@@ -12,6 +12,8 @@
bttv-risc.o bttv-vbi.o
zoran-objs := zr36120.o zr36120_i2c.o zr36120_mem.o
+obj-y := dummy.o
+
obj-$(CONFIG_VIDEO_DEV) += videodev.o v4l2-common.o v4l1-compat.o
obj-$(CONFIG_VIDEO_BT848) += bttv.o msp3400.o tvaudio.o \
diff -Nru a/drivers/media/video/dummy.c b/drivers/media/video/dummy.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/media/video/dummy.c Fri Jan 24 20:41:06 2003
@@ -0,0 +1 @@
+/* just so the linker knows what kind of object files it's dealing with... */
diff -Nru a/drivers/net/tulip/media.c b/drivers/net/tulip/media.c
--- a/drivers/net/tulip/media.c Fri Jan 24 20:41:05 2003
+++ b/drivers/net/tulip/media.c Fri Jan 24 20:41:05 2003
@@ -278,6 +278,10 @@
for (i = 0; i < init_length; i++)
outl(init_sequence[i], ioaddr + CSR12);
}
+
+ (void) inl(ioaddr + CSR6); /* flush CSR12 writes */
+ udelay(500); /* Give MII time to recover */
+
tmp_info = get_u16(&misc_info[1]);
if (tmp_info)
tp->advertising[phy_num] = tmp_info | 1;
diff -Nru a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
--- a/drivers/scsi/megaraid.c Fri Jan 24 20:41:05 2003
+++ b/drivers/scsi/megaraid.c Fri Jan 24 20:41:05 2003
@@ -2045,7 +2045,7 @@
return;
mbox = (mega_mailbox *) pScb->mboxData;
- printk ("%u cmd:%x id:%x #scts:%x lba:%x addr:%x logdrv:%x #sg:%x\n",
+ printk ("%lu cmd:%x id:%x #scts:%x lba:%x addr:%x logdrv:%x #sg:%x\n",
pScb->SCpnt->pid,
mbox->cmd, mbox->cmdid, mbox->numsectors,
mbox->lba, mbox->xferaddr, mbox->logdrv, mbox->numsgelements);
@@ -3351,9 +3351,13 @@
mbox[0] = IS_BIOS_ENABLED;
mbox[2] = GET_BIOS;
- mboxpnt->xferaddr = virt_to_bus ((void *) megacfg->mega_buffer);
+ mboxpnt->xferaddr = pci_map_single(megacfg->dev,
+ (void *) megacfg->mega_buffer, (2 * 1024L),
+ PCI_DMA_FROMDEVICE);
ret = megaIssueCmd (megacfg, mbox, NULL, 0);
+
+ pci_unmap_single(megacfg->dev, mboxpnt->xferaddr, 2 * 1024L, PCI_DMA_FROMDEVICE);
return (*(char *) megacfg->mega_buffer);
}
diff -Nru a/drivers/scsi/scsi_ioctl.c b/drivers/scsi/scsi_ioctl.c
--- a/drivers/scsi/scsi_ioctl.c Fri Jan 24 20:41:05 2003
+++ b/drivers/scsi/scsi_ioctl.c Fri Jan 24 20:41:05 2003
@@ -219,6 +219,9 @@
unsigned int needed, buf_needed;
int timeout, retries, result;
int data_direction, gfp_mask = GFP_KERNEL;
+#if __GNUC__ < 3
+ int foo;
+#endif
if (!sic)
return -EINVAL;
@@ -232,11 +235,21 @@
if (verify_area(VERIFY_READ, sic, sizeof(Scsi_Ioctl_Command)))
return -EFAULT;
+#if __GNUC__ < 3
+ foo = __get_user(inlen, &sic->inlen);
+ if (foo)
+ return -EFAULT;
+
+ foo = __get_user(outlen, &sic->outlen);
+ if (foo)
+ return -EFAULT;
+#else
if(__get_user(inlen, &sic->inlen))
return -EFAULT;
if(__get_user(outlen, &sic->outlen))
return -EFAULT;
+#endif
/*
* We do not transfer more than MAX_BUF with this interface.
diff -Nru a/drivers/scsi/sym53c8xx_2/sym_glue.c b/drivers/scsi/sym53c8xx_2/sym_glue.c
--- a/drivers/scsi/sym53c8xx_2/sym_glue.c Fri Jan 24 20:41:06 2003
+++ b/drivers/scsi/sym53c8xx_2/sym_glue.c Fri Jan 24 20:41:06 2003
@@ -295,11 +295,7 @@
#ifndef SYM_LINUX_DYNAMIC_DMA_MAPPING
typedef u_long bus_addr_t;
#else
-#if SYM_CONF_DMA_ADDRESSING_MODE > 0
-typedef dma64_addr_t bus_addr_t;
-#else
typedef dma_addr_t bus_addr_t;
-#endif
#endif
/*
diff -Nru a/drivers/scsi/sym53c8xx_2/sym_malloc.c b/drivers/scsi/sym53c8xx_2/sym_malloc.c
--- a/drivers/scsi/sym53c8xx_2/sym_malloc.c Fri Jan 24 20:41:05 2003
+++ b/drivers/scsi/sym53c8xx_2/sym_malloc.c Fri Jan 24 20:41:05 2003
@@ -143,12 +143,14 @@
a = (m_addr_t) ptr;
while (1) {
-#ifdef SYM_MEM_FREE_UNUSED
if (s == SYM_MEM_CLUSTER_SIZE) {
+#ifdef SYM_MEM_FREE_UNUSED
M_FREE_MEM_CLUSTER(a);
- break;
- }
+#else
+ ((m_link_p) a)->next = h[i].next;
+ h[i].next = (m_link_p) a;
#endif
+ break;
+ }
b = a ^ s;
q = &h[i];
while (q->next && q->next != (m_link_p) b) {
diff -Nru a/drivers/serial/8250.c b/drivers/serial/8250.c
--- a/drivers/serial/8250.c Fri Jan 24 20:41:05 2003
+++ b/drivers/serial/8250.c Fri Jan 24 20:41:05 2003
@@ -1999,9 +1999,11 @@
return __register_serial(req, -1);
}
-int __init early_serial_setup(struct serial_struct *req)
+int __init early_serial_setup(struct uart_port *port)
{
- __register_serial(req, req->line);
+ serial8250_isa_init_ports();
+ serial8250_ports[port->line].port = *port;
+ serial8250_ports[port->line].port.ops = &serial8250_pops;
return 0;
}
diff -Nru a/drivers/serial/8250_acpi.c b/drivers/serial/8250_acpi.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/serial/8250_acpi.c Fri Jan 24 20:41:06 2003
@@ -0,0 +1,178 @@
+/*
+ * linux/drivers/char/acpi_serial.c
+ *
+ * Copyright (C) 2000, 2002 Hewlett-Packard Co.
+ * Khalid Aziz <khalid_aziz@hp.com>
+ *
+ * Detect and initialize the headless console serial port defined in SPCR table and debug
+ * serial port defined in DBGP table.
+ *
+ * 2002/08/29 davidm Adjust it to new 2.5 serial driver infrastructure.
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/acpi.h>
+#include <linux/init.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <linux/acpi_serial.h>
+
+#include <asm/io.h>
+#include <asm/serial.h>
+
+#undef SERIAL_DEBUG_ACPI
+
+#define ACPI_SERIAL_CONSOLE_PORT 0
+#define ACPI_SERIAL_DEBUG_PORT 5
+
+/*
+ * Query ACPI tables for a debug and a headless console serial port. If found, add them to
+ * rs_table[]. A pointer to either SPCR or DBGP table is passed as parameter. This
+ * function should be called before serial_console_init() is called to make sure the SPCR
+ * serial console will be available for use. IA-64 kernel calls this function from within
+ * acpi.c when it encounters SPCR or DBGP tables as it parses the ACPI 2.0 tables during
+ * bootup.
+ */
+void __init
+setup_serial_acpi (void *tablep)
+{
+ acpi_ser_t *acpi_ser_p;
+ struct uart_port port;
+ unsigned long iobase;
+ int gsi;
+
+#ifdef SERIAL_DEBUG_ACPI
+ printk("Entering setup_serial_acpi()\n");
+#endif
+
+ /* Now get the table */
+ if (!tablep)
+ return;
+
+ memset(&port, 0, sizeof(port));
+
+ acpi_ser_p = (acpi_ser_t *) tablep;
+
+ /*
+ * Perform a sanity check on the table. Table should have a signature of "SPCR" or
+ * "DBGP" and it should be at least 52 bytes long.
+ */
+ if (strncmp(acpi_ser_p->signature, ACPI_SPCRT_SIGNATURE, ACPI_SIG_LEN) != 0 &&
+ strncmp(acpi_ser_p->signature, ACPI_DBGPT_SIGNATURE, ACPI_SIG_LEN) != 0)
+ return;
+ if (acpi_ser_p->length < 52)
+ return;
+
+ iobase = (((u64) acpi_ser_p->base_addr.addrh) << 32) | acpi_ser_p->base_addr.addrl;
+ gsi = ( (acpi_ser_p->global_int[3] << 24) | (acpi_ser_p->global_int[2] << 16)
+ | (acpi_ser_p->global_int[1] << 8) | (acpi_ser_p->global_int[0] << 0));
+
+#ifdef SERIAL_DEBUG_ACPI
+ printk("setup_serial_acpi(): table pointer = 0x%p\n", acpi_ser_p);
+ printk(" sig = '%c%c%c%c'\n", acpi_ser_p->signature[0],
+ acpi_ser_p->signature[1], acpi_ser_p->signature[2], acpi_ser_p->signature[3]);
+ printk(" length = %d\n", acpi_ser_p->length);
+ printk(" Rev = %d\n", acpi_ser_p->rev);
+ printk(" Interface type = %d\n", acpi_ser_p->intfc_type);
+ printk(" Base address = 0x%lX\n", iobase);
+ printk(" IRQ = %d\n", acpi_ser_p->irq);
+ printk(" Global System Int = %d\n", gsi);
+ printk(" Baud rate = ");
+ switch (acpi_ser_p->baud) {
+ case ACPI_SERIAL_BAUD_9600:
+ printk("9600\n");
+ break;
+
+ case ACPI_SERIAL_BAUD_19200:
+ printk("19200\n");
+ break;
+
+ case ACPI_SERIAL_BAUD_57600:
+ printk("57600\n");
+ break;
+
+ case ACPI_SERIAL_BAUD_115200:
+ printk("115200\n");
+ break;
+
+ default:
+ printk("Huh (%d)\n", acpi_ser_p->baud);
+ break;
+ }
+ if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_PCICONF_SPACE) {
+ printk(" PCI serial port:\n");
+ printk(" Bus %d, Device %d, Vendor ID 0x%x, Dev ID 0x%x\n",
+ acpi_ser_p->pci_bus, acpi_ser_p->pci_dev,
+ acpi_ser_p->pci_vendor_id, acpi_ser_p->pci_dev_id);
+ }
+#endif
+ /*
+ * Now build a serial_req structure to update the entry in rs_table for the
+ * headless console port.
+ */
+ switch (acpi_ser_p->intfc_type) {
+ case ACPI_SERIAL_INTFC_16550:
+ port.type = PORT_16550;
+ port.uartclk = BASE_BAUD * 16;
+ break;
+
+ case ACPI_SERIAL_INTFC_16450:
+ port.type = PORT_16450;
+ port.uartclk = BASE_BAUD * 16;
+ break;
+
+ default:
+ port.type = PORT_UNKNOWN;
+ break;
+ }
+ if (strncmp(acpi_ser_p->signature, ACPI_SPCRT_SIGNATURE, ACPI_SIG_LEN) == 0)
+ port.line = ACPI_SERIAL_CONSOLE_PORT;
+ else if (strncmp(acpi_ser_p->signature, ACPI_DBGPT_SIGNATURE, ACPI_SIG_LEN) == 0)
+ port.line = ACPI_SERIAL_DEBUG_PORT;
+ /*
+ * Check if this is an I/O mapped address or a memory mapped address
+ */
+ if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_MEM_SPACE) {
+ port.iobase = 0;
+ port.mapbase = iobase;
+ port.membase = ioremap(iobase, 64);
+ port.iotype = SERIAL_IO_MEM;
+ } else if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_IO_SPACE) {
+ port.iobase = iobase;
+ port.mapbase = 0;
+ port.membase = NULL;
+ port.iotype = SERIAL_IO_PORT;
+ } else if (acpi_ser_p->base_addr.space_id == ACPI_SERIAL_PCICONF_SPACE) {
+ printk("WARNING: No support for PCI serial console\n");
+ return;
+ }
+
+ /*
+ * If the table does not have IRQ information, use 0 for IRQ. This will force
+ * rs_init() to probe for IRQ.
+ */
+ if (acpi_ser_p->length < 53)
+ port.irq = 0;
+ else {
+ port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_AUTO_IRQ;
+ if (acpi_ser_p->int_type & (ACPI_SERIAL_INT_APIC | ACPI_SERIAL_INT_SAPIC))
+ port.irq = gsi;
+ else if (acpi_ser_p->int_type & ACPI_SERIAL_INT_PCAT)
+ port.irq = acpi_ser_p->irq;
+ else
+ /*
+ * IRQ type not being set would mean UART will run in polling
+ * mode. Do not probe for IRQ in that case.
+ */
+ port.flags &= ~UPF_AUTO_IRQ;
+ }
+ if (early_serial_setup(&port) < 0) {
+ printk("early_serial_setup() for ACPI serial console port failed\n");
+ return;
+ }
+
+#ifdef SERIAL_DEBUG_ACPI
+ printk("Leaving setup_serial_acpi()\n");
+#endif
+}
diff -Nru a/drivers/serial/8250_hcdp.c b/drivers/serial/8250_hcdp.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/serial/8250_hcdp.c Fri Jan 24 20:41:06 2003
@@ -0,0 +1,215 @@
+/*
+ * linux/drivers/char/hcdp_serial.c
+ *
+ * Copyright (C) 2002 Hewlett-Packard Co.
+ * Khalid Aziz <khalid_aziz@hp.com>
+ *
+ * Parse the EFI HCDP table to locate serial console and debug ports and initialize them.
+ *
+ * 2002/08/29 davidm Adjust it to new 2.5 serial driver infrastructure (untested).
+ */
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/efi.h>
+#include <linux/init.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+#include <asm/serial.h>
+
+#include "8250_hcdp.h"
+
+#undef SERIAL_DEBUG_HCDP
+
+/*
+ * Parse the HCDP table to find descriptions for headless console and debug serial ports
+ * and add them to rs_table[]. A pointer to HCDP table is passed as parameter. This
+ * function should be called before serial_console_init() is called to make sure the HCDP
+ * serial console will be available for use. IA-64 kernel calls this function from
+ * setup_arch() after the EFI and ACPI tables have been parsed.
+ */
+void __init
+setup_serial_hcdp (void *tablep)
+{
+ hcdp_dev_t *hcdp_dev;
+ struct uart_port port;
+ unsigned long iobase;
+ hcdp_t hcdp;
+ int gsi, nr;
+#if 0
+ static int shift_once = 1;
+#endif
+
+#ifdef SERIAL_DEBUG_HCDP
+ printk("Entering setup_serial_hcdp()\n");
+#endif
+
+ /* Verify we have a valid table pointer */
+ if (!tablep)
+ return;
+
+ memset(&port, 0, sizeof(port));
+
+ /*
+ * Don't trust firmware to give us a table starting at an aligned address. Make a
+ * local copy of the HCDP table with aligned structures.
+ */
+ memcpy(&hcdp, tablep, sizeof(hcdp));
+
+ /*
+ * Perform a sanity check on the table. Table should have a signature of "HCDP"
+ * and it should be at least 82 bytes long to have any useful information.
+ */
+ if ((strncmp(hcdp.signature, HCDP_SIGNATURE, HCDP_SIG_LEN) != 0))
+ return;
+ if (hcdp.len < 82)
+ return;
+
+#ifdef SERIAL_DEBUG_HCDP
+ printk("setup_serial_hcdp(): table pointer = 0x%p, sig = '%.4s'\n",
+ tablep, hcdp.signature);
+ printk(" length = %d, rev = %d, ", hcdp.len, hcdp.rev);
+ printk("OEM ID = %.6s, # of entries = %d\n", hcdp.oemid, hcdp.num_entries);
+#endif
+
+ /*
+ * Parse each device entry
+ */
+ for (nr = 0; nr < hcdp.num_entries; nr++) {
+ hcdp_dev = hcdp.hcdp_dev + nr;
+ /*
+ * We will parse only the primary console device which is the first entry
+ * for these devices. We will ignore rest of the entries for the same type
+ * device that has already been parsed and initialized
+ */
+ if (hcdp_dev->type != HCDP_DEV_CONSOLE)
+ continue;
+
+ iobase = ((u64) hcdp_dev->base_addr.addrhi << 32) | hcdp_dev->base_addr.addrlo;
+ gsi = hcdp_dev->global_int;
+
+ /* See PCI spec v2.2, Appendix D (Class Codes): */
+ switch (hcdp_dev->pci_prog_intfc) {
+ case 0x00: port.type = PORT_8250; break;
+ case 0x01: port.type = PORT_16450; break;
+ case 0x02: port.type = PORT_16550; break;
+ case 0x03: port.type = PORT_16650; break;
+ case 0x04: port.type = PORT_16750; break;
+ case 0x05: port.type = PORT_16850; break;
+ case 0x06: port.type = PORT_16C950; break;
+ default:
+ printk(KERN_WARNING"warning: EFI HCDP table reports unknown serial "
+ "programming interface 0x%02x; will autoprobe.\n",
+ hcdp_dev->pci_prog_intfc);
+ port.type = PORT_UNKNOWN;
+ break;
+ }
+
+#ifdef SERIAL_DEBUG_HCDP
+ printk(" type = %s, uart = %d\n", ((hcdp_dev->type == HCDP_DEV_CONSOLE)
+ ? "Headless Console" : ((hcdp_dev->type == HCDP_DEV_DEBUG)
+ ? "Debug port" : "Huh????")),
+ port.type);
+ printk(" base address space = %s, base address = 0x%lx\n",
+ ((hcdp_dev->base_addr.space_id == ACPI_MEM_SPACE)
+ ? "Memory Space" : ((hcdp_dev->base_addr.space_id == ACPI_IO_SPACE)
+ ? "I/O space" : "PCI space")),
+ iobase);
+ printk(" gsi = %d, baud rate = %lu, bits = %d, clock = %d\n",
+ gsi, (unsigned long) hcdp_dev->baud, hcdp_dev->bits, hcdp_dev->clock_rate);
+ if (hcdp_dev->base_addr.space_id == ACPI_PCICONF_SPACE)
+ printk(" PCI id: %02x:%02x:%02x, vendor ID=0x%x, dev ID=0x%x\n",
+ hcdp_dev->pci_seg, hcdp_dev->pci_bus, hcdp_dev->pci_dev,
+ hcdp_dev->pci_vendor_id, hcdp_dev->pci_dev_id);
+#endif
+ /*
+ * Now fill in a port structure to update the 8250 port table..
+ */
+ if (hcdp_dev->clock_rate)
+ port.uartclk = hcdp_dev->clock_rate;
+ else
+ port.uartclk = BASE_BAUD * 16;
+
+ /*
+ * Check if this is an I/O mapped address or a memory mapped address
+ */
+ if (hcdp_dev->base_addr.space_id == ACPI_MEM_SPACE) {
+ port.iobase = 0;
+ port.mapbase = iobase;
+ port.membase = ioremap(iobase, 64);
+ port.iotype = SERIAL_IO_MEM;
+ } else if (hcdp_dev->base_addr.space_id == ACPI_IO_SPACE) {
+ port.iobase = iobase;
+ port.mapbase = 0;
+ port.membase = NULL;
+ port.iotype = SERIAL_IO_PORT;
+ } else if (hcdp_dev->base_addr.space_id == ACPI_PCICONF_SPACE) {
+ printk(KERN_WARNING"warning: No support for PCI serial console\n");
+ return;
+ }
+ port.irq = gsi;
+ port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF;
+ if (gsi)
+ port.flags |= ASYNC_AUTO_IRQ;
+
+ /*
+ * Note: the above memset() initializes port.line to 0, so we register
+ * this port as ttyS0.
+ */
+ if (early_serial_setup(&port) < 0) {
+ printk("setup_serial_hcdp(): early_serial_setup() for HCDP serial "
+ "console port failed. Will try any additional consoles in HCDP.\n");
+ continue;
+ }
+ break;
+ }
+
+#ifdef SERIAL_DEBUG_HCDP
+ printk("Leaving setup_serial_hcdp()\n");
+#endif
+}
+
+#ifdef CONFIG_IA64_EARLY_PRINTK_UART
+unsigned long
+hcdp_early_uart (void)
+{
+ efi_system_table_t *systab;
+ efi_config_table_t *config_tables;
+ unsigned long addr = 0;
+ hcdp_t *hcdp = 0;
+ hcdp_dev_t *dev;
+ int i;
+
+ systab = (efi_system_table_t *) ia64_boot_param->efi_systab;
+ if (!systab)
+ return 0;
+ systab = __va(systab);
+
+ config_tables = (efi_config_table_t *) systab->tables;
+ if (!config_tables)
+ return 0;
+ config_tables = __va(config_tables);
+
+ for (i = 0; i < systab->nr_tables; i++) {
+ if (efi_guidcmp(config_tables[i].guid, HCDP_TABLE_GUID) == 0) {
+ hcdp = (hcdp_t *) config_tables[i].table;
+ break;
+ }
+ }
+ if (!hcdp)
+ return 0;
+ hcdp = __va(hcdp);
+
+ for (i = 0, dev = hcdp->hcdp_dev; i < hcdp->num_entries; i++, dev++) {
+ if (dev->type == HCDP_DEV_CONSOLE) {
+ addr = (u64) dev->base_addr.addrhi << 32 | dev->base_addr.addrlo;
+ break;
+ }
+ }
+ return addr;
+}
+#endif /* CONFIG_IA64_EARLY_PRINTK_UART */
diff -Nru a/drivers/serial/8250_hcdp.h b/drivers/serial/8250_hcdp.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/serial/8250_hcdp.h Fri Jan 24 20:41:06 2003
@@ -0,0 +1,79 @@
+/*
+ * drivers/serial/8250_hcdp.h
+ *
+ * Copyright (C) 2002 Hewlett-Packard Co.
+ * Khalid Aziz <khalid_aziz@hp.com>
+ *
+ * Definitions for HCDP defined serial ports (Serial console and debug
+ * ports)
+ */
+
+/* ACPI table signatures */
+#define HCDP_SIG_LEN 4
+#define HCDP_SIGNATURE "HCDP"
+
+/* Space ID as defined in ACPI generic address structure */
+#define ACPI_MEM_SPACE 0
+#define ACPI_IO_SPACE 1
+#define ACPI_PCICONF_SPACE 2
+
+/*
+ * Maximum number of HCDP devices we want to read in
+ */
+#define MAX_HCDP_DEVICES 6
+
+/*
+ * Default UART clock rate if clock rate is 0 in HCDP table.
+ */
+#define DEFAULT_UARTCLK 115200
+
+/*
+ * ACPI Generic Address Structure
+ */
+typedef struct {
+ u8 space_id;
+ u8 bit_width;
+ u8 bit_offset;
+ u8 resv;
+ u32 addrlo;
+ u32 addrhi;
+} acpi_gen_addr;
+
+/* HCDP Device descriptor entry types */
+#define HCDP_DEV_CONSOLE 0
+#define HCDP_DEV_DEBUG 1
+
+/* HCDP Device descriptor type */
+typedef struct {
+ u8 type;
+ u8 bits;
+ u8 parity;
+ u8 stop_bits;
+ u8 pci_seg;
+ u8 pci_bus;
+ u8 pci_dev;
+ u8 pci_func;
+ u64 baud;
+ acpi_gen_addr base_addr;
+ u16 pci_dev_id;
+ u16 pci_vendor_id;
+ u32 global_int;
+ u32 clock_rate;
+ u8 pci_prog_intfc;
+ u8 resv;
+} hcdp_dev_t;
+
+/* HCDP Table format */
+typedef struct {
+ u8 signature[4];
+ u32 len;
+ u8 rev;
+ u8 chksum;
+ u8 oemid[6];
+ u8 oem_tabid[8];
+ u32 oem_rev;
+ u8 creator_id[4];
+ u32 creator_rev;
+ u32 num_entries;
+ hcdp_dev_t hcdp_dev[MAX_HCDP_DEVICES];
+} hcdp_t;
diff -Nru a/drivers/serial/Kconfig b/drivers/serial/Kconfig
--- a/drivers/serial/Kconfig Fri Jan 24 20:41:05 2003
+++ b/drivers/serial/Kconfig Fri Jan 24 20:41:05 2003
@@ -39,6 +39,13 @@
Most people will say Y or M here, so that they can use serial mice,
modems and similar devices connecting to the standard serial ports.
+config SERIAL_8250_ACPI
+ tristate "8250/16550 device discovery support via ACPI SPCR/DBGP tables"
+ depends on IA64
+ help
+ Locate serial ports via the Microsoft proprietary ACPI SPCR/DBGP tables.
+ These tables have been superseded by the EFI HCDP table.
+
config SERIAL_8250_CONSOLE
bool "Console on 8250/16550 and compatible serial port (EXPERIMENTAL)"
depends on SERIAL_8250=y
@@ -76,6 +83,15 @@
The module will be called serial_cs.o. If you want to compile it as
a module, say M here and read <file:Documentation/modules.txt>.
If unsure, say N.
+
+config SERIAL_8250_HCDP
+ bool "8250/16550 device discovery support via EFI HCDP table"
+ depends on IA64
+ ---help---
+ If you wish to make the serial console port described by the EFI
+ HCDP table available for use as serial console or general
+ purpose port, say Y here. See
+ <http://www.dig64.org/specifications/DIG64_HCDPv10a_01.pdf>.
config SERIAL_8250_EXTENDED
bool "Extended 8250/16550 serial driver options"
diff -Nru a/drivers/serial/Makefile b/drivers/serial/Makefile
--- a/drivers/serial/Makefile Fri Jan 24 20:41:05 2003
+++ b/drivers/serial/Makefile Fri Jan 24 20:41:05 2003
@@ -10,6 +10,8 @@
serial-8250-$(CONFIG_GSC) += 8250_gsc.o
serial-8250-$(CONFIG_PCI) += 8250_pci.o
serial-8250-$(CONFIG_PNP) += 8250_pnp.o
+serial-8250-$(CONFIG_SERIAL_8250_ACPI) += acpi.o 8250_acpi.o
+serial-8250-$(CONFIG_SERIAL_8250_HCDP) += 8250_hcdp.o
obj-$(CONFIG_SERIAL_CORE) += core.o
obj-$(CONFIG_SERIAL_21285) += 21285.o
obj-$(CONFIG_SERIAL_8250) += 8250.o $(serial-8250-y)
diff -Nru a/drivers/serial/acpi.c b/drivers/serial/acpi.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/serial/acpi.c Fri Jan 24 20:41:06 2003
@@ -0,0 +1,108 @@
+/*
+ * serial/acpi.c
+ * Copyright (c) 2002-2003 Matthew Wilcox for Hewlett-Packard
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/acpi.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/serial.h>
+#include <asm/io.h>
+#include <asm/serial.h>
+#include "../acpi/acpi_bus.h"
+
+static void acpi_serial_address(struct serial_struct *req, struct acpi_resource_address32 *addr32)
+{
+ unsigned long size;
+
+ size = addr32->max_address_range - addr32->min_address_range + 1;
+ req->iomap_base = addr32->min_address_range;
+ req->iomem_base = ioremap(req->iomap_base, size);
+ req->io_type = SERIAL_IO_MEM;
+}
+
+static void acpi_serial_irq(struct serial_struct *req, struct acpi_resource_ext_irq *ext_irq)
+{
+ if (ext_irq->number_of_interrupts > 0) {
+#ifdef CONFIG_IA64
+ req->irq = acpi_register_irq(ext_irq->interrupts[0],
+ ext_irq->active_high_low == ACPI_ACTIVE_HIGH,
+ ext_irq->edge_level == ACPI_EDGE_SENSITIVE);
+#else
+ req->irq = ext_irq->interrupts[0];
+#endif
+ }
+}
+
+static int acpi_serial_add(struct acpi_device *device)
+{
+ acpi_status result;
+ struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+ struct serial_struct serial_req;
+ int line, offset = 0;
+
+ memset(&serial_req, 0, sizeof(serial_req));
+ result = acpi_get_current_resources(device->handle, &buffer);
+ if (ACPI_FAILURE(result)) {
+ result = -ENODEV;
+ goto out;
+ }
+
+ while (offset <= buffer.length) {
+ struct acpi_resource *res = buffer.pointer + offset;
+ if (res->length == 0)
+ break;
+ offset += res->length;
+ if (res->id == ACPI_RSTYPE_ADDRESS32) {
+ acpi_serial_address(&serial_req, &res->data.address32);
+ } else if (res->id == ACPI_RSTYPE_EXT_IRQ) {
+ acpi_serial_irq(&serial_req, &res->data.extended_irq);
+ }
+ }
+
+ serial_req.baud_base = BASE_BAUD;
+ serial_req.flags = ASYNC_SKIP_TEST|ASYNC_BOOT_AUTOCONF|ASYNC_AUTO_IRQ;
+
+ result = 0;
+ line = register_serial(&serial_req);
+ if (line < 0)
+ result = -ENODEV;
+
+ out:
+ acpi_os_free(buffer.pointer);
+ return result;
+}
+
+static int acpi_serial_remove(struct acpi_device *device, int type)
+{
+ return 0;
+}
+
+static struct acpi_driver acpi_serial_driver = {
+ .name = "serial",
+ .class = "",
+ .ids = "PNP0501",
+ .ops = {
+ .add = acpi_serial_add,
+ .remove = acpi_serial_remove,
+ },
+};
+
+static int __init acpi_serial_init(void)
+{
+ acpi_bus_register_driver(&acpi_serial_driver);
+ return 0;
+}
+
+static void __exit acpi_serial_exit(void)
+{
+ acpi_bus_unregister_driver(&acpi_serial_driver);
+}
+
+module_init(acpi_serial_init);
+module_exit(acpi_serial_exit);
diff -Nru a/drivers/video/radeonfb.c b/drivers/video/radeonfb.c
--- a/drivers/video/radeonfb.c Fri Jan 24 20:41:05 2003
+++ b/drivers/video/radeonfb.c Fri Jan 24 20:41:05 2003
@@ -724,7 +724,6 @@
radeon_set_backlight_level
};
#endif /* CONFIG_PMAC_BACKLIGHT */
-
#endif /* CONFIG_ALL_PPC */
diff -Nru a/fs/exec.c b/fs/exec.c
--- a/fs/exec.c Fri Jan 24 20:41:05 2003
+++ b/fs/exec.c Fri Jan 24 20:41:05 2003
@@ -405,7 +405,7 @@
mpnt->vm_start = PAGE_MASK & (unsigned long) bprm->p;
mpnt->vm_end = STACK_TOP;
#endif
- mpnt->vm_page_prot = PAGE_COPY;
+ mpnt->vm_page_prot = protection_map[VM_STACK_FLAGS & 0x7];
mpnt->vm_flags = VM_STACK_FLAGS;
mpnt->vm_ops = NULL;
mpnt->vm_pgoff = 0;
diff -Nru a/fs/fcntl.c b/fs/fcntl.c
--- a/fs/fcntl.c Fri Jan 24 20:41:05 2003
+++ b/fs/fcntl.c Fri Jan 24 20:41:05 2003
@@ -320,6 +320,7 @@
* to fix this will be in libc.
*/
err = filp->f_owner.pid;
+ force_successful_syscall_return();
break;
case F_SETOWN:
err = f_setown(filp, arg, 1);
diff -Nru a/fs/proc/base.c b/fs/proc/base.c
--- a/fs/proc/base.c Fri Jan 24 20:41:05 2003
+++ b/fs/proc/base.c Fri Jan 24 20:41:05 2003
@@ -533,7 +533,24 @@
}
#endif
+static loff_t mem_lseek(struct file * file, loff_t offset, int orig)
+{
+ switch (orig) {
+ case 0:
+ file->f_pos = offset;
+ break;
+ case 1:
+ file->f_pos += offset;
+ break;
+ default:
+ return -EINVAL;
+ }
+ force_successful_syscall_return();
+ return file->f_pos;
+}
+
static struct file_operations proc_mem_operations = {
+ .llseek = mem_lseek,
.read = mem_read,
.write = mem_write,
.open = mem_open,
diff -Nru a/fs/select.c b/fs/select.c
--- a/fs/select.c Fri Jan 24 20:41:05 2003
+++ b/fs/select.c Fri Jan 24 20:41:05 2003
@@ -176,7 +176,7 @@
{
struct poll_wqueues table;
poll_table *wait;
- int retval, i, off;
+ int retval, i;
long __timeout = *timeout;
read_lock(&current->files->file_lock);
@@ -193,38 +193,53 @@
wait = NULL;
retval = 0;
for (;;) {
+ unsigned long *rinp, *routp, *rexp, *inp, *outp, *exp;
set_current_state(TASK_INTERRUPTIBLE);
- for (i = 0 ; i < n; i++) {
- unsigned long bit = BIT(i);
- unsigned long mask;
- struct file *file;
- off = i / __NFDBITS;
- if (!(bit & BITS(fds, off)))
+ inp = fds->in; outp = fds->out; exp = fds->ex;
+ rinp = fds->res_in; routp = fds->res_out; rexp = fds->res_ex;
+
+ for (i = 0; i < n; ++rinp, ++routp, ++rexp) {
+ unsigned long in, out, ex, all_bits, bit = 1, mask, j;
+ unsigned long res_in = 0, res_out = 0, res_ex = 0;
+ struct file_operations *f_op = NULL;
+ struct file *file = NULL;
+
+ in = *inp++; out = *outp++; ex = *exp++;
+ all_bits = in | out | ex;
+ if (all_bits == 0)
continue;
- file = fget(i);
- mask = POLLNVAL;
- if (file) {
+
+ for (j = 0; j < __NFDBITS; ++j, ++i, bit <<= 1) {
+ if (i >= n)
+ break;
+ if (!(bit & all_bits))
+ continue;
+ file = fget(i);
+ if (file)
+ f_op = file->f_op;
mask = DEFAULT_POLLMASK;
- if (file->f_op && file->f_op->poll)
- mask = file->f_op->poll(file, wait);
- fput(file);
- }
- if ((mask & POLLIN_SET) && ISSET(bit, __IN(fds,off))) {
- SET(bit, __RES_IN(fds,off));
- retval++;
- wait = NULL;
- }
- if ((mask & POLLOUT_SET) && ISSET(bit, __OUT(fds,off))) {
- SET(bit, __RES_OUT(fds,off));
- retval++;
- wait = NULL;
- }
- if ((mask & POLLEX_SET) && ISSET(bit, __EX(fds,off))) {
- SET(bit, __RES_EX(fds,off));
- retval++;
- wait = NULL;
+ if (file) {
+ if (f_op && f_op->poll)
+ mask = (*f_op->poll)(file, retval ? NULL : wait);
+ fput(file);
+ if ((mask & POLLIN_SET) && (in & bit)) {
+ res_in |= bit;
+ retval++;
+ }
+ if ((mask & POLLOUT_SET) && (out & bit)) {
+ res_out |= bit;
+ retval++;
+ }
+ if ((mask & POLLEX_SET) && (ex & bit)) {
+ res_ex |= bit;
+ retval++;
+ }
+ }
}
+ if (res_in) *rinp = res_in;
+ if (res_out) *routp = res_out;
+ if (res_ex) *rexp = res_ex;
}
wait = NULL;
if (retval || !__timeout || signal_pending(current))
diff -Nru a/include/asm-alpha/agp.h b/include/asm-alpha/agp.h
--- a/include/asm-alpha/agp.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-alpha/agp.h Fri Jan 24 20:41:05 2003
@@ -10,4 +10,11 @@
#define flush_agp_mappings()
#define flush_agp_cache() mb()
+/*
+ * Page-protection value to be used for AGP memory mapped into kernel space. For
+ * platforms which use coherent AGP DMA, this can be PAGE_KERNEL. For others, it needs to
+ * be an uncached mapping (such as write-combining).
+ */
+#define PAGE_AGP PAGE_KERNEL_NOCACHE /* XXX fix me */
+
#endif
diff -Nru a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
--- a/include/asm-generic/vmlinux.lds.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-generic/vmlinux.lds.h Fri Jan 24 20:41:05 2003
@@ -13,18 +13,18 @@
} \
\
/* Kernel symbol table: Normal symbols */ \
- __start___ksymtab = .; \
__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
+ __start___ksymtab = .; \
*(__ksymtab) \
+ __stop___ksymtab = .; \
} \
- __stop___ksymtab = .; \
\
/* Kernel symbol table: GPL-only symbols */ \
- __start___gpl_ksymtab = .; \
__gpl_ksymtab : AT(ADDR(__gpl_ksymtab) - LOAD_OFFSET) { \
+ __start___gpl_ksymtab = .; \
*(__gpl_ksymtab) \
+ __stop___gpl_ksymtab = .; \
} \
- __stop___gpl_ksymtab = .; \
\
/* Kernel symbol table: strings */ \
__ksymtab_strings : AT(ADDR(__ksymtab_strings) - LOAD_OFFSET) { \
diff -Nru a/include/asm-i386/agp.h b/include/asm-i386/agp.h
--- a/include/asm-i386/agp.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-i386/agp.h Fri Jan 24 20:41:05 2003
@@ -20,4 +20,11 @@
worth it. Would need a page for it. */
#define flush_agp_cache() asm volatile("wbinvd":::"memory")
+/*
+ * Page-protection value to be used for AGP memory mapped into kernel space. For
+ * platforms which use coherent AGP DMA, this can be PAGE_KERNEL. For others, it needs to
+ * be an uncached mapping (such as write-combining).
+ */
+#define PAGE_AGP PAGE_KERNEL_NOCACHE
+
#endif
diff -Nru a/include/asm-i386/hw_irq.h b/include/asm-i386/hw_irq.h
--- a/include/asm-i386/hw_irq.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-i386/hw_irq.h Fri Jan 24 20:41:05 2003
@@ -140,4 +140,6 @@
static inline void hw_resend_irq(struct hw_interrupt_type *h, unsigned int i) {}
#endif
+extern irq_desc_t irq_desc [NR_IRQS];
+
#endif /* _ASM_HW_IRQ_H */
diff -Nru a/include/asm-i386/ptrace.h b/include/asm-i386/ptrace.h
--- a/include/asm-i386/ptrace.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-i386/ptrace.h Fri Jan 24 20:41:05 2003
@@ -57,6 +57,7 @@
#ifdef __KERNEL__
#define user_mode(regs) ((VM_MASK & (regs)->eflags) || (3 & (regs)->xcs))
#define instruction_pointer(regs) ((regs)->eip)
+#define force_successful_syscall_return() do { } while (0)
#endif
#endif
diff -Nru a/include/asm-ia64/asmmacro.h b/include/asm-ia64/asmmacro.h
--- a/include/asm-ia64/asmmacro.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/asmmacro.h Fri Jan 24 20:41:05 2003
@@ -2,15 +2,22 @@
#define _ASM_IA64_ASMMACRO_H
/*
- * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Copyright (C) 2000-2001, 2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
+#include <linux/config.h>
+
#define ENTRY(name) \
.align 32; \
.proc name; \
name:
+#define ENTRY_MIN_ALIGN(name) \
+ .align 16; \
+ .proc name; \
+name:
+
#define GLOBAL_ENTRY(name) \
.global name; \
ENTRY(name)
@@ -37,19 +44,28 @@
.previous
#if __GNUC__ >= 3
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+# define EX(y,x...) \
+ .xdata4 "__ex_table", 99f-., y-.; \
[99:] x
-# define EXCLR(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
+# define EXCLR(y,x...) \
+ .xdata4 "__ex_table", 99f-., y-.+4; \
[99:] x
#else
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+# define EX(y,x...) \
+ .xdata4 "__ex_table", 99f-., y-.; \
99: x
-# define EXCLR(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
+# define EXCLR(y,x...) \
+ .xdata4 "__ex_table", 99f-., y-.+4; \
99: x
+#endif
+
+#ifdef CONFIG_MCKINLEY
+/* workaround for Itanium 2 Errata 9: */
+# define MCKINLEY_E9_WORKAROUND \
+ br.call.sptk.many b7=1f;; \
+1:
+#else
+# define MCKINLEY_E9_WORKAROUND
#endif
#endif /* _ASM_IA64_ASMMACRO_H */
diff -Nru a/include/asm-ia64/bitops.h b/include/asm-ia64/bitops.h
--- a/include/asm-ia64/bitops.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/bitops.h Fri Jan 24 20:41:05 2003
@@ -2,7 +2,7 @@
#define _ASM_IA64_BITOPS_H
/*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* 02/06/02 find_next_bit() and find_first_bit() added from Erich Focht's ia64 O(1)
@@ -320,7 +320,7 @@
static inline unsigned long
ia64_fls (unsigned long x)
{
- double d = x;
+ long double d = x;
long exp;
__asm__ ("getf.exp %0=%1" : "=r"(exp) : "f"(d));
diff -Nru a/include/asm-ia64/compat.h b/include/asm-ia64/compat.h
--- a/include/asm-ia64/compat.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/compat.h Fri Jan 24 20:41:05 2003
@@ -14,11 +14,18 @@
typedef s32 compat_pid_t;
typedef u16 compat_uid_t;
typedef u16 compat_gid_t;
+typedef u32 compat_uid32_t;
+typedef u32 compat_gid32_t;
typedef u16 compat_mode_t;
typedef u32 compat_ino_t;
typedef u16 compat_dev_t;
typedef s32 compat_off_t;
+typedef s64 compat_loff_t;
typedef u16 compat_nlink_t;
+typedef u16 compat_ipc_pid_t;
+typedef s32 compat_daddr_t;
+typedef u32 compat_caddr_t;
+typedef __kernel_fsid_t compat_fsid_t;
struct compat_timespec {
compat_time_t tv_sec;
@@ -54,11 +61,31 @@
};
struct compat_flock {
- short l_type;
- short l_whence;
- compat_off_t l_start;
- compat_off_t l_len;
- compat_pid_t l_pid;
+ short l_type;
+ short l_whence;
+ compat_off_t l_start;
+ compat_off_t l_len;
+ compat_pid_t l_pid;
};
+
+struct compat_statfs {
+ int f_type;
+ int f_bsize;
+ int f_blocks;
+ int f_bfree;
+ int f_bavail;
+ int f_files;
+ int f_ffree;
+ compat_fsid_t f_fsid;
+ int f_namelen; /* SunOS ignores this field. */
+ int f_spare[6];
+};
+
+typedef u32 compat_old_sigset_t; /* at least 32 bits */
+
+#define _COMPAT_NSIG 64
+#define _COMPAT_NSIG_BPW 32
+
+typedef u32 compat_sigset_word;
#endif /* _ASM_IA64_COMPAT_H */
diff -Nru a/include/asm-ia64/elf.h b/include/asm-ia64/elf.h
--- a/include/asm-ia64/elf.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/elf.h Fri Jan 24 20:41:05 2003
@@ -4,10 +4,12 @@
/*
* ELF-specific definitions.
*
- * Copyright (C) 1998, 1999, 2002 Hewlett-Packard Co
+ * Copyright (C) 1998-1999, 2002-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
+#include <linux/config.h>
+
#include <asm/fpu.h>
#include <asm/page.h>
@@ -88,6 +90,11 @@
relevant until we have real hardware to play with... */
#define ELF_PLATFORM 0
+/*
+ * This should go into linux/elf.h...
+ */
+#define AT_SYSINFO 32
+
#ifdef __KERNEL__
struct elf64_hdr;
extern void ia64_set_personality (struct elf64_hdr *elf_ex, int ibcs2_interpreter);
@@ -99,7 +106,14 @@
#define ELF_CORE_COPY_TASK_REGS(tsk, elf_gregs) dump_task_regs(tsk, elf_gregs)
#define ELF_CORE_COPY_FPREGS(tsk, elf_fpregs) dump_task_fpu(tsk, elf_fpregs)
-
+#ifdef CONFIG_FSYS
+#define ARCH_DLINFO \
+do { \
+ extern int syscall_via_epc; \
+ NEW_AUX_ENT(AT_SYSINFO, syscall_via_epc); \
+} while (0)
#endif
+
+#endif /* __KERNEL__ */
#endif /* _ASM_IA64_ELF_H */
diff -Nru a/include/asm-ia64/ia32.h b/include/asm-ia64/ia32.h
--- a/include/asm-ia64/ia32.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/ia32.h Fri Jan 24 20:41:05 2003
@@ -12,17 +12,6 @@
* 32 bit structures for IA32 support.
*/
-/* 32bit compatibility types */
-typedef unsigned short __kernel_ipc_pid_t32;
-typedef unsigned int __kernel_uid32_t32;
-typedef unsigned int __kernel_gid32_t32;
-typedef unsigned short __kernel_umode_t32;
-typedef short __kernel_nlink_t32;
-typedef int __kernel_daddr_t32;
-typedef unsigned int __kernel_caddr_t32;
-typedef long __kernel_loff_t32;
-typedef __kernel_fsid_t __kernel_fsid_t32;
-
#define IA32_PAGE_SHIFT 12 /* 4KB pages */
#define IA32_PAGE_SIZE (1UL << IA32_PAGE_SHIFT)
#define IA32_PAGE_MASK (~(IA32_PAGE_SIZE - 1))
@@ -143,10 +132,6 @@
};
/* signal.h */
-#define _IA32_NSIG 64
-#define _IA32_NSIG_BPW 32
-#define _IA32_NSIG_WORDS (_IA32_NSIG / _IA32_NSIG_BPW)
-
#define IA32_SET_SA_HANDLER(ka,handler,restorer) \
((ka)->sa.sa_handler = (__sighandler_t) \
(((unsigned long)(restorer) << 32) \
@@ -154,23 +139,17 @@
#define IA32_SA_HANDLER(ka) ((unsigned long) (ka)->sa.sa_handler & 0xffffffff)
#define IA32_SA_RESTORER(ka) ((unsigned long) (ka)->sa.sa_handler >> 32)
-typedef struct {
- unsigned int sig[_IA32_NSIG_WORDS];
-} sigset32_t;
-
struct sigaction32 {
unsigned int sa_handler; /* Really a pointer, but need to deal with 32 bits */
unsigned int sa_flags;
unsigned int sa_restorer; /* Another 32 bit pointer */
- sigset32_t sa_mask; /* A 32 bit mask */
+ compat_sigset_t sa_mask; /* A 32 bit mask */
};
-typedef unsigned int old_sigset32_t; /* at least 32 bits */
-
struct old_sigaction32 {
unsigned int sa_handler; /* Really a pointer, but need to deal
with 32 bits */
- old_sigset32_t sa_mask; /* A 32 bit mask */
+ compat_old_sigset_t sa_mask; /* A 32 bit mask */
unsigned int sa_flags;
unsigned int sa_restorer; /* Another 32 bit pointer */
};
@@ -212,19 +191,6 @@
unsigned int st_ctime_nsec;
unsigned int st_ino_lo;
unsigned int st_ino_hi;
-};
-
-struct statfs32 {
- int f_type;
- int f_bsize;
- int f_blocks;
- int f_bfree;
- int f_bavail;
- int f_files;
- int f_ffree;
- __kernel_fsid_t32 f_fsid;
- int f_namelen; /* SunOS ignores this field. */
- int f_spare[6];
};
typedef union sigval32 {
diff -Nru a/include/asm-ia64/intrinsics.h b/include/asm-ia64/intrinsics.h
--- a/include/asm-ia64/intrinsics.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/intrinsics.h Fri Jan 24 20:41:05 2003
@@ -4,9 +4,11 @@
/*
* Compiler-dependent intrinsics.
*
- * Copyright (C) 2002 Hewlett-Packard Co
+ * Copyright (C) 2002-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
+
+#include <linux/config.h>
/*
* Force an unresolved reference if someone tries to use
diff -Nru a/include/asm-ia64/mmu_context.h b/include/asm-ia64/mmu_context.h
--- a/include/asm-ia64/mmu_context.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/mmu_context.h Fri Jan 24 20:41:05 2003
@@ -28,6 +28,36 @@
#include <asm/processor.h>
+#define MMU_CONTEXT_DEBUG 0
+
+#if MMU_CONTEXT_DEBUG
+
+#include <ia64intrin.h>
+
+extern struct mmu_trace_entry {
+ char op;
+ u8 cpu;
+ u32 context;
+ void *mm;
+} mmu_tbuf[1024];
+
+extern volatile int mmu_tbuf_index;
+
+# define MMU_TRACE(_op,_cpu,_mm,_ctx) \
+do { \
+ int i = __sync_fetch_and_add(&mmu_tbuf_index, 1) % ARRAY_SIZE(mmu_tbuf); \
+ struct mmu_trace_entry e; \
+ e.op = (_op); \
+ e.cpu = (_cpu); \
+ e.mm = (_mm); \
+ e.context = (_ctx); \
+ mmu_tbuf[i] = e; \
+} while (0)
+
+#else
+# define MMU_TRACE(op,cpu,mm,ctx) do { ; } while (0)
+#endif
+
struct ia64_ctx {
spinlock_t lock;
unsigned int next; /* next context number to use */
@@ -91,6 +121,7 @@
static inline int
init_new_context (struct task_struct *p, struct mm_struct *mm)
{
+ MMU_TRACE('N', smp_processor_id(), mm, 0);
mm->context = 0;
return 0;
}
@@ -99,6 +130,7 @@
destroy_context (struct mm_struct *mm)
{
/* Nothing to do. */
+ MMU_TRACE('D', smp_processor_id(), mm, mm->context);
}
static inline void
@@ -138,12 +170,17 @@
do {
context = get_mmu_context(mm);
+ MMU_TRACE('A', smp_processor_id(), mm, context);
reload_context(context);
+ MMU_TRACE('a', smp_processor_id(), mm, context);
/* in the unlikely event of a TLB-flush by another thread, redo the load: */
} while (unlikely(context != mm->context));
}
-#define deactivate_mm(tsk,mm) do { } while (0)
+#define deactivate_mm(tsk,mm) \
+do { \
+ MMU_TRACE('d', smp_processor_id(), mm, mm->context); \
+} while (0)
/*
* Switch from address space PREV to address space NEXT.
diff -Nru a/include/asm-ia64/page.h b/include/asm-ia64/page.h
--- a/include/asm-ia64/page.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/page.h Fri Jan 24 20:41:05 2003
@@ -88,7 +88,12 @@
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#ifndef CONFIG_DISCONTIGMEM
-#define pfn_valid(pfn) ((pfn) < max_mapnr)
+# ifdef CONFIG_VIRTUAL_MEM_MAP
+ extern int ia64_pfn_valid (unsigned long pfn);
+# define pfn_valid(pfn) (((pfn) < max_mapnr) && ia64_pfn_valid(pfn))
+# else
+# define pfn_valid(pfn) ((pfn) < max_mapnr)
+# endif
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define page_to_pfn(page) ((unsigned long) (page - mem_map))
#define pfn_to_page(pfn) (mem_map + (pfn))
diff -Nru a/include/asm-ia64/perfmon.h b/include/asm-ia64/perfmon.h
--- a/include/asm-ia64/perfmon.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/perfmon.h Fri Jan 24 20:41:05 2003
@@ -40,6 +40,7 @@
#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */
#define PFM_FL_NOTIFY_BLOCK 0x04 /* block task on user level notifications */
#define PFM_FL_SYSTEM_WIDE 0x08 /* create a system wide context */
+#define PFM_FL_EXCL_IDLE 0x20 /* exclude idle task from system wide session */
/*
* PMC flags
@@ -86,11 +87,12 @@
unsigned long reg_long_reset; /* reset after sampling buffer overflow (large) */
unsigned long reg_short_reset;/* reset after counter overflow (small) */
- unsigned long reg_reset_pmds[4]; /* which other counters to reset on overflow */
- unsigned long reg_random_seed; /* seed value when randomization is used */
- unsigned long reg_random_mask; /* bitmask used to limit random value */
+ unsigned long reg_reset_pmds[4]; /* which other counters to reset on overflow */
+ unsigned long reg_random_seed; /* seed value when randomization is used */
+ unsigned long reg_random_mask; /* bitmask used to limit random value */
+ unsigned long reg_last_reset_value;/* last value used to reset the PMD (PFM_READ_PMDS) */
- unsigned long reserved[14]; /* for future use */
+ unsigned long reserved[13]; /* for future use */
} pfarg_reg_t;
typedef struct {
@@ -123,7 +125,7 @@
* Define the version numbers for both perfmon as a whole and the sampling buffer format.
*/
#define PFM_VERSION_MAJ 1U
-#define PFM_VERSION_MIN 1U
+#define PFM_VERSION_MIN 3U
#define PFM_VERSION (((PFM_VERSION_MAJ&0xffff)<<16)|(PFM_VERSION_MIN & 0xffff))
#define PFM_SMPL_VERSION_MAJ 1U
@@ -156,13 +158,17 @@
unsigned long stamp; /* timestamp */
unsigned long ip; /* where did the overflow interrupt happened */
unsigned long regs; /* bitmask of which registers overflowed */
- unsigned long period; /* unused */
+ unsigned long reserved; /* unused */
} perfmon_smpl_entry_t;
extern int perfmonctl(pid_t pid, int cmd, void *arg, int narg);
#ifdef __KERNEL__
+typedef struct {
+ void (*handler)(int irq, void *arg, struct pt_regs *regs);
+} pfm_intr_handler_desc_t;
+
extern void pfm_save_regs (struct task_struct *);
extern void pfm_load_regs (struct task_struct *);
@@ -174,9 +180,24 @@
extern int pfm_use_debug_registers(struct task_struct *);
extern int pfm_release_debug_registers(struct task_struct *);
extern int pfm_cleanup_smpl_buf(struct task_struct *);
-extern void pfm_syst_wide_update_task(struct task_struct *, int);
+extern void pfm_syst_wide_update_task(struct task_struct *, unsigned long info, int is_ctxswin);
extern void pfm_ovfl_block_reset(void);
-extern void perfmon_init_percpu(void);
+extern void pfm_init_percpu(void);
+
+/*
+ * hooks to allow VTune/Prospect to cooperate with perfmon.
+ * (reserved for system wide monitoring modules only)
+ */
+extern int pfm_install_alternate_syswide_subsystem(pfm_intr_handler_desc_t *h);
+extern int pfm_remove_alternate_syswide_subsystem(pfm_intr_handler_desc_t *h);
+
+/*
+ * describe the content of the local_cpu_date->pfm_syst_info field
+ */
+#define PFM_CPUINFO_SYST_WIDE 0x1 /* if set a system wide session exist */
+#define PFM_CPUINFO_DCR_PP 0x2 /* if set the system wide session has started */
+#define PFM_CPUINFO_EXCL_IDLE 0x4 /* the system wide session excludes the idle task */
+
#endif /* __KERNEL__ */
diff -Nru a/include/asm-ia64/pgtable.h b/include/asm-ia64/pgtable.h
--- a/include/asm-ia64/pgtable.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/pgtable.h Fri Jan 24 20:41:05 2003
@@ -204,7 +204,13 @@
#define VMALLOC_START (0xa000000000000000 + 3*PERCPU_PAGE_SIZE)
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
-#define VMALLOC_END (0xa000000000000000 + (1UL << (4*PAGE_SHIFT - 9)))
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+# define VMALLOC_END_INIT (0xa000000000000000 + (1UL << (4*PAGE_SHIFT - 9)))
+# define VMALLOC_END vmalloc_end
+ extern unsigned long vmalloc_end;
+#else
+# define VMALLOC_END (0xa000000000000000 + (1UL << (4*PAGE_SHIFT - 9)))
+#endif
/*
* Conversion functions: convert page frame number (pfn) and a protection value to a page
@@ -422,6 +428,18 @@
typedef pte_t *pte_addr_t;
+# ifdef CONFIG_VIRTUAL_MEM_MAP
+
+ /* arch mem_map init routine is needed due to holes in a virtual mem_map */
+# define HAVE_ARCH_MEMMAP_INIT
+
+ typedef void memmap_init_callback_t (struct page *start, unsigned long size,
+ int nid, unsigned long zone, unsigned long start_pfn);
+
+ extern void arch_memmap_init (memmap_init_callback_t *callback, struct page *start,
+ unsigned long size, int nid, unsigned long zone,
+ unsigned long start_pfn);
+# endif /* CONFIG_VIRTUAL_MEM_MAP */
# endif /* !__ASSEMBLY__ */
/*
diff -Nru a/include/asm-ia64/processor.h b/include/asm-ia64/processor.h
--- a/include/asm-ia64/processor.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/processor.h Fri Jan 24 20:41:05 2003
@@ -2,7 +2,7 @@
#define _ASM_IA64_PROCESSOR_H
/*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
@@ -223,7 +223,10 @@
struct siginfo;
struct thread_struct {
- __u64 flags; /* various thread flags (see IA64_THREAD_*) */
+ __u32 flags; /* various thread flags (see IA64_THREAD_*) */
+ /* writing on_ustack is performance-critical, so it's worth spending 8 bits on it... */
+ __u8 on_ustack; /* executing on user-stacks? */
+ __u8 pad[3];
__u64 ksp; /* kernel stack pointer */
__u64 map_base; /* base address for get_unmapped_area() */
__u64 task_size; /* limit for task size */
@@ -277,6 +280,7 @@
#define INIT_THREAD { \
.flags = 0, \
+ .on_ustack = 0, \
.ksp = 0, \
.map_base = DEFAULT_MAP_BASE, \
.task_size = DEFAULT_TASK_SIZE, \
diff -Nru a/include/asm-ia64/ptrace.h b/include/asm-ia64/ptrace.h
--- a/include/asm-ia64/ptrace.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/ptrace.h Fri Jan 24 20:41:05 2003
@@ -2,7 +2,7 @@
#define _ASM_IA64_PTRACE_H
/*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
*
@@ -218,6 +218,13 @@
# define ia64_task_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
# define ia64_psr(regs) ((struct ia64_psr *) &(regs)->cr_ipsr)
# define user_mode(regs) (((struct ia64_psr *) &(regs)->cr_ipsr)->cpl != 0)
+# define user_stack(task,regs) ((long) regs - (long) task == IA64_STK_OFFSET - sizeof(*regs))
+# define fsys_mode(task,regs) \
+ ({ \
+ struct task_struct *_task = (task); \
+ struct pt_regs *_regs = (regs); \
+ !user_mode(_regs) && user_stack(_task, _regs); \
+ })
struct task_struct; /* forward decl */
diff -Nru a/include/asm-ia64/serial.h b/include/asm-ia64/serial.h
--- a/include/asm-ia64/serial.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/serial.h Fri Jan 24 20:41:05 2003
@@ -59,7 +59,6 @@
{ 0, BASE_BAUD, 0x3E8, 4, STD_COM_FLAGS }, /* ttyS2 */ \
{ 0, BASE_BAUD, 0x2E8, 3, STD_COM4_FLAGS }, /* ttyS3 */
-
#ifdef CONFIG_SERIAL_MANY_PORTS
#define EXTRA_SERIAL_PORT_DEFNS \
{ 0, BASE_BAUD, 0x1A0, 9, FOURPORT_FLAGS }, /* ttyS4 */ \
diff -Nru a/include/asm-ia64/spinlock.h b/include/asm-ia64/spinlock.h
--- a/include/asm-ia64/spinlock.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/spinlock.h Fri Jan 24 20:41:05 2003
@@ -74,6 +74,27 @@
#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 }
#define spin_lock_init(x) ((x)->lock = 0)
+#define DEBUG_SPIN_LOCK 0
+
+#if DEBUG_SPIN_LOCK
+
+#include <ia64intrin.h>
+
+#define _raw_spin_lock(x) \
+do { \
+ unsigned long _timeout = 1000000000; \
+ volatile unsigned int _old = 0, _new = 1, *_ptr = &((x)->lock); \
+ do { \
+ if (_timeout-- == 0) { \
+ extern void dump_stack (void); \
+ printk("kernel DEADLOCK at %s:%d?\n", __FILE__, __LINE__); \
+ dump_stack(); \
+ } \
+ } while (__sync_val_compare_and_swap(_ptr, _old, _new) != _old); \
+} while (0)
+
+#else
+
/*
* Streamlined test_and_set_bit(0, (x)). We use test-and-test-and-set
* rather than a simple xchg to avoid writing the cache-line when
@@ -94,6 +115,8 @@
"(p7) br.cond.spnt.few 1b\n" \
";;\n" \
:: "r"(&(x)->lock) : "ar.ccv", "p7", "r2", "r29", "memory")
+
+#endif /* !DEBUG_SPIN_LOCK */
#define spin_is_locked(x) ((x)->lock != 0)
#define _raw_spin_unlock(x) do { barrier(); ((spinlock_t *) x)->lock = 0; } while (0)
diff -Nru a/include/asm-ia64/system.h b/include/asm-ia64/system.h
--- a/include/asm-ia64/system.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/system.h Fri Jan 24 20:41:05 2003
@@ -7,7 +7,7 @@
* on information published in the Processor Abstraction Layer
* and the System Abstraction Layer manual.
*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
@@ -17,6 +17,7 @@
#include <asm/kregs.h>
#include <asm/page.h>
#include <asm/pal.h>
+#include <asm/percpu.h>
#define KERNEL_START (PAGE_OFFSET + 68*1024*1024)
@@ -26,7 +27,6 @@
#ifndef __ASSEMBLY__
-#include <linux/percpu.h>
#include <linux/kernel.h>
#include <linux/types.h>
@@ -117,62 +117,51 @@
*/
/* For spinlocks etc */
+/* clearing psr.i is implicitly serialized (visible by next insn) */
+/* setting psr.i requires data serialization */
+#define __local_irq_save(x) __asm__ __volatile__ ("mov %0=psr;;" \
+ "rsm psr.i;;" \
+ : "=r" (x) :: "memory")
+#define __local_irq_disable() __asm__ __volatile__ (";; rsm psr.i;;" ::: "memory")
+#define __local_irq_restore(x) __asm__ __volatile__ ("cmp.ne p6,p7=%0,r0;;" \
+ "(p6) ssm psr.i;" \
+ "(p7) rsm psr.i;;" \
+ "(p6) srlz.d" \
+ :: "r" ((x) & IA64_PSR_I) \
+ : "p6", "p7", "memory")
+
#ifdef CONFIG_IA64_DEBUG_IRQ
extern unsigned long last_cli_ip;
-# define local_irq_save(x) \
-do { \
- unsigned long ip, psr; \
- \
- __asm__ __volatile__ ("mov %0=psr;; rsm psr.i;;" : "=r" (psr) :: "memory"); \
- if (psr & (1UL << 14)) { \
- __asm__ ("mov %0=ip" : "=r"(ip)); \
- last_cli_ip = ip; \
- } \
- (x) = psr; \
-} while (0)
+# define __save_ip() __asm__ ("mov %0=ip" : "=r" (last_cli_ip))
-# define local_irq_disable() \
-do { \
- unsigned long ip, psr; \
- \
- __asm__ __volatile__ ("mov %0=psr;; rsm psr.i;;" : "=r" (psr) :: "memory"); \
- if (psr & (1UL << 14)) { \
- __asm__ ("mov %0=ip" : "=r"(ip)); \
- last_cli_ip = ip; \
- } \
+# define local_irq_save(x) \
+do { \
+ unsigned long psr; \
+ \
+ __local_irq_save(psr); \
+ if (psr & IA64_PSR_I) \
+ __save_ip(); \
+ (x) = psr; \
} while (0)
-# define local_irq_restore(x) \
-do { \
- unsigned long ip, old_psr, psr = (x); \
- \
- __asm__ __volatile__ ("mov %0=psr;" \
- "cmp.ne p6,p7=%1,r0;;" \
- "(p6) ssm psr.i;" \
- "(p7) rsm psr.i;;" \
- "(p6) srlz.d" \
- : "=r" (old_psr) : "r"((psr) & IA64_PSR_I) \
- : "p6", "p7", "memory"); \
- if ((old_psr & IA64_PSR_I) && !(psr & IA64_PSR_I)) { \
- __asm__ ("mov %0=ip" : "=r"(ip)); \
- last_cli_ip = ip; \
- } \
+# define local_irq_disable() do { unsigned long x; local_irq_save(x); } while (0)
+
+# define local_irq_restore(x) \
+do { \
+ unsigned long old_psr, psr = (x); \
+ \
+ local_save_flags(old_psr); \
+ __local_irq_restore(psr); \
+ if ((old_psr & IA64_PSR_I) && !(psr & IA64_PSR_I)) \
+ __save_ip(); \
} while (0)
#else /* !CONFIG_IA64_DEBUG_IRQ */
- /* clearing of psr.i is implicitly serialized (visible by next insn) */
-# define local_irq_save(x) __asm__ __volatile__ ("mov %0=psr;; rsm psr.i;;" \
- : "=r" (x) :: "memory")
-# define local_irq_disable() __asm__ __volatile__ (";; rsm psr.i;;" ::: "memory")
-/* (potentially) setting psr.i requires data serialization: */
-# define local_irq_restore(x) __asm__ __volatile__ ("cmp.ne p6,p7=%0,r0;;" \
- "(p6) ssm psr.i;" \
- "(p7) rsm psr.i;;" \
- "srlz.d" \
- :: "r"((x) & IA64_PSR_I) \
- : "p6", "p7", "memory")
+# define local_irq_save(x) __local_irq_save(x)
+# define local_irq_disable() __local_irq_disable()
+# define local_irq_restore(x) __local_irq_restore(x)
#endif /* !CONFIG_IA64_DEBUG_IRQ */
#define local_irq_enable() __asm__ __volatile__ (";; ssm psr.i;; srlz.d" ::: "memory")
@@ -216,8 +205,8 @@
extern void ia64_load_extra (struct task_struct *task);
#ifdef CONFIG_PERFMON
- DECLARE_PER_CPU(int, pfm_syst_wide);
-# define PERFMON_IS_SYSWIDE() (get_cpu_var(pfm_syst_wide) != 0)
+ DECLARE_PER_CPU(unsigned long, pfm_syst_info);
+# define PERFMON_IS_SYSWIDE() (get_cpu_var(pfm_syst_info) & 0x1)
#else
# define PERFMON_IS_SYSWIDE() (0)
#endif
diff -Nru a/include/asm-ia64/tlb.h b/include/asm-ia64/tlb.h
--- a/include/asm-ia64/tlb.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/tlb.h Fri Jan 24 20:41:05 2003
@@ -1,7 +1,7 @@
#ifndef _ASM_IA64_TLB_H
#define _ASM_IA64_TLB_H
/*
- * Copyright (C) 2002 Hewlett-Packard Co
+ * Copyright (C) 2002-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* This file was derived from asm-generic/tlb.h.
@@ -70,8 +70,7 @@
* freed pages that where gathered up to this point.
*/
static inline void
-ia64_tlb_flush_mmu(struct mmu_gather *tlb,
- unsigned long start, unsigned long end)
+ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
unsigned int nr;
@@ -197,8 +196,7 @@
* PTE, not just those pointing to (normal) physical memory.
*/
static inline void
-__tlb_remove_tlb_entry(struct mmu_gather *tlb,
- pte_t *ptep, unsigned long address)
+__tlb_remove_tlb_entry (struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
{
	if (tlb->start_addr == ~0UL)
tlb->start_addr = address;
diff -Nru a/include/asm-ia64/tlbflush.h b/include/asm-ia64/tlbflush.h
--- a/include/asm-ia64/tlbflush.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/tlbflush.h Fri Jan 24 20:41:05 2003
@@ -47,19 +47,22 @@
static inline void
flush_tlb_mm (struct mm_struct *mm)
{
+ MMU_TRACE('F', smp_processor_id(), mm, mm->context);
if (!mm)
- return;
+ goto out;
mm->context = 0;
	if (atomic_read(&mm->mm_users) == 0)
- return; /* happens as a result of exit_mmap() */
+ goto out; /* happens as a result of exit_mmap() */
#ifdef CONFIG_SMP
smp_flush_tlb_mm(mm);
#else
local_finish_flush_tlb_mm(mm);
#endif
+ out:
+ MMU_TRACE('f', smp_processor_id(), mm, mm->context);
}
extern void flush_tlb_range (struct vm_area_struct *vma, unsigned long start, unsigned long end);
diff -Nru a/include/asm-ia64/uaccess.h b/include/asm-ia64/uaccess.h
--- a/include/asm-ia64/uaccess.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/uaccess.h Fri Jan 24 20:41:05 2003
@@ -26,7 +26,7 @@
* associated and, if so, sets r8 to -EFAULT and clears r9 to 0 and
* then resumes execution at the continuation point.
*
- * Copyright (C) 1998, 1999, 2001-2002 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
@@ -128,38 +128,28 @@
/* We need to declare the __ex_table section before we can use it in .xdata. */
asm (".section \"__ex_table\", \"a\"\n\t.previous");
-#if __GNUC__ >= 3
-# define GAS_HAS_LOCAL_TAGS /* define if gas supports local tags a la [1:] */
-#endif
-
-#ifdef GAS_HAS_LOCAL_TAGS
-# define _LL "[1:]"
-#else
-# define _LL "1:"
-#endif
-
#define __get_user_64(addr) \
- asm ("\n"_LL"\tld8 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n" \
- _LL \
+ asm ("\n[1:]\tld8 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.+4\n" \
+ "[1:]" \
: "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
#define __get_user_32(addr) \
- asm ("\n"_LL"\tld4 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n" \
- _LL \
+ asm ("\n[1:]\tld4 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.+4\n" \
+ "[1:]" \
: "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
#define __get_user_16(addr) \
- asm ("\n"_LL"\tld2 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n" \
- _LL \
+ asm ("\n[1:]\tld2 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.+4\n" \
+ "[1:]" \
: "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
#define __get_user_8(addr) \
- asm ("\n"_LL"\tld1 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n" \
- _LL \
+ asm ("\n[1:]\tld1 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.+4\n" \
+ "[1:]" \
: "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
extern void __put_user_unknown (void);
@@ -201,30 +191,30 @@
*/
#define __put_user_64(x,addr) \
asm volatile ( \
- "\n"_LL"\tst8 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n" \
- _LL \
+ "\n[1:]\tst8 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.\n" \
+ "[1:]" \
: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_32(x,addr) \
asm volatile ( \
- "\n"_LL"\tst4 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n" \
- _LL \
+ "\n[1:]\tst4 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.\n" \
+ "[1:]" \
: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_16(x,addr) \
asm volatile ( \
- "\n"_LL"\tst2 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n" \
- _LL \
+ "\n[1:]\tst2 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.\n" \
+ "[1:]" \
: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_8(x,addr) \
asm volatile ( \
- "\n"_LL"\tst1 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
- "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n" \
- _LL \
+ "\n[1:]\tst1 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+ "\t.xdata4 \"__ex_table\", 1b-., 1f-.\n" \
+ "[1:]" \
: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
/*
@@ -314,26 +304,22 @@
int cont; /* gp-relative continuation address; if bit 2 is set, r9 is set to 0 */
};
-struct exception_fixup {
- unsigned long cont; /* continuation point (bit 2: clear r9 if set) */
-};
-
-extern struct exception_fixup search_exception_table (unsigned long addr);
-extern void handle_exception (struct pt_regs *regs, struct exception_fixup fixup);
+extern void handle_exception (struct pt_regs *regs, const struct exception_table_entry *e);
+extern const struct exception_table_entry *search_exception_tables (unsigned long addr);
#ifdef GAS_HAS_LOCAL_TAGS
-#define SEARCH_EXCEPTION_TABLE(regs) search_exception_table(regs->cr_iip + ia64_psr(regs)->ri);
+# define SEARCH_EXCEPTION_TABLE(regs) search_exception_tables(regs->cr_iip + ia64_psr(regs)->ri)
#else
-#define SEARCH_EXCEPTION_TABLE(regs) search_exception_table(regs->cr_iip);
+# define SEARCH_EXCEPTION_TABLE(regs) search_exception_tables(regs->cr_iip)
#endif
static inline int
done_with_exception (struct pt_regs *regs)
{
- struct exception_fixup fix;
- fix = SEARCH_EXCEPTION_TABLE(regs);
- if (fix.cont) {
- handle_exception(regs, fix);
+ const struct exception_table_entry *e;
+ e = SEARCH_EXCEPTION_TABLE(regs);
+ if (e) {
+ handle_exception(regs, e);
return 1;
}
return 0;
diff -Nru a/include/asm-ia64/unistd.h b/include/asm-ia64/unistd.h
--- a/include/asm-ia64/unistd.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-ia64/unistd.h Fri Jan 24 20:41:05 2003
@@ -4,7 +4,7 @@
/*
* IA-64 Linux syscall numbers and inline-functions.
*
- * Copyright (C) 1998-2002 Hewlett-Packard Co
+ * Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
@@ -223,8 +223,8 @@
#define __NR_sched_setaffinity 1231
#define __NR_sched_getaffinity 1232
#define __NR_set_tid_address 1233
-/* #define __NR_alloc_hugepages 1234 reusable */
-/* #define __NR_free_hugepages 1235 reusable */
+/* 1234 available for reuse */
+/* 1235 available for reuse */
#define __NR_exit_group 1236
#define __NR_lookup_dcookie 1237
#define __NR_io_setup 1238
diff -Nru a/include/asm-sparc64/agp.h b/include/asm-sparc64/agp.h
--- a/include/asm-sparc64/agp.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-sparc64/agp.h Fri Jan 24 20:41:05 2003
@@ -8,4 +8,11 @@
#define flush_agp_mappings()
#define flush_agp_cache() mb()
+/*
+ * Page-protection value to be used for AGP memory mapped into kernel space. For
+ * platforms which use coherent AGP DMA, this can be PAGE_KERNEL. For others, it needs to
+ * be an uncached mapping (such as write-combining).
+ */
+#define PAGE_AGP PAGE_KERNEL_NOCACHE
+
#endif
diff -Nru a/include/asm-x86_64/agp.h b/include/asm-x86_64/agp.h
--- a/include/asm-x86_64/agp.h Fri Jan 24 20:41:05 2003
+++ b/include/asm-x86_64/agp.h Fri Jan 24 20:41:05 2003
@@ -20,4 +20,11 @@
worth it. Would need a page for it. */
#define flush_agp_cache() asm volatile("wbinvd":::"memory")
+/*
+ * Page-protection value to be used for AGP memory mapped into kernel space. For
+ * platforms which use coherent AGP DMA, this can be PAGE_KERNEL. For others, it needs to
+ * be an uncached mapping (such as write-combining).
+ */
+#define PAGE_AGP PAGE_KERNEL_NOCACHE
+
#endif
diff -Nru a/include/linux/acpi_serial.h b/include/linux/acpi_serial.h
--- a/include/linux/acpi_serial.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/acpi_serial.h Fri Jan 24 20:41:05 2003
@@ -9,6 +9,8 @@
*
*/
+#include <linux/serial.h>
+
extern void setup_serial_acpi(void *);
#define ACPI_SIG_LEN 4
diff -Nru a/include/linux/highmem.h b/include/linux/highmem.h
--- a/include/linux/highmem.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/highmem.h Fri Jan 24 20:41:05 2003
@@ -3,6 +3,8 @@
#include <linux/config.h>
#include <linux/fs.h>
+#include <linux/mm.h>
+
#include <asm/cacheflush.h>
#ifdef CONFIG_HIGHMEM
diff -Nru a/include/linux/irq.h b/include/linux/irq.h
--- a/include/linux/irq.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/irq.h Fri Jan 24 20:41:05 2003
@@ -56,15 +56,13 @@
*
* Pad this out to 32 bytes for cache and indexing reasons.
*/
-typedef struct {
+typedef struct irq_desc {
unsigned int status; /* IRQ status */
hw_irq_controller *handler;
struct irqaction *action; /* IRQ action list */
unsigned int depth; /* nested irq disables */
spinlock_t lock;
} ____cacheline_aligned irq_desc_t;
-
-extern irq_desc_t irq_desc [NR_IRQS];
#include <asm/hw_irq.h> /* the arch dependent stuff */
diff -Nru a/include/linux/irq_cpustat.h b/include/linux/irq_cpustat.h
--- a/include/linux/irq_cpustat.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/irq_cpustat.h Fri Jan 24 20:41:05 2003
@@ -24,7 +24,7 @@
#define __IRQ_STAT(cpu, member) (irq_stat[cpu].member)
#else
#define __IRQ_STAT(cpu, member) ((void)(cpu), irq_stat[0].member)
-#endif
+#endif
#endif
/* arch independent irq_stat fields */
@@ -33,5 +33,10 @@
#define ksoftirqd_task(cpu) __IRQ_STAT((cpu), __ksoftirqd_task)
/* arch dependent irq_stat fields */
#define nmi_count(cpu) __IRQ_STAT((cpu), __nmi_count) /* i386, ia64 */
+
+#define local_softirq_pending() softirq_pending(smp_processor_id())
+#define local_syscall_count() syscall_count(smp_processor_id())
+#define local_ksoftirqd_task() ksoftirqd_task(smp_processor_id())
+#define local_nmi_count() nmi_count(smp_processor_id())
#endif /* __irq_cpustat_h */
diff -Nru a/include/linux/percpu.h b/include/linux/percpu.h
--- a/include/linux/percpu.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/percpu.h Fri Jan 24 20:41:05 2003
@@ -1,9 +1,8 @@
#ifndef __LINUX_PERCPU_H
#define __LINUX_PERCPU_H
-#include <linux/spinlock.h> /* For preempt_disable() */
+#include <linux/preempt.h> /* For preempt_disable() */
#include <linux/slab.h> /* For kmalloc_percpu() */
#include <asm/percpu.h>
-
/* Must be an lvalue. */
#define get_cpu_var(var) (*({ preempt_disable(); &__get_cpu_var(var); }))
#define put_cpu_var(var) preempt_enable()
diff -Nru a/include/linux/ptrace.h b/include/linux/ptrace.h
--- a/include/linux/ptrace.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/ptrace.h Fri Jan 24 20:41:05 2003
@@ -4,6 +4,7 @@
/* structs and defines to help the user use the ptrace system call. */
#include <linux/compiler.h>
+#include <linux/sched.h>
/* has the defines to get at the registers. */
diff -Nru a/include/linux/sched.h b/include/linux/sched.h
--- a/include/linux/sched.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/sched.h Fri Jan 24 20:41:05 2003
@@ -148,8 +148,8 @@
extern void init_idle(task_t *idle, int cpu);
extern void show_state(void);
-extern void show_trace(unsigned long *stack);
-extern void show_stack(unsigned long *stack);
+extern void show_trace(struct task_struct *);
+extern void show_stack(struct task_struct *);
extern void show_regs(struct pt_regs *);
void io_schedule(void);
@@ -470,14 +470,14 @@
#ifndef INIT_THREAD_SIZE
# define INIT_THREAD_SIZE 2048*sizeof(long)
-#endif
-
union thread_union {
struct thread_info thread_info;
unsigned long stack[INIT_THREAD_SIZE/sizeof(long)];
};
extern union thread_union init_thread_union;
+#endif
+
extern struct task_struct init_task;
extern struct mm_struct init_mm;
diff -Nru a/include/linux/serial.h b/include/linux/serial.h
--- a/include/linux/serial.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/serial.h Fri Jan 24 20:41:05 2003
@@ -179,14 +179,9 @@
extern int register_serial(struct serial_struct *req);
extern void unregister_serial(int line);
-/* Allow complicated architectures to specify rs_table[] at run time */
-extern int early_serial_setup(struct serial_struct *req);
-
-#ifdef CONFIG_ACPI
-/* tty ports reserved for the ACPI serial console port and debug port */
-#define ACPI_SERIAL_CONSOLE_PORT 4
-#define ACPI_SERIAL_DEBUG_PORT 5
-#endif
+/* Allow architectures to override entries in serial8250_ports[] at run time: */
+struct uart_port; /* forward declaration */
+extern int early_serial_setup(struct uart_port *port);
#endif /* __KERNEL__ */
#endif /* _LINUX_SERIAL_H */
diff -Nru a/include/linux/smp.h b/include/linux/smp.h
--- a/include/linux/smp.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/smp.h Fri Jan 24 20:41:05 2003
@@ -58,10 +58,6 @@
*/
extern int smp_threads_ready;
-extern volatile unsigned long smp_msg_data;
-extern volatile int smp_src_cpu;
-extern volatile int smp_msg_id;
-
#define MSG_ALL_BUT_SELF 0x8000 /* Assume <32768 CPU's */
#define MSG_ALL 0x8001
diff -Nru a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
--- a/include/linux/sunrpc/svc.h Fri Jan 24 20:41:05 2003
+++ b/include/linux/sunrpc/svc.h Fri Jan 24 20:41:05 2003
@@ -73,7 +73,7 @@
* This assumes that the non-page part of an rpc reply will fit
* in a page - NFSd ensures this. lockd also has no trouble.
*/
-#define RPCSVC_MAXPAGES ((RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE + 1)
+#define RPCSVC_MAXPAGES ((RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE + 2)
static inline u32 svc_getu32(struct iovec *iov)
{
diff -Nru a/kernel/fork.c b/kernel/fork.c
--- a/kernel/fork.c Fri Jan 24 20:41:05 2003
+++ b/kernel/fork.c Fri Jan 24 20:41:05 2003
@@ -71,6 +71,7 @@
return total;
}
+#if 0
void __put_task_struct(struct task_struct *tsk)
{
if (tsk != current) {
@@ -88,6 +89,7 @@
put_cpu();
}
}
+#endif
void add_wait_queue(wait_queue_head_t *q, wait_queue_t * wait)
{
@@ -190,7 +192,11 @@
init_task.rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
}
-static struct task_struct *dup_task_struct(struct task_struct *orig)
+#if 1
+extern struct task_struct *dup_task_struct (struct task_struct *orig);
+#else
+
+struct task_struct *dup_task_struct(struct task_struct *orig)
{
struct task_struct *tsk;
struct thread_info *ti;
@@ -220,6 +226,8 @@
return tsk;
}
+#endif
+
#ifdef CONFIG_MMU
static inline int dup_mmap(struct mm_struct * mm, struct mm_struct * oldmm)
{
@@ -839,11 +847,15 @@
if (clone_flags & CLONE_CHILD_SETTID)
p->set_child_tid = child_tidptr;
+ else
+ p->set_child_tid = NULL;
/*
* Clear TID on mm_release()?
*/
if (clone_flags & CLONE_CHILD_CLEARTID)
p->clear_child_tid = child_tidptr;
+ else
+ p->clear_child_tid = NULL;
/*
* Syscall tracing should be turned off in the child regardless
diff -Nru a/kernel/ksyms.c b/kernel/ksyms.c
--- a/kernel/ksyms.c Fri Jan 24 20:41:05 2003
+++ b/kernel/ksyms.c Fri Jan 24 20:41:05 2003
@@ -404,7 +404,9 @@
EXPORT_SYMBOL(del_timer);
EXPORT_SYMBOL(request_irq);
EXPORT_SYMBOL(free_irq);
+#if !defined(CONFIG_IA64)
EXPORT_SYMBOL(irq_stat);
+#endif
/* waitqueue handling */
EXPORT_SYMBOL(add_wait_queue);
@@ -600,7 +602,9 @@
/* init task, for moving kthread roots - ought to export a function ?? */
EXPORT_SYMBOL(init_task);
+#ifndef CONFIG_IA64
EXPORT_SYMBOL(init_thread_union);
+#endif
EXPORT_SYMBOL(tasklist_lock);
EXPORT_SYMBOL(find_task_by_pid);
diff -Nru a/kernel/printk.c b/kernel/printk.c
--- a/kernel/printk.c Fri Jan 24 20:41:05 2003
+++ b/kernel/printk.c Fri Jan 24 20:41:05 2003
@@ -315,6 +315,12 @@
__call_console_drivers(start, end);
}
}
+#ifdef CONFIG_IA64_EARLY_PRINTK
+ if (!console_drivers) {
+ void early_printk (const char *str, size_t len);
+ early_printk(&LOG_BUF(start), end - start);
+ }
+#endif
}
/*
@@ -632,7 +638,11 @@
* for us.
*/
spin_lock_irqsave(&logbuf_lock, flags);
+#ifdef CONFIG_IA64_EARLY_PRINTK
+ con_start = log_end;
+#else
con_start = log_start;
+#endif
spin_unlock_irqrestore(&logbuf_lock, flags);
}
release_console_sem();
@@ -685,3 +695,110 @@
tty->driver.write(tty, 0, msg, strlen(msg));
return;
}
+
+#ifdef CONFIG_IA64_EARLY_PRINTK
+
+#include <asm/io.h>
+
+# ifdef CONFIG_IA64_EARLY_PRINTK_VGA
+
+
+#define VGABASE ((char *)0xc0000000000b8000)
+#define VGALINES 24
+#define VGACOLS 80
+
+static int current_ypos = VGALINES, current_xpos = 0;
+
+static void
+early_printk_vga (const char *str, size_t len)
+{
+ char c;
+ int i, k, j;
+
+ while (len-- > 0) {
+ c = *str++;
+ if (current_ypos >= VGALINES) {
+ /* scroll 1 line up */
+ for (k = 1, j = 0; k < VGALINES; k++, j++) {
+ for (i = 0; i < VGACOLS; i++) {
+ writew(readw(VGABASE + 2*(VGACOLS*k + i)),
+ VGABASE + 2*(VGACOLS*j + i));
+ }
+ }
+ for (i = 0; i < VGACOLS; i++) {
+ writew(0x720, VGABASE + 2*(VGACOLS*j + i));
+ }
+ current_ypos = VGALINES-1;
+ }
+		if (c == '\n') {
+ current_xpos = 0;
+ current_ypos++;
+ } else if (c != '\r') {
+ writew(((0x7 << 8) | (unsigned short) c),
+ VGABASE + 2*(VGACOLS*current_ypos + current_xpos++));
+ if (current_xpos >= VGACOLS) {
+ current_xpos = 0;
+ current_ypos++;
+ }
+ }
+ }
+}
+
+# endif /* CONFIG_IA64_EARLY_PRINTK_VGA */
+
+# ifdef CONFIG_IA64_EARLY_PRINTK_UART
+
+#include <linux/serial_reg.h>
+#include <asm/system.h>
+
+static void early_printk_uart(const char *str, size_t len)
+{
+ static char *uart = NULL;
+ unsigned long uart_base;
+ char c;
+
+ if (!uart) {
+ uart_base = 0;
+# ifdef CONFIG_SERIAL_8250_HCDP
+ {
+ extern unsigned long hcdp_early_uart(void);
+ uart_base = hcdp_early_uart();
+ }
+# endif
+# if CONFIG_IA64_EARLY_PRINTK_UART_BASE
+ if (!uart_base)
+ uart_base = CONFIG_IA64_EARLY_PRINTK_UART_BASE;
+# endif
+ if (!uart_base)
+ return;
+
+ uart = ioremap(uart_base, 64);
+ if (!uart)
+ return;
+ }
+
+ while (len-- > 0) {
+ c = *str++;
+		while ((readb(uart + UART_LSR) & UART_LSR_TEMT) == 0)
+ cpu_relax(); /* spin */
+
+ writeb(c, uart + UART_TX);
+
+		if (c == '\n')
+ writeb('\r', uart + UART_TX);
+ }
+}
+
+# endif /* CONFIG_IA64_EARLY_PRINTK_UART */
+
+void early_printk(const char *str, size_t len)
+{
+#ifdef CONFIG_IA64_EARLY_PRINTK_UART
+ early_printk_uart(str, len);
+#endif
+#ifdef CONFIG_IA64_EARLY_PRINTK_VGA
+ early_printk_vga(str, len);
+#endif
+}
+
+#endif /* CONFIG_IA64_EARLY_PRINTK */
diff -Nru a/kernel/softirq.c b/kernel/softirq.c
--- a/kernel/softirq.c Fri Jan 24 20:41:05 2003
+++ b/kernel/softirq.c Fri Jan 24 20:41:05 2003
@@ -32,7 +32,10 @@
- Tasklets: serialized wrt itself.
*/
+/* No separate irq_stat for ia64, it is part of PSA */
+#if !defined(CONFIG_IA64)
irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned;
+#endif /* CONFIG_IA64 */
static struct softirq_action softirq_vec[32] __cacheline_aligned_in_smp;
@@ -63,7 +66,7 @@
local_irq_save(flags);
cpu = smp_processor_id();
- pending = softirq_pending(cpu);
+ pending = local_softirq_pending();
if (pending) {
struct softirq_action *h;
@@ -72,7 +75,7 @@
local_bh_disable();
restart:
/* Reset the pending bitmask before enabling irqs */
- softirq_pending(cpu) = 0;
+ local_softirq_pending() = 0;
local_irq_enable();
@@ -87,7 +90,7 @@
local_irq_disable();
- pending = softirq_pending(cpu);
+ pending = local_softirq_pending();
if (pending & mask) {
mask &= ~pending;
goto restart;
@@ -95,7 +98,7 @@
__local_bh_enable();
if (pending)
- wakeup_softirqd(cpu);
+ wakeup_softirqd(smp_processor_id());
}
local_irq_restore(flags);
@@ -315,15 +318,15 @@
__set_current_state(TASK_INTERRUPTIBLE);
mb();
- ksoftirqd_task(cpu) = current;
+ local_ksoftirqd_task() = current;
for (;;) {
- if (!softirq_pending(cpu))
+ if (!local_softirq_pending())
schedule();
__set_current_state(TASK_RUNNING);
- while (softirq_pending(cpu)) {
+ while (local_softirq_pending()) {
do_softirq();
cond_resched();
}
diff -Nru a/mm/bootmem.c b/mm/bootmem.c
--- a/mm/bootmem.c Fri Jan 24 20:41:05 2003
+++ b/mm/bootmem.c Fri Jan 24 20:41:05 2003
@@ -143,6 +143,7 @@
static void * __init __alloc_bootmem_core (bootmem_data_t *bdata,
unsigned long size, unsigned long align, unsigned long goal)
{
+ static unsigned long last_success;
unsigned long i, start = 0;
void *ret;
unsigned long offset, remaining_size;
@@ -168,6 +169,9 @@
if (goal && (goal >= bdata->node_boot_start) &&
((goal >> PAGE_SHIFT) < bdata->node_low_pfn)) {
preferred = goal - bdata->node_boot_start;
+
+ if (last_success >= preferred)
+ preferred = last_success;
} else
preferred = 0;
@@ -179,6 +183,8 @@
restart_scan:
for (i = preferred; i < eidx; i += incr) {
unsigned long j;
+ i = find_next_zero_bit((char *)bdata->node_bootmem_map, eidx, i);
+ i = (i + incr - 1) & -incr;
if (test_bit(i, bdata->node_bootmem_map))
continue;
for (j = i + 1; j < i + areasize; ++j) {
@@ -197,6 +203,7 @@
}
return NULL;
found:
+ last_success = start << PAGE_SHIFT;
if (start >= eidx)
BUG();
@@ -256,21 +263,21 @@
map = bdata->node_bootmem_map;
for (i = 0; i < idx; ) {
unsigned long v = ~map[i / BITS_PER_LONG];
- if (v) {
+ if (v) {
unsigned long m;
- for (m = 1; m && i < idx; m<<=1, page++, i++) {
+ for (m = 1; m && i < idx; m<<=1, page++, i++) {
if (v & m) {
- count++;
- ClearPageReserved(page);
- set_page_count(page, 1);
- __free_page(page);
- }
- }
+ count++;
+ ClearPageReserved(page);
+ set_page_count(page, 1);
+ __free_page(page);
+ }
+ }
} else {
i+=BITS_PER_LONG;
- page+=BITS_PER_LONG;
- }
- }
+ page+=BITS_PER_LONG;
+ }
+ }
total += count;
/*
diff -Nru a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c Fri Jan 24 20:41:05 2003
+++ b/mm/memory.c Fri Jan 24 20:41:05 2003
@@ -113,8 +113,10 @@
}
pmd = pmd_offset(dir, 0);
pgd_clear(dir);
- for (j = 0; j < PTRS_PER_PMD ; j++)
+ for (j = 0; j < PTRS_PER_PMD ; j++) {
+ prefetchw(pmd + j + PREFETCH_STRIDE/sizeof(*pmd));
free_one_pmd(tlb, pmd+j);
+ }
pmd_free_tlb(tlb, pmd);
}
diff -Nru a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c Fri Jan 24 20:41:05 2003
+++ b/mm/mmap.c Fri Jan 24 20:41:05 2003
@@ -1265,8 +1265,8 @@
tlb = tlb_gather_mmu(mm, 1);
flush_cache_mm(mm);
- mm->map_count -= unmap_vmas(&tlb, mm, mm->mmap, 0,
- TASK_SIZE, &nr_accounted);
+	/* Use ~0UL here to ensure all VMAs in the mm are unmapped */
+ mm->map_count -= unmap_vmas(&tlb, mm, mm->mmap, 0, ~0UL, &nr_accounted);
vm_unacct_memory(nr_accounted);
BUG_ON(mm->map_count); /* This is just debugging */
clear_page_tables(tlb, FIRST_USER_PGD_NR, USER_PTRS_PER_PGD);
diff -Nru a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c Fri Jan 24 20:41:05 2003
+++ b/mm/page_alloc.c Fri Jan 24 20:41:05 2003
@@ -1078,6 +1078,41 @@
memset(pgdat->valid_addr_bitmap, 0, size);
}
+static void __init memmap_init(struct page *start, unsigned long size,
+ int nid, unsigned long zone, unsigned long start_pfn)
+{
+ struct page *page;
+
+ /*
+ * Initially all pages are reserved - free ones are freed
+ * up by free_all_bootmem() once the early boot process is
+ * done. Non-atomic initialization, single-pass.
+ */
+
+ for (page = start; page < (start + size); page++) {
+ set_page_zone(page, nid * MAX_NR_ZONES + zone);
+ set_page_count(page, 0);
+ SetPageReserved(page);
+ INIT_LIST_HEAD(&page->list);
+#ifdef WANT_PAGE_VIRTUAL
+ if (zone != ZONE_HIGHMEM)
+ /*
+ * The shift left won't overflow because the
+ * ZONE_NORMAL is below 4G.
+ */
+ set_page_address(page, __va(start_pfn << PAGE_SHIFT));
+#endif
+ start_pfn++;
+ }
+}
+
+#ifdef HAVE_ARCH_MEMMAP_INIT
+#define MEMMAP_INIT(start, size, nid, zone, start_pfn) \
+ arch_memmap_init(memmap_init, start, size, nid, zone, start_pfn)
+#else
+#define MEMMAP_INIT(start, size, nid, zone, start_pfn) \
+ memmap_init(start, size, nid, zone, start_pfn)
+#endif
/*
* Set up the zone data structures:
* - mark all pages reserved
@@ -1189,28 +1224,8 @@
if ((zone_start_pfn) & (zone_required_alignment-1))
printk("BUG: wrong zone alignment, it will crash\n");
- /*
- * Initially all pages are reserved - free ones are freed
- * up by free_all_bootmem() once the early boot process is
- * done. Non-atomic initialization, single-pass.
- */
- for (i = 0; i < size; i++) {
- struct page *page = lmem_map + local_offset + i;
- set_page_zone(page, nid * MAX_NR_ZONES + j);
- set_page_count(page, 0);
- SetPageReserved(page);
- INIT_LIST_HEAD(&page->list);
-#ifdef WANT_PAGE_VIRTUAL
- if (j != ZONE_HIGHMEM)
- /*
- * The shift left won't overflow because the
- * ZONE_NORMAL is below 4G.
- */
- set_page_address(page,
- __va(zone_start_pfn << PAGE_SHIFT));
-#endif
- zone_start_pfn++;
- }
+ MEMMAP_INIT(lmem_map + local_offset,size,nid,j,zone_start_pfn);
+ zone_start_pfn += size;
local_offset += size;
for (i = 0; ; i++) {
diff -Nru a/scripts/kallsyms.c b/scripts/kallsyms.c
--- a/scripts/kallsyms.c Fri Jan 24 20:41:05 2003
+++ b/scripts/kallsyms.c Fri Jan 24 20:41:05 2003
@@ -12,6 +12,15 @@
#include <stdlib.h>
#include <string.h>
+#include <linux/config.h>
+
+#if CONFIG_ALPHA || CONFIG_IA64 || CONFIG_MIPS64 || CONFIG_PPC64 || CONFIG_S390X \
+ || CONFIG_SPARC64 || CONFIG_X86_64
+# define ADDR_DIRECTIVE ".quad"
+#else
+# define ADDR_DIRECTIVE ".long"
+#endif
+
struct sym_entry {
unsigned long long addr;
char type;
diff -Nru a/sound/oss/cs4281/cs4281m.c b/sound/oss/cs4281/cs4281m.c
--- a/sound/oss/cs4281/cs4281m.c Fri Jan 24 20:41:05 2003
+++ b/sound/oss/cs4281/cs4281m.c Fri Jan 24 20:41:05 2003
@@ -1946,8 +1946,8 @@
len -= x;
}
CS_DBGOUT(CS_WAVE_WRITE, 4, printk(KERN_INFO
- "cs4281: clear_advance(): memset %d at 0x%.8x for %d size \n",
- (unsigned)c, (unsigned)((char *) buf) + bptr, len));
+ "cs4281: clear_advance(): memset %d at %p for %d size \n",
+ (unsigned)c, ((char *) buf) + bptr, len));
memset(((char *) buf) + bptr, c, len);
}
@@ -1982,9 +1982,8 @@
wake_up(&s->dma_adc.wait);
}
CS_DBGOUT(CS_PARMS, 8, printk(KERN_INFO
- "cs4281: cs4281_update_ptr(): s=0x%.8x hwptr=%d total_bytes=%d count=%d \n",
- (unsigned)s, s->dma_adc.hwptr,
- s->dma_adc.total_bytes, s->dma_adc.count));
+ "cs4281: cs4281_update_ptr(): s=%p hwptr=%d total_bytes=%d count=%d \n",
+ s, s->dma_adc.hwptr, s->dma_adc.total_bytes, s->dma_adc.count));
}
// update DAC pointer
//
@@ -2016,11 +2015,10 @@
// Continue to play silence until the _release.
//
CS_DBGOUT(CS_WAVE_WRITE, 6, printk(KERN_INFO
- "cs4281: cs4281_update_ptr(): memset %d at 0x%.8x for %d size \n",
+ "cs4281: cs4281_update_ptr(): memset %d at %p for %d size \n",
(unsigned)(s->prop_dac.fmt &
(AFMT_U8 | AFMT_U16_LE)) ? 0x80 : 0,
- (unsigned)s->dma_dac.rawbuf,
- s->dma_dac.dmasize));
+ s->dma_dac.rawbuf, s->dma_dac.dmasize));
memset(s->dma_dac.rawbuf,
(s->prop_dac.
fmt & (AFMT_U8 | AFMT_U16_LE)) ?
@@ -2051,9 +2049,8 @@
}
}
CS_DBGOUT(CS_PARMS, 8, printk(KERN_INFO
- "cs4281: cs4281_update_ptr(): s=0x%.8x hwptr=%d total_bytes=%d count=%d \n",
- (unsigned) s, s->dma_dac.hwptr,
- s->dma_dac.total_bytes, s->dma_dac.count));
+ "cs4281: cs4281_update_ptr(): s=%p hwptr=%d total_bytes=%d count=%d \n",
+ s, s->dma_dac.hwptr, s->dma_dac.total_bytes, s->dma_dac.count));
}
}
@@ -2184,8 +2181,7 @@
VALIDATE_STATE(s);
CS_DBGOUT(CS_FUNCTION, 4, printk(KERN_INFO
- "cs4281: mixer_ioctl(): s=0x%.8x cmd=0x%.8x\n",
- (unsigned) s, cmd));
+ "cs4281: mixer_ioctl(): s=%p cmd=0x%.8x\n", s, cmd));
#if CSDEBUG
cs_printioctl(cmd);
#endif
@@ -2750,9 +2746,8 @@
CS_DBGOUT(CS_FUNCTION, 2,
printk(KERN_INFO "cs4281: CopySamples()+ "));
CS_DBGOUT(CS_WAVE_READ, 8, printk(KERN_INFO
- " dst=0x%x src=0x%x count=%d iChannels=%d fmt=0x%x\n",
- (unsigned) dst, (unsigned) src, (unsigned) count,
- (unsigned) iChannels, (unsigned) fmt));
+ " dst=%p src=%p count=%d iChannels=%d fmt=0x%x\n",
+ dst, src, (unsigned) count, (unsigned) iChannels, (unsigned) fmt));
// Gershwin does format conversion in hardware so normally
// we don't do any host based coversion. The data formatter
@@ -2832,9 +2827,9 @@
void *src = hwsrc; //default to the standard destination buffer addr
CS_DBGOUT(CS_FUNCTION, 6, printk(KERN_INFO
- "cs_copy_to_user()+ fmt=0x%x fmt_o=0x%x cnt=%d dest=0x%.8x\n",
+ "cs_copy_to_user()+ fmt=0x%x fmt_o=0x%x cnt=%d dest=%p\n",
s->prop_adc.fmt, s->prop_adc.fmt_original,
- (unsigned) cnt, (unsigned) dest));
+ (unsigned) cnt, dest));
if (cnt > s->dma_adc.dmasize) {
cnt = s->dma_adc.dmasize;
@@ -2879,7 +2874,7 @@
unsigned copied = 0;
CS_DBGOUT(CS_FUNCTION | CS_WAVE_READ, 2,
- printk(KERN_INFO "cs4281: cs4281_read()+ %d \n", count));
+ printk(KERN_INFO "cs4281: cs4281_read()+ %Zu \n", count));
VALIDATE_STATE(s);
if (ppos != &file->f_pos)
@@ -2902,7 +2897,7 @@
//
while (count > 0) {
CS_DBGOUT(CS_WAVE_READ, 8, printk(KERN_INFO
- "_read() count>0 count=%d .count=%d .swptr=%d .hwptr=%d \n",
+ "_read() count>0 count=%Zu .count=%d .swptr=%d .hwptr=%d \n",
count, s->dma_adc.count,
s->dma_adc.swptr, s->dma_adc.hwptr));
spin_lock_irqsave(&s->lock, flags);
@@ -2959,11 +2954,10 @@
// the "cnt" is the number of bytes to read.
CS_DBGOUT(CS_WAVE_READ, 2, printk(KERN_INFO
- "_read() copy_to cnt=%d count=%d ", cnt, count));
+ "_read() copy_to cnt=%d count=%Zu ", cnt, count));
CS_DBGOUT(CS_WAVE_READ, 8, printk(KERN_INFO
- " .dmasize=%d .count=%d buffer=0x%.8x ret=%d\n",
- s->dma_adc.dmasize, s->dma_adc.count,
- (unsigned) buffer, ret));
+ " .dmasize=%d .count=%d buffer=%p ret=%Zd\n",
+ s->dma_adc.dmasize, s->dma_adc.count, buffer, ret));
if (cs_copy_to_user
(s, buffer, s->dma_adc.rawbuf + swptr, cnt, &copied))
@@ -2979,7 +2973,7 @@
start_adc(s);
}
CS_DBGOUT(CS_FUNCTION | CS_WAVE_READ, 2,
- printk(KERN_INFO "cs4281: cs4281_read()- %d\n", ret));
+ printk(KERN_INFO "cs4281: cs4281_read()- %Zd\n", ret));
return ret;
}
@@ -2995,7 +2989,7 @@
int cnt;
CS_DBGOUT(CS_FUNCTION | CS_WAVE_WRITE, 2,
- printk(KERN_INFO "cs4281: cs4281_write()+ count=%d\n",
+ printk(KERN_INFO "cs4281: cs4281_write()+ count=%Zu\n",
count));
VALIDATE_STATE(s);
@@ -3051,7 +3045,7 @@
start_dac(s);
}
CS_DBGOUT(CS_FUNCTION | CS_WAVE_WRITE, 2,
- printk(KERN_INFO "cs4281: cs4281_write()- %d\n", ret));
+ printk(KERN_INFO "cs4281: cs4281_write()- %Zd\n", ret));
return ret;
}
@@ -3172,8 +3166,7 @@
int val, mapped, ret;
CS_DBGOUT(CS_FUNCTION, 4, printk(KERN_INFO
- "cs4281: cs4281_ioctl(): file=0x%.8x cmd=0x%.8x\n",
- (unsigned) file, cmd));
+ "cs4281: cs4281_ioctl(): file=%p cmd=0x%.8x\n", file, cmd));
#if CSDEBUG
cs_printioctl(cmd);
#endif
@@ -3603,8 +3596,8 @@
(struct cs4281_state *) file->private_data;
CS_DBGOUT(CS_FUNCTION | CS_RELEASE, 2, printk(KERN_INFO
- "cs4281: cs4281_release(): inode=0x%.8x file=0x%.8x f_mode=%d\n",
- (unsigned) inode, (unsigned) file, file->f_mode));
+ "cs4281: cs4281_release(): inode=%p file=%p f_mode=%d\n",
+ inode, file, file->f_mode));
VALIDATE_STATE(s);
@@ -3638,8 +3631,8 @@
struct list_head *entry;
CS_DBGOUT(CS_FUNCTION | CS_OPEN, 2, printk(KERN_INFO
- "cs4281: cs4281_open(): inode=0x%.8x file=0x%.8x f_mode=0x%x\n",
- (unsigned) inode, (unsigned) file, file->f_mode));
+ "cs4281: cs4281_open(): inode=%p file=%p f_mode=0x%x\n",
+ inode, file, file->f_mode));
list_for_each(entry, &cs4281_devs)
{
@@ -4348,10 +4341,8 @@
CS_DBGOUT(CS_INIT, 2,
printk(KERN_INFO
- "cs4281: probe() BA0=0x%.8x BA1=0x%.8x pBA0=0x%.8x pBA1=0x%.8x \n",
- (unsigned) temp1, (unsigned) temp2,
- (unsigned) s->pBA0, (unsigned) s->pBA1));
-
+ "cs4281: probe() BA0=0x%.8x BA1=0x%.8x pBA0=%p pBA1=%p \n",
+ (unsigned) temp1, (unsigned) temp2, s->pBA0, s->pBA1));
CS_DBGOUT(CS_INIT, 2,
printk(KERN_INFO
"cs4281: probe() pBA0phys=0x%.8x pBA1phys=0x%.8x\n",
@@ -4398,15 +4389,13 @@
if (pmdev)
{
CS_DBGOUT(CS_INIT | CS_PM, 4, printk(KERN_INFO
- "cs4281: probe() pm_register() succeeded (0x%x).\n",
- (unsigned)pmdev));
+ "cs4281: probe() pm_register() succeeded (%p).\n", pmdev));
pmdev->data = s;
}
else
{
CS_DBGOUT(CS_INIT | CS_PM | CS_ERROR, 0, printk(KERN_INFO
- "cs4281: probe() pm_register() failed (0x%x).\n",
- (unsigned)pmdev));
+ "cs4281: probe() pm_register() failed (%p).\n", pmdev));
s->pm.flags |= CS4281_PM_NOT_REGISTERED;
}
#endif
diff -Nru a/sound/oss/cs4281/cs4281pm-24.c b/sound/oss/cs4281/cs4281pm-24.c
--- a/sound/oss/cs4281/cs4281pm-24.c Fri Jan 24 20:41:05 2003
+++ b/sound/oss/cs4281/cs4281pm-24.c Fri Jan 24 20:41:05 2003
@@ -46,8 +46,8 @@
struct cs4281_state *state;
CS_DBGOUT(CS_PM, 2, printk(KERN_INFO
- "cs4281: cs4281_pm_callback dev=0x%x rqst=0x%x state=%d\n",
- (unsigned)dev,(unsigned)rqst,(unsigned)data));
+ "cs4281: cs4281_pm_callback dev=%p rqst=0x%x state=%p\n",
+ dev,(unsigned)rqst,data));
state = (struct cs4281_state *) dev->data;
if (state) {
switch(rqst) {
diff -Nru a/usr/Makefile b/usr/Makefile
--- a/usr/Makefile Fri Jan 24 20:41:05 2003
+++ b/usr/Makefile Fri Jan 24 20:41:05 2003
@@ -5,12 +5,9 @@
clean-files := initramfs_data.cpio.gz
-LDFLAGS_initramfs_data.o := $(LDFLAGS_BLOB) -r -T
-
-$(obj)/initramfs_data.o: $(src)/initramfs_data.scr $(obj)/initramfs_data.cpio.gz FORCE
- $(call if_changed,ld)
-
$(obj)/initramfs_data.cpio.gz: $(obj)/gen_init_cpio
./$< | gzip -9c > $@
-
+$(obj)/initramfs_data.S: $(obj)/initramfs_data.cpio.gz
+ echo '.section ".init.ramfs", "a"' > $@
+ od -v -An -t x1 -w8 $^ | cut -c2- | sed -e s"/ /,0x/g" -e s"/^/.byte 0x"/ >> $@
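The new usr/Makefile rule above generates an assembly file from the compressed cpio archive instead of linking the blob with a linker script. A sketch of what the `od | cut | sed` pipeline produces, run on a 4-byte stand-in for initramfs_data.cpio.gz (assumes GNU od, which supports `-w8`):

```shell
# Turn an arbitrary binary into .byte directives in an .init.ramfs
# section, exactly as the Makefile rule does.  sample.bin stands in
# for $(obj)/initramfs_data.cpio.gz; the bytes are a gzip header.
printf '\037\213\010\000' > sample.bin

echo '.section ".init.ramfs", "a"' > sample.S
od -v -An -t x1 -w8 sample.bin | cut -c2- | sed -e s"/ /,0x/g" -e s"/^/.byte 0x"/ >> sample.S

cat sample.S
```

For the sample input this emits the section directive followed by `.byte 0x1f,0x8b,0x08,0x00`, which the assembler turns back into the original bytes.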
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (184 preceding siblings ...)
2003-01-25 5:02 ` [Linux-ia64] kernel update (relative to 2.5.59) David Mosberger
@ 2003-01-25 20:19 ` Sam Ravnborg
2003-01-27 18:47 ` David Mosberger
` (29 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Sam Ravnborg @ 2003-01-25 20:19 UTC (permalink / raw)
To: linux-ia64
On Fri, Jan 24, 2003 at 09:02:32PM -0800, David Mosberger wrote:
> diff -Nru a/Makefile b/Makefile
> --- a/Makefile Fri Jan 24 20:41:05 2003
> +++ b/Makefile Fri Jan 24 20:41:05 2003
> @@ -170,7 +170,7 @@
> NOSTDINC_FLAGS = -nostdinc -iwithprefix include
>
> CPPFLAGS := -D__KERNEL__ -Iinclude
> -CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -O2 \
> +CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 \
Hi David.
I do not think "-g" was included on purpose...
Sam
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (185 preceding siblings ...)
2003-01-25 20:19 ` Sam Ravnborg
@ 2003-01-27 18:47 ` David Mosberger
2003-01-28 19:44 ` Arun Sharma
` (28 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-27 18:47 UTC (permalink / raw)
To: linux-ia64
>>>>> On Sat, 25 Jan 2003 21:19:57 +0100, Sam Ravnborg <sam@ravnborg.org> said:
Sam> On Fri, Jan 24, 2003 at 09:02:32PM -0800, David Mosberger
Sam> wrote:
>> diff -Nru a/Makefile b/Makefile --- a/Makefile Fri Jan 24
>> 20:41:05 2003 +++ b/Makefile Fri Jan 24 20:41:05 2003 @@ -170,7
>> +170,7 @@ NOSTDINC_FLAGS = -nostdinc -iwithprefix include
>> CPPFLAGS := -D__KERNEL__ -Iinclude -CFLAGS := $(CPPFLAGS) -Wall
>> -Wstrict-prototypes -Wno-trigraphs -O2 \ +CFLAGS := $(CPPFLAGS)
>> -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 \
Sam> Hi David.
Sam> I do not think "-g" was included on purpose...
It certainly is. Debugging info is very useful, even without kgdb,
and is very easy to get rid of, via strip.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (186 preceding siblings ...)
2003-01-27 18:47 ` David Mosberger
@ 2003-01-28 19:44 ` Arun Sharma
2003-01-28 19:55 ` David Mosberger
` (27 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Arun Sharma @ 2003-01-28 19:44 UTC (permalink / raw)
To: linux-ia64
David Mosberger <davidm@napali.hpl.hp.com> writes:
Hi David,
> I just uploaded the latest ia64 patch to the usual location(s). You
> can get it from ftp.kernel.org/pub/linux/ia64/ports/v2.5/ in
> file:
>
> linux-2.5.59-ia64-030124.diff.gz
This patch was needed to get the kernel to compile with hugetlb enabled.
-Arun
--- linux-2.5.59/arch/ia64/kernel/sys_ia64.c- Mon Jan 27 19:04:07 2003
+++ linux-2.5.59/arch/ia64/kernel/sys_ia64.c Mon Jan 27 19:04:16 2003
@@ -16,6 +16,7 @@
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/highuid.h>
+#include <linux/hugetlb.h>
#include <asm/shmparam.h>
#include <asm/uaccess.h>
--- linux-2.5.59/arch/ia64/mm/hugetlbpage.c- Mon Jan 27 18:49:51 2003
+++ linux-2.5.59/arch/ia64/mm/hugetlbpage.c Tue Jan 28 12:03:16 2003
@@ -242,7 +242,7 @@
ret = -ENOMEM;
goto out;
}
- add_to_page_cache(page, mapping, idx);
+ add_to_page_cache(page, mapping, idx, GFP_ATOMIC);
unlock_page(page);
}
set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE);
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (187 preceding siblings ...)
2003-01-28 19:44 ` Arun Sharma
@ 2003-01-28 19:55 ` David Mosberger
2003-01-28 21:34 ` Arun Sharma
` (26 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-28 19:55 UTC (permalink / raw)
To: linux-ia64
>>>>> On 28 Jan 2003 11:44:05 -0800, Arun Sharma <arun.sharma@intel.com> said:
Arun> David Mosberger <davidm@napali.hpl.hp.com> writes: Hi David,
>> I just uploaded the latest ia64 patch to the usual location(s).
>> You can get it from ftp.kernel.org/pub/linux/ia64/ports/v2.5/ in
>> file:
>> linux-2.5.59-ia64-030124.diff.gz
Arun> This patch was needed to get the kernel to compile with
Arun> hugetlb enabled.
Your mailer seems to mangle the patch (it seems to convert TABs to
blanks). Could you resend the patch as a MIME attachment?
Thanks,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (188 preceding siblings ...)
2003-01-28 19:55 ` David Mosberger
@ 2003-01-28 21:34 ` Arun Sharma
2003-01-28 23:09 ` David Mosberger
` (25 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Arun Sharma @ 2003-01-28 21:34 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 714 bytes --]
David Mosberger <davidm@napali.hpl.hp.com> writes:
> >>>>> On 28 Jan 2003 11:44:05 -0800, Arun Sharma <arun.sharma@intel.com> said:
>
> Arun> David Mosberger <davidm@napali.hpl.hp.com> writes: Hi David,
>
> >> I just uploaded the latest ia64 patch to the usual location(s).
> >> You can get it from ftp.kernel.org/pub/linux/ia64/ports/v2.5/ in
> >> file:
>
> >> linux-2.5.59-ia64-030124.diff.gz
>
> Arun> This patch was needed to get the kernel to compile with
> Arun> hugetlb enabled.
>
> Your mailer seems to mangle the patch (it seems to convert TABs to
> blanks). Could you resend the patch as a MIME attachment?
>
My bad. I did a cut and paste. MIME attachment below.
-Arun
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: hugetlb.patch --]
[-- Type: text/x-patch, Size: 712 bytes --]
--- linux-2.5.59/arch/ia64/kernel/sys_ia64.c- Mon Jan 27 19:04:07 2003
+++ linux-2.5.59/arch/ia64/kernel/sys_ia64.c Mon Jan 27 19:04:16 2003
@@ -16,6 +16,7 @@
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/highuid.h>
+#include <linux/hugetlb.h>
#include <asm/shmparam.h>
#include <asm/uaccess.h>
--- linux-2.5.59/arch/ia64/mm/hugetlbpage.c- Mon Jan 27 18:49:51 2003
+++ linux-2.5.59/arch/ia64/mm/hugetlbpage.c Tue Jan 28 12:03:16 2003
@@ -242,7 +242,7 @@
ret = -ENOMEM;
goto out;
}
- add_to_page_cache(page, mapping, idx);
+ add_to_page_cache(page, mapping, idx, GFP_ATOMIC);
unlock_page(page);
}
set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE);
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (189 preceding siblings ...)
2003-01-28 21:34 ` Arun Sharma
@ 2003-01-28 23:09 ` David Mosberger
2003-01-29 4:27 ` Peter Chubb
` (24 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-28 23:09 UTC (permalink / raw)
To: linux-ia64
>>>>> On 28 Jan 2003 13:34:51 -0800, Arun Sharma <arun.sharma@intel.com> said:
Arun> My bad. I did a cut and paste. MIME attachment below.
This one worked fine.
Thanks!
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (190 preceding siblings ...)
2003-01-28 23:09 ` David Mosberger
@ 2003-01-29 4:27 ` Peter Chubb
2003-01-29 6:07 ` David Mosberger
` (23 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Peter Chubb @ 2003-01-29 4:27 UTC (permalink / raw)
To: linux-ia64
>>>>> "David" = David Mosberger <davidm@napali.hpl.hp.com> writes:
David> Oh, most importantly: you'll need a new assembler in order to
David> use this patch. There was a nasty bug up until Dec 18 last
David> year which basically made certain place-relative expressions
David> generate bad data. Fortunately, HJ Lu has fixed that bug and I
David> put a ready-for-use, static binary of a fixed assembler at:
David> ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz
David> As a measure of safety, I added a sanity check which will cause
David> "make" to refuse to build a kernel with a buggy assembler.
The sanity check doesn't work under nue --- I guess that the
compiler/assembler there is too old.
I see:
check-gas-asm.S: Assembler messages:
check-gas-asm.S:1: Error: Expected ':'
check-gas-asm.S:1: Error: Rest of line ignored. First ignored
character is ':'.
check-gas-asm.S:2: Error: backw. ref to unknown label "1:", 0 assumed.
objdump: check-gas-asm.o: No such file or directory
check-gas: [: !=: unary operator expected
(And then check-gas script echoes 'good' and things go on... even
though it's not good)
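The `[: !=: unary operator expected` message is the classic unquoted-variable bug: when objdump fails and produces no output, the test expression collapses to `[ != good ]`, a syntax error the script then ignores. A minimal reproduction and the quoted fix (names illustrative, not the actual check-gas script):

```shell
# res stands in for `objdump ...` output when the assembly step failed.
res=""

# The buggy form would be:  if [ $res != good ]
# which expands to `[ != good ]` and prints
# "[: !=: unary operator expected" instead of failing the check.
# Quoting the expansion keeps the test well-formed even when empty:
if [ "$res" != "good" ]; then
    msg="buggy or missing assembler"
else
    msg="good"
fi
echo "$msg"
```

With an empty `$res` this correctly reports the assembler as bad instead of falling through.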
Is there going to be a new nue available soon, with a later toolchain?
Peter C
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (191 preceding siblings ...)
2003-01-29 4:27 ` Peter Chubb
@ 2003-01-29 6:07 ` David Mosberger
2003-01-29 14:06 ` Erich Focht
` (22 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-29 6:07 UTC (permalink / raw)
To: linux-ia64
>>>>> On Wed, 29 Jan 2003 15:27:26 +1100, Peter Chubb <peter@chubb.wattle.id.au> said:
>>>>> "David" = David Mosberger <davidm@napali.hpl.hp.com> writes:
David> Oh, most importantly: you'll need a new assembler in order to
David> use this patch. There was a nasty bug up until Dec 18 last
David> year which basically made certain place-relative expressions
David> generate bad data. Fortunately, HJ Lu has fixed that bug and
David> I put a ready-for-use, static binary of a fixed assembler at:
David> ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz
David> As a measure of safety, I added a sanity check which will
David> cause "make" to refuse to build a kernel with a buggy
David> assembler.
Peter> The sanity check doesn't work under nue --- I guess that the
Peter> compiler/assembler there is too old.
Peter> I see: check-gas-asm.S: Assembler messages:
Peter> check-gas-asm.S:1: Error: Expected ':' check-gas-asm.S:1:
Peter> Error: Rest of line ignored. First ignored character is ':'.
Peter> check-gas-asm.S:2: Error: backw. ref to unknown label "1:", 0
Peter> assumed. objdump: check-gas-asm.o: No such file or directory
Peter> check-gas: [: !=: unary operator expected
Peter> (And then check-gas script echoes 'good' and things go
Peter> on... even though it's not good)
But even if the sanity-check worked, it would only tell you that your
assembler is too old.
Peter> Is there going to be a new nue available soon, with a later
Peter> toolchain?
I'd like to update the Ski simulator soon, but we don't have any plans
for updating NUE. Since NUE is all based on open source, anyone can
do that (and frankly, I just don't need/use NUE myself anymore; in
contrast to Ski, which is still very useful for low-level work).
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (192 preceding siblings ...)
2003-01-29 6:07 ` David Mosberger
@ 2003-01-29 14:06 ` Erich Focht
2003-01-29 17:10 ` Luck, Tony
` (21 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Erich Focht @ 2003-01-29 14:06 UTC (permalink / raw)
To: linux-ia64
On Saturday 25 January 2003 06:02, David Mosberger wrote:
> Oh, most importantly: you'll need a new assembler in order to use this
> patch. There was a nasty bug up until Dec 18 last year which
> basically made certain place-relative expressions generate bad data.
> Fortunately, HJ Lu has fixed that bug and I put a ready-for-use, static
> binary of a fixed assembler at:
>
> ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz
>
> As a measure of safety, I added a sanity check which will cause "make"
> to refuse to build a kernel with a buggy assembler.
David,
which gcc are you using? If I replace "as" with yours, I can't even
"make menuconfig" or "make config". Tried with gcc 2.96 (from RedHat
7.2) and a self compiled gcc 3.2. The executable (mconf) fails. If I
compile the kernel with the new "as", it doesn't boot. Am I the only
one having this problem?
Regards,
Erich
^ permalink raw reply [flat|nested] 217+ messages in thread
* RE: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (193 preceding siblings ...)
2003-01-29 14:06 ` Erich Focht
@ 2003-01-29 17:10 ` Luck, Tony
2003-01-29 17:48 ` Paul Bame
` (20 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Luck, Tony @ 2003-01-29 17:10 UTC (permalink / raw)
To: linux-ia64
I had the same problem with mconf when I built with 2.96 (it just
printed the name of the Kconfig file and exited with status 1,
strace didn't show any unexpected system call failures).
I got a successful build using gcc 3.1 (well after I did
s/.owner = THIS_MODULE;/.owner = THIS_MODULE,/
in a couple of the drivers/char/agp/*.c)
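Tony's substitution fixes designated initializers that ended in a semicolon instead of a comma. A sketch of the same edit applied with sed to a stand-in file (sample.c represents drivers/char/agp/*.c; `-i` is the GNU sed in-place flag):

```shell
# A struct initializer with the stray ';' that broke the build.
cat > sample.c <<'EOF'
static struct file_operations agp_fops = {
	.owner = THIS_MODULE;
	.open = agp_open,
};
EOF

# Escape the leading dot so it matches literally.
sed -i 's/\.owner = THIS_MODULE;/.owner = THIS_MODULE,/' sample.c
grep '\.owner' sample.c
```

After the edit the initializer list uses commas throughout, which is what the C grammar requires between designated initializers.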
I'm now up and running on a BigSur.
-Tony
> -----Original Message-----
> From: Erich Focht [mailto:efocht@ess.nec.de]
> Sent: Wednesday, January 29, 2003 6:06 AM
> To: davidm@hpl.hp.com
> Cc: linux-ia64@linuxia64.org
> Subject: Re: [Linux-ia64] kernel update (relative to 2.5.59)
>
>
> On Saturday 25 January 2003 06:02, David Mosberger wrote:
> > Oh, most importantly: you'll need a new assembler in order
> to use this
> > patch. There was a nasty bug up until Dec 18 last year which
> > basically made certain place-relative expressions generate bad data.
> > Fortunately, HJ Lu has fixed that bug and I put a
> ready-for-use, static
> > binary of a fixed assembler at:
> >
> > ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz
> >
> > As a measure of safety, I added a sanity check which will
> cause "make"
> > to refuse to build a kernel with a buggy assembler.
>
> David,
>
> which gcc are you using? If I replace "as" with yours, I can't even
> "make menuconfig" or "make config". Tried with gcc 2.96 (from RedHat
> 7.2) and a self compiled gcc 3.2. The executable (mconf) fails. If I
> compile the kernel with the new "as", it doesn't boot. Am I the only
> one having this problem?
>
> Regards,
> Erich
>
>
> _______________________________________________
> Linux-IA64 mailing list
> Linux-IA64@linuxia64.org
> http://lists.linuxia64.org/lists/listinfo/linux-ia64
>
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (194 preceding siblings ...)
2003-01-29 17:10 ` Luck, Tony
@ 2003-01-29 17:48 ` Paul Bame
2003-01-29 19:08 ` David Mosberger
` (19 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Paul Bame @ 2003-01-29 17:48 UTC (permalink / raw)
To: linux-ia64
> I had the same problem with mconf when I built with 2.96 (it just
> printed the name of the Kconfig file and exited with status 1,
> strace didn't show any unexpected system call failures).
I had the same experience with the new assembler and gcc 2.96
with scripts/conf used by 'make oldconfig'. Traced it to
fopen(name,"r") returning 0 with errno=EINVAL. open(2) was never called
according to strace and 'name' was perfectly fine. Smelled like a tool
chain problem so moved to Debian and it went away (gcc version 3.2.2
20030124 (Debian prerelease))
-P
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.59)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (195 preceding siblings ...)
2003-01-29 17:48 ` Paul Bame
@ 2003-01-29 19:08 ` David Mosberger
2003-02-12 23:26 ` [Linux-ia64] kernel update (relative to 2.5.60) David Mosberger
` (18 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-01-29 19:08 UTC (permalink / raw)
To: linux-ia64
>>>>> On Wed, 29 Jan 2003 15:06:29 +0100, Erich Focht <efocht@ess.nec.de> said:
Erich> which gcc are you using? If I replace "as" with yours, I
Erich> can't even "make menuconfig" or "make config". Tried with gcc
Erich> 2.96 (from RedHat 7.2) and a self compiled gcc 3.2. The
Erich> executable (mconf) fails. If I compile the kernel with the
Erich> new "as", it doesn't boot. Am I the only one having this
Erich> problem?
I tried with gcc3.1. I tried not to break 2.96 but apparently it did
break. Perhaps it's just time to say that 2.96 is too old and too
buggy to build the v2.5 kernel. Given that (a) 2.96 is known to
miscompile the MCA code and (b) the distros are moving to gcc3.x
anyhow, I think this isn't unreasonable.
And for those who hate to build the toolchain themselves, there is
gcc-3.1.tar.gz at ftp.hpl.hp.com. Just extract it, replace the
assembler, then do:
make CROSS_COMPILE=/opt/gcc3.1/bin
and you should be in business.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (196 preceding siblings ...)
2003-01-29 19:08 ` David Mosberger
@ 2003-02-12 23:26 ` David Mosberger
2003-02-13 5:52 ` j-nomura
` (17 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-02-12 23:26 UTC (permalink / raw)
To: linux-ia64
An updated ia64 patch is now at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/linux-2.5.60-ia64-030212.diff.gz
This is mostly a sync with Linus' tree. The patch above contains some
non-ia64 patches which are there because Linus pushed them into his
bitkeeper tree after releasing 2.5.60 and before I picked it up.
do_gettimeofday() is now lock-free and this required some
modifications to the logic that ensures that no one can observe time
going backwards. Actually, since multiple do_gettimeofday() calls now
can execute truly in parallel, the very notion of "time not going
backwards" becomes more interesting. So the refined requirement is
that no causally dependent do_gettimeofday() calls will ever observe
time going backwards. The logic that I'm now using guarantees that
(in the absence of bugs ;-), but there is a tiny possibility that time
sometimes will jump forward a little (again, this is rather unlikely
and the amount of the forward jump is normally bounded by the timer
tick period, which is about 1ms on ia64). Based on my testing, it all
works as expected and the observed behavior is virtually
indistinguishable between 2.5.59 and 2.5.60 (apart from the 2.5.60
gettimeofday() being much more scalable and slightly faster). But I'd
definitely appreciate it if interested folks looked over the code and
could try to find holes in it.
For the record, I attached some measurements. Thought they might come
in handy if/when someone develops a lightweight version of
gettimeofday (Peter? ;-)). In the tables below, "lat" is the average
latency of a gettimeofday() system call, "max" is the maximum elapsed
time between a pair of gettimeofday() calls in a sequence of 10
million calls. Of course, the latter number can be quite large, if
the task got rescheduled or if something disabled interrupts for a
long time. But still, tracking this difference on a reasonably idle
system can give some interesting insights into how things work. Each
table contains 10 runs of the test program and all numbers are in
micro-seconds.
Enjoy,
--david
v2.5.60:
1-way McKinley 2-way McKinley
lat max lat max
0.568107 100 0.574615 101
0.569987 307 0.574272 101
0.567009 100 0.573913 101
0.56967 100 0.574309 236
0.564719 100 0.575047 102
0.563109 100 0.576051 102
0.569994 213 0.574715 101
0.567091 100 0.575147 101
0.563048 100 0.575479 101
0.569834 100 0.576054 102
v2.5.59:
1-way McKinley 2-way McKinley
0.589787 177 0.577409 100
0.590431 100 0.57823 100
0.591788 358 0.579722 101
0.590739 100 0.577402 101
0.590888 189 0.579319 100
0.590353 100 0.578492 100
0.590847 100 0.576145 101
0.590158 100 0.573804 101
0.589569 212 0.579002 100
0.590141 100 0.580514 100
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (197 preceding siblings ...)
2003-02-12 23:26 ` [Linux-ia64] kernel update (relative to 2.5.60) David Mosberger
@ 2003-02-13 5:52 ` j-nomura
2003-02-13 17:53 ` Grant Grundler
` (16 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: j-nomura @ 2003-02-13 5:52 UTC (permalink / raw)
To: linux-ia64
Hello,
I had to apply the patch below to build with CONFIG_NUMA.
> An updated ia64 patch is now at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/linux-2.5.60-ia64-030212.diff.gz
>
> This is mostly a sync with Linus' tree. The patch above contains some
> non-ia64 patches which are there because Linus pushed them into his
> bitkeeper tree after releasing 2.5.60 and before I picked it up.
--- linux.old/include/asm-ia64/topology.h
+++ linux/include/asm-ia64/topology.h
@@ -26,7 +26,7 @@
/*
* Returns a bitmask of CPUs on Node 'node'.
*/
-#define node_to_cpumask(node) (node_to_cpumask[node])
+#define node_to_cpumask(node) (node_to_cpu_mask[node])
#else
#define cpu_to_node(cpu) (0)
Best regards.
--
NOMURA, Jun'ichi <j-nomura@ce.jp.nec.com, nomura@hpc.bs1.fc.nec.co.jp>
HPC Operating System Group, 1st Computers Software Division,
Computers Software Operations Unit, NEC Solutions.
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (198 preceding siblings ...)
2003-02-13 5:52 ` j-nomura
@ 2003-02-13 17:53 ` Grant Grundler
2003-02-13 18:36 ` David Mosberger
` (15 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Grant Grundler @ 2003-02-13 17:53 UTC (permalink / raw)
To: linux-ia64
On Wed, Feb 12, 2003 at 03:26:10PM -0800, David Mosberger wrote:
> An updated ia64 patch is now at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/linux-2.5.60-ia64-030212.diff.gz
>
gcc -Wp,-MD,arch/ia64/mm/.hugetlbpage.o.d -D__KERNEL__ -Iinclude -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 -fno-strict-aliasing -fno-common -pipe -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32 -frename-registers --param max-inline-insns=5000 -fomit-frame-pointer -nostdinc -iwithprefix include -mconstant-gp -DKBUILD_BASENAME=hugetlbpage -DKBUILD_MODNAME=hugetlbpage -c -o arch/ia64/mm/hugetlbpage.o arch/ia64/mm/hugetlbpage.c
arch/ia64/mm/hugetlbpage.c: In function `set_hugetlb_mem_size':
arch/ia64/mm/hugetlbpage.c:305: warning: unused variable `j'
arch/ia64/mm/hugetlbpage.c:306: warning: unused variable `map'
arch/ia64/mm/hugetlbpage.c: At top level:
arch/ia64/mm/hugetlbpage.c:405: `zap_hugetlb_resources' undeclared here (not in a function)
arch/ia64/mm/hugetlbpage.c:405: initializer element is not constant
arch/ia64/mm/hugetlbpage.c:405: (near initialization for `hugetlb_vm_ops.close')
make[1]: *** [arch/ia64/mm/hugetlbpage.o] Error 1
make: *** [arch/ia64/mm] Error 2
I don't have time to learn about HUGE_TLB and will disable for now.
TPC-C folks will want HUGE_TLB working.
thanks,
grant
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (199 preceding siblings ...)
2003-02-13 17:53 ` Grant Grundler
@ 2003-02-13 18:36 ` David Mosberger
2003-02-13 19:17 ` Grant Grundler
` (14 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-02-13 18:36 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 13 Feb 2003 09:53:24 -0800, grundler@cup.hp.com (Grant Grundler) said:
Grant> I don't have time to learn about HUGE_TLB and will disable
Grant> for now. TPC-C folks will want HUGE_TLB working.
Andrew made some changes/fixes to the hugepage support and the
necessary updates aren't in the ia64 patch for now. Rohit is working
on fixing that.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (200 preceding siblings ...)
2003-02-13 18:36 ` David Mosberger
@ 2003-02-13 19:17 ` Grant Grundler
2003-02-13 20:00 ` David Mosberger
` (13 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Grant Grundler @ 2003-02-13 19:17 UTC (permalink / raw)
To: linux-ia64
On Wed, Feb 12, 2003 at 03:26:10PM -0800, David Mosberger wrote:
> An updated ia64 patch is now at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/linux-2.5.60-ia64-030212.diff.gz
*sigh*. Next problem is cpqfc driver doesn't build.
I'll disable that too since I don't really need it.
I'd really like to take a whack at the IDE problem you reported
2 weeks ago instead of chasing misc build failures.
Do you have a .config available that builds/works on HP ZX1 platforms?
(well, obviously not on the ZX2000 you reported problems with)
Maybe drop that on kernel.org as linux-2.5.60-ia64-030212.config ?
gcc -Wp,-MD,drivers/scsi/.cpqfcTSinit.o.d -D__KERNEL__ -Iinclude -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 -fno-strict-aliasing -fno-common -pipe -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32 -frename-registers --param max-inline-insns=5000 -fomit-frame-pointer -nostdinc -iwithprefix include -mconstant-gp -DKBUILD_BASENAME=cpqfcTSinit -DKBUILD_MODNAME=cpqfc -c -o drivers/scsi/cpqfcTSinit.o drivers/scsi/cpqfcTSinit.c
drivers/scsi/cpqfcTSinit.c: In function `cpqfcTS_proc_info':
drivers/scsi/cpqfcTSinit.c:968: structure has no member named `channel'
drivers/scsi/cpqfcTSinit.c:970: structure has no member named `target'
drivers/scsi/cpqfcTSinit.c: In function `cpqfcTS_TargetDeviceReset':
drivers/scsi/cpqfcTSinit.c:1615: warning: implicit declaration of function `scsi_do_cmd'
make[2]: *** [drivers/scsi/cpqfcTSinit.o] Error 1
make[1]: *** [drivers/scsi] Error 2
make: *** [drivers] Error 2
grant
^ permalink raw reply [flat|nested] 217+ messages in thread
* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (201 preceding siblings ...)
2003-02-13 19:17 ` Grant Grundler
@ 2003-02-13 20:00 ` David Mosberger
2003-02-13 20:11 ` Grant Grundler
` (12 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-02-13 20:00 UTC (permalink / raw)
To: linux-ia64
>>>>> On Thu, 13 Feb 2003 11:17:42 -0800, grundler@cup.hp.com (Grant Grundler) said:
Grant> On Wed, Feb 12, 2003 at 03:26:10PM -0800, David Mosberger
Grant> wrote:
>> An updated ia64 patch is now at:
>> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/linux-2.5.60-ia64-030212.diff.gz
Grant> *sigh*. Next problem is cpqfc driver doesn't build.
What you're noticing is that not enough people are testing 2.5.xx. I
don't think that's good either, because 2.6 will be upon us sooner or
later and it'd be ugly if everybody waited until then to do some
testing.
My role as a maintainer is not to test each and every possible
configuration option and guarantee that things work. Even if I wanted
to (which I don't) I couldn't do that for lack of time. My role is to
make sure that if someone sends me a fix or an enhancement that it
gets incorporated into the source tree within a reasonable amount of
time (or to give feedback, in case something is wrong with the
fix/enhancement). I'm sure everybody on this list understands this,
but it makes me feel better to mention it from time to time... ;-)
So, to all who haven't tried 2.5 recently but want things to work in
2.6: now is a good time to do some testing. Apart from the lack of
module support and driver issues, the kernel proper is actually in
very good shape. Which reminds me: nobody has expressed interest so
far in working on the kernel module loader. Somebody will have to
fight that battle and it's not gonna be me.
Grant> I'd really like to take a whack at the IDE problem you
Grant> reported 2 weeks ago instead of chasing misc build failures.
Grant> Do you have a .config available that builds/works on HP ZX1
Grant> platforms?
Sure, I attached the .config I'm using for 2.5.60 on zx1-based
machines (it should work without changes for 2.5.59).
--david
--
#
# Automatically generated make config: don't edit
#
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
#
# General setup
#
CONFIG_SYSVIPC=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_SYSCTL=y
CONFIG_LOG_BUF_SHIFT=16
#
# Loadable module support
#
# CONFIG_MODULES is not set
#
# Processor type and features
#
CONFIG_IA64=y
CONFIG_MMU=y
CONFIG_SWAP=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_ITANIUM is not set
CONFIG_MCKINLEY=y
# CONFIG_IA64_GENERIC is not set
# CONFIG_IA64_DIG is not set
# CONFIG_IA64_HP_SIM is not set
CONFIG_IA64_HP_ZX1=y
# CONFIG_IA64_SGI_SN1 is not set
# CONFIG_IA64_SGI_SN2 is not set
# CONFIG_IA64_PAGE_SIZE_4KB is not set
# CONFIG_IA64_PAGE_SIZE_8KB is not set
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_ACPI=y
CONFIG_ACPI_EFI=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_ACPI_KERNEL_CONFIG=y
CONFIG_IA64_L1_CACHE_SHIFT=7
# CONFIG_MCKINLEY_ASTEP_SPECIFIC is not set
# CONFIG_NUMA is not set
CONFIG_VIRTUAL_MEM_MAP=y
CONFIG_IA64_MCA=y
CONFIG_PM=y
CONFIG_IOSAPIC=y
CONFIG_KCORE_ELF=y
CONFIG_FORCE_MAX_ZONEORDER=18
# CONFIG_HUGETLB_PAGE is not set
CONFIG_SMP=y
CONFIG_IA32_SUPPORT=y
CONFIG_COMPAT=y
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
CONFIG_EFI_VARS=y
CONFIG_NR_CPUS=64
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_MISC=y
#
# ACPI Support
#
CONFIG_ACPI_BOOT=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_FAN=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_THERMAL=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_BUS=y
CONFIG_ACPI_POWER=y
CONFIG_ACPI_PCI=y
CONFIG_ACPI_SYSTEM=y
CONFIG_PCI=y
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
CONFIG_HOTPLUG=y
#
# PCI Hotplug Support
#
# CONFIG_HOTPLUG_PCI is not set
#
# PCMCIA/CardBus support
#
# CONFIG_PCMCIA is not set
#
# Parallel port support
#
# CONFIG_PARPORT is not set
#
# Memory Technology Devices (MTD)
#
# CONFIG_MTD is not set
#
# Plug and Play support
#
# CONFIG_PNP is not set
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
#
# IEEE 1394 (FireWire) support (EXPERIMENTAL)
#
# CONFIG_IEEE1394 is not set
#
# I2O device support
#
# CONFIG_I2O is not set
#
# Multi-device support (RAID and LVM)
#
# CONFIG_MD is not set
#
# Fusion MPT device support
#
CONFIG_FUSION=y
CONFIG_FUSION_BOOT=y
CONFIG_FUSION_MAX_SGE=40
#
# ATA/ATAPI/MFM/RLL support
#
CONFIG_IDE=y
#
# IDE, ATA and ATAPI Block devices
#
CONFIG_BLK_DEV_IDE=y
#
# Please see Documentation/ide.txt for help/info on IDE drives
#
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_IDEDISK=y
CONFIG_IDEDISK_MULTI_MODE=y
# CONFIG_IDEDISK_STROKE is not set
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_IDEFLOPPY=y
# CONFIG_BLK_DEV_IDESCSI is not set
CONFIG_IDE_TASK_IOCTL=y
#
# IDE chipset support/bugfixes
#
CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_BLK_DEV_GENERIC is not set
CONFIG_IDEPCI_SHARE_IRQ=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_IDE_TCQ is not set
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
# CONFIG_IDEDMA_PCI_AUTO is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_PCI_WIP is not set
CONFIG_BLK_DEV_ADMA=y
# CONFIG_BLK_DEV_AEC62XX is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD74XX is not set
CONFIG_BLK_DEV_CMD64X=y
# CONFIG_BLK_DEV_TRIFLEX is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5520 is not set
# CONFIG_BLK_DEV_HPT34X is not set
# CONFIG_BLK_DEV_HPT366 is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_BLK_DEV_PDC202XX_NEW is not set
# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIIMAGE is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_IDEDMA_IVB is not set
CONFIG_BLK_DEV_IDE_MODES=y
#
# SCSI support
#
CONFIG_SCSI=y
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=y
CONFIG_CHR_DEV_OSST=y
CONFIG_BLK_DEV_SR=y
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=y
#
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
#
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_REPORT_LUNS=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
#
# SCSI low-level drivers
#
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
CONFIG_SCSI_AIC7XXX_OLD=y
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_IN2000 is not set
# CONFIG_SCSI_AM53C974 is not set
CONFIG_SCSI_MEGARAID=y
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_CPQFCTS is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_EATA_DMA is not set
# CONFIG_SCSI_EATA_PIO is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_GENERIC_NCR5380 is not set
# CONFIG_SCSI_GENERIC_NCR5380_MMIO is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_NCR53C7xx is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_NCR53C8XX is not set
CONFIG_SCSI_SYM53C8XX=y
CONFIG_SCSI_NCR53C8XX_DEFAULT_TAGS=8
CONFIG_SCSI_NCR53C8XX_MAX_TAGS=32
CONFIG_SCSI_NCR53C8XX_SYNC=20
# CONFIG_SCSI_NCR53C8XX_PROFILE is not set
# CONFIG_SCSI_NCR53C8XX_IOMAPPED is not set
# CONFIG_SCSI_NCR53C8XX_PQS_PDS is not set
# CONFIG_SCSI_NCR53C8XX_SYMBIOS_COMPAT is not set
# CONFIG_SCSI_PCI2000 is not set
# CONFIG_SCSI_PCI2220I is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
CONFIG_SCSI_QLOGIC_1280=y
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_U14_34F is not set
# CONFIG_SCSI_NSP32 is not set
# CONFIG_SCSI_DEBUG is not set
#
# Networking support
#
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
# CONFIG_NETLINK_DEV is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
CONFIG_FILTER=y
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
# CONFIG_ARPD is not set
# CONFIG_INET_ECN is not set
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_XFRM_USER is not set
#
# IP: Netfilter Configuration
#
# CONFIG_IP_NF_CONNTRACK is not set
# CONFIG_IP_NF_QUEUE is not set
# CONFIG_IP_NF_IPTABLES is not set
CONFIG_IP_NF_ARPTABLES=y
# CONFIG_IP_NF_ARPFILTER is not set
# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
# CONFIG_IP_NF_COMPAT_IPFWADM is not set
# CONFIG_IPV6 is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
CONFIG_IPV6_SCTP__=y
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_LLC is not set
# CONFIG_DECNET is not set
# CONFIG_BRIDGE is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_FASTROUTE is not set
# CONFIG_NET_HW_FLOWCONTROL is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
CONFIG_NETDEVICES=y
#
# ARCnet devices
#
# CONFIG_ARCNET is not set
CONFIG_DUMMY=y
CONFIG_BONDING=y
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
# CONFIG_ETHERTAP is not set
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_NET_VENDOR_3COM is not set
#
# Tulip family network device support
#
# CONFIG_NET_TULIP is not set
# CONFIG_HP100 is not set
CONFIG_NET_PCI=y
# CONFIG_PCNET32 is not set
# CONFIG_AMD8111_ETH is not set
# CONFIG_ADAPTEC_STARFIRE is not set
# CONFIG_B44 is not set
# CONFIG_DGRS is not set
CONFIG_EEPRO100=y
# CONFIG_E100 is not set
# CONFIG_FEALNX is not set
# CONFIG_NATSEMI is not set
# CONFIG_NE2K_PCI is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
# CONFIG_SIS900 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SUNDANCE is not set
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
#
# Ethernet (1000 Mbit)
#
# CONFIG_ACENIC is not set
# CONFIG_DL2K is not set
# CONFIG_E1000 is not set
# CONFIG_NS83820 is not set
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SK98LIN is not set
CONFIG_TIGON3=y
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
#
# Wireless LAN (non-hamradio)
#
# CONFIG_NET_RADIO is not set
#
# Token Ring devices (depends on LLC=y)
#
# CONFIG_NET_FC is not set
# CONFIG_RCPCI is not set
# CONFIG_SHAPER is not set
#
# Wan interfaces
#
# CONFIG_WAN is not set
#
# Amateur Radio support
#
# CONFIG_HAMRADIO is not set
#
# ISDN subsystem
#
# CONFIG_ISDN_BOOL is not set
#
# CD-ROM drivers (not for SCSI or IDE/ATAPI drives)
#
# CONFIG_CD_NO_IDESCSI is not set
#
# Input device support
#
CONFIG_INPUT=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=y
# CONFIG_INPUT_TSDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
#
# Input I/O drivers
#
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y
CONFIG_SERIO=y
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
#
# Character devices
#
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
# CONFIG_SERIAL_NONSTANDARD is not set
#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_ACPI=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_HCDP=y
CONFIG_SERIAL_8250_EXTENDED=y
# CONFIG_SERIAL_8250_MANY_PORTS is not set
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_MULTIPORT is not set
# CONFIG_SERIAL_8250_RSA is not set
#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_UNIX98_PTY_COUNT=256
#
# I2C support
#
# CONFIG_I2C is not set
#
# I2C Hardware Sensors Mainboard support
#
#
# I2C Hardware Sensors Chip support
#
#
# Mice
#
# CONFIG_BUSMOUSE is not set
# CONFIG_QIC02_TAPE is not set
#
# IPMI
#
# CONFIG_IPMI_HANDLER is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_INTEL_RNG is not set
# CONFIG_NVRAM is not set
# CONFIG_GEN_RTC is not set
CONFIG_EFI_RTC=y
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_FTAPE is not set
CONFIG_AGP=y
# CONFIG_AGP3 is not set
# CONFIG_AGP_INTEL is not set
# CONFIG_AGP_VIA is not set
# CONFIG_AGP_AMD is not set
# CONFIG_AGP_SIS is not set
# CONFIG_AGP_ALI is not set
# CONFIG_AGP_SWORKS is not set
# CONFIG_AGP_AMD_8151 is not set
# CONFIG_AGP_I460 is not set
CONFIG_AGP_HP_ZX1=y
CONFIG_DRM=y
CONFIG_DRM_TDFX=y
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=y
# CONFIG_DRM_I810 is not set
# CONFIG_DRM_I830 is not set
# CONFIG_DRM_MGA is not set
# CONFIG_RAW_DRIVER is not set
# CONFIG_HANGCHECK_TIMER is not set
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
#
# File systems
#
# CONFIG_QUOTA is not set
CONFIG_AUTOFS_FS=y
# CONFIG_AUTOFS4_FS is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
# CONFIG_EXT3_FS_POSIX_ACL is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_TMPFS is not set
CONFIG_RAMFS=y
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
# CONFIG_ZISOFS is not set
# CONFIG_JFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_NTFS_FS is not set
# CONFIG_HPFS_FS is not set
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
# CONFIG_SYSV_FS is not set
CONFIG_UDF_FS=y
# CONFIG_UFS_FS is not set
# CONFIG_XFS_FS is not set
#
# Network File Systems
#
# CONFIG_CODA_FS is not set
# CONFIG_INTERMEZZO_FS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFS_V4=y
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
# CONFIG_NFSD_V4 is not set
# CONFIG_NFSD_TCP is not set
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=y
# CONFIG_CIFS is not set
# CONFIG_SMB_FS is not set
# CONFIG_NCP_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_FS_MBCACHE=y
CONFIG_FS_POSIX_ACL=y
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
# CONFIG_BSD_DISKLABEL is not set
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
# CONFIG_UNIXWARE_DISKLABEL is not set
# CONFIG_LDM_PARTITION is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
CONFIG_EFI_PARTITION=y
CONFIG_NLS=y
#
# Native Language Support
#
CONFIG_NLS_DEFAULT="iso8859-1"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y
CONFIG_NLS_CODEPAGE_850=y
CONFIG_NLS_CODEPAGE_852=y
CONFIG_NLS_CODEPAGE_855=y
CONFIG_NLS_CODEPAGE_857=y
CONFIG_NLS_CODEPAGE_860=y
CONFIG_NLS_CODEPAGE_861=y
CONFIG_NLS_CODEPAGE_862=y
CONFIG_NLS_CODEPAGE_863=y
CONFIG_NLS_CODEPAGE_864=y
CONFIG_NLS_CODEPAGE_865=y
CONFIG_NLS_CODEPAGE_866=y
CONFIG_NLS_CODEPAGE_869=y
CONFIG_NLS_CODEPAGE_936=y
CONFIG_NLS_CODEPAGE_950=y
CONFIG_NLS_CODEPAGE_932=y
CONFIG_NLS_CODEPAGE_949=y
CONFIG_NLS_CODEPAGE_874=y
CONFIG_NLS_ISO8859_8=y
# CONFIG_NLS_CODEPAGE_1250 is not set
CONFIG_NLS_CODEPAGE_1251=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=y
CONFIG_NLS_ISO8859_3=y
CONFIG_NLS_ISO8859_4=y
CONFIG_NLS_ISO8859_5=y
CONFIG_NLS_ISO8859_6=y
CONFIG_NLS_ISO8859_7=y
CONFIG_NLS_ISO8859_9=y
CONFIG_NLS_ISO8859_13=y
CONFIG_NLS_ISO8859_14=y
CONFIG_NLS_ISO8859_15=y
CONFIG_NLS_KOI8_R=y
CONFIG_NLS_KOI8_U=y
CONFIG_NLS_UTF8=y
#
# Graphics support
#
CONFIG_FB=y
# CONFIG_FB_CLGEN is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_VIRTUAL is not set
#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_MDA_CONSOLE is not set
CONFIG_DUMMY_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE is not set
#
# Sound
#
CONFIG_SOUND=y
#
# Advanced Linux Sound Architecture
#
# CONFIG_SND is not set
#
# Open Sound System
#
# CONFIG_SOUND_PRIME is not set
#
# USB support
#
# CONFIG_USB is not set
#
# Library routines
#
CONFIG_CRC32=y
#
# Bluetooth support
#
# CONFIG_BT is not set
#
# Kernel hacking
#
CONFIG_FSYS=y
# CONFIG_IA64_GRANULE_16MB is not set
CONFIG_IA64_GRANULE_64MB=y
CONFIG_DEBUG_KERNEL=y
CONFIG_KALLSYMS=y
CONFIG_IA64_PRINT_HAZARDS=y
# CONFIG_DISABLE_VHPT is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_IA64_EARLY_PRINTK=y
CONFIG_IA64_EARLY_PRINTK_UART=y
CONFIG_IA64_EARLY_PRINTK_UART_BASE=0xff5e0000
CONFIG_IA64_EARLY_PRINTK_VGA=y
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
#
# Security options
#
# CONFIG_SECURITY is not set
#
# Cryptographic options
#
# CONFIG_CRYPTO is not set
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (202 preceding siblings ...)
2003-02-13 20:00 ` David Mosberger
@ 2003-02-13 20:11 ` Grant Grundler
2003-02-18 19:52 ` Jesse Barnes
` (11 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Grant Grundler @ 2003-02-13 20:11 UTC (permalink / raw)
To: linux-ia64
On Thu, Feb 13, 2003 at 12:00:23PM -0800, David Mosberger wrote:
> What you're noticing is that not enough people are testing 2.5.xx. I
> don't think that's good either, because 2.6 will be upon us sooner or
> later and it'd be ugly if everybody waited until then to do some
> testing.
yup - and I get the impression cpqfc driver is an orphaned sf.net project.
I also cc'd the cpqfc project at sf.net.
The 2.5.60 issue will be a test if it is or isn't.
> My role as a maintainer is not to test each and every possible
> configuration option and guarantee that things work.
That would be rather unrealistic.
My goal was to just report it to the list.
I'll direct my "reply" at that list to make it a bit clearer.
> ...I'm sure everybody on this list understands this,
> but it makes me feel better to mention it from time to time... ;-)
*G*. Not everyone does...I've had to clarify it more than once
in meetings with vendors and in-house.
> Sure, I attached the .config I'm using for 2.5.60 on zx1-based
> machines (it should work without changes for 2.5.59).
cool - tnx.
grant
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to 2.5.60)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (203 preceding siblings ...)
2003-02-13 20:11 ` Grant Grundler
@ 2003-02-18 19:52 ` Jesse Barnes
2003-03-07 8:19 ` [Linux-ia64] kernel update (relative to v2.5.64) David Mosberger
` (10 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Jesse Barnes @ 2003-02-18 19:52 UTC (permalink / raw)
To: linux-ia64
On Wed, Feb 12, 2003 at 03:26:10PM -0800, David Mosberger wrote:
> An updated ia64 patch is now at:
>
> ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/v2.5/linux-2.5.60-ia64-030212.diff.gz
Thanks for the update. Here's an error I got when trying to build it
for my Big Sur machine:
gcc -Wp,-MD,drivers/char/agp/.i460-agp.o.d -D__KERNEL__ -Iinclude -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 -fno-strict-aliasing -fno-common -pipe -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32 -frename-registers --param max-inline-insns=5000 -fomit-frame-pointer -nostdinc -iwithprefix include -mconstant-gp -DKBUILD_BASENAME=i460_agp -DKBUILD_MODNAME=i460_agp -c -o drivers/char/agp/i460-agp.o drivers/char/agp/i460-agp.c
drivers/char/agp/i460-agp.c:563: parse error before ';' token
make[3]: *** [drivers/char/agp/i460-agp.o] Error 1
make[2]: *** [drivers/char/agp] Error 2
make[1]: *** [drivers/char] Error 2
make: *** [drivers] Error 2
Seems that an extra ';' got into the structure initialization,
probably during conversion to C99 syle. The following patch gets me
to this link error:
ld -static -T arch/ia64/vmlinux.lds.s arch/ia64/kernel/head.o arch/ia64/kernel/init_task.o init/built-in.o --start-group usr/built-in.o arch/ia64/kernel/built-in.o arch/ia64/mm/built-in.o arch/ia64/ia32/built-in.o arch/ia64/dig/built-in.o kernel/built-in.o mm/built-in.o fs/built-in.o ipc/built-in.o security/built-in.o crypto/built-in.o lib/lib.a arch/ia64/lib/lib.a drivers/built-in.o sound/built-in.o arch/ia64/pci/built-in.o net/built-in.o --end-group -o vmlinux
drivers/built-in.o(.text+0xa51f1): In function `agp_return_size':
: undefined reference to `.L111'
drivers/built-in.o(.text+0xa52d1): In function `agp_num_entries':
: undefined reference to `.L121'
make: *** [vmlinux] Error 1
Thanks,
Jesse
--- linux-2.5.60-ia64/drivers/char/agp/i460-agp.c.orig 2003-02-13 10:28:40.000000000 -0800
+++ linux-2.5.60-ia64/drivers/char/agp/i460-agp.c 2003-02-13 10:23:04.000000000 -0800
@@ -560,7 +560,7 @@
}
static struct agp_driver i460_agp_driver = {
- .owner = THIS_MODULE;
+ .owner = THIS_MODULE
};
static int __init agp_intel_i460_probe (struct pci_dev *dev, const struct pci_device_id *ent)
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to v2.5.64)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (204 preceding siblings ...)
2003-02-18 19:52 ` Jesse Barnes
@ 2003-03-07 8:19 ` David Mosberger
2003-04-12 4:28 ` [Linux-ia64] kernel update (relative to v2.5.67) David Mosberger
` (9 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-03-07 8:19 UTC (permalink / raw)
To: linux-ia64
A new ia64 kernel patch is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.5.64-ia64-030307.diff.gz.
See the bk tree for detailed changelog
(http://lia64.bkbits.net:8080/to-linus-2.5/).
This kernel seems to work fine on the Ski simulator and HP zx1-based
machines; the sync with 2.5.64 was reasonably straightforward, though,
so I don't expect any problems on other machines (famous last
words...).
Enjoy,
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (205 preceding siblings ...)
2003-03-07 8:19 ` [Linux-ia64] kernel update (relative to v2.5.64) David Mosberger
@ 2003-04-12 4:28 ` David Mosberger
2003-04-14 12:55 ` Takayoshi Kochi
` (8 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-04-12 4:28 UTC (permalink / raw)
To: linux-ia64
At long last, here is an ia64 patch relative to 2.5.67. As usual, the
ia64 kernel patch is at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.5.67-ia64-030411.diff.gz.
and a detailed changelog is available at
(http://lia64.bkbits.net:8080/to-linus-2.5/).
One of the more significant (though small) changes: I replaced the
show_regs() in the MCA INIT platform handler with some code that dumps
the min-state area instead. The pt_regs structure doesn't contain
terribly interesting data when the INIT handler gets called, and the
backtrace never worked anyhow (because PAL/SAL switch to a different
memory stack before calling the INIT handler). Also, I added a 5
second delay before the dump starts. This turns out to be handy when
the console is multiplexed as is the case for zx1-based machines. On
those, you can generate an INIT event via the baseboard management
controller's command-line interface. But to avoid losing output, you
have to switch back in time before the dump starts. A 5 second delay
achieves that. If someone has any objections with the delay, let me
know. I don't think it should be an issue, though, because after the
dump, the machine enters an endless loop anyhow, so what difference
could 5 seconds make? Having said that, it _would_ be nice if the
INIT handler could be improved to support resuming from INIT events
(when this is possible). Anyone interested?
This kernel was tested on various zx1-based machines and the HP Ski
simulator. I also tried it on a Big Sur, but it failed with ACPI
errors (attached below). Perhaps someone who actually understands
ACPI could look into this? It's also possible that the firmware on my
Big Sur is too old. So before digging into this too deep, you may
want to make sure you have the latest firmware installed.
Also, folks with hp zx1-based machines with a remote console management
(ECI) card installed: please make sure you have option
CONFIG_ACPI_8250 turned OFF. Otherwise, your kernel will get stuck
right after the init process starts running. Since the 8250_acpi.c
code is only needed for very old zx1-prototypes anyhow, this shouldn't
be any loss.
While it took a while to get this kernel up to speed, it's now working
quite well for me (I'm even running it on my deskside workstation).
Thanks to Alex's sba_iommu, even IDE seems to work nicely again (it
may have been running just fine for some time though, on non-zx1
machines).
So, if you haven't tried 2.5 recently, now might be a good time.
Oh, finally: I hope I didn't miss any patches, but if I did, my
apologies. Please resend and I'll try to do better next time.
Enjoy,
--david
ACPI: Subsystem revision 20030328
ACPI-0341: *** Error: Handler for [PCI_Config] returned AE_ERROR
ACPI-1121: *** Error: Method execution failed [\_SB_.CBN_._BBN] (Node e000000005340900), AE_ERROR
ACPI-0098: *** Error: Method execution failed [\_SB_.CBN_._BBN] (Node e000000005340900), AE_ERROR
kernel unaligned access to 0x000000000000059d, ip=0xe0000000048a9821
ACPI-0341: *** Error: Handler for [PCI_Config] returned AE_ERROR
ACPI-1121: *** Error: Method execution failed [\PLAT] (Node e00000003f22f840), AE_AML_NO_RETURN_VALUE
ACPI-1121: *** Error: Method execution failed [\_SB_.PCI2._STA] (Node e000000005347e00), AE_AML_NO_RETURN_VALUE
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (206 preceding siblings ...)
2003-04-12 4:28 ` [Linux-ia64] kernel update (relative to v2.5.67) David Mosberger
@ 2003-04-14 12:55 ` Takayoshi Kochi
2003-04-14 17:00 ` Howell, David P
` (7 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Takayoshi Kochi @ 2003-04-14 12:55 UTC (permalink / raw)
To: linux-ia64
[Added Cc: to Greg KH and acpi-devel list]
From: David Mosberger <davidm@napali.hpl.hp.com>
Subject: [Linux-ia64] kernel update (relative to v2.5.67)
Date: Fri, 11 Apr 2003 21:28:29 -0700
Message-ID: <200304120428.h3C4STTw004554@napali.hpl.hp.com>
> This kernel was tested on various zx1-based machines and the HP Ski
> simulator. I also tried it on a Big Sur, but it failed with ACPI
> errors (attached below). Perhaps someone who actually understands
> ACPI could look into this? It's also possible that the firmware on my
> Big Sur is too old. So before digging into this too deep, you may
> want to make sure you have the latest firmware installed.
<snip>
> ACPI: Subsystem revision 20030328
> ACPI-0341: *** Error: Handler for [PCI_Config] returned AE_ERROR
> ACPI-1121: *** Error: Method execution failed [\_SB_.CBN_._BBN] (Node e000000005340900), AE_ERROR
> ACPI-0098: *** Error: Method execution failed [\_SB_.CBN_._BBN] (Node e000000005340900), AE_ERROR
> kernel unaligned access to 0x000000000000059d, ip=0xe0000000048a9821
> ACPI-0341: *** Error: Handler for [PCI_Config] returned AE_ERROR
> ACPI-1121: *** Error: Method execution failed [\PLAT] (Node e00000003f22f840), AE_AML_NO_RETURN_VALUE
> ACPI-1121: *** Error: Method execution failed [\_SB_.PCI2._STA] (Node e000000005347e00), AE_AML_NO_RETURN_VALUE
It seems that newly integrated PCI segment support is incomplete.
In acpi/osl.c, it does PCI configuration space access like:
struct pci_bus bus;
..
bus.number = __bus_number_to_access__;
pci_root_ops->write(&bus, ...);
But the ia64-dependent pci_root_ops uses bus->sysdata to know
which PCI segment to access :(
I also noticed that in acpi/pci_root.c, _SEG is detected but
ignored ;) So segment support is currently meaningless (though it's
easy to remove these two lines).
case AE_OK:
root->id.segment = (u16) value;
printk("_SEG exists! Unsupported. Abort.\n"); <= !!!!!
BUG(); <= !!!!!
break;
To make things worse, ia64 2.5 tree has a dependency loop in
PCI config space initialization.
1. To access PCI config space, we need in-kernel PCI tree
2. To construct the PCI tree, we need ACPI namespace to
detect PCI root bridges
3. To check whether a PCI root bridge exists, we have to
execute _STA method
4. Often _STA method is implemented using PCI config
space access, so we need to access the config space
5. To access PCI config space, ...
To resolve this problem,
a) blindly probe bus 0 - 255, as we did in early days before
ACPI initialization
b) resurrect pci_config_read/write functions so that we can access
pci config space (with segment specified) regardless of in-kernel
PCI tree
c) add #ifdef CONFIG_IA64 to acpi/osl.c to make a fake sysdata
to specify segment
d) other (any good ideas?)
I think b) is the most straightforward way to solve the situation,
but as it was once removed, it will be hard to persuade everyone ;)
And currently segment support is only implemented on ia64
(using bus->sysdata) which seems ad hoc and is not implemented
on i386. This is also a problem, though it's not so urgent.
Thanks,
---
Takayoshi Kochi <kochi@hpc.bs1.fc.nec.co.jp>
^ permalink raw reply [flat|nested] 217+ messages in thread

* RE: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (207 preceding siblings ...)
2003-04-14 12:55 ` Takayoshi Kochi
@ 2003-04-14 17:00 ` Howell, David P
2003-04-14 18:45 ` David Mosberger
` (6 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Howell, David P @ 2003-04-14 17:00 UTC (permalink / raw)
To: linux-ia64
I just downloaded linux-2.5.67 and the linux-2.5.67-ia64-030411.diff
patch
and built the kernel, and for a config that was booting with
linux-2.5.64
I now get the same errors reported below on the initial PCI
configuration.
This is on a tiger4 with 1 Itanium-2 900MHz processor. The same system
does
boot properly with RH AS 2.1 with the latest patches.
So it does appear that something bad did happen to the ACPI config
support
for this new cut. For now, back to linux-2.5.64.
Thanks,
Dave Howell
These are my opinions and not official opinions of Intel Corp.
David Howell
Intel Corporation
Telco Server Development
Server Products Division
Voice: (803) 461-6112 Fax: (803) 461-6292
Intel Corporation
Columbia Design Center, CBA-2
250 Berryhill Road, Suite 100
Columbia, SC 29210
david.p.howell@intel.com
-----Original Message-----
From: Takayoshi Kochi [mailto:kochi@hpc.bs1.fc.nec.co.jp]
Sent: Monday, April 14, 2003 8:55 AM
To: davidm@hpl.hp.com
Cc: linux-ia64@linuxia64.org; greg@kroah.com;
acpi-devel@lists.sourceforge.net
Subject: Re: [Linux-ia64] kernel update (relative to v2.5.67)
[Added Cc: to Greg KH and acpi-devel list]
From: David Mosberger <davidm@napali.hpl.hp.com>
Subject: [Linux-ia64] kernel update (relative to v2.5.67)
Date: Fri, 11 Apr 2003 21:28:29 -0700
Message-ID: <200304120428.h3C4STTw004554@napali.hpl.hp.com>
> This kernel was tested on various zx1-based machines and the HP Ski
> simulator. I also tried it on a Big Sur, but it failed with ACPI
> errors (attached below). Perhaps someone who actually understands
> ACPI could look into this? It's also possible that the firmware on my
> Big Sur is too old. So before digging into this too deep, you may
> want to make sure you have the latest firmware installed.
<snip>
> ACPI: Subsystem revision 20030328
> ACPI-0341: *** Error: Handler for [PCI_Config] returned AE_ERROR
> ACPI-1121: *** Error: Method execution failed [\_SB_.CBN_._BBN] (Node e000000005340900), AE_ERROR
> ACPI-0098: *** Error: Method execution failed [\_SB_.CBN_._BBN] (Node e000000005340900), AE_ERROR
> kernel unaligned access to 0x000000000000059d, ip=0xe0000000048a9821
> ACPI-0341: *** Error: Handler for [PCI_Config] returned AE_ERROR
> ACPI-1121: *** Error: Method execution failed [\PLAT] (Node e00000003f22f840), AE_AML_NO_RETURN_VALUE
> ACPI-1121: *** Error: Method execution failed [\_SB_.PCI2._STA] (Node e000000005347e00), AE_AML_NO_RETURN_VALUE
It seems that newly integrated PCI segment support is incomplete.
In acpi/osl.c, it does PCI configuration space access like:
struct pci_bus bus;
..
bus.number = __bus_number_to_access__;
pci_root_ops->write(&bus, ...);
But the ia64-dependent pci_root_ops uses bus->sysdata to know
which PCI segment to access :(
I also noticed that in acpi/pci_root.c, _SEG is detected but
ignored ;) So segment support is now meaningless (though it's easy
to remove these two lines):
case AE_OK:
root->id.segment = (u16) value;
printk("_SEG exists! Unsupported. Abort.\n"); <= !!!!!
BUG(); <= !!!!!
break;
To make things worse, ia64 2.5 tree has a dependency loop in
PCI config space initialization.
1. To access PCI config space, we need in-kernel PCI tree
2. To construct the PCI tree, we need ACPI namespace to
detect PCI root bridges
3. To check whether a PCI root bridge exists, we have to
execute _STA method
4. Often _STA method is implemented using PCI config
space access, so we need to access the config space
5. To access PCI config space, ...
To resolve this problem,
a) blindly probe bus 0 - 255, as we did in early days before
ACPI initialization
b) resurrect pci_config_read/write functions so that we can access
pci config space (with segment specified) regardless of in-kernel
PCI tree
c) add #ifdef CONFIG_IA64 to acpi/osl.c to make a fake sysdata
to specify segment
d) other (any good ideas?)
I think b) is the most straightforward way to solve the situation,
but as those functions were removed once already, it will be hard to
persuade everyone ;)
And currently segment support is implemented only on ia64
(using bus->sysdata), which seems ad hoc; it is not implemented
on i386. This is also a problem, though not so urgent.
Thanks,
---
Takayoshi Kochi <kochi@hpc.bs1.fc.nec.co.jp>
_______________________________________________
Linux-IA64 mailing list
Linux-IA64@linuxia64.org
http://lists.linuxia64.org/lists/listinfo/linux-ia64
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (208 preceding siblings ...)
2003-04-14 17:00 ` Howell, David P
@ 2003-04-14 18:45 ` David Mosberger
2003-04-14 20:56 ` Alex Williamson
` (5 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-04-14 18:45 UTC (permalink / raw)
To: linux-ia64
>>>>> On Mon, 14 Apr 2003 21:55:19 +0900 (JST), Takayoshi Kochi <kochi@hpc.bs1.fc.nec.co.jp> said:
Takayoshi> It seems that newly integrated PCI segment support is
Takayoshi> incomplete.
Argh. Alex, can you work this out? If fixing the segment support is
too much effort right now, we need to revert the segment support so
the kernel boots on Big Sur (and Tiger) again.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (209 preceding siblings ...)
2003-04-14 18:45 ` David Mosberger
@ 2003-04-14 20:56 ` Alex Williamson
2003-04-14 22:13 ` Howell, David P
` (4 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Alex Williamson @ 2003-04-14 20:56 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 722 bytes --]
David Mosberger wrote:
>
> >>>>> On Mon, 14 Apr 2003 21:55:19 +0900 (JST), Takayoshi Kochi <kochi@hpc.bs1.fc.nec.co.jp> said:
>
> Takayoshi> It seems that newly integrated PCI segment support is
> Takayoshi> incomplete.
>
> Argh. Alex, can you work this out? If fixing the segment support is
> too much effort right now, we need to revert the segment support so
> the kernel boots on Big Sur (and Tiger) again.
>
This patch makes it work on my i2000. There are definitely still
some issues, but I'd rather get something out that works and go from
there than revert it. Please test and let me know what else you
find. Thanks,
Alex
--
Alex Williamson HP Linux & Open Source Lab
[-- Attachment #2: seg_fixes.diff --]
[-- Type: text/plain, Size: 2513 bytes --]
--- linux-2.5.67/arch/ia64/kernel/acpi.c~ Mon Apr 14 14:32:51 2003
+++ linux-2.5.67/arch/ia64/kernel/acpi.c Mon Apr 14 14:33:45 2003
@@ -152,6 +152,10 @@
return NULL;
res = buf->pointer + *offset;
+
+ if (res->length <= 0)
+ return NULL;
+
*offset += res->length;
return res;
}
--- linux-2.5.67/drivers/acpi/osl.c~ Mon Apr 14 14:32:59 2003
+++ linux-2.5.67/drivers/acpi/osl.c Mon Apr 14 14:33:45 2003
@@ -461,6 +461,9 @@
int result = 0;
int size = 0;
struct pci_bus bus;
+#ifdef CONFIG_IA64
+ struct pci_controller ctrl;
+#endif
if (!value)
return AE_BAD_PARAMETER;
@@ -480,6 +483,10 @@
}
bus.number = pci_id->bus;
+#ifdef CONFIG_IA64
+ ctrl.segment = pci_id->segment;
+ bus.sysdata = &ctrl;
+#endif
result = pci_root_ops->read(&bus, PCI_DEVFN(pci_id->device,
pci_id->function),
reg, size, value);
@@ -497,6 +504,9 @@
int result = 0;
int size = 0;
struct pci_bus bus;
+#ifdef CONFIG_IA64
+ struct pci_controller ctrl;
+#endif
switch (width) {
case 8:
@@ -513,6 +523,10 @@
}
bus.number = pci_id->bus;
+#ifdef CONFIG_IA64
+ ctrl.segment = pci_id->segment;
+ bus.sysdata = &ctrl;
+#endif
result = pci_root_ops->write(&bus, PCI_DEVFN(pci_id->device,
pci_id->function),
reg, size, value);
--- linux-2.5.67/drivers/acpi/pci_irq.c~ Mon Apr 14 14:33:05 2003
+++ linux-2.5.67/drivers/acpi/pci_irq.c Mon Apr 14 14:33:45 2003
@@ -293,7 +293,7 @@
while (!irq && bridge->bus->self) {
pin = (pin + PCI_SLOT(bridge->devfn)) % 4;
bridge = bridge->bus->self;
- irq = acpi_pci_irq_lookup(0, bridge->bus->number, PCI_SLOT(bridge->devfn), pin);
+ irq = acpi_pci_irq_lookup(PCI_SEGMENT(bridge), bridge->bus->number, PCI_SLOT(bridge->devfn), pin);
}
if (!irq) {
@@ -336,7 +336,7 @@
* First we check the PCI IRQ routing table (PRT) for an IRQ. PRT
* values override any BIOS-assigned IRQs set during boot.
*/
- irq = acpi_pci_irq_lookup(0, dev->bus->number, PCI_SLOT(dev->devfn), pin);
+ irq = acpi_pci_irq_lookup(PCI_SEGMENT(dev), dev->bus->number, PCI_SLOT(dev->devfn), pin);
/*
* If no PRT entry was found, we'll try to derive an IRQ from the
--- linux-2.5.67/drivers/acpi/pci_root.c~ Mon Apr 14 14:35:35 2003
+++ linux-2.5.67/drivers/acpi/pci_root.c Mon Apr 14 14:35:52 2003
@@ -202,8 +202,6 @@
switch (status) {
case AE_OK:
root->id.segment = (u16) value;
- printk("_SEG exists! Unsupported. Abort.\n");
- BUG();
break;
case AE_NOT_FOUND:
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
^ permalink raw reply [flat|nested] 217+ messages in thread

* RE: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (210 preceding siblings ...)
2003-04-14 20:56 ` Alex Williamson
@ 2003-04-14 22:13 ` Howell, David P
2003-04-15 9:01 ` Takayoshi Kochi
` (3 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Howell, David P @ 2003-04-14 22:13 UTC (permalink / raw)
To: linux-ia64
Thanks. I did get by my initial PCI/ACPI config problems with this
patch.
Dave Howell
These are my opinions and not official opinions of Intel Corp.
David Howell
Intel Corporation
Telco Server Development
Server Products Division
Voice: (803) 461-6112 Fax: (803) 461-6292
Intel Corporation
Columbia Design Center, CBA-2
250 Berryhill Road, Suite 100
Columbia, SC 29210
david.p.howell@intel.com
-----Original Message-----
From: Alex Williamson [mailto:alex_williamson@hp.com]
Sent: Monday, April 14, 2003 4:57 PM
To: davidm@hpl.hp.com
Cc: Takayoshi Kochi; linux-ia64@linuxia64.org
Subject: Re: [Linux-ia64] kernel update (relative to v2.5.67)
David Mosberger wrote:
>
> >>>>> On Mon, 14 Apr 2003 21:55:19 +0900 (JST), Takayoshi Kochi
<kochi@hpc.bs1.fc.nec.co.jp> said:
>
> Takayoshi> It seems that newly integrated PCI segment support is
> Takayoshi> incomplete.
>
> Argh. Alex, can you work this out? If fixing the segment support is
> too much effort right now, we need to revert the segment support so
> the kernel boots on Big Sur (and Tiger) again.
>
This patch makes it work on my i2000. There are definitely still
some issues, but I'd rather get something out that works and go from
there than revert it. Please test and let me know what else you
find. Thanks,
Alex
--
Alex Williamson HP Linux & Open Source Lab
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (211 preceding siblings ...)
2003-04-14 22:13 ` Howell, David P
@ 2003-04-15 9:01 ` Takayoshi Kochi
2003-04-15 22:03 ` David Mosberger
` (2 subsequent siblings)
215 siblings, 0 replies; 217+ messages in thread
From: Takayoshi Kochi @ 2003-04-15 9:01 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: Text/Plain, Size: 2635 bytes --]
Hi,
Thanks to Alex's fixes for PCI segment support, we can now
boot the 2.5.67 kernel on Tiger. But a strange thing is happening:
only CPUs 0 & 1 are getting timer interrupts; CPUs 2 & 3 aren't.
This problem is reproducible and we see it after every
boot. It seems that these CPUs stop receiving timer
interrupts after a very short while.
Has anyone seen this?
I attached the config. I compiled the kernel with
gcc version 3.2.3 20030407 (Debian prerelease) and
binutils 2.13.90.0.18 20030121 Debian GNU/Linux.
[root@tiger root]# cat /proc/stat
cpu 3634837 0 14291 9632002 32284
cpu0 1816379 0 10110 4809692 20522
cpu1 1818458 0 4181 4822257 11762
cpu2 0 0 0 27 0
cpu3 0 0 0 26 0
intr 13335591 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 15 0 2 0 0 0 0 0 0 0 2 299 9 0 0 0 236 12415 0 0 0 0 8692 42 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 13313475 0 0 0 0 0 0 0 0 0 0 0 0 0 0 399 0
ctxt 88342
btime 1050389895
processes 1921
procs_running 5
procs_blocked 0
[root@tiger root]# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
30: 0 0 0 0 IO-SAPIC-level cpe_hndlr
31: 0 0 0 0 LSAPIC cmc_hndlr
34: 1 0 14 0 IO-SAPIC-edge ide0
39: 0 0 0 0 IO-SAPIC-level acpi
45: 0 0 299 0 IO-SAPIC-edge serial
49: 0 0 0 0 IO-SAPIC-level uhci-hcd
50: 0 0 236 0 IO-SAPIC-level uhci-hcd
51: 0 0 12510 0 IO-SAPIC-level eth0
56: 0 0 8694 0 IO-SAPIC-level ioc0
57: 0 0 42 0 IO-SAPIC-level ioc1
232: 0 0 0 0 LSAPIC mca_rdzv
238: 1 1 1 1 LSAPIC perfmon
239: 6693336 6693231 27 26 LSAPIC timer
240: 0 0 0 0 LSAPIC mca_wkup
254: 8 128 133 133 LSAPIC IPI
NMI: 0 0 0 0
ERR: 0
Thanks,
---
Takayoshi Kochi <kochi@hpc.bs1.fc.nec.co.jp>
[-- Attachment #2: config-tiger.2.5.67 --]
[-- Type: Text/Plain, Size: 17324 bytes --]
#
# Automatically generated make config: don't edit
#
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
#
# General setup
#
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
CONFIG_LOG_BUF_SHIFT=17
#
# Loadable module support
#
# CONFIG_MODULES is not set
#
# Processor type and features
#
CONFIG_IA64=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
# CONFIG_ITANIUM is not set
CONFIG_MCKINLEY=y
# CONFIG_IA64_GENERIC is not set
CONFIG_IA64_DIG=y
# CONFIG_IA64_HP_SIM is not set
# CONFIG_IA64_HP_ZX1 is not set
# CONFIG_IA64_SGI_SN1 is not set
# CONFIG_IA64_SGI_SN2 is not set
# CONFIG_IA64_PAGE_SIZE_4KB is not set
# CONFIG_IA64_PAGE_SIZE_8KB is not set
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_ACPI=y
CONFIG_ACPI_EFI=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_ACPI_KERNEL_CONFIG=y
CONFIG_IA64_L1_CACHE_SHIFT=7
# CONFIG_MCKINLEY_ASTEP_SPECIFIC is not set
# CONFIG_NUMA is not set
CONFIG_VIRTUAL_MEM_MAP=y
CONFIG_IA64_MCA=y
CONFIG_PM=y
CONFIG_IOSAPIC=y
CONFIG_KCORE_ELF=y
CONFIG_FORCE_MAX_ZONEORDER=18
# CONFIG_HUGETLB_PAGE is not set
# CONFIG_IA64_PAL_IDLE is not set
CONFIG_SMP=y
# CONFIG_PREEMPT is not set
CONFIG_IA32_SUPPORT=y
CONFIG_COMPAT=y
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
CONFIG_EFI_VARS=y
CONFIG_NR_CPUS=16
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
#
# ACPI Support
#
CONFIG_ACPI_BOOT=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_FAN=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_THERMAL=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_BUS=y
CONFIG_ACPI_POWER=y
CONFIG_ACPI_PCI=y
CONFIG_ACPI_SYSTEM=y
CONFIG_PCI=y
# CONFIG_PCI_LEGACY_PROC is not set
CONFIG_PCI_NAMES=y
# CONFIG_HOTPLUG is not set
#
# Parallel port support
#
# CONFIG_PARPORT is not set
#
# Memory Technology Devices (MTD)
#
# CONFIG_MTD is not set
#
# Plug and Play support
#
# CONFIG_PNP is not set
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
#
# IEEE 1394 (FireWire) support (EXPERIMENTAL)
#
# CONFIG_IEEE1394 is not set
#
# I2O device support
#
# CONFIG_I2O is not set
#
# Multi-device support (RAID and LVM)
#
# CONFIG_MD is not set
#
# Fusion MPT device support
#
CONFIG_FUSION=y
CONFIG_FUSION_BOOT=y
CONFIG_FUSION_MAX_SGE=40
#
# ATA/ATAPI/MFM/RLL support
#
CONFIG_IDE=y
#
# IDE, ATA and ATAPI Block devices
#
CONFIG_BLK_DEV_IDE=y
#
# Please see Documentation/ide.txt for help/info on IDE drives
#
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_IDEDISK=y
CONFIG_IDEDISK_MULTI_MODE=y
# CONFIG_IDEDISK_STROKE is not set
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_IDEFLOPPY=y
CONFIG_BLK_DEV_IDESCSI=y
# CONFIG_IDE_TASK_IOCTL is not set
#
# IDE chipset support/bugfixes
#
CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_BLK_DEV_GENERIC is not set
CONFIG_IDEPCI_SHARE_IRQ=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_IDE_TCQ is not set
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
# CONFIG_IDEDMA_PCI_AUTO is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_PCI_WIP is not set
CONFIG_BLK_DEV_ADMA=y
# CONFIG_BLK_DEV_AEC62XX is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD74XX is not set
# CONFIG_BLK_DEV_CMD64X is not set
# CONFIG_BLK_DEV_TRIFLEX is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5520 is not set
# CONFIG_BLK_DEV_HPT34X is not set
# CONFIG_BLK_DEV_HPT366 is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_BLK_DEV_PDC202XX_NEW is not set
# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIIMAGE is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_IDEDMA_IVB is not set
#
# SCSI support
#
CONFIG_SCSI=y
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
# CONFIG_CHR_DEV_OSST is not set
# CONFIG_BLK_DEV_SR is not set
# CONFIG_CHR_DEV_SG is not set
#
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
#
# CONFIG_SCSI_MULTI_LUN is not set
# CONFIG_SCSI_REPORT_LUNS is not set
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
#
# SCSI low-level drivers
#
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC7XXX_OLD is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_IN2000 is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_MEGARAID is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_CPQFCTS is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_EATA_PIO is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_GENERIC_NCR5380 is not set
# CONFIG_SCSI_GENERIC_NCR5380_MMIO is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_NCR53C7xx is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_NCR53C8XX is not set
# CONFIG_SCSI_SYM53C8XX is not set
# CONFIG_SCSI_PCI2000 is not set
# CONFIG_SCSI_PCI2220I is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_U14_34F is not set
# CONFIG_SCSI_NSP32 is not set
# CONFIG_SCSI_DEBUG is not set
#
# Networking support
#
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_MMAP=y
# CONFIG_NETLINK_DEV is not set
# CONFIG_NETFILTER is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_ARPD is not set
# CONFIG_INET_ECN is not set
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_IPV6 is not set
# CONFIG_XFRM_USER is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
CONFIG_IPV6_SCTP__=y
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_LLC is not set
# CONFIG_DECNET is not set
# CONFIG_BRIDGE is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_FASTROUTE is not set
# CONFIG_NET_HW_FLOWCONTROL is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
CONFIG_NETDEVICES=y
#
# ARCnet devices
#
# CONFIG_ARCNET is not set
CONFIG_DUMMY=y
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
# CONFIG_ETHERTAP is not set
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
# CONFIG_MII is not set
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_NET_VENDOR_3COM is not set
#
# Tulip family network device support
#
# CONFIG_NET_TULIP is not set
# CONFIG_HP100 is not set
CONFIG_NET_PCI=y
# CONFIG_PCNET32 is not set
# CONFIG_AMD8111_ETH is not set
# CONFIG_ADAPTEC_STARFIRE is not set
# CONFIG_B44 is not set
# CONFIG_DGRS is not set
CONFIG_EEPRO100=y
# CONFIG_EEPRO100_PIO is not set
# CONFIG_E100 is not set
# CONFIG_FEALNX is not set
# CONFIG_NATSEMI is not set
# CONFIG_NE2K_PCI is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
# CONFIG_SIS900 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SUNDANCE is not set
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
#
# Ethernet (1000 Mbit)
#
# CONFIG_ACENIC is not set
# CONFIG_DL2K is not set
CONFIG_E1000=y
# CONFIG_E1000_NAPI is not set
# CONFIG_NS83820 is not set
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SK98LIN is not set
# CONFIG_TIGON3 is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
#
# Wireless LAN (non-hamradio)
#
# CONFIG_NET_RADIO is not set
#
# Token Ring devices (depends on LLC=y)
#
# CONFIG_NET_FC is not set
# CONFIG_RCPCI is not set
# CONFIG_SHAPER is not set
#
# Wan interfaces
#
# CONFIG_WAN is not set
#
# Amateur Radio support
#
# CONFIG_HAMRADIO is not set
#
# ISDN subsystem
#
# CONFIG_ISDN_BOOL is not set
#
# CD-ROM drivers (not for SCSI or IDE/ATAPI drives)
#
# CONFIG_CD_NO_IDESCSI is not set
#
# Input device support
#
CONFIG_INPUT=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
# CONFIG_INPUT_TSDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
#
# Input I/O drivers
#
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_UINPUT is not set
#
# Character devices
#
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
# CONFIG_SERIAL_NONSTANDARD is not set
#
# Serial drivers
#
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_ACPI is not set
CONFIG_SERIAL_8250_CONSOLE=y
# CONFIG_SERIAL_8250_HCDP is not set
CONFIG_SERIAL_8250_EXTENDED=y
# CONFIG_SERIAL_8250_MANY_PORTS is not set
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_MULTIPORT is not set
# CONFIG_SERIAL_8250_RSA is not set
#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_UNIX98_PTY_COUNT=256
#
# I2C support
#
CONFIG_I2C=y
CONFIG_I2C_ALGOBIT=y
# CONFIG_I2C_ELV is not set
# CONFIG_I2C_VELLEMAN is not set
# CONFIG_SCx200_ACB is not set
# CONFIG_I2C_ALGOPCF is not set
CONFIG_I2C_CHARDEV=y
#
# I2C Hardware Sensors Mainboard support
#
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_PIIX4 is not set
#
# I2C Hardware Sensors Chip support
#
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_W83781D is not set
#
# Mice
#
# CONFIG_BUSMOUSE is not set
# CONFIG_QIC02_TAPE is not set
#
# IPMI
#
# CONFIG_IPMI_HANDLER is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_NVRAM is not set
# CONFIG_GEN_RTC is not set
CONFIG_EFI_RTC=y
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_FTAPE is not set
# CONFIG_AGP is not set
# CONFIG_DRM is not set
# CONFIG_RAW_DRIVER is not set
# CONFIG_HANGCHECK_TIMER is not set
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
#
# File systems
#
CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
# CONFIG_EXT3_FS_POSIX_ACL is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
# CONFIG_QUOTA is not set
CONFIG_AUTOFS_FS=y
# CONFIG_AUTOFS4_FS is not set
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
# CONFIG_JOLIET is not set
# CONFIG_ZISOFS is not set
# CONFIG_UDF_FS is not set
#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
CONFIG_DEVPTS_FS=y
# CONFIG_TMPFS is not set
CONFIG_RAMFS=y
#
# Miscellaneous filesystems
#
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
#
# Network File Systems
#
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFS_V4=y
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
# CONFIG_NFSD_V4 is not set
# CONFIG_NFSD_TCP is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SMB_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_INTERMEZZO_FS is not set
# CONFIG_AFS_FS is not set
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
# CONFIG_BSD_DISKLABEL is not set
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
# CONFIG_UNIXWARE_DISKLABEL is not set
# CONFIG_LDM_PARTITION is not set
# CONFIG_NEC98_PARTITION is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
CONFIG_EFI_PARTITION=y
CONFIG_NLS=y
#
# Native Language Support
#
CONFIG_NLS_DEFAULT="iso8859-1"
# CONFIG_NLS_CODEPAGE_437 is not set
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ISO8859_1 is not set
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_UTF8 is not set
#
# Graphics support
#
# CONFIG_FB is not set
#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_MDA_CONSOLE is not set
CONFIG_DUMMY_CONSOLE=y
#
# Sound
#
# CONFIG_SOUND is not set
#
# USB support
#
CONFIG_USB=y
# CONFIG_USB_DEBUG is not set
#
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
# CONFIG_USB_BANDWIDTH is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
#
# USB Host Controller Drivers
#
# CONFIG_USB_EHCI_HCD is not set
# CONFIG_USB_OHCI_HCD is not set
CONFIG_USB_UHCI_HCD=y
#
# USB Device Class drivers
#
# CONFIG_USB_BLUETOOTH_TTY is not set
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_STORAGE is not set
#
# USB Human Interface Devices (HID)
#
CONFIG_USB_HID=y
CONFIG_USB_HIDINPUT=y
# CONFIG_HID_FF is not set
CONFIG_USB_HIDDEV=y
# CONFIG_USB_AIPTEK is not set
# CONFIG_USB_WACOM is not set
# CONFIG_USB_KBTAB is not set
# CONFIG_USB_POWERMATE is not set
# CONFIG_USB_XPAD is not set
#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_SCANNER is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_HPUSBSCSI is not set
#
# USB Multimedia devices
#
# CONFIG_USB_DABUSB is not set
#
# Video4Linux support is needed for USB Multimedia device support
#
#
# USB Network adaptors
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_CDCETHER is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_USBNET is not set
#
# USB port drivers
#
#
# USB Serial Converter support
#
# CONFIG_USB_SERIAL is not set
#
# USB Miscellaneous drivers
#
# CONFIG_USB_TIGL is not set
# CONFIG_USB_AUERSWALD is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_BRLVGER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_TEST is not set
#
# Library routines
#
# CONFIG_CRC32 is not set
#
# Bluetooth support
#
# CONFIG_BT is not set
#
# Kernel hacking
#
# CONFIG_FSYS is not set
# CONFIG_IA64_GRANULE_16MB is not set
CONFIG_IA64_GRANULE_64MB=y
CONFIG_DEBUG_KERNEL=y
CONFIG_KALLSYMS=y
CONFIG_IA64_PRINT_HAZARDS=y
# CONFIG_DISABLE_VHPT is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_IA64_EARLY_PRINTK=y
# CONFIG_IA64_EARLY_PRINTK_UART is not set
CONFIG_IA64_EARLY_PRINTK_VGA=y
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
#
# Security options
#
# CONFIG_SECURITY is not set
#
# Cryptographic options
#
# CONFIG_CRYPTO is not set
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (212 preceding siblings ...)
2003-04-15 9:01 ` Takayoshi Kochi
@ 2003-04-15 22:03 ` David Mosberger
2003-04-15 22:12 ` Alex Williamson
2003-04-15 22:27 ` David Mosberger
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-04-15 22:03 UTC (permalink / raw)
To: linux-ia64
>>>>> On Mon, 14 Apr 2003 14:56:37 -0600, Alex Williamson <alex_williamson@hp.com> said:
Alex> still some issues, but I'd rather get something out that works
Alex> and go from there, than revert it. Please test and let me
Alex> know what else you find. Thanks,
Thanks, this definitely works much better. My Big Sur still gets
stuck once init starts running, but that's probably something else. Is
your i2000 a single- or dual-processor?
--david
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (213 preceding siblings ...)
2003-04-15 22:03 ` David Mosberger
@ 2003-04-15 22:12 ` Alex Williamson
2003-04-15 22:27 ` David Mosberger
215 siblings, 0 replies; 217+ messages in thread
From: Alex Williamson @ 2003-04-15 22:12 UTC (permalink / raw)
To: linux-ia64
David Mosberger wrote:
>
> >>>>> On Mon, 14 Apr 2003 14:56:37 -0600, Alex Williamson <alex_williamson@hp.com> said:
>
> Alex> still some issues, but I'd rather get something out that works
> Alex> and go from there, than revert it. Please test and let me
> Alex> know what else you find. Thanks,
>
> Thanks, this definitely works much better. My Big Sur still gets
> stuck once init starts running, but that's probably something else. Is
> your i2000 a single- or dual-processor?
>
> --david
[David, sorry for the resend, copying the list]
Dual. I was having trouble around launching init too. I turned on
all the 8259 keyboard support and it went away. Not sure what the
issue was there.
Alex
--
Alex Williamson HP Linux & Open Source Lab
^ permalink raw reply [flat|nested] 217+ messages in thread

* Re: [Linux-ia64] kernel update (relative to v2.5.67)
2000-06-01 8:54 [Linux-ia64] kernel update (relative to v2.4.0-test1) David Mosberger
` (214 preceding siblings ...)
2003-04-15 22:12 ` Alex Williamson
@ 2003-04-15 22:27 ` David Mosberger
215 siblings, 0 replies; 217+ messages in thread
From: David Mosberger @ 2003-04-15 22:27 UTC (permalink / raw)
To: linux-ia64
>>>>> On Tue, 15 Apr 2003 18:01:48 +0900 (JST), Takayoshi Kochi <kochi@hpc.bs1.fc.nec.co.jp> said:
>> Thanks to Alex's fixes for PCI segment support, we could boot
>> 2.5.67 kernel on Tiger. But a strange thing is happening. Only
>> cpu0 & 1 are getting timer interrupts and cpu 2 & 3 aren't. This
>> problem is reproducible and we see this phenomenon after every
>> boot. It seems that these cpus stop receiving timer interrupts
>> after a very short while. Has anyone seen this?
I tried quickly on a 4-way rx5670. It looks fine (so far):
$ uname -r
2.5.67
$ cat /proc/interrupts |grep timer
239: 845908 846158 846460 846390 LSAPIC timer
However, I have been seeing what appears to be deadlocks in the VFS
layer (best guess). I don't know yet what triggers it and it triggers
at variable rates (sometimes a couple of minutes, sometimes a couple
of hours between occurrences), but perhaps it's related to what you're
seeing.
--david
^ permalink raw reply [flat|nested] 217+ messages in thread