* RE: [Linux-ia64] kernel update (second patch relative to 2.4.2)
2001-03-22 8:20 [Linux-ia64] kernel update (second patch relative to 2.4.2) David Mosberger
2001-03-22 11:15 ` Andreas Schwab
@ 2001-03-22 19:22 ` Ahna, Christopher J
2001-03-22 22:18 ` Jim Wilson
` (14 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Ahna, Christopher J @ 2001-03-22 19:22 UTC (permalink / raw)
To: linux-ia64
In order for AGPGART/DRI to work on IA-64, the 010321 kernel update needs to
be combined with a corresponding Xserver patch (appears at the end of this
mail). This patch is against an early February X CVS snapshot and fixes a
few incorrect assumptions regarding kernel page size.
If anyone has trouble getting direct rendering going with the aforementioned
patches, be sure to let me know. Thanks,
Chris
--- xc.clean/programs/Xserver/hw/xfree86/drivers/ati/r128_dri.c Sun Jan 7 17:07:34 2001
+++ xc/programs/Xserver/hw/xfree86/drivers/ati/r128_dri.c Thu Feb 8 16:31:41 2001
@@ -457,11 +457,11 @@
/* Initialize the CCE ring buffer data */
info->ringStart = info->agpOffset;
- info->ringMapSize = info->ringSize*1024*1024 + 4096;
+ info->ringMapSize = info->ringSize*1024*1024 + getpagesize();
info->ringSizeLog2QW = R128MinBits(info->ringSize*1024*1024/8) - 1;
info->ringReadOffset = info->ringStart + info->ringMapSize;
- info->ringReadMapSize = 4096;
+ info->ringReadMapSize = getpagesize();
/* Reserve space for vertex/indirect buffers
*/
info->bufStart = info->ringReadOffset + info->ringReadMapSize;
@@ -788,7 +788,7 @@
* client for SAREA mapping that includes a device private record
*/
-    pDRIInfo->SAREASize = ((sizeof(XF86DRISAREARec) + 0xfff) & 0x1000); /* round to page */
+    pDRIInfo->SAREASize = ((sizeof(XF86DRISAREARec) + getpagesize() - 1) & ~(getpagesize() - 1)); /* round to page */
/* + shared memory device private rec */
#else
/* For now the mapping works by using a fixed size defined
--- xc.clean/programs/Xserver/hw/xfree86/drivers/ati/radeon_dri.c Sun Jan 21 13:19:20 2001
+++ xc/programs/Xserver/hw/xfree86/drivers/ati/radeon_dri.c Tue Mar 13 16:18:51 2001
@@ -701,11 +701,11 @@
/* Initialize the CP ring buffer data */
info->ringStart = info->agpOffset;
- info->ringMapSize = info->ringSize*1024*1024 + 4096;
+ info->ringMapSize = info->ringSize*1024*1024 + getpagesize();
info->ringSizeLog2QW = RADEONMinBits(info->ringSize*1024*1024/8)-1;
info->ringReadOffset = info->ringStart + info->ringMapSize;
- info->ringReadMapSize = 4096;
+ info->ringReadMapSize = getpagesize();
/* Reserve space for vertex/indirect buffers
*/
info->bufStart = info->ringReadOffset + info->ringReadMapSize;
@@ -1186,7 +1186,7 @@
* client for SAREA mapping that includes a device private record
*/
-    pDRIInfo->SAREASize = ((sizeof(XF86DRISAREARec) + 0xfff) & 0x1000); /* round to page */
+    pDRIInfo->SAREASize = ((sizeof(XF86DRISAREARec) + getpagesize() - 1) & ~(getpagesize() - 1)); /* round to page */
/* + shared memory device private rec */
#else
/* For now the mapping works by using a fixed size defined
--- xc.clean/programs/Xserver/hw/xfree86/drivers/mga/mga_dri.c Sun Jan 7 17:07:36 2001
+++ xc/programs/Xserver/hw/xfree86/drivers/mga/mga_dri.c Wed Feb 7 09:53:17 2001
@@ -589,7 +589,7 @@
prim_size = 65536;
init_offset = ((prim_size + pMGADRIServer->warp_ucode_size +
- 4096 - 1) / 4096) * 4096;
+ getpagesize() - 1) / getpagesize()) * getpagesize();
pMGADRIServer->agpSizep = init_offset;
pMGADRI->agpSize = (drmAgpSize(pMGA->drmSubFD)) - init_offset;
-----Original Message-----
From: David Mosberger [mailto:davidm@hpl.hp.com]
Sent: Thursday, March 22, 2001 12:21 AM
To: linux-ia64@linuxia64.org
Subject: [Linux-ia64] kernel update (second patch relative to 2.4.2)
The latest IA-64 patch is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.2-ia64-010321.diff*
What this patch does:
o Add AGP support for 460gx chipset [Chris Ahna]
o Add EFI variable support to /proc [Matt Domsch]
o Fix a typo in kernel/acpi.c that caused the wrong number
of CPUs to be detected if some of the CPUs were disabled
[Jack Steiner]
o Fix ivt.S not to pass a bogus "break" number on a
non-syscall break [Keith Owens]
o Add Big Sur ACPI workaround for problem that caused some
machines to hang at boot time. [Jung-ik Lee]
o Fix csum_partial_copy.c to use copy_from_user() instead of
doing the copy byte by byte. This doubles the throughput for
local TCP connections. Not bad for a one-liner... ;-)
Of course, this will have to be replaced by the real
integrated "copy-and-checksum" routine eventually; my guess
is this will give another nice performance boost
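The copy-and-checksum idea can be sketched in user space (helper names are invented here; the kernel's real routine must additionally handle faults on the user pointer and is heavily optimized):

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit ones'-complement accumulator into a 16-bit
 * Internet checksum. */
static uint16_t csum_fold(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t) ~sum;
}

/* Sketch of integrated copy-and-checksum: copy 'len' bytes and
 * accumulate the checksum in the same pass, so the data is touched
 * once instead of twice (once to copy, once to checksum). */
static uint16_t copy_and_csum(void *dst, const void *src, size_t len)
{
    const uint8_t *s = src;
    uint8_t *d = dst;
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2) {
        d[i] = s[i];
        d[i + 1] = s[i + 1];
        sum += ((uint32_t) s[i] << 8) | s[i + 1];
    }
    if (i < len) {          /* odd trailing byte, padded with zero */
        d[i] = s[i];
        sum += (uint32_t) s[i] << 8;
    }
    return csum_fold(sum);
}
```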
o Drop backwards compatibility with old (Feb 2000) compiler;
this means that CONFIG_NEW_UNWIND is the default now and
that unwind directives are being used unconditionally.
o Add a version of ucontext.h which defines a "ucontext"
structure that perfectly overlays onto the sigcontext
structure. Note: this crap is intended for
application-level compatibility only. IA-64 Linux always
passes the same sigcontext structure as the third argument
to a signal handler, independent of whether or not
SA_SIGINFO is set in the sigaction flags. Also, since
sigcontext only contains the "scratch" regs, you can't
setcontext() to this context. If you need to do this, use
sigsetjmp()/siglongjmp() to return to the signal handler
first and then do a normal "return" from the handler.
Clean, fast, and portable...
o Modified the exception table format to take advantage of
local tags if they're supported. To take full advantage of
this, a gcc3.0 based toolchain would be necessary. However,
the prerelease version of this compiler is NOT yet stable
enough for kernel use. So even though you find a bunch of
"#if __GNUC__ >= 3" in the patch, I do not recommend
building with this compiler at this time.
CAVEAT 1: This patch does not work with the Feb 2000 compiler
anymore. This is the compiler included with NUE, for example.
We plan to update NUE shortly. In the meantime, you'll either
have to stick with an earlier kernel version or update the
compiler yourself (not recommended unless you don't mind
getting into gcc cross-building issues). Of course, if you
have a real system running a recent distro, this is not an
issue.
CAVEAT 2: Since this patch changes the format of the exception tables,
it is necessary to rebuild all kernel modules from scratch.
If you don't do so, the kernel modules will fail in strange
ways. They load OK, but they will crash the kernel whenever
an exception occurs. You've been warned.
This patch has been tested on a dual Big Sur only, though I don't
expect any problems for other configurations.
Enjoy,
--david
diff -urN --ignore-all-space linux-davidm/Documentation/Configure.help linux-2.4.2-lia/Documentation/Configure.help
--- linux-davidm/Documentation/Configure.help Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/Documentation/Configure.help Wed Mar 21 22:49:09 2001
@@ -2394,6 +2394,21 @@
the GLX component for XFree86 3.3.6, which can be downloaded from
http://utah-glx.sourceforge.net/ .
+Intel 460GX support
+CONFIG_AGP_I460
+ This option gives you AGP support for the Intel 460GX chipset. This
+ chipset, the first to support Intel Itanium processors, is new and
+ this option is correspondingly experimental. AGPGART support for 460GX
+ does work with the following {<video card>, <XFree86>} combinations:
+ {ATI Rage128, 4.0.2}, {ATI Radeon, latest CVS}, {Matrox G400, 4.0.2},
+ and {Matrox G450, XFree86 4.0.2}. The G450 only works with a copy
+ of mgaHALlib.a for IA-64.
+
+ If you don't have a 460GX based machine (such as BigSur) with an AGP
+ slot then this option isn't going to do you much good. If you're
+ dying to do Direct Rendering on IA-64, this is what you've been
+ waiting for.
+
Intel I810/I810 DC100/I810e support
CONFIG_AGP_I810
This option gives you AGP support for the Xserver on the Intel 810
@@ -17159,6 +17174,16 @@
Layer) information in /proc/pal. This contains useful information
about the processors in your systems, such as cache and TLB sizes
and the PAL firmware version in use.
+
+ To use this option, you have to check that the "/proc file system
+ support" (CONFIG_PROC_FS) is enabled, too.
+
+/proc/efi support
+CONFIG_IA64_EFIVARS
+ If you say Y here, you are able to get EFI (Extensible Firmware
+ Interface) variable information in /proc/efi. This can be used to
+ change the boot order in the EFI Boot Manager.
+
To use this option, you have to check that the "/proc file system
support" (CONFIG_PROC_FS) is enabled, too.
diff -urN --ignore-all-space linux-davidm/Makefile linux-2.4.2-lia/Makefile
--- linux-davidm/Makefile Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/Makefile Wed Mar 21 23:18:38 2001
@@ -88,6 +89,7 @@
CPPFLAGS := -D__KERNEL__ -I$(HPATH)
CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -g -O2 -fomit-frame-pointer -fno-strict-aliasing
+
AFLAGS := -D__ASSEMBLY__ $(CPPFLAGS)
#
@@ -133,6 +135,9 @@
DRIVERS-$(CONFIG_PARPORT) += drivers/parport/driver.o
DRIVERS-$(CONFIG_AGP) += drivers/char/agp/agp.o
+ifeq ($(CONFIG_AGP), m)
+DRIVERS-y += drivers/char/agp/agp.o
+endif
DRIVERS-$(CONFIG_DRM) += drivers/char/drm/drm.o
DRIVERS-$(CONFIG_NUBUS) += drivers/nubus/nubus.a
DRIVERS-$(CONFIG_ISDN) += drivers/isdn/isdn.a
diff -urN --ignore-all-space linux-davidm/arch/ia64/config.in linux-2.4.2-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/config.in Wed Mar 21 23:01:10 2001
@@ -57,7 +57,7 @@
if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
fi
- if [ "$CONFIG_ITANIUM_ASTEP_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B0_SPECIFIC" = "y"
+ if [ "$CONFIG_ITANIUM_ASTEP_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B0_SPECIFIC" = "y" \
     -o "$CONFIG_ITANIUM_B1_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B2_SPECIFIC" = "y" ]; then
define_bool CONFIG_ITANIUM_PTCG n
else
@@ -74,8 +74,8 @@
define_bool CONFIG_PM y
define_bool CONFIG_ACPI y
define_bool CONFIG_ACPI_INTERPRETER y
- define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data structure to 64 bytes
fi
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data to 64 bytes
fi
if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
@@ -90,7 +90,7 @@
define_int CONFIG_CACHE_LINE_SHIFT 7
bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM
bool ' Enable NUMA support' CONFIG_NUMA
- define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data structure to 64 bytes
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data to 128 bytes
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
@@ -98,6 +98,7 @@
bool 'SMP support' CONFIG_SMP
bool 'Performance monitor support' CONFIG_PERFMON
tristate '/proc/pal support' CONFIG_IA64_PALINFO
+tristate '/proc/efi support' CONFIG_IA64_EFIVARS
bool 'Networking support' CONFIG_NET
bool 'System V IPC' CONFIG_SYSVIPC
@@ -243,7 +244,7 @@
if [ "$CONFIG_SCSI" != "n" ]; then
bool 'Simulated SCSI disk' CONFIG_SCSI_SIM
fi
- define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data structure to 64 bytes
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data to 64 bytes
endmenu
fi
@@ -264,7 +265,6 @@
bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
-bool 'Enable new unwind support' CONFIG_IA64_NEW_UNWIND
bool 'Disable VHPT' CONFIG_DISABLE_VHPT
endmenu
diff -urN --ignore-all-space linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.4.2-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/ia32/ia32_entry.S Wed Mar 21 23:01:25 2001
@@ -4,6 +4,19 @@
#include "../kernel/entry.h"
+ .section "__ex_table", "a" // declare section & section attributes
+ .previous
+
+#if __GNUC__ >= 3
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ [99:] x
+#else
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ 99: x
+#endif
+
.text
/*
@@ -12,10 +25,10 @@
* is exec'ing an IA-64 program).
*/
ENTRY(ia32_execve)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(3))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(3)
alloc loc1=ar.pfs,3,2,4,0
mov loc0=rp
- UNW(.body)
+ .body
mov out0=in0 // filename
;; // stop bit between alloc and call
mov out1=in1 // argv
@@ -41,29 +54,20 @@
// We copy this 4-byte aligned value to an 8-byte aligned buffer
// in the task structure and then jump to the IA64 code.
- mov r8=r0 // no memory access errors yet
- add r10=4,r32
+ EX(.Lfail, ld4 r2=[r32],4) // load low part of sigmask
;;
-1:
- ld4 r2=[r32] // get first half of sigmask
- ld4 r3=[r10] // get second half of sigmask
-2:
- cmp.lt p6,p0=r8,r0 // check memory access
- ;;
-(p6) br.ret.sptk.many rp // it failed
-
+ EX(.Lfail, ld4 r3=[r32]) // load high part of sigmask
adds r32=IA64_TASK_THREAD_SIGMASK_OFFSET,r13
+ ;;
+ st8 [r32]=r2
adds r10=IA64_TASK_THREAD_SIGMASK_OFFSET+4,r13
;;
- st4 [r32]=r2
+
st4 [r10]=r3
br.cond.sptk.many sys_rt_sigsuspend
-END(ia32_rt_sigsuspend)
- .section __ex_table,"a"
- data4 @gprel(1b)
- data4 (2b-1b)|1
- .previous
+.Lfail: br.ret.sptk.many rp // failed to read sigmask
+END(ia32_rt_sigsuspend)
GLOBAL_ENTRY(ia32_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
@@ -106,7 +110,7 @@
END(sys32_vfork)
GLOBAL_ENTRY(sys32_fork)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
alloc r16=ar.pfs,2,2,4,0
mov out0=SIGCHLD // out0 = clone_flags
;;
@@ -115,14 +119,14 @@
mov loc1=r16 // save ar.pfs across do_fork
DO_SAVE_SWITCH_STACK
- UNW(.body)
+ .body
mov out1=0
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
br.call.sptk.few rp=do_fork
.ret3: mov ar.pfs=loc1
- UNW(.restore sp)
+ .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov rp=loc0
br.ret.sptk.many rp
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/Makefile linux-2.4.2-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/Makefile Wed Mar 21 23:03:01 2001
@@ -19,6 +19,7 @@
obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
obj-$(CONFIG_IA64_DIG) += iosapic.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
+obj-$(CONFIG_IA64_EFIVARS) += efivars.o
obj-$(CONFIG_PCI) += pci.o
obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_IA64_MCA) += mca.o mca_asm.o
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/acpi.c linux-2.4.2-lia/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/acpi.c Wed Mar 21 23:03:12 2001
@@ -314,7 +314,7 @@
printk("ACPI: Found 0 CPUS; assuming 1\n");
available_cpus = 1; /* We've got at least one of these, no? */
}
- smp_boot_data.cpu_count = available_cpus;
+ smp_boot_data.cpu_count = total_cpus;
#endif
return 1;
}
@@ -463,7 +463,7 @@
printk("ACPI: Found 0 CPUS; assuming 1\n");
available_cpus = 1; /* We've got at least one of these, no? */
}
- smp_boot_data.cpu_count = available_cpus;
+ smp_boot_data.cpu_count = total_cpus;
#endif
return 1;
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/efi.c linux-2.4.2-lia/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/efi.c Wed Mar 21 23:03:22 2001
@@ -374,7 +374,6 @@
efi_map_pal_code();
efi_enter_virtual_mode();
-
}
void
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/efi_stub.S linux-2.4.2-lia/arch/ia64/kernel/efi_stub.S
--- linux-davidm/arch/ia64/kernel/efi_stub.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/efi_stub.S Wed Mar 21 23:03:33 2001
@@ -50,11 +50,11 @@
*/
GLOBAL_ENTRY(efi_call_phys)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,5,7,0
ld8 r2=[in0],8 // load EFI function's entry point
mov loc0=rp
- UNW(.body)
+ .body
;;
mov loc2=gp // save global pointer
mov loc4=ar.rsc // save RSE configuration
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/efivars.c linux-2.4.2-lia/arch/ia64/kernel/efivars.c
--- linux-davidm/arch/ia64/kernel/efivars.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.2-lia/arch/ia64/kernel/efivars.c Wed Mar 21 23:03:43 2001
@@ -0,0 +1,435 @@
+/*
+ * EFI Variables - efivars.c
+ *
+ * Copyright (C) 2001 Dell Computer Corporation <Matt_Domsch@dell.com>
+ *
+ * This code takes all variables accessible from EFI runtime and
+ * exports them via /proc
+ *
+ * Reads to /proc/efi/varname return an efi_variable_t structure.
+ * Writes to /proc/efi/varname must be an efi_variable_t structure.
+ * Writes with DataSize = 0 or Attributes = 0 deletes the variable.
+ * Writes with a new value in VariableName+VendorGuid creates
+ * a new variable.
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Changelog:
+ *
+ * 12 March 2001 - Matt Domsch <Matt_Domsch@dell.com>
+ * Feedback received from Stephane Eranian incorporated.
+ * efivar_write() checks copy_from_user() return value.
+ * efivar_read/write() returns proper errno.
+ * v0.02 release to linux-ia64@linuxia64.org
+ *
+ * 26 February 2001 - Matt Domsch <Matt_Domsch@dell.com>
+ * v0.01 release to linux-ia64@linuxia64.org
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+#include <linux/sched.h> /* for capable() */
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/efi.h>
+#include <asm/uaccess.h>
+#ifdef CONFIG_SMP
+#include <linux/smp.h>
+#endif
+
+MODULE_AUTHOR("Matt Domsch <Matt_Domsch@Dell.com>");
+MODULE_DESCRIPTION("/proc interface to EFI Variables");
+
+#define EFIVARS_VERSION "0.02 2001-Mar-12"
+
+static int
+efivar_read(char *page, char **start, off_t off,
+ int count, int *eof, void *data);
+static int
+efivar_write(struct file *file, const char *buffer,
+ unsigned long count, void *data);
+
+
+/*
+ * The maximum size of VariableName + Data = 1024
+ * Therefore, it's reasonable to save that much
+ * space in each part of the structure,
+ * and we use a page for reading/writing.
+ */
+
+typedef struct _efi_variable_t {
+ efi_char16_t VariableName[1024/sizeof(efi_char16_t)];
+ efi_guid_t VendorGuid;
+ unsigned long DataSize;
+ __u8 Data[1024];
+ efi_status_t Status;
+ __u32 Attributes;
+} __attribute__((packed)) efi_variable_t;
+
+
+typedef struct _efivar_entry_t {
+ efi_variable_t var;
+ struct proc_dir_entry *entry;
+ struct list_head list;
+} efivar_entry_t;
+
+spinlock_t efivars_lock = SPIN_LOCK_UNLOCKED;
+static LIST_HEAD(efivar_list);
+static struct proc_dir_entry *efi_dir = NULL;
+
+#define efivar_entry(n) list_entry(n, efivar_entry_t, list)
+
+/* Return the number of unicode characters in data */
+static unsigned long
+utf8_strlen(efi_char16_t *data, unsigned long maxlength)
+{
+ unsigned long length = 0;
+ while (*data++ != 0 && length < maxlength)
+ length++;
+ return length;
+}
+
+/* Return the length of this string in bytes */
+/* Note: this is NOT the same as the number of unicode characters */
+static inline unsigned long
+utf8_strsize(efi_char16_t *data, unsigned long maxlength)
+{
+ return utf8_strlen(data, maxlength/sizeof(efi_char16_t)) *
+ sizeof(efi_char16_t);
+}
+
+
+static int
+proc_calc_metrics(char *page, char **start, off_t off,
+ int count, int *eof, int len)
+{
+ if (len <= off+count) *eof = 1;
+ *start = page + off;
+ len -= off;
+ if (len>count) len = count;
+ if (len<0) len = 0;
+ return len;
+}
+
+
+static void
+uuid_unparse(efi_guid_t *guid, char *out)
+{
+ sprintf(out, "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
+ guid->data1, guid->data2, guid->data3,
+ guid->data4[0], guid->data4[1], guid->data4[2], guid->data4[3],
+ guid->data4[4], guid->data4[5], guid->data4[6], guid->data4[7]);
+}
+
+
+
+
+
+/*
+ * efivar_create_proc_entry()
+ * Requires:
+ * variable_name_size = number of bytes required to hold
+ * variable_name (not counting the NULL
+ * character at the end).
+ * Returns 1 on failure, 0 on success
+ */
+static int
+efivar_create_proc_entry(unsigned long variable_name_size,
+ efi_char16_t *variable_name,
+ efi_guid_t *vendor_guid)
+{
+
+ int i, short_name_size = variable_name_size /
+ sizeof(efi_char16_t) + 38;
+ char *short_name = kmalloc(short_name_size+1,
+ GFP_KERNEL);
+ efivar_entry_t *new_efivar = kmalloc(sizeof(efivar_entry_t),
+ GFP_KERNEL);
+ if (!short_name || !new_efivar) {
+ if (short_name) kfree(short_name);
+ if (new_efivar) kfree(new_efivar);
+ return 1;
+ }
+ memset(short_name, 0, short_name_size+1);
+ memset(new_efivar, 0, sizeof(efivar_entry_t));
+
+ memcpy(new_efivar->var.VariableName, variable_name,
+ variable_name_size);
+ memcpy(&(new_efivar->var.VendorGuid), vendor_guid, sizeof(efi_guid_t));
+
+ /* Convert Unicode to normal chars (assume top bits are 0),
+ ala UTF-8 */
+ for (i=0; i<variable_name_size / sizeof(efi_char16_t); i++) {
+ short_name[i] = variable_name[i] & 0xFF;
+ }
+
+ /* This is ugly, but necessary to separate one vendor's
+ private variables from another's. */
+
+ *(short_name + strlen(short_name)) = '-';
+ uuid_unparse(vendor_guid, short_name + strlen(short_name));
+
+
+ /* Create the entry in proc */
+ new_efivar->entry = create_proc_entry(short_name, 0600, efi_dir);
+ kfree(short_name); short_name = NULL;
+ if (!new_efivar->entry) return 1;
+
+
+ new_efivar->entry->data = new_efivar;
+ new_efivar->entry->read_proc = efivar_read;
+ new_efivar->entry->write_proc = efivar_write;
+
+ list_add(&new_efivar->list, &efivar_list);
+
+
+ return 0;
+}
+
+
+
+/***********************************************************
+ * efivar_read()
+ * Requires:
+ * Modifies: page
+ * Returns: number of bytes written, or -EINVAL on failure
+ ***********************************************************/
+
+static int
+efivar_read(char *page, char **start, off_t off, int count, int *eof, void *data)
+{
+ int len = sizeof(efi_variable_t);
+ efivar_entry_t *efi_var = data;
+ efi_variable_t *var_data = (efi_variable_t *)page;
+
+ if (!page || !data) return -EINVAL;
+
+ spin_lock(&efivars_lock);
+ MOD_INC_USE_COUNT;
+
+ memcpy(var_data, &efi_var->var, len);
+
+ var_data->DataSize = 1024;
+ var_data->Status = efi.get_variable(var_data->VariableName,
+ &var_data->VendorGuid,
+ &var_data->Attributes,
+ &var_data->DataSize,
+ var_data->Data);
+
+ MOD_DEC_USE_COUNT;
+ spin_unlock(&efivars_lock);
+
+ return proc_calc_metrics(page, start, off, count, eof, len);
+}
+
+/***********************************************************
+ * efivar_write()
+ * Requires: data is an efi_setvariable_t data type,
+ * properly filled in, possibly by a call
+ * first to efivar_read().
+ * Caller must have CAP_SYS_ADMIN
+ * Modifies: NVRAM
+ * Returns: var_data->DataSize on success, errno on failure
+ *
+ ***********************************************************/
+static int
+efivar_write(struct file *file, const char *buffer,
+ unsigned long count, void *data)
+{
+ unsigned long strsize1, strsize2;
+ int found=0;
+ struct list_head *pos;
+ unsigned long size = sizeof(efi_variable_t);
+ efi_status_t status;
+ efivar_entry_t *efivar = data, *search_efivar = NULL;
+ efi_variable_t *var_data;
+ if (!data || count != size) {
+ printk(KERN_WARNING "efivars: improper struct of size 0x%lx passed.\n", count);
+ return -EINVAL;
+ }
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ spin_lock(&efivars_lock);
+ MOD_INC_USE_COUNT;
+
+ var_data = kmalloc(size, GFP_KERNEL);
+ if (!var_data) {
+ MOD_DEC_USE_COUNT;
+ spin_unlock(&efivars_lock);
+ return -ENOMEM;
+ }
+ if (copy_from_user(var_data, buffer, size)) {
+ MOD_DEC_USE_COUNT;
+ spin_unlock(&efivars_lock);
+ return -EFAULT;
+ }
+
+
+ /* Since the data ptr we've currently got is probably for
+ a different variable find the right variable.
+ This allows any properly formatted data structure to
+ be written to any of the files in /proc/efi and it will work.
+ */
+ list_for_each(pos, &efivar_list) {
+ search_efivar = efivar_entry(pos);
+ strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024);
+ strsize2 = utf8_strsize(var_data->VariableName, 1024);
+ if (strsize1 == strsize2 &&
+ !memcmp(&(search_efivar->var.VariableName),
+ var_data->VariableName, strsize1) &&
+ !efi_guidcmp(search_efivar->var.VendorGuid,
+ var_data->VendorGuid)) {
+ found = 1;
+ break;
+ }
+ }
+ if (found) efivar = search_efivar;
+
+ status = efi.set_variable(var_data->VariableName,
+ &var_data->VendorGuid,
+ var_data->Attributes,
+ var_data->DataSize,
+ var_data->Data);
+
+ if (status != EFI_SUCCESS) {
+ printk(KERN_WARNING "set_variable() failed: status=%lx\n", status);
+ kfree(var_data);
+ MOD_DEC_USE_COUNT;
+ spin_unlock(&efivars_lock);
+ return -EIO;
+ }
+
+
+ if (!var_data->DataSize || !var_data->Attributes) {
+ /* We just deleted the NVRAM variable */
+ remove_proc_entry(efivar->entry->name, efi_dir);
+ list_del(&efivar->list);
+ kfree(efivar);
+ }
+
+ /* If this is a new variable, set up the proc entry for it. */
+ if (!found) {
+ efivar_create_proc_entry(utf8_strsize(var_data->VariableName, 1024),
+ var_data->VariableName,
+ &var_data->VendorGuid);
+ }
+
+ kfree(var_data);
+ MOD_DEC_USE_COUNT;
+ spin_unlock(&efivars_lock);
+ return size;
+}
+
+
+
+static int __init
+efivars_init(void)
+{
+
+ efi_status_t status;
+ efi_guid_t vendor_guid;
+ efi_char16_t *variable_name = kmalloc(1024, GFP_KERNEL);
+ unsigned long variable_name_size = 1024;
+
+ spin_lock(&efivars_lock);
+
+ printk(KERN_INFO "EFI Variables Facility v%s\n", EFIVARS_VERSION);
+
+ /* Per EFI spec, the maximum storage allocated for both
+ the variable name and variable data is 1024 bytes.
+ */
+
+ efi_dir = proc_mkdir("efi", NULL);
+
+ memset(variable_name, 0, 1024);
+
+ do {
+ variable_name_size = 1024;
+
+ status = efi.get_next_variable(&variable_name_size,
+ variable_name,
+ &vendor_guid);
+
+
+ switch (status) {
+ case EFI_SUCCESS:
+ efivar_create_proc_entry(variable_name_size,
+ variable_name,
+ &vendor_guid);
+ break;
+ case EFI_NOT_FOUND:
+ break;
+ default:
+ printk(KERN_WARNING "get_next_variable() status=%lx\n", status);
+ BUG();
+ status = EFI_NOT_FOUND;
+ break;
+ }
+
+ } while (status != EFI_NOT_FOUND);
+
+ kfree(variable_name);
+ spin_unlock(&efivars_lock);
+ return 0;
+}
+
+static void __exit
+efivars_exit(void)
+{
+ struct list_head *pos;
+ efivar_entry_t *efivar;
+
+ spin_lock(&efivars_lock);
+
+ list_for_each(pos, &efivar_list) {
+ efivar = efivar_entry(pos);
+ remove_proc_entry(efivar->entry->name, efi_dir);
+ list_del(&efivar->list);
+ kfree(efivar);
+ }
+ remove_proc_entry(efi_dir->name, NULL);
+ spin_unlock(&efivars_lock);
+
+}
+
+module_init(efivars_init);
+module_exit(efivars_exit);
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/entry.S linux-2.4.2-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/entry.S Wed Mar 21 23:04:22 2001
@@ -1,5 +1,3 @@
-#define NEW_LEAVE_KERNEL_HEAD 1
-#define CLEAR_INVALID 1
/*
* ia64/kernel/entry.S
*
@@ -53,10 +51,10 @@
* setup a null register window frame.
*/
ENTRY(ia64_execve)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(3))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(3)
alloc loc1=ar.pfs,3,2,4,0
mov loc0=rp
- UNW(.body)
+ .body
mov out0=in0 // filename
;; // stop bit between alloc and call
mov out1=in1 // argv
@@ -96,18 +94,18 @@
END(ia64_execve)
GLOBAL_ENTRY(sys_clone2)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
alloc r16=ar.pfs,3,2,4,0
DO_SAVE_SWITCH_STACK
mov loc0=rp
mov loc1=r16 // save ar.pfs across do_fork
- UNW(.body)
+ .body
mov out1=in1
mov out3=in2
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
br.call.sptk.few rp=do_fork
-.ret1: UNW(.restore sp)
+.ret1: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
mov rp=loc0
@@ -115,18 +113,18 @@
END(sys_clone2)
GLOBAL_ENTRY(sys_clone)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
alloc r16=ar.pfs,2,2,4,0
DO_SAVE_SWITCH_STACK
mov loc0=rp
mov loc1=r16 // save ar.pfs across do_fork
- UNW(.body)
+ .body
mov out1=in1
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
br.call.sptk.few rp=do_fork
-.ret2: UNW(.restore sp)
+.ret2: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
mov rp=loc0
@@ -137,10 +135,10 @@
* prev_task <- ia64_switch_to(struct task_struct *next)
*/
GLOBAL_ENTRY(ia64_switch_to)
- UNW(.prologue)
+ .prologue
alloc r16=ar.pfs,1,0,0,0
DO_SAVE_SWITCH_STACK
- UNW(.body)
+ .body
adds r22=IA64_TASK_THREAD_KSP_OFFSET,r13
mov r27=IA64_KR(CURRENT_STACK)
@@ -194,19 +192,6 @@
;;
END(ia64_switch_to)
-#ifndef CONFIG_IA64_NEW_UNWIND
- /*
- * Like save_switch_stack, but also save the stack frame that is active
- * at the time this function is called.
- */
-ENTRY(save_switch_stack_with_current_frame)
- UNW(.prologue)
- alloc r16=ar.pfs,0,0,0,0 // pass ar.pfs to save_switch_stack
- DO_SAVE_SWITCH_STACK
- br.ret.sptk.few rp
-END(save_switch_stack_with_current_frame)
-#endif /* !CONFIG_IA64_NEW_UNWIND */
-
/*
* Note that interrupts are enabled during save_switch_stack and
* load_switch_stack. This means that we may get an interrupt with
@@ -226,12 +211,12 @@
* - rp (b0) holds return address to save
*/
GLOBAL_ENTRY(save_switch_stack)
- UNW(.prologue)
- UNW(.altrp b7)
+ .prologue
+ .altrp b7
flushrs // flush dirty regs to backing store (must be first in insn group)
- UNW(.save @priunat,r17)
+ .save @priunat,r17
mov r17=ar.unat // preserve caller's
- UNW(.body)
+ .body
#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
|| defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r3=80,sp
@@ -342,9 +327,9 @@
* - must not touch r8-r11
*/
ENTRY(load_switch_stack)
- UNW(.prologue)
- UNW(.altrp b7)
- UNW(.body)
+ .prologue
+ .altrp b7
+ .body
#if !(defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) \
|| defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
@@ -459,11 +444,10 @@
* also use it to preserve b6, which contains the syscall entry point.
*/
GLOBAL_ENTRY(invoke_syscall_trace)
-#ifdef CONFIG_IA64_NEW_UNWIND
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,3,0,0
mov loc0=rp
- UNW(.body)
+ .body
mov loc2=b6
;;
br.call.sptk.few rp=syscall_trace
@@ -471,21 +455,6 @@
mov ar.pfs=loc1
mov b6=loc2
br.ret.sptk.few rp
-#else /* !CONFIG_IA64_NEW_SYSCALL */
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
- alloc loc1=ar.pfs,8,3,0,0
- ;; // WAW on CFM at the br.call
- mov loc0=rp
- br.call.sptk.many rp=save_switch_stack_with_current_frame // must preserve b6!!
-.ret4: mov loc2=b6
- br.call.sptk.few rp=syscall_trace
-.ret5: adds sp=IA64_SWITCH_STACK_SIZE,sp // drop switch_stack frame
- mov rp=loc0
- mov ar.pfs=loc1
- mov b6=loc2
- ;;
- br.ret.sptk.few rp
-#endif /* !CONFIG_IA64_NEW_UNWIND */
END(invoke_syscall_trace)
/*
@@ -566,55 +535,6 @@
// fall through
GLOBAL_ENTRY(ia64_leave_kernel)
PT_REGS_UNWIND_INFO(0)
-#if !NEW_LEAVE_KERNEL_HEAD
- movl r3=PERCPU_ADDR+IA64_CPU_SOFTIRQ_ACTIVE_OFFSET // softirq_active
- ;;
- ld8 r2=[r3] // r3 (softirq_active+softirq_mask) is guaranteed to be 8-byte aligned!
- ;;
- shr r3=r2,32
- ;;
- and r2=r2,r3
- ;;
- cmp4.ne p6,p7=r2,r0
-(p6) br.call.spnt.many rp=invoke_do_softirq
-1:
-(pKern) br.cond.dpnt.many restore_all // yup -> skip check for rescheduling & signal delivery
-
- // call schedule() until we find a task that doesn't have need_resched set:
-
-back_from_resched:
- { .mii
- adds r2=IA64_TASK_NEED_RESCHED_OFFSET,r13
- mov r3=ip // r3 <- &back_from_resched
- adds r14=IA64_TASK_SIGPENDING_OFFSET,r13
- }
-#ifdef CONFIG_PERFMON
- adds r15=IA64_TASK_PFM_NOTIFY_OFFSET,r13
-#endif
- ;;
-#ifdef CONFIG_PERFMON
- ld8 r15=[r15]
-#endif
- ld8 r2=[r2]
- ld4 r14=[r14]
- mov rp=r3 // arrange for schedule() to return to back_from_resched
- ;;
- cmp.ne p16,p0=r14,r0
-#ifdef CONFIG_PERFMON
- cmp.ne p6,p0=r15,r0 // current->task.pfm_notify != 0?
-#endif
- cmp.ne p7,p0=r2,r0 // current->need_resched != 0?
-#ifdef CONFIG_PERFMON
-(p6) br.call.spnt.many b6=pfm_overflow_notify
-#endif
-(p7) br.call.spnt.many b7=invoke_schedule
-(p16) br.call.spnt.many rp=handle_signal_delivery // check & deliver pending signals
-.ret9:
-restore_all:
- adds r2=PT(R8)+16,r12
- adds r3=PT(R9)+16,r12
- ;;
-#else /* NEW_LEAVE_KERNEL_HEAD */
cmp.eq p16,p0=r0,r0 // set the "first_time" flag
movl r15=PERCPU_ADDR+IA64_CPU_SOFTIRQ_ACTIVE_OFFSET // r15 = &cpu_data.softirq.active
;;
@@ -647,16 +567,23 @@
#endif
cmp.ne p16,p0=r0,r0 // clear the "first_time" flag
;;
+# if __GNUC__ < 3
(p6) br.call.spnt.many b7=invoke_do_softirq
+# else
+(p6) br.call.spnt.many b7=do_softirq
+# endif
#ifdef CONFIG_PERFMON
(p9) br.call.spnt.many b7=pfm_overflow_notify
#endif
+# if __GNUC__ < 3
(p7) br.call.spnt.many b7=invoke_schedule
+#else
+(p7) br.call.spnt.many b7=schedule
+#endif
adds r2=PT(R8)+16,r12
adds r3=PT(R9)+16,r12
(p8) br.call.spnt.many b7=handle_signal_delivery // check & deliver
pending signals
;;
-#endif /* NEW_LEAVE_KERNEL_HEAD */
// start restoring the state saved on the kernel stack (struct pt_regs):
ld8.fill r8=[r2],16
ld8.fill r9=[r3],16
@@ -753,11 +680,9 @@
shr.u r18=r19,16 // get byte size of existing "dirty" partition
;;
mov r16=ar.bsp // get existing backing store pointer
-#if CLEAR_INVALID
movl r17=PERCPU_ADDR+IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET
;;
ld4 r17=[r17] // r17 = cpu_data->phys_stacked_size_p8
-#endif
(pKern) br.cond.dpnt.few skip_rbs_switch
/*
* Restore user backing store.
@@ -777,7 +702,6 @@
shl r19=r19,16 // shift size of dirty partition into loadrs position
;;
dont_preserve_current_frame:
-#if CLEAR_INVALID
/*
* To prevent leaking bits between the kernel and user-space,
* we must clear the stacked registers in the "invalid" partition here.
@@ -828,9 +752,6 @@
}
# undef pRecurse
# undef pReturn
-#else
- mov ar.rsc=r19 // load ar.rsc to be used for "loadrs"
-#endif
alloc r17=ar.pfs,0,0,0,0 // drop current register frame
;;
@@ -879,15 +800,10 @@
#ifdef CONFIG_SMP
/*
* Invoke schedule_tail(task) while preserving in0-in7, which may be needed
- * in case a system call gets restarted. Note that declaring schedule_tail()
- * with asmlinkage() is NOT enough because that will only preserve as many
- * registers as there are formal arguments.
- *
- * XXX fix me: with gcc 3.0, we won't need this anymore because syscall_linkage
- * renders all eight input registers (in0-in7) as "untouchable".
+ * in case a system call gets restarted.
*/
ENTRY(invoke_schedule_tail)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,1,0
mov loc0=rp
mov out0=r8 // Address of previous task
@@ -900,6 +816,7 @@
#endif /* CONFIG_SMP */
+#if __GNUC__ < 3
/*
* Invoke do_softirq() while preserving in0-in7, which may be needed
* in case a system call gets restarted. Note that declaring do_softirq()
@@ -910,11 +827,11 @@
* renders all eight input registers (in0-in7) as "untouchable".
*/
ENTRY(invoke_do_softirq)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,0,0
mov loc0=rp
;;
- UNW(.body)
+ .body
br.call.sptk.few rp=do_softirq
.ret13: mov ar.pfs=loc1
mov rp=loc0
@@ -931,24 +848,25 @@
* renders all eight input registers (in0-in7) as "untouchable".
*/
ENTRY(invoke_schedule)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,0,0
mov loc0=rp
;;
- UNW(.body)
+ .body
br.call.sptk.few rp=schedule
.ret14: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
END(invoke_schedule)
+#endif /* __GNUC__ < 3 */
+
/*
* Setup stack and call ia64_do_signal. Note that pSys and pNonSys need to
* be set up by the caller. We declare 8 input registers so the system call
* args get preserved, in case we need to restart a system call.
*/
ENTRY(handle_signal_delivery)
-#ifdef CONFIG_IA64_NEW_UNWIND
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
mov r9=ar.unat
@@ -972,26 +890,9 @@
mov ar.unat=r9
mov ar.pfs=loc1
br.ret.sptk.many rp
-#else /* !CONFIG_IA64_NEW_UNWIND */
- .prologue
- alloc r16=ar.pfs,8,0,3,0 // preserve all eight input regs in case of syscall restart!
- DO_SAVE_SWITCH_STACK
- UNW(.body)
-
- mov out0=0 // there is no "oldset"
- adds out1=16,sp // out1=&sigscratch
- .pred.rel.mutex pSys, pNonSys
-(pSys) mov out2=1 // out2=1 => we're in a syscall
-(pNonSys) mov out2=0 // out2=0 => not a syscall
- br.call.sptk.few rp=ia64_do_signal
-.ret16: // restore the switch stack (ptrace may have modified it)
- DO_LOAD_SWITCH_STACK
- br.ret.sptk.many rp
-#endif /* !CONFIG_IA64_NEW_UNWIND */
END(handle_signal_delivery)
GLOBAL_ENTRY(sys_rt_sigsuspend)
-#ifdef CONFIG_IA64_NEW_UNWIND
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
mov r9=ar.unat
@@ -1014,26 +915,11 @@
mov ar.unat=r9
mov ar.pfs=loc1
br.ret.sptk.many rp
-#else /* !CONFIG_IA64_NEW_UNWIND */
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
- alloc r16=ar.pfs,2,0,3,0
- DO_SAVE_SWITCH_STACK
- UNW(.body)
-
- mov out0=in0 // mask
- mov out1=in1 // sigsetsize
- adds out2=16,sp // out1=&sigscratch
- br.call.sptk.many rp=ia64_rt_sigsuspend
-.ret18: // restore the switch stack (ptrace may have modified it)
- DO_LOAD_SWITCH_STACK
- br.ret.sptk.many rp
-#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigsuspend)
ENTRY(sys_rt_sigreturn)
-#ifdef CONFIG_IA64_NEW_UNWIND
- .regstk 0,0,3,0 // inherited from gate.s:invoke_sighandler()
PT_REGS_UNWIND_INFO(0)
+ alloc r2=ar.pfs,0,0,1,0
.prologue
PT_REGS_SAVES(16)
adds sp=-16,sp
@@ -1042,7 +928,7 @@
;;
adds out0=16,sp // out0 = &sigscratch
br.call.sptk.few rp=ia64_rt_sigreturn
-.ret19: .restore sp
+.ret19: .restore sp 0
adds sp=16,sp
;;
ld8 r9=[sp] // load new ar.unat
@@ -1050,32 +936,6 @@
;;
mov ar.unat=r9
br b7
-#else /* !CONFIG_IA64_NEW_UNWIND */
- .regstk 0,0,3,0 // inherited from gate.s:invoke_sighandler()
- PT_REGS_UNWIND_INFO(0)
- UNW(.prologue)
- UNW(.fframe IA64_PT_REGS_SIZE+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp rp, PT(CR_IIP)+16+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp ar.pfs, PT(CR_IFS)+16+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp ar.unat, PT(AR_UNAT)+16+IA64_SWITCH_STACK_SIZE)
- UNW(.spillsp pr, PT(PR)+16+IA64_SWITCH_STACK_SIZE)
- adds sp=-IA64_SWITCH_STACK_SIZE,sp
- cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
- ;;
- UNW(.body)
-
- adds out0=16,sp // out0 = &sigscratch
- br.call.sptk.few rp=ia64_rt_sigreturn
-.ret20: adds r3=SW(CALLER_UNAT)+16,sp
- ;;
- ld8 r9=[r3] // load new ar.unat
- MOVBR(.sptk,b7,r8,ia64_leave_kernel)
- ;;
- PT_REGS_UNWIND_INFO(0)
- adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch-stack frame
- mov ar.unat=r9
- br b7
-#endif /* !CONFIG_IA64_NEW_UNWIND */
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
@@ -1084,7 +944,7 @@
// privilege is still 0
//
mov r16=r0
- UNW(.prologue)
+ .prologue
DO_SAVE_SWITCH_STACK
br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
.ret21: .body
@@ -1092,8 +952,6 @@
br.cond.sptk.many rp // goes to ia64_leave_kernel
END(ia64_prepare_handle_unaligned)
-#ifdef CONFIG_IA64_NEW_UNWIND
-
//
// unw_init_running(void (*callback)(info, arg), void *arg)
//
@@ -1138,8 +996,6 @@
mov rp=loc0
br.ret.sptk.many rp
END(unw_init_running)
-
-#endif
.rodata
.align 8
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/entry.h linux-2.4.2-lia/arch/ia64/kernel/entry.h
--- linux-davidm/arch/ia64/kernel/entry.h Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/entry.h Wed Mar 21 23:05:45 2001
@@ -20,42 +20,42 @@
#define SW(f) (IA64_SWITCH_STACK_##f##_OFFSET)
#define PT_REGS_SAVES(off) \
- UNW(.unwabi @svr4, 'i'); \
- UNW(.fframe IA64_PT_REGS_SIZE+16+(off)); \
- UNW(.spillsp rp, PT(CR_IIP)+16+(off)); \
- UNW(.spillsp ar.pfs, PT(CR_IFS)+16+(off)); \
- UNW(.spillsp ar.unat, PT(AR_UNAT)+16+(off)); \
- UNW(.spillsp ar.fpsr, PT(AR_FPSR)+16+(off)); \
- UNW(.spillsp pr, PT(PR)+16+(off));
+ .unwabi @svr4, 'i'; \
+ .fframe IA64_PT_REGS_SIZE+16+(off); \
+ .spillsp rp, PT(CR_IIP)+16+(off); \
+ .spillsp ar.pfs, PT(CR_IFS)+16+(off); \
+ .spillsp ar.unat, PT(AR_UNAT)+16+(off); \
+ .spillsp ar.fpsr, PT(AR_FPSR)+16+(off); \
+ .spillsp pr, PT(PR)+16+(off);
#define PT_REGS_UNWIND_INFO(off) \
- UNW(.prologue); \
+ .prologue; \
PT_REGS_SAVES(off); \
- UNW(.body)
+ .body
#define SWITCH_STACK_SAVES(off) \
- UNW(.savesp ar.unat,SW(CALLER_UNAT)+16+(off)); \
- UNW(.savesp ar.fpsr,SW(AR_FPSR)+16+(off)); \
- UNW(.spillsp f2,SW(F2)+16+(off)); UNW(.spillsp f3,SW(F3)+16+(off)); \
- UNW(.spillsp f4,SW(F4)+16+(off)); UNW(.spillsp f5,SW(F5)+16+(off)); \
- UNW(.spillsp f16,SW(F16)+16+(off)); UNW(.spillsp f17,SW(F17)+16+(off)); \
- UNW(.spillsp f18,SW(F18)+16+(off)); UNW(.spillsp f19,SW(F19)+16+(off)); \
- UNW(.spillsp f20,SW(F20)+16+(off)); UNW(.spillsp f21,SW(F21)+16+(off)); \
- UNW(.spillsp f22,SW(F22)+16+(off)); UNW(.spillsp f23,SW(F23)+16+(off)); \
- UNW(.spillsp f24,SW(F24)+16+(off)); UNW(.spillsp f25,SW(F25)+16+(off)); \
- UNW(.spillsp f26,SW(F26)+16+(off)); UNW(.spillsp f27,SW(F27)+16+(off)); \
- UNW(.spillsp f28,SW(F28)+16+(off)); UNW(.spillsp f29,SW(F29)+16+(off)); \
- UNW(.spillsp f30,SW(F30)+16+(off)); UNW(.spillsp f31,SW(F31)+16+(off)); \
- UNW(.spillsp r4,SW(R4)+16+(off)); UNW(.spillsp r5,SW(R5)+16+(off)); \
- UNW(.spillsp r6,SW(R6)+16+(off)); UNW(.spillsp r7,SW(R7)+16+(off)); \
- UNW(.spillsp b0,SW(B0)+16+(off)); UNW(.spillsp b1,SW(B1)+16+(off)); \
- UNW(.spillsp b2,SW(B2)+16+(off)); UNW(.spillsp b3,SW(B3)+16+(off)); \
- UNW(.spillsp b4,SW(B4)+16+(off)); UNW(.spillsp b5,SW(B5)+16+(off)); \
- UNW(.spillsp ar.pfs,SW(AR_PFS)+16+(off)); UNW(.spillsp ar.lc,SW(AR_LC)+16+(off)); \
- UNW(.spillsp @priunat,SW(AR_UNAT)+16+(off)); \
- UNW(.spillsp ar.rnat,SW(AR_RNAT)+16+(off)); \
- UNW(.spillsp ar.bspstore,SW(AR_BSPSTORE)+16+(off)); \
- UNW(.spillsp pr,SW(PR)+16+(off))
+ .savesp ar.unat,SW(CALLER_UNAT)+16+(off); \
+ .savesp ar.fpsr,SW(AR_FPSR)+16+(off); \
+ .spillsp f2,SW(F2)+16+(off); .spillsp f3,SW(F3)+16+(off); \
+ .spillsp f4,SW(F4)+16+(off); .spillsp f5,SW(F5)+16+(off); \
+ .spillsp f16,SW(F16)+16+(off); .spillsp f17,SW(F17)+16+(off); \
+ .spillsp f18,SW(F18)+16+(off); .spillsp f19,SW(F19)+16+(off); \
+ .spillsp f20,SW(F20)+16+(off); .spillsp f21,SW(F21)+16+(off); \
+ .spillsp f22,SW(F22)+16+(off); .spillsp f23,SW(F23)+16+(off); \
+ .spillsp f24,SW(F24)+16+(off); .spillsp f25,SW(F25)+16+(off); \
+ .spillsp f26,SW(F26)+16+(off); .spillsp f27,SW(F27)+16+(off); \
+ .spillsp f28,SW(F28)+16+(off); .spillsp f29,SW(F29)+16+(off); \
+ .spillsp f30,SW(F30)+16+(off); .spillsp f31,SW(F31)+16+(off); \
+ .spillsp r4,SW(R4)+16+(off); .spillsp r5,SW(R5)+16+(off); \
+ .spillsp r6,SW(R6)+16+(off); .spillsp r7,SW(R7)+16+(off); \
+ .spillsp b0,SW(B0)+16+(off); .spillsp b1,SW(B1)+16+(off); \
+ .spillsp b2,SW(B2)+16+(off); .spillsp b3,SW(B3)+16+(off); \
+ .spillsp b4,SW(B4)+16+(off); .spillsp b5,SW(B5)+16+(off); \
+ .spillsp ar.pfs,SW(AR_PFS)+16+(off); .spillsp ar.lc,SW(AR_LC)+16+(off); \
+ .spillsp @priunat,SW(AR_UNAT)+16+(off); \
+ .spillsp ar.rnat,SW(AR_RNAT)+16+(off); \
+ .spillsp ar.bspstore,SW(AR_BSPSTORE)+16+(off); \
+ .spillsp pr,SW(PR)+16+(off)
#define DO_SAVE_SWITCH_STACK \
movl r28=1f; \
@@ -73,5 +73,5 @@
invala; \
MOVBR(.ret.sptk,b7,r28,1f); \
br.cond.sptk.many load_switch_stack; \
-1: UNW(.restore sp); \
+1: .restore sp; \
adds sp=IA64_SWITCH_STACK_SIZE,sp
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/head.S linux-2.4.2-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/head.S Wed Mar 21 23:05:58 2001
@@ -58,10 +58,10 @@
.text
GLOBAL_ENTRY(_start)
- UNW(.prologue)
- UNW(.save rp, r4) // terminate unwind chain with a NULL rp
- UNW(mov r4=r0)
- UNW(.body)
+ .prologue
+ .save rp, r4 // terminate unwind chain with a NULL rp
+ mov r4=r0
+ .body
// set IVT entry point---can't access I/O ports without it
movl r3=ia64_ivt
;;
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.4.2-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/ia64_ksyms.c Wed Mar 21 23:06:55 2001
@@ -52,6 +52,9 @@
EXPORT_SYMBOL_NOVERS(__down_write_failed);
EXPORT_SYMBOL_NOVERS(__rwsem_wake);
+#include <asm/pgalloc.h>
+EXPORT_SYMBOL(smp_flush_tlb_all);
+
#include <asm/page.h>
EXPORT_SYMBOL(clear_page);
@@ -127,3 +130,5 @@
EXPORT_SYMBOL(ia64_pal_call_stacked);
EXPORT_SYMBOL(ia64_pal_call_static);
+extern struct efi efi;
+EXPORT_SYMBOL(efi);
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ivt.S linux-2.4.2-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/ivt.S Wed Mar 21 23:07:30 2001
@@ -899,7 +899,7 @@
// suitable spot...
alloc r14=ar.pfs,0,0,2,0
- mov out0=r8
+ mov out0=cr.iim
add out1=16,sp
adds r3=8,r2 // set up second base pointer for SAVE_REST
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/pal.S linux-2.4.2-lia/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/pal.S Wed Mar 21 23:08:00 2001
@@ -58,7 +58,7 @@
*
*/
GLOBAL_ENTRY(ia64_pal_call_static)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6)
alloc loc1 = ar.pfs,6,90,0,0
movl loc2 = pal_entry_point
1: {
@@ -73,7 +73,7 @@
;;
mov loc3 = psr
mov loc0 = rp
- UNW(.body)
+ .body
mov r30 = in2
(p6) rsm psr.i | psr.ic
@@ -101,14 +101,14 @@
* in2 - in3 Remaining PAL arguments
*/
GLOBAL_ENTRY(ia64_pal_call_stacked)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5)
alloc loc1 = ar.pfs,5,4,87,0
movl loc2 = pal_entry_point
mov r28 = in0 // Index MUST be copied to r28
mov out0 = in0 // AND in0 of PAL function
mov loc0 = rp
- UNW(.body)
+ .body
;;
ld8 loc2 = [loc2] // loc2 <- entry point
mov out1 = in1
@@ -148,7 +148,7 @@
GLOBAL_ENTRY(ia64_pal_call_phys_static)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6)
alloc loc1 = ar.pfs,6,90,0,0
movl loc2 = pal_entry_point
1: {
@@ -156,7 +156,7 @@
mov r8 = ip // save ip to compute branch
mov loc0 = rp // save rp
}
- UNW(.body)
+ .body
;;
ld8 loc2 = [loc2] // loc2 <- entry point
mov r29 = in1 // first argument
@@ -204,7 +204,7 @@
* in2 - in3 Remaining PAL arguments
*/
GLOBAL_ENTRY(ia64_pal_call_phys_stacked)
- UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5))
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5)
alloc loc1 = ar.pfs,5,5,86,0
movl loc2 = pal_entry_point
1: {
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/process.c linux-2.4.2-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/process.c Wed Mar 21 23:08:19 2001
@@ -28,8 +28,6 @@
#include <asm/unwind.h>
#include <asm/user.h>
-#ifdef CONFIG_IA64_NEW_UNWIND
-
static void
do_show_stack (struct unw_frame_info *info, void *arg)
{
@@ -47,12 +45,9 @@
} while (unw_unwind(info) >= 0);
}
-#endif
-
void
show_stack (struct task_struct *task)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
if (!task)
unw_init_running(do_show_stack, 0);
else {
@@ -61,7 +56,6 @@
unw_init_from_blocked_task(&info, task);
do_show_stack(&info, 0);
}
-#endif
}
void
@@ -109,10 +103,8 @@
((i == sof - 1) || (i % 3) == 2) ? "\n" : " ");
}
}
-#ifdef CONFIG_IA64_NEW_UNWIND
if (!user_mode(regs))
show_stack(0);
-#endif
}
void __attribute__((noreturn))
@@ -299,8 +291,6 @@
return retval;
}
-#ifdef CONFIG_IA64_NEW_UNWIND
-
void
do_copy_regs (struct unw_frame_info *info, void *arg)
{
@@ -347,7 +337,6 @@
unw_get_gr(info, i, &dst[i], &nat);
if (nat)
nat_bits |= mask;
-printk("r%u = %c%016lx\n", i, nat ? '*' : ' ', dst[i]);
mask <<= 1;
}
dst[32] = nat_bits;
@@ -398,91 +387,16 @@
memcpy(dst + 32, current->thread.fph, 96*16);
}
-#endif /* CONFIG_IA64_NEW_UNWIND */
-
void
ia64_elf_core_copy_regs (struct pt_regs *pt, elf_gregset_t dst)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
unw_init_running(do_copy_regs, dst);
-#else
- struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
- unsigned long ar_ec, cfm, ar_bsp, ndirty, *krbs, addr;
-
- ar_ec = (sw->ar_pfs >> 52) & 0x3f;
-
- cfm = pt->cr_ifs & ((1UL << 63) - 1);
- if ((pt->cr_ifs & (1UL << 63)) == 0) {
- /* if cr_ifs isn't valid, we got here through a syscall or a break */
- cfm = sw->ar_pfs & ((1UL << 38) - 1);
- }
-
- krbs = (unsigned long *) current + IA64_RBS_OFFSET/8;
- ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 19));
- ar_bsp = (unsigned long) ia64_rse_skip_regs((long *) pt->ar_bspstore, ndirty);
-
- /*
- * Write portion of RSE backing store living on the kernel
- * stack to the VM of the process.
- */
- for (addr = pt->ar_bspstore; addr < ar_bsp; addr += 8) {
- long val;
- if (ia64_peek(pt, current, addr, &val) == 0)
- access_process_vm(current, addr, &val, sizeof(val), 1);
- }
-
- /* r0-r31
- * NaT bits (for r0-r31; bit N = 1 iff rN is a NaT)
- * predicate registers (p0-p63)
- * b0-b7
- * ip cfm user-mask
- * ar.rsc ar.bsp ar.bspstore ar.rnat
- * ar.ccv ar.unat ar.fpsr ar.pfs ar.lc ar.ec
- */
- memset(dst, 0, sizeof(dst)); /* don't leak any "random" bits */
-
- /* r0 is zero */ dst[ 1] = pt->r1; dst[ 2] = pt->r2; dst[ 3] = pt->r3;
- dst[ 4] = sw->r4; dst[ 5] = sw->r5; dst[ 6] = sw->r6; dst[ 7] = sw->r7;
- dst[ 8] = pt->r8; dst[ 9] = pt->r9; dst[10] = pt->r10; dst[11] = pt->r11;
- dst[12] = pt->r12; dst[13] = pt->r13; dst[14] = pt->r14; dst[15] = pt->r15;
- memcpy(dst + 16, &pt->r16, 16*8); /* r16-r31 are contiguous */
-
- dst[32] = ia64_get_nat_bits(pt, sw);
- dst[33] = pt->pr;
-
- /* branch regs: */
- dst[34] = pt->b0; dst[35] = sw->b1; dst[36] = sw->b2; dst[37] = sw->b3;
- dst[38] = sw->b4; dst[39] = sw->b5; dst[40] = pt->b6; dst[41] = pt->b7;
-
- dst[42] = pt->cr_iip + ia64_psr(pt)->ri;
- dst[43] = pt->cr_ifs;
- dst[44] = pt->cr_ipsr & IA64_PSR_UM;
-
- dst[45] = pt->ar_rsc; dst[46] = ar_bsp; dst[47] = pt->ar_bspstore; dst[48] = pt->ar_rnat;
- dst[49] = pt->ar_ccv; dst[50] = pt->ar_unat; dst[51] = sw->ar_fpsr; dst[52] = pt->ar_pfs;
- dst[53] = sw->ar_lc; dst[54] = (sw->ar_pfs >> 52) & 0x3f;
-#endif /* !CONFIG_IA64_NEW_UNWIND */
}
int
dump_fpu (struct pt_regs *pt, elf_fpregset_t dst)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
unw_init_running(do_dump_fpu, dst);
-#else
- struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
-
- memset(dst, 0, sizeof (dst)); /* don't leak any "random" bits */
-
- /* f0 is 0.0 */ /* f1 is 1.0 */ dst[2] = sw->f2; dst[3] = sw->f3;
- dst[4] = sw->f4; dst[5] = sw->f5; dst[6] = pt->f6; dst[7] = pt->f7;
- dst[8] = pt->f8; dst[9] = pt->f9;
- memcpy(dst + 10, &sw->f10, 22*16); /* f10-f31 are contiguous */
-
- ia64_flush_fph(current);
- if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
- memcpy(dst + 32, current->thread.fph, 96*16);
-#endif
return 1; /* f0-f31 are always valid so we always return 1 */
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ptrace.c linux-2.4.2-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Wed Feb 28 12:57:33 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/ptrace.c Wed Mar 21 23:08:50 2001
@@ -22,6 +22,7 @@
#include <asm/rse.h>
#include <asm/system.h>
#include <asm/uaccess.h>
+#include <asm/unwind.h>
/*
* Bits in the PSR that we allow ptrace() to change:
@@ -36,8 +37,6 @@
(IA64_PSR_UM | IA64_PSR_DB | IA64_PSR_IS | IA64_PSR_ID | IA64_PSR_DD
| IA64_PSR_RI)
#define IPSR_READ_MASK IPSR_WRITE_MASK
-#ifdef CONFIG_IA64_NEW_UNWIND
-
#define PTRACE_DEBUG 1
#if PTRACE_DEBUG
@@ -97,57 +96,6 @@
# undef PUT_BITS
}
-#else /* !CONFIG_IA64_NEW_UNWIND */
-
-/*
- * Collect the NaT bits for r1-r31 from sw->caller_unat and
- * sw->ar_unat and return a NaT bitset where bit i is set iff the NaT
- * bit of register i is set.
- */
-long
-ia64_get_nat_bits (struct pt_regs *pt, struct switch_stack *sw)
-{
-# define GET_BITS(str, first, last, unat) \
- ({ \
- unsigned long bit = ia64_unat_pos(&str->r##first); \
- unsigned long mask = ((1UL << (last - first + 1)) - 1) << first; \
- (ia64_rotl(unat, first) >> bit) & mask; \
- })
- unsigned long val;
-
- val = GET_BITS(pt, 1, 3, sw->caller_unat);
- val |= GET_BITS(pt, 12, 15, sw->caller_unat);
- val |= GET_BITS(pt, 8, 11, sw->caller_unat);
- val |= GET_BITS(pt, 16, 31, sw->caller_unat);
- val |= GET_BITS(sw, 4, 7, sw->ar_unat);
- return val;
-
-# undef GET_BITS
-}
-
-/*
- * Store the NaT bitset NAT in pt->caller_unat and sw->ar_unat.
- */
-void
-ia64_put_nat_bits (struct pt_regs *pt, struct switch_stack *sw, unsigned
long nat)
-{
-# define PUT_BITS(str, first, last, nat) \
- ({ \
- unsigned long bit = ia64_unat_pos(&str->r##first); \
- unsigned long mask = ((1UL << (last - first + 1)) - 1) << bit; \
- (ia64_rotr(nat, first) << bit) & mask; \
- })
- sw->caller_unat = PUT_BITS(pt, 1, 3, nat);
- sw->caller_unat |= PUT_BITS(pt, 12, 15, nat);
- sw->caller_unat |= PUT_BITS(pt, 8, 11, nat);
- sw->caller_unat |= PUT_BITS(pt, 16, 31, nat);
- sw->ar_unat = PUT_BITS(sw, 4, 7, nat);
-
-# undef PUT_BITS
-}
-
-#endif /* !CONFIG_IA64_NEW_UNWIND */
-
#define IA64_MLX_TEMPLATE 0x2
#define IA64_MOVL_OPCODE 6
@@ -352,11 +300,7 @@
laddr = (unsigned long *) addr;
child_regs = ia64_task_regs(child);
-#ifdef CONFIG_IA64_NEW_UNWIND
child_stack = (struct switch_stack *) (child->thread.ksp + 16);
-#else
- child_stack = (struct switch_stack *) child_regs - 1;
-#endif
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
@@ -401,11 +345,7 @@
laddr = (unsigned long *) addr;
child_regs = ia64_task_regs(child);
-#ifdef CONFIG_IA64_NEW_UNWIND
child_stack = (struct switch_stack *) (child->thread.ksp + 16);
-#else
- child_stack = (struct switch_stack *) child_regs - 1;
-#endif
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
@@ -468,7 +408,6 @@
long ndirty, ret = 0;
struct pt_regs *child_regs = ia64_task_regs(child);
-#ifdef CONFIG_IA64_NEW_UNWIND
struct unw_frame_info info;
unsigned long cfm, sof;
@@ -488,19 +427,6 @@
unw_get_cfm(&info, &cfm);
sof = (cfm & 0x7f);
rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, sof);
-#else
- struct switch_stack *child_stack;
- unsigned long krbs_num_regs;
-
- child_stack = (struct switch_stack *) child_regs - 1;
- kbspstore = (unsigned long *) child_stack->ar_bspstore;
- krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
- ndirty = ia64_rse_num_regs(krbs, krbs + (child_regs->loadrs >> 19));
- bspstore = child_regs->ar_bspstore;
- bsp = (long) ia64_rse_skip_regs((long *)bspstore, ndirty);
- krbs_num_regs = ia64_rse_num_regs(krbs, kbspstore);
- rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, krbs_num_regs);
-#endif
/* Return early if nothing to do */
if (bsp == new_bsp)
@@ -588,10 +514,6 @@
psr->dfh = 1;
}
-#ifdef CONFIG_IA64_NEW_UNWIND
-
-#include <asm/unwind.h>
-
static int
access_fr (struct unw_frame_info *info, int regnum, int hi, unsigned long *data, int write_access)
{
@@ -829,166 +751,6 @@
*data = *ptr;
return 0;
}
-
-#else /* !CONFIG_IA64_NEW_UNWIND */
-
-static int
-access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
-{
- unsigned long *ptr = NULL, *rbs, *bspstore, ndirty, regnum;
- struct switch_stack *sw;
- struct pt_regs *pt;
-
- if ((addr & 0x7) != 0)
- return -1;
-
- if (addr < PT_F127+16) {
- /* accessing fph */
- if (write_access)
- ia64_sync_fph(child);
- else
- ia64_flush_fph(child);
- ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
- } else if (addr < PT_F9+16) {
- /* accessing switch_stack or pt_regs: */
- pt = ia64_task_regs(child);
- sw = (struct switch_stack *) pt - 1;
-
- switch (addr) {
- case PT_NAT_BITS:
- if (write_access)
- ia64_put_nat_bits(pt, sw, *data);
- else
- *data = ia64_get_nat_bits(pt, sw);
- return 0;
-
- case PT_AR_BSP:
- if (write_access)
- /* FIXME? Account for lack of ``cover'' in the syscall case */
- return sync_kernel_register_backing_store(child, *data, 1);
- else {
- rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
- bspstore = (unsigned long *) pt->ar_bspstore;
- ndirty = ia64_rse_num_regs(rbs, rbs + (pt->loadrs >> 19));
-
- /*
- * If we're in a system call, no ``cover'' was done. So to
- * make things uniform, we'll add the appropriate displacement
- * onto bsp if we're in a system call.
- */
- if (!(pt->cr_ifs & (1UL << 63)))
- ndirty += sw->ar_pfs & 0x7f;
- *data = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
- return 0;
- }
-
- case PT_CFM:
- if (write_access) {
- if (pt->cr_ifs & (1UL << 63))
- pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
- | (*data & 0x3fffffffffUL));
- else
- sw->ar_pfs = ((sw->ar_pfs & ~0x3fffffffffUL)
- | (*data & 0x3fffffffffUL));
- return 0;
- } else {
- if ((pt->cr_ifs & (1UL << 63)) == 0)
- *data = sw->ar_pfs;
- else
- /* return only the CFM */
- *data = pt->cr_ifs & 0x3fffffffffUL;
- return 0;
- }
-
- case PT_CR_IPSR:
- if (write_access)
- pt->cr_ipsr = ((*data & IPSR_WRITE_MASK)
- | (pt->cr_ipsr & ~IPSR_WRITE_MASK));
- else
- *data = (pt->cr_ipsr & IPSR_READ_MASK);
- return 0;
-
- case PT_AR_EC:
- if (write_access)
- sw->ar_pfs = (((*data & 0x3f) << 52)
- | (sw->ar_pfs & ~(0x3fUL << 52)));
- else
- *data = (sw->ar_pfs >> 52) & 0x3f;
- break;
-
- case PT_R1: case PT_R2: case PT_R3:
- case PT_R4: case PT_R5: case PT_R6: case PT_R7:
- case PT_R8: case PT_R9: case PT_R10: case PT_R11:
- case PT_R12: case PT_R13: case PT_R14: case PT_R15:
- case PT_R16: case PT_R17: case PT_R18: case PT_R19:
- case PT_R20: case PT_R21: case PT_R22: case PT_R23:
- case PT_R24: case PT_R25: case PT_R26: case PT_R27:
- case PT_R28: case PT_R29: case PT_R30: case PT_R31:
- case PT_B0: case PT_B1: case PT_B2: case PT_B3:
- case PT_B4: case PT_B5: case PT_B6: case PT_B7:
- case PT_F2: case PT_F2+8: case PT_F3: case PT_F3+8:
- case PT_F4: case PT_F4+8: case PT_F5: case PT_F5+8:
- case PT_F6: case PT_F6+8: case PT_F7: case PT_F7+8:
- case PT_F8: case PT_F8+8: case PT_F9: case PT_F9+8:
- case PT_F10: case PT_F10+8: case PT_F11: case PT_F11+8:
- case PT_F12: case PT_F12+8: case PT_F13: case PT_F13+8:
- case PT_F14: case PT_F14+8: case PT_F15: case PT_F15+8:
- case PT_F16: case PT_F16+8: case PT_F17: case PT_F17+8:
- case PT_F18: case PT_F18+8: case PT_F19: case PT_F19+8:
- case PT_F20: case PT_F20+8: case PT_F21: case PT_F21+8:
- case PT_F22: case PT_F22+8: case PT_F23: case PT_F23+8:
- case PT_F24: case PT_F24+8: case PT_F25: case PT_F25+8:
- case PT_F26: case PT_F26+8: case PT_F27: case PT_F27+8:
- case PT_F28: case PT_F28+8: case PT_F29: case PT_F29+8:
- case PT_F30: case PT_F30+8: case PT_F31: case PT_F31+8:
- case PT_AR_BSPSTORE:
- case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
- case PT_AR_CCV: case PT_AR_FPSR: case PT_CR_IIP: case PT_PR:
- case PT_AR_LC:
- ptr = (unsigned long *) ((long) sw + addr - PT_NAT_BITS);
- break;
-
- default:
- /* disallow accessing anything else... */
- return -1;
- }
- } else {
-
- /* access debug registers */
-
- if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
- child->thread.flags |= IA64_THREAD_DBG_VALID;
- memset(child->thread.dbr, 0, sizeof child->thread.dbr);
- memset(child->thread.ibr, 0, sizeof child->thread.ibr);
- }
- if (addr >= PT_IBR) {
- regnum = (addr - PT_IBR) >> 3;
- ptr = &child->thread.ibr[0];
- } else {
- regnum = (addr - PT_DBR) >> 3;
- ptr = &child->thread.dbr[0];
- }
-
- if (regnum >= 8)
- return -1;
-
- ptr += regnum;
-
- if (write_access)
- /* don't let the user set kernel-level breakpoints... */
- *ptr = *data & ~(7UL << 56);
- else
- *data = *ptr;
- return 0;
- }
- if (write_access)
- *ptr = *data;
- else
- *data = *ptr;
- return 0;
-}
-
-#endif /* !CONFIG_IA64_NEW_UNWIND */
asmlinkage long
sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data,
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/signal.c linux-2.4.2-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/signal.c Wed Mar 21 23:10:25 2001
@@ -38,12 +38,8 @@
#endif
struct sigscratch {
-#ifdef CONFIG_IA64_NEW_UNWIND
unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
unsigned long pad;
-#else
- struct switch_stack sw;
-#endif
struct pt_regs pt;
};
@@ -140,11 +136,7 @@
ia64_psr(&scr->pt)->ri = ip & 0x3;
scr->pt.cr_ipsr = (scr->pt.cr_ipsr & ~IA64_PSR_UM) | (um & IA64_PSR_UM);
-#ifdef CONFIG_IA64_NEW_UNWIND
scr->scratch_unat = ia64_put_scratch_nat_bits(&scr->pt, nat);
-#else
- ia64_put_nat_bits(&scr->pt, &scr->sw, nat); /* restore the original scratch NaT bits */
-#endif
if ((flags & IA64_SC_FLAG_FPH_VALID) != 0) {
struct ia64_psr *psr = ia64_psr(&scr->pt);
@@ -303,11 +295,7 @@
* preserved registers (r4-r7) are never being looked at by
* the signal handler (registers r4-r7 are used instead).
*/
-#ifdef CONFIG_IA64_NEW_UNWIND
nat = ia64_get_scratch_nat_bits(&scr->pt, scr->scratch_unat);
-#else
- nat = ia64_get_nat_bits(&scr->pt, &scr->sw);
-#endif
err = __put_user(flags, &sc->sc_flags);
@@ -373,21 +361,11 @@
scr->pt.cr_iip = tramp_addr;
ia64_psr(&scr->pt)->ri = 0; /* start executing in first slot */
-#ifdef CONFIG_IA64_NEW_UNWIND
/*
* Note: this affects only the NaT bits of the scratch regs
* (the ones saved in pt_regs), which is exactly what we want.
*/
scr->scratch_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
-#else
- /*
- * Note: this affects only the NaT bits of the scratch regs
- * (the ones saved in pt_regs), which is exactly what we want.
- * The NaT bits for the preserved regs (r4-r7) are in
- * sw->ar_unat iff this process is being PTRACED.
- */
- scr->sw.caller_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
-#endif
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sig=%d sp=%lx ip=%lx handler=%lx\n",
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/unaligned.c linux-2.4.2-lia/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/unaligned.c Wed Mar 21 23:11:04 2001
@@ -1255,7 +1255,7 @@
void
ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
{
- const struct exception_table_entry *fix = NULL;
+ struct exception_fixup fix = { 0 };
struct ia64_psr *ipsr = ia64_psr(regs);
mm_segment_t old_fs = get_fs();
unsigned long bundle[2];
@@ -1275,10 +1275,17 @@
/*
* Treat kernel accesses for which there is an exception handler entry the same as
- * user-level unaligned accesses. Otherwise, a clever program could user could
- * trick this handler into reading an arbitrary kernel addresses...
+ * user-level unaligned accesses. Otherwise, a clever program could trick this
+ * handler into reading an arbitrary kernel addresses...
*/
- if (user_mode(regs) || (fix = search_exception_table(regs->cr_iip))) {
+ if (!user_mode(regs)) {
+#ifdef GAS_HAS_LOCAL_TAGS
+ fix = search_exception_table(regs->cr_iip + ia64_psr(regs)->ri);
+#else
+ fix = search_exception_table(regs->cr_iip);
+#endif
+ }
+ if (user_mode(regs) || fix.cont) {
if ((current->thread.flags & IA64_THREAD_UAC_SIGBUS) != 0)
goto force_sigbus;
@@ -1439,12 +1446,8 @@
failure:
/* something went wrong... */
if (!user_mode(regs)) {
- if (fix) {
- regs->r8 = -EFAULT;
- if (fix->skip & 1)
- regs->r9 = 0;
- regs->cr_iip += ((long) fix->skip) & ~15;
- regs->cr_ipsr &= ~IA64_PSR_RI; /* clear exception slot number */
+ if (fix.cont) {
+ handle_exception(regs, fix);
goto done;
}
die_if_kernel("error during unaligned kernel access\n", regs, ret);
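For anyone reading the new fixup scheme: the `fix.cont` word consumed by handle_exception() (see the arch/ia64/mm/extable.c hunk further down) packs the resume address, the resume slot, and the "clear r9" flag into a single value. A user-level sketch of that decoding — the struct and names here are illustrative stand-ins, not the kernel's pt_regs:

```c
#include <assert.h>

/* Continuation-word layout, matching handle_exception():
 * bits 0-1 = IA-64 bundle slot to resume in,
 * bit 2    = "also clear r9",
 * high bits (16-byte aligned) = resume bundle address. */
struct fake_regs { long r8, r9; unsigned long iip; unsigned ri; };

static void decode_fixup(struct fake_regs *regs, unsigned long cont)
{
	regs->r8 = -14;                /* -EFAULT */
	if (cont & 4)
		regs->r9 = 0;          /* fixup asked for r9 to be cleared */
	regs->iip = cont & ~0xfUL;     /* resume at this bundle... */
	regs->ri  = cont & 0x3;        /* ...in this slot */
}
```

This is why the `@gprel(y)+4` trick in the strnlen_user.S EXC() macro works: adding 4 to an aligned continuation address sets exactly the "clear r9" bit.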
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/unwind.c linux-2.4.2-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/kernel/unwind.c Wed Mar 21 23:12:25 2001
@@ -31,8 +31,6 @@
#include <asm/unwind.h>
-#ifdef CONFIG_IA64_NEW_UNWIND
-
#include <asm/delay.h>
#include <asm/page.h>
#include <asm/ptrace.h>
@@ -1829,140 +1827,24 @@
STAT(unw.stat.api.init_time += ia64_get_itc() - start; local_irq_restore(flags));
}
-#endif /* CONFIG_IA64_NEW_UNWIND */
-
void
unw_init_from_blocked_task (struct unw_frame_info *info, struct task_struct *t)
{
struct switch_stack *sw = (struct switch_stack *) (t->thread.ksp + 16);
-#ifdef CONFIG_IA64_NEW_UNWIND
unw_init_frame_info(info, t, sw);
-#else
- unsigned long sol, limit, top;
-
- memset(info, 0, sizeof(*info));
-
- sol = (sw->ar_pfs >> 7) & 0x7f; /* size of locals */
-
- limit = (unsigned long) t + IA64_RBS_OFFSET;
- top = sw->ar_bspstore;
- if (top - (unsigned long) t >= IA64_STK_OFFSET)
- top = limit;
-
- info->regstk.limit = limit;
- info->regstk.top = top;
- info->sw = sw;
- info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
- info->cfm_loc = &sw->ar_pfs;
- info->ip = sw->b0;
-#endif
}
void
unw_init_from_current (struct unw_frame_info *info, struct pt_regs *regs)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
struct switch_stack *sw = (struct switch_stack *) regs - 1;
unw_init_frame_info(info, current, sw);
/* skip over interrupt frame: */
unw_unwind(info);
-#else
- struct switch_stack *sw = (struct switch_stack *) regs - 1;
- unsigned long sol, sof, *bsp, limit, top;
-
- limit = (unsigned long) current + IA64_RBS_OFFSET;
- top = sw->ar_bspstore;
- if (top - (unsigned long) current >= IA64_STK_OFFSET)
- top = limit;
-
- memset(info, 0, sizeof(*info));
-
- sol = (sw->ar_pfs >> 7) & 0x7f; /* size of frame */
-
- /* this gives us the bsp top level frame (kdb interrupt frame): */
- bsp = ia64_rse_skip_regs((unsigned long *) top, -sol);
-
- /* now skip past the interrupt frame: */
- sof = regs->cr_ifs & 0x7f; /* size of frame */
-
- info->regstk.limit = limit;
- info->regstk.top = top;
- info->sw = sw;
- info->bsp = (unsigned long) ia64_rse_skip_regs(bsp, -sof);
- info->cfm_loc = &regs->cr_ifs;
- info->ip = regs->cr_iip;
-#endif
-}
-
-#ifndef CONFIG_IA64_NEW_UNWIND
-
-static unsigned long
-read_reg (struct unw_frame_info *info, int regnum, int *is_nat)
-{
- unsigned long *addr, *rnat_addr, rnat;
-
- addr = ia64_rse_skip_regs((unsigned long *) info->bsp, regnum);
- if ((unsigned long) addr < info->regstk.limit
- || (unsigned long) addr >= info->regstk.top || ((long) addr & 0x7) != 0)
- {
- *is_nat = 1;
- return 0xdeadbeefdeadbeef;
- }
- rnat_addr = ia64_rse_rnat_addr(addr);
-
- if ((unsigned long) rnat_addr >= info->regstk.top)
- rnat = info->sw->ar_rnat;
- else
- rnat = *rnat_addr;
- *is_nat = (rnat & (1UL << ia64_rse_slot_num(addr))) != 0;
- return *addr;
}
-/*
- * On entry, info->regstk.top should point to the register backing
- * store for r32.
- */
-int
-unw_unwind (struct unw_frame_info *info)
-{
- unsigned long sol, cfm = *info->cfm_loc;
- int is_nat;
-
- sol = (cfm >> 7) & 0x7f; /* size of locals */
-
- /*
- * In general, we would have to make use of unwind info to
- * unwind an IA-64 stack, but for now gcc uses a special
- * convention that makes this possible without full-fledged
- * unwind info. Specifically, we expect "rp" in the second
- * last, and "ar.pfs" in the last local register, so the
- * number of locals in a frame must be at least two. If it's
- * less than that, we reached the end of the C call stack.
- */
- if (sol < 2)
- return -1;
-
- info->ip = read_reg(info, sol - 2, &is_nat);
- if (is_nat || (info->ip & (local_cpu_data->unimpl_va_mask | 0xf)))
- /* reject obviously bad addresses */
- return -1;
-
- info->cfm_loc = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
- cfm = read_reg(info, sol - 1, &is_nat);
- if (is_nat)
- return -1;
-
- sol = (cfm >> 7) & 0x7f;
-
- info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -sol);
- return 0;
-}
-#endif /* !CONFIG_IA64_NEW_UNWIND */
-
-#ifdef CONFIG_IA64_NEW_UNWIND
-
static void
init_unwind_table (struct unw_table *table, const char *name, unsigned long segment_base,
unsigned long gp, void *table_start, void *table_end)
@@ -2063,12 +1945,10 @@
kfree(table);
}
-#endif /* CONFIG_IA64_NEW_UNWIND */
void
unw_init (void)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
extern int ia64_unw_start, ia64_unw_end, __gp;
extern void unw_hash_index_t_is_too_narrow (void);
long i, off;
@@ -2104,5 +1984,4 @@
init_unwind_table(&unw.kernel_table, "kernel", KERNEL_START, (unsigned long) &__gp,
&ia64_unw_start, &ia64_unw_end);
-#endif /* CONFIG_IA64_NEW_UNWIND */
}
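The fallback unwinder being deleted above never used unwind sections at all: it relied on the gcc convention that each frame keeps "rp" in its second-to-last local and "ar.pfs" (whose cfm field gives the caller's local count) in its last local, terminating when a frame has fewer than two locals. A hypothetical flat-array simulation of that walk — no RSE wraparound, no NaT checks, just the convention:

```c
#include <assert.h>
#include <stddef.h>

/* Each simulated frame is `sol` contiguous words of locals;
 * locals[sol-1] holds the caller's ar.pfs, whose bits 7..13 encode
 * the caller's own local count -- the same layout the removed
 * unw_unwind() depended on. */
#define SOL(cfm) (((cfm) >> 7) & 0x7f)

static int unwind_steps(const unsigned long *stack, size_t top, unsigned long cfm)
{
	int steps = 0;
	size_t bsp = top;
	for (;;) {
		unsigned long sol = SOL(cfm);
		if (sol < 2)                  /* no room for rp/ar.pfs: done */
			return steps;
		bsp -= sol;                   /* step down to this frame's locals */
		cfm = stack[bsp + sol - 1];   /* pick up the caller's ar.pfs */
		steps++;
	}
}
```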
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/clear_page.S linux-2.4.2-lia/arch/ia64/lib/clear_page.S
--- linux-davidm/arch/ia64/lib/clear_page.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/clear_page.S Wed Mar 21 23:13:00 2001
@@ -10,25 +10,20 @@
* Output:
* none
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
#include <asm/page.h>
- .text
- .psr abi64
- .psr lsb
- .lsb
-
GLOBAL_ENTRY(clear_page)
- UNW(.prologue)
+ .prologue
alloc r11=ar.pfs,1,0,0,0
- UNW(.save ar.lc, r16)
+ .save ar.lc, r16
mov r16=ar.lc // slow
- UNW(.body)
+ .body
mov r17=PAGE_SIZE/32-1 // -1 = repeat/until
;;
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/clear_user.S linux-2.4.2-lia/arch/ia64/lib/clear_user.S
--- linux-davidm/arch/ia64/lib/clear_user.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/clear_user.S Wed Mar 21 23:13:27 2001
@@ -7,7 +7,7 @@
* Outputs:
* r8: number of bytes that didn't get cleared due to a fault
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
*/
@@ -51,27 +51,28 @@
// have side effects (same thing for writing).
//
+ .section "__ex_table", "a" // declare section & section attributes
+ .previous
+
// The label comes first because our store instruction contains a comma
// and confuse the preprocessor otherwise
-//
+
+#if __GNUC__ >= 3
#define EX(y,x...) \
- .section __ex_table,"a"; \
- data4 @gprel(99f); \
- data4 y-99f; \
- .previous; \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ [99:] x
+#else
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
99: x
-
- .text
- .psr abi64
- .psr lsb
- .lsb
+#endif
GLOBAL_ENTRY(__do_clear_user)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,2,0,0,0
cmp.eq p6,p0=r0,len // check for zero length
- UNW(.save ar.lc, saved_lc)
+ .save ar.lc, saved_lc
mov saved_lc=ar.lc // preserve ar.lc (slow)
.body
;; // avoid WAW on CFM
@@ -150,7 +151,7 @@
//
//
// We need to keep track of the remaining length. A possible (optimistic)
- // way would be to ue ar.lc and derive how many byte were left by
+ // way would be to use ar.lc and derive how many byte were left by
// doing : left= 16*ar.lc + 16. this would avoid the addition at
// every iteration.
// However we need to keep the synchronization point. A template
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/copy_page.S linux-2.4.2-lia/arch/ia64/lib/copy_page.S
--- linux-davidm/arch/ia64/lib/copy_page.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/copy_page.S Wed Mar 21 23:13:39 2001
@@ -10,7 +10,7 @@
* Output:
* no return value
*
- * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -28,25 +28,20 @@
#define tgt1 r22
#define tgt2 r23
- .text
- .psr abi64
- .psr lsb
- .lsb
-
GLOBAL_ENTRY(copy_page)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,3,((2*PIPE_DEPTH+7)&~7),0,((2*PIPE_DEPTH+7)&~7)
.rotr t1[PIPE_DEPTH], t2[PIPE_DEPTH]
.rotp p[PIPE_DEPTH]
- UNW(.save ar.lc, saved_lc)
+ .save ar.lc, saved_lc
mov saved_lc=ar.lc // save ar.lc ahead of time
- UNW(.save pr, saved_pr)
+ .save pr, saved_pr
mov saved_pr=pr // rotating predicates are preserved
// registers we must save.
- UNW(.body)
+ .body
mov src1=in1 // initialize 1st stream source
adds src2=8,in1 // initialize 2nd stream source
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/copy_user.S linux-2.4.2-lia/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/lib/copy_user.S Wed Mar 21 23:13:50 2001
@@ -31,19 +31,19 @@
#include <asm/asmmacro.h>
+ .section "__ex_table", "a" // declare section & section attributes
+ .previous
+
// The label comes first because our store instruction contains a comma
// and confuse the preprocessor otherwise
-//
-#undef DEBUG
-#ifdef DEBUG
+
+#if __GNUC__ >= 3
#define EX(y,x...) \
-99: x
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ [99:] x
#else
#define EX(y,x...) \
- .section __ex_table,"a"; \
- data4 @gprel(99f); \
- data4 y-99f; \
- .previous; \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
99: x
#endif
@@ -85,13 +85,10 @@
#define enddst r29
#define endsrc r30
#define saved_pfs r31
- .text
- .psr abi64
- .psr lsb
GLOBAL_ENTRY(__copy_user)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,3,((2*PIPE_DEPTH+7)&~7),0,((2*PIPE_DEPTH+7)&~7)
.rotr val1[PIPE_DEPTH],val2[PIPE_DEPTH]
@@ -102,16 +99,16 @@
;; // RAW of cfm when len=0
cmp.eq p8,p0=r0,len // check for zero length
- UNW(.save ar.lc, saved_lc)
+ .save ar.lc, saved_lc
mov saved_lc=ar.lc // preserve ar.lc (slow)
(p8) br.ret.spnt.few rp // empty mempcy()
;;
add enddst=dst,len // first byte after end of source
add endsrc=src,len // first byte after end of destination
- UNW(.save pr, saved_pr)
+ .save pr, saved_pr
mov saved_pr=pr // preserve predicates
- UNW(.body)
+ .body
mov dst1=dst // copy because of rotation
mov ar.ec=PIPE_DEPTH
@@ -130,7 +127,6 @@
// p7 is necessarily false by now
1:
EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
-
EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 1b
;;
@@ -213,7 +209,6 @@
;;
2:
EX(failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
- ;;
EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 2b
;;
@@ -315,7 +310,6 @@
;;
5:
EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
-
EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
@@ -354,6 +348,7 @@
// we have never executed the ld1, therefore st1 is not executed.
//
EX(failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
+ ;;
EX(failure_out,(p6) st1 [dst1]=val1[0],1)
tbit.nz p9,p0=src1,3
;;
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/csum_partial_copy.c linux-2.4.2-lia/arch/ia64/lib/csum_partial_copy.c
--- linux-davidm/arch/ia64/lib/csum_partial_copy.c Sun Feb 6 18:42:40 2000
+++ linux-2.4.2-lia/arch/ia64/lib/csum_partial_copy.c Wed Mar 21 23:14:01 2001
@@ -107,10 +107,8 @@
do_csum_partial_copy_from_user (const char *src, char *dst, int len,
unsigned int psum, int *errp)
{
- const unsigned char *psrc = src;
unsigned long result;
- int cplen = len;
- int r = 0;
+ int r;
/* XXX Fixme
* for now we separate the copy from checksum for obvious
@@ -118,7 +116,7 @@
* scared.
*/
- while ( cplen-- ) r |=__get_user(*dst++,psrc++);
+ r = copy_from_user(dst, src, len);
if ( r && errp ) *errp = r;
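The removed loop above fetched and folded one byte at a time through __get_user(); the new code does a single bulk copy_from_user() and checksums the kernel-side copy afterwards. A user-space sketch of that copy-then-checksum split, with memcpy() standing in for copy_from_user() and a plain RFC 1071-style 16-bit one's-complement sum standing in for do_csum():

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* 16-bit one's-complement sum over big-endian 16-bit words. */
static uint32_t csum16(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	for (size_t i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)buf[i] << 8 | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;  /* pad odd byte */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);  /* fold carries */
	return sum;
}

/* Copy first, then checksum the local copy -- the ordering the patched
 * do_csum_partial_copy_from_user() now uses. */
static uint32_t copy_and_csum(uint8_t *dst, const uint8_t *src, size_t len)
{
	memcpy(dst, src, len);  /* stand-in for copy_from_user() */
	return csum16(dst, len);
}
```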
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/do_csum.S linux-2.4.2-lia/arch/ia64/lib/do_csum.S
--- linux-davidm/arch/ia64/lib/do_csum.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/do_csum.S Wed Mar 21 23:14:18 2001
@@ -8,7 +8,7 @@
* in0: address of buffer to checksum (char *)
* in1: length of the buffer (int)
*
- * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
*
*/
@@ -94,17 +94,11 @@
#define buf in0
#define len in1
-
- .text
- .psr abi64
- .psr lsb
- .lsb
-
// unsigned long do_csum(unsigned char *buf,int len)
GLOBAL_ENTRY(do_csum)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,2,8,0,8
.rotr p[4], result[3]
@@ -126,7 +120,7 @@
;;
and lastoff=7,tmp1 // how many bytes off for last element
andcm last=tmp2,tmp3 // address of word containing last byte
- UNW(.save pr, saved_pr)
+ .save pr, saved_pr
mov saved_pr=pr // preserve predicates (rotation)
;;
sub tmp3=last,first // tmp3=distance from first to last
@@ -147,11 +141,11 @@
shl hmask=hmask,tmp2 // build head mask, mask off [0,firstoff[
;;
shr.u tmask=tmask,tmp1 // build tail mask, mask off ]8,lastoff]
- UNW(.save ar.lc, saved_lc)
+ .save ar.lc, saved_lc
mov saved_lc=ar.lc // save lc
;;
- UNW(.body)
+ .body
(p8) and hmask=hmask,tmask // apply tail mask to head mask if 1 word only
(p9) and p[1]=lastval,tmask // mask last it as appropriate
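do_csum's head/tail trick above loads whole 8-byte words and masks away the bytes outside the buffer: the head mask drops bytes below firstoff, the tail mask drops bytes past the end, and when the buffer fits in one word the two masks are simply ANDed (the `(p8)` case). A simplified little-endian C model of those masks — offsets are byte counts, firstoff is assumed < 8:

```c
#include <assert.h>
#include <stdint.h>

/* Head mask keeps bytes firstoff..7 of a little-endian 8-byte word. */
static uint64_t head_mask(unsigned firstoff)  /* requires firstoff < 8 */
{
	return ~UINT64_C(0) << (8 * firstoff);
}

/* Tail mask keeps only the low `nbytes` bytes. */
static uint64_t tail_mask(unsigned nbytes)
{
	return nbytes >= 8 ? ~UINT64_C(0) : (UINT64_C(1) << (8 * nbytes)) - 1;
}
```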
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/flush.S linux-2.4.2-lia/arch/ia64/lib/flush.S
--- linux-davidm/arch/ia64/lib/flush.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.2-lia/arch/ia64/lib/flush.S Wed Mar 21 23:14:33 2001
@@ -1,29 +1,24 @@
/*
* Cache flushing routines.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
#include <asm/page.h>
- .text
- .psr abi64
- .psr lsb
- .lsb
-
/*
* flush_icache_range(start,end)
* Must flush range from start to end-1 but nothing else (need to
* be careful not to touch addresses that may be unmapped).
*/
GLOBAL_ENTRY(flush_icache_range)
- UNW(.prologue)
+ .prologue
alloc r2=ar.pfs,2,0,0,0
sub r8=in1,in0,1
;;
shr.u r8=r8,5 // we flush 32 bytes per iteration
- UNW(.save ar.lc, r3)
+ .save ar.lc, r3
mov r3=ar.lc // save ar.lc
;;
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/idiv64.S linux-2.4.2-lia/arch/ia64/lib/idiv64.S
--- linux-davidm/arch/ia64/lib/idiv64.S Wed Mar 21 23:46:39 2001
+++ linux-2.4.2-lia/arch/ia64/lib/idiv64.S Wed Mar 21 23:14:47 2001
@@ -37,23 +37,23 @@
#define NAME PASTE(PASTE(__,SGN),PASTE(OP,di3))
GLOBAL_ENTRY(NAME)
- UNW(.prologue)
+ .prologue
.regstk 2,0,0,0
// Transfer inputs to FP registers.
setf.sig f8 = in0
setf.sig f9 = in1
;;
- UNW(.fframe 16)
- UNW(.save.f 0x20)
+ .fframe 16
+ .save.f 0x20
stf.spill [sp] = f17,-16
// Convert the inputs to FP, to avoid FP software-assist faults.
INT_TO_FP(f8, f8)
;;
- UNW(.save.f 0x10)
+ .save.f 0x10
stf.spill [sp] = f16
- UNW(.body)
+ .body
INT_TO_FP(f9, f9)
;;
frcpa.s1 f17, p6 = f8, f9 // y0 = frcpa(b)
@@ -79,7 +79,7 @@
#endif
(p6) fma.s1 f17 = f7, f6, f16 // q3 = r*y2 + q2
;;
- UNW(.restore sp)
+ .restore sp
ldf.fill f16 = [sp], 16
FP_TO_INT(f17, f17) // q = trunc(q3)
;;
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/memcpy.S linux-2.4.2-lia/arch/ia64/lib/memcpy.S
--- linux-davidm/arch/ia64/lib/memcpy.S Thu Jan 4 22:40:10 2001
+++ linux-2.4.2-lia/arch/ia64/lib/memcpy.S Wed Mar 21 23:14:56 2001
@@ -68,19 +68,19 @@
* the more general copy routine handling arbitrary
* sizes/alignment etc.
*/
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,3,Nrot,0,Nrot
- UNW(.save ar.lc, saved_lc)
+ .save ar.lc, saved_lc
mov saved_lc=ar.lc
or t0=in0,in1
;;
or t0=t0,in2
- UNW(.save pr, saved_pr)
+ .save pr, saved_pr
mov saved_pr=pr
- UNW(.body)
+ .body
cmp.eq p6,p0=in2,r0 // zero length?
mov retval=in0 // return dst
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/memset.S linux-2.4.2-lia/arch/ia64/lib/memset.S
--- linux-davidm/arch/ia64/lib/memset.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/memset.S Wed Mar 21 23:15:05 2001
@@ -4,13 +4,12 @@
*
* Return: none
*
- *
* Inputs:
* in0: address of buffer
* in1: byte value to use for storing
* in2: length of the buffer
*
- * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
*/
@@ -31,20 +30,16 @@
#define saved_lc r20
#define tmp r21
- .text
- .psr abi64
- .psr lsb
-
GLOBAL_ENTRY(memset)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,3,0,0,0 // cnt is sink here
cmp.eq p8,p0=r0,len // check for zero length
- UNW(.save ar.lc, saved_lc)
+ .save ar.lc, saved_lc
mov saved_lc=ar.lc // preserve ar.lc (slow)
;;
- UNW(.body)
+ .body
adds tmp=-1,len // br.ctop is repeat/until
tbit.nz p6,p0=buf,0 // odd alignment
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strlen.S linux-2.4.2-lia/arch/ia64/lib/strlen.S
--- linux-davidm/arch/ia64/lib/strlen.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/strlen.S Wed Mar 21 23:15:15 2001
@@ -10,7 +10,7 @@
* ret0 the number of characters in the string (0 if empty string)
* does not count the \0
*
- * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
*
* 09/24/99 S.Eranian add speculation recovery code
@@ -78,15 +78,9 @@
#define val1 r22
#define val2 r23
-
- .text
- .psr abi64
- .psr lsb
- .lsb
-
GLOBAL_ENTRY(strlen)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,11,0,0,8 // rotating must be multiple of 8
.rotr v[2], w[2] // declares our 4 aliases
@@ -94,11 +88,11 @@
extr.u tmp=in0,0,3 // tmp=least significant 3 bits
mov orig=in0 // keep trackof initial byte address
dep src=0,in0,0,3 // src=8byte-aligned in0 address
- UNW(.save pr, saved_pr)
+ .save pr, saved_pr
mov saved_pr=pr // preserve predicates (rotation)
;;
- UNW(.body)
+ .body
ld8 v[1]=[src],8 // must not speculate: can fail here
shl tmp=tmp,3 // multiply by 8bits/byte
@@ -132,11 +126,7 @@
// - there must be a better way of doing the test
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
-#ifdef notyet
tnat.nz p6,p7=val1 // test NaT on val1
-#else
- tnat.z p7,p6=val1 // test NaT on val1
-#endif
(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
;;
//
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strlen_user.S linux-2.4.2-lia/arch/ia64/lib/strlen_user.S
--- linux-davidm/arch/ia64/lib/strlen_user.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/strlen_user.S Wed Mar 21 23:15:28 2001
@@ -7,8 +7,8 @@
* Outputs:
* ret0 0 in case of fault, strlen(buffer)+1 otherwise
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
*
* 01/19/99 S.Eranian heavily enhanced version (see details below)
@@ -68,15 +68,19 @@
//
// - Clearly performance tuning is required.
//
-//
-//
+ .section "__ex_table", "a" // declare section & section attributes
+ .previous
+
+#if __GNUC__ >= 3
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ [99:] x
+#else
#define EX(y,x...) \
- .section __ex_table,"a"; \
- data4 @gprel(99f); \
- data4 y-99f; \
- .previous; \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
99: x
+#endif
#define saved_pfs r11
#define tmp r10
@@ -89,15 +93,9 @@
#define val1 r22
#define val2 r23
-
- .text
- .psr abi64
- .psr lsb
- .lsb
-
GLOBAL_ENTRY(__strlen_user)
- UNW(.prologue)
- UNW(.save ar.pfs, saved_pfs)
+ .prologue
+ .save ar.pfs, saved_pfs
alloc saved_pfs=ar.pfs,11,0,0,8
.rotr v[2], w[2] // declares our 4 aliases
@@ -105,7 +103,7 @@
extr.u tmp=in0,0,3 // tmp=least significant 3 bits
mov orig=in0 // keep trackof initial byte address
dep src=0,in0,0,3 // src=8byte-aligned in0 address
- UNW(.save pr, saved_pr)
+ .save pr, saved_pr
mov saved_pr=pr // preserve predicates (rotation)
;;
@@ -144,11 +142,7 @@
// - there must be a better way of doing the test
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
-#ifdef notyet
tnat.nz p6,p7=val1 // test NaT on val1
-#else
- tnat.z p7,p6=val1 // test NaT on val1
-#endif
(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
;;
//
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strncpy_from_user.S linux-2.4.2-lia/arch/ia64/lib/strncpy_from_user.S
--- linux-davidm/arch/ia64/lib/strncpy_from_user.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/strncpy_from_user.S Wed Mar 21 23:15:42 2001
@@ -9,8 +9,8 @@
* Outputs:
* r8: -EFAULT in case of fault or number of bytes copied if no fault
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
* 00/03/06 D. Mosberger Fixed to return proper return value (bug found by
* by Andreas Schwab <schwab@suse.de>).
@@ -18,17 +18,18 @@
#include <asm/asmmacro.h>
-#define EX(x...) \
-99: x; \
- .section __ex_table,"a"; \
- data4 @gprel(99b); \
- data4 .Lexit-99b; \
+ .section "__ex_table", "a" // declare section & section attributes
.previous
- .text
- .psr abi64
- .psr lsb
- .lsb
+#if __GNUC__ >= 3
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ [99:] x
+#else
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ 99: x
+#endif
GLOBAL_ENTRY(__strncpy_from_user)
alloc r2=ar.pfs,3,0,0,0
@@ -41,15 +42,23 @@
// XXX braindead copy loop---this needs to be optimized
.Loop1:
- EX(ld1 r8=[in1],1;; st1 [in0]=r8,1; cmp.ne p6,p7=r8,r0)
+ EX(.Lexit, ld1 r8=[in1],1)
+ ;;
+ EX(.Lexit, st1 [in0]=r8,1)
+ cmp.ne p6,p7=r8,r0
;;
(p6) cmp.ne.unc p8,p0=in1,r10
(p8) br.cond.dpnt.few .Loop1
;;
(p6) mov r8=in2 // buffer filled up---return buffer length
(p7) sub r8=in1,r9,1 // return string length (excluding NUL character)
+#if __GNUC__ >= 3
+[.Lexit:]
+ br.ret.sptk.few rp
+#else
br.ret.sptk.few rp
.Lexit:
br.ret.sptk.few rp
+#endif
END(__strncpy_from_user)
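Rewritten this way, the load and the store each get their own EX() fixup, while the result contract is unchanged: -EFAULT on fault, the full buffer length if no NUL was found within count bytes, otherwise the string length excluding the NUL. A user-space model of just those return rules (plain pointers stand in for user addresses, a NULL source simulates a fault, -14 for -EFAULT):

```c
#include <assert.h>
#include <string.h>

/* Models __strncpy_from_user()'s return-value contract only; a real
 * fault is simulated here by a NULL source, not an exception fixup. */
static long strncpy_from_user_model(char *dst, const char *src, long count)
{
	if (src == NULL)
		return -14;                /* -EFAULT via the EX() fixup */
	for (long i = 0; i < count; i++) {
		dst[i] = src[i];
		if (src[i] == '\0')
			return i;          /* length excluding the NUL */
	}
	return count;                      /* buffer filled, no NUL seen */
}
```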
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strnlen_user.S linux-2.4.2-lia/arch/ia64/lib/strnlen_user.S
--- linux-davidm/arch/ia64/lib/strnlen_user.S Thu Jun 22 07:09:44 2000
+++ linux-2.4.2-lia/arch/ia64/lib/strnlen_user.S Wed Mar 21 23:16:29 2001
@@ -9,40 +9,41 @@
* Outputs:
* r8: 0 in case of fault, strlen(buffer)+1 otherwise
*
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
+ .section "__ex_table", "a" // declare section & section attributes
+ .previous
+
/* If a fault occurs, r8 gets set to -EFAULT and r9 gets cleared. */
-#define EX(x...) \
- .section __ex_table,"a"; \
- data4 @gprel(99f); \
- data4 (.Lexit-99f)|1; \
- .previous \
-99: x;
-
- .text
- .psr abi64
- .psr lsb
- .lsb
+#if __GNUC__ >= 3
+# define EXC(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
+ [99:] x
+#else
+# define EXC(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
+ 99: x
+#endif
GLOBAL_ENTRY(__strnlen_user)
- UNW(.prologue)
+ .prologue
alloc r2=ar.pfs,2,0,0,0
- UNW(.save ar.lc, r16)
+ .save ar.lc, r16
mov r16=ar.lc // preserve ar.lc
- UNW(.body)
+ .body
add r3=-1,in1
;;
mov ar.lc=r3
mov r9=0
-
+ ;;
// XXX braindead strlen loop---this needs to be optimized
.Loop1:
- EX(ld1 r8=[in0],1)
+ EXC(.Lexit, ld1 r8=[in0],1)
add r9=1,r9
;;
cmp.eq p6,p0=r8,r0
diff -urN --ignore-all-space linux-davidm/arch/ia64/mm/extable.c linux-2.4.2-lia/arch/ia64/mm/extable.c
--- linux-davidm/arch/ia64/mm/extable.c Sun Feb 6 18:42:40 2000
+++ linux-2.4.2-lia/arch/ia64/mm/extable.c Wed Mar 21 23:16:48 2001
@@ -1,8 +1,8 @@
/*
* Kernel exception handling table support. Derived from arch/alpha/mm/extable.c.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
@@ -43,14 +43,22 @@
return 0;
}
-register unsigned long gp __asm__("gp");
+#ifndef CONFIG_MODULE
+register unsigned long main_gp __asm__("gp");
+#endif
-const struct exception_table_entry *
+struct exception_fixup
search_exception_table (unsigned long addr)
{
+ const struct exception_table_entry *entry;
+ struct exception_fixup fix = { 0 };
+
#ifndef CONFIG_MODULE
/* There is only the kernel to search. */
- return search_one_table(__start___ex_table, __stop___ex_table - 1, addr - gp);
+ entry = search_one_table(__start___ex_table, __stop___ex_table - 1, addr - main_gp);
+ if (entry)
+ fix.cont = entry->cont + main_gp;
+ return fix;
#else
struct exception_table_entry *ret;
/* The kernel is the last "module" -- no need to treat it special. */
@@ -59,10 +67,22 @@
for (mp = module_list; mp ; mp = mp->next) {
if (!mp->ex_table_start)
continue;
- ret = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, addr - mp->gp);
- if (ret)
- return ret;
+ entry = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, addr - mp->gp);
+ if (entry) {
+ fix.cont = entry->cont + mp->gp;
+ return fix;
+ }
}
- return 0;
#endif
+ return fix;
+}
+
+void
+handle_exception (struct pt_regs *regs, struct exception_fixup fix)
+{
+ regs->r8 = -EFAULT;
+ if (fix.cont & 4)
+ regs->r9 = 0;
+ regs->cr_iip = (long) fix.cont & ~0xf;
+ ia64_psr(regs)->ri = fix.cont & 0x3; /* set continuation slot number */
}
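Since each table entry now stores two @gprel() 32-bit offsets, the search rebases the faulting IP into gp-relative form before comparing, and rebases the matched continuation back to an absolute address — per table, since the kernel and each module have their own gp. A hypothetical linear-search version of that rebasing (the kernel's search_one_table() is a binary search, and the field names below are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

struct ex_entry { uint32_t addr; uint32_t cont; };  /* both @gprel() */
struct ex_fixup { unsigned long cont; };            /* 0 => no match */

static struct ex_fixup search_table(const struct ex_entry *tab, size_t n,
                                    unsigned long ip, unsigned long gp)
{
	struct ex_fixup fix = { 0 };
	uint32_t rel = (uint32_t)(ip - gp);   /* rebase IP to gp-relative */
	for (size_t i = 0; i < n; i++)
		if (tab[i].addr == rel) {
			fix.cont = tab[i].cont + gp;  /* back to absolute */
			break;
		}
	return fix;
}
```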
diff -urN --ignore-all-space linux-davidm/arch/ia64/mm/fault.c linux-2.4.2-lia/arch/ia64/mm/fault.c
--- linux-davidm/arch/ia64/mm/fault.c Thu Jan 4 22:40:10 2001
+++ linux-2.4.2-lia/arch/ia64/mm/fault.c Wed Mar 21 23:17:05 2001
@@ -47,7 +47,7 @@
ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
{
struct mm_struct *mm = current->mm;
- const struct exception_table_entry *fix;
+ struct exception_fixup fix;
struct vm_area_struct *vma, *prev_vma;
struct siginfo si;
int signal = SIGSEGV;
@@ -163,14 +163,13 @@
return;
}
+#ifdef GAS_HAS_LOCAL_TAGS
+ fix = search_exception_table(regs->cr_iip + ia64_psr(regs)->ri);
+#else
fix = search_exception_table(regs->cr_iip);
- if (fix) {
- regs->r8 = -EFAULT;
- if (fix->skip & 1) {
- regs->r9 = 0;
- }
- regs->cr_iip += ((long) fix->skip) & ~15;
- regs->cr_ipsr &= ~IA64_PSR_RI; /* clear exception slot number */
+#endif
+ if (fix.cont) {
+ handle_exception(regs, fix);
return;
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/tools/print_offsets.c linux-2.4.2-lia/arch/ia64/tools/print_offsets.c
--- linux-davidm/arch/ia64/tools/print_offsets.c Wed Mar 21 23:46:40 2001
+++ linux-2.4.2-lia/arch/ia64/tools/print_offsets.c Wed Mar 21 23:17:18 2001
@@ -46,9 +46,7 @@
{ "IA64_SWITCH_STACK_SIZE", sizeof (struct switch_stack) },
{ "IA64_SIGINFO_SIZE", sizeof (struct siginfo) },
{ "IA64_CPU_SIZE", sizeof (struct cpuinfo_ia64) },
-#ifdef CONFIG_IA64_NEW_UNWIND
{ "UNW_FRAME_INFO_SIZE", sizeof (struct unw_frame_info) },
-#endif
{ "", 0 }, /* spacer */
{ "IA64_TASK_PTRACE_OFFSET", offsetof (struct task_struct, ptrace) },
{ "IA64_TASK_SIGPENDING_OFFSET", offsetof (struct task_struct, sigpending) },
diff -urN --ignore-all-space linux-davidm/drivers/acpi/common/cmxface.c linux-2.4.2-lia/drivers/acpi/common/cmxface.c
--- linux-davidm/drivers/acpi/common/cmxface.c Wed Feb 28 12:57:48 2001
+++ linux-2.4.2-lia/drivers/acpi/common/cmxface.c Wed Mar 21 23:17:31 2001
@@ -158,7 +158,12 @@
if (!(flags & ACPI_NO_ACPI_ENABLE)) {
status = acpi_enable ();
if (ACPI_FAILURE (status)) {
+#define BIGSUR_WA
+#ifdef BIGSUR_WA
+ printk("acpi_enable fail:ignored\n");
+#else
return (status);
+#endif
}
}
diff -urN --ignore-all-space linux-davidm/drivers/acpi/resources/rscalc.c linux-2.4.2-lia/drivers/acpi/resources/rscalc.c
--- linux-davidm/drivers/acpi/resources/rscalc.c Wed Feb 28 12:57:48 2001
+++ linux-2.4.2-lia/drivers/acpi/resources/rscalc.c Wed Mar 21 23:17:41 2001
@@ -866,7 +866,7 @@
}
- *buffer_size_needed = temp_size_needed;
+ *buffer_size_needed = temp_size_needed + sizeof(PCI_ROUTING_TABLE);
return (AE_OK);
}
diff -urN --ignore-all-space linux-davidm/drivers/char/Config.in linux-2.4.2-lia/drivers/char/Config.in
--- linux-davidm/drivers/char/Config.in Wed Feb 28 12:57:49 2001
+++ linux-2.4.2-lia/drivers/char/Config.in Wed Mar 21 23:17:50 2001
@@ -179,6 +179,12 @@
tristate '/dev/agpgart (AGP Support)' CONFIG_AGP $CONFIG_DRM_AGP
if [ "$CONFIG_AGP" != "n" ]; then
bool ' Intel 440LX/BX/GX and I815/I840/I850 support' CONFIG_AGP_INTEL
+ bool ' Intel 460GX support (EXPERIMENTAL)' CONFIG_AGP_I460
+ if [ "$CONFIG_AGP_I460" != "n" ]; then
+ define_bool CONFIG_AGP_PTE_FIXUPS y
+ bool ' Enable Full AGP RQ (Requires BigSur BIOS 99 or Newer)' CONFIG_AGP_I460_FULLRQ
+ bool ' AGPGART PTE Fixups (Required and In-Kernel)' CONFIG_AGP_PTE_FIXUPS
+ fi
bool ' Intel I810/I815 (on-board) support' CONFIG_AGP_I810
bool ' VIA chipset support' CONFIG_AGP_VIA
bool ' AMD Irongate support' CONFIG_AGP_AMD
diff -urN --ignore-all-space linux-davidm/drivers/char/Makefile linux-2.4.2-lia/drivers/char/Makefile
--- linux-davidm/drivers/char/Makefile Wed Mar 21 23:46:40 2001
+++ linux-2.4.2-lia/drivers/char/Makefile Wed Mar 21 23:18:12 2001
@@ -160,6 +160,9 @@
subdir-$(CONFIG_DRM) += drm
subdir-$(CONFIG_PCMCIA) += pcmcia
subdir-$(CONFIG_AGP) += agp
+ifeq ($(CONFIG_AGP), m)
+ subdir-y += agp
+endif
ifeq ($(CONFIG_FTAPE),y)
obj-y += ftape/ftape.o
diff -urN --ignore-all-space linux-davidm/drivers/char/agp/Makefile linux-2.4.2-lia/drivers/char/agp/Makefile
--- linux-davidm/drivers/char/agp/Makefile Fri Dec 29 14:07:21 2000
+++ linux-2.4.2-lia/drivers/char/agp/Makefile Wed Mar 21 23:18:51 2001
@@ -11,6 +11,7 @@
agpgart-objs := agpgart_fe.o agpgart_be.o
obj-$(CONFIG_AGP) += agpgart.o
+obj-y += vmmap.o
include $(TOPDIR)/Rules.make
diff -urN --ignore-all-space linux-davidm/drivers/char/agp/agp.h linux-2.4.2-lia/drivers/char/agp/agp.h
--- linux-davidm/drivers/char/agp/agp.h Wed Feb 28 12:57:50 2001
+++ linux-2.4.2-lia/drivers/char/agp/agp.h Wed Mar 21 23:19:01 2001
@@ -27,6 +27,8 @@
#ifndef _AGP_BACKEND_PRIV_H
#define _AGP_BACKEND_PRIV_H 1
+#include <linux/config.h>
+
enum aper_size_type {
U8_APER_SIZE,
U16_APER_SIZE,
@@ -84,8 +86,8 @@
void *dev_private_data;
struct pci_dev *dev;
gatt_mask *masks;
- unsigned long *gatt_table;
- unsigned long *gatt_table_real;
+ u32 *gatt_table;
+ u32 *gatt_table_real;
unsigned long scratch_page;
unsigned long gart_bus_addr;
unsigned long gatt_bus_addr;
@@ -110,6 +112,7 @@
void (*cleanup) (void);
void (*tlb_flush) (agp_memory *);
unsigned long (*mask_memory) (unsigned long, int);
+ unsigned long (*unmask_memory) (unsigned long);
void (*cache_flush) (void);
int (*create_gatt_table) (void);
int (*free_gatt_table) (void);
@@ -121,6 +124,11 @@
void (*agp_destroy_page) (unsigned long);
};
+#ifdef CONFIG_AGP_PTE_FIXUPS
+extern void *agp_vmmap(unsigned long offset, unsigned long size);
+extern void agp_vmunmap(void *handle);
+#endif
+
#define OUTREG32(mmap, addr, val) __raw_writel((val), (mmap)+(addr))
#define OUTREG16(mmap, addr, val) __raw_writew((val), (mmap)+(addr))
#define OUTREG8 (mmap, addr, val) __raw_writeb((val), (mmap)+(addr))
@@ -146,6 +154,10 @@
#define min(a,b) (((a)<(b))?(a):(b))
#endif
+#ifndef max
+#define max(a,b) (((a)>(b))?(a):(b))
+#endif
+
#define AGPGART_MODULE_NAME "agpgart"
#define PFX AGPGART_MODULE_NAME ": "
@@ -196,6 +208,9 @@
#ifndef PCI_DEVICE_ID_INTEL_82443GX_1
#define PCI_DEVICE_ID_INTEL_82443GX_1 0x71a1
#endif
+#ifndef PCI_DEVICE_ID_INTEL_460GX
+#define PCI_DEVICE_ID_INTEL_460GX 0x84ea
+#endif
#ifndef PCI_DEVICE_ID_AMD_IRONGATE_0
#define PCI_DEVICE_ID_AMD_IRONGATE_0 0x7006
#endif
@@ -231,6 +246,15 @@
#define INTEL_AGPCTRL 0xb0
#define INTEL_NBXCFG 0x50
#define INTEL_ERRSTS 0x91
+
+/* Intel 460GX Registers */
+#define INTEL_I460_APBASE 0x10
+#define INTEL_I460_BAPBASE 0x98
+#define INTEL_I460_GXBCTL 0xa0
+#define INTEL_I460_AGPSIZ 0xa2
+#define INTEL_I460_ATTBASE 0xfe200000
+#define INTEL_I460_GATT_VALID (1UL << 24)
+#define INTEL_I460_GATT_COHERENT (1UL << 25)
/* intel i840 registers */
#define INTEL_I840_MCHCFG 0x50
diff -urN --ignore-all-space linux-davidm/drivers/char/agp/agpgart_be.c
linux-2.4.2-lia/drivers/char/agp/agpgart_be.c
--- linux-davidm/drivers/char/agp/agpgart_be.c Wed Feb 28 12:57:50 2001
+++ linux-2.4.2-lia/drivers/char/agp/agpgart_be.c Wed Mar 21 23:19:12 2001
@@ -22,6 +22,7 @@
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
+ * 460GX support by Chris Ahna <christopher.j.ahna@intel.com>
*/
#include <linux/config.h>
#include <linux/version.h>
@@ -42,6 +43,11 @@
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/page.h>
+#ifdef CONFIG_AGP_PTE_FIXUPS
+# include <asm/pgalloc.h>
+# include <asm/pgtable.h>
+# include <asm/smplock.h>
+#endif
#include <linux/agp_backend.h>
#include "agp.h"
@@ -56,9 +62,19 @@
EXPORT_SYMBOL(agp_enable);
EXPORT_SYMBOL(agp_backend_acquire);
EXPORT_SYMBOL(agp_backend_release);
+#ifdef CONFIG_AGP_PTE_FIXUPS
+EXPORT_SYMBOL(agp_add_fixup);
+EXPORT_SYMBOL(agp_remove_fixup);
+#endif
static void flush_cache(void);
+#ifdef CONFIG_AGP_PTE_FIXUPS
+static void agp_fixup_map_list(unsigned long pg_start, unsigned long pg_count);
+static int agp_vma_fixup(struct vm_area_struct *vma, unsigned long start,
+ size_t size, unsigned long offset);
+#endif
+
static struct agp_bridge_data agp_bridge;
static int agp_try_unsupported __initdata = 0;
@@ -205,7 +221,8 @@
}
if (curr->page_count != 0) {
for (i = 0; i < curr->page_count; i++) {
- curr->memory[i] &= ~(0x00000fff);
+ curr->memory[i] = agp_bridge.unmask_memory(
+ curr->memory[i]);
agp_bridge.agp_destroy_page((unsigned long)
phys_to_virt(curr->memory[i]));
}
@@ -348,6 +365,11 @@
}
curr->is_bound = TRUE;
curr->pg_start = pg_start;
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ agp_fixup_map_list(curr->pg_start, curr->page_count);
+#endif
+
return 0;
}
@@ -366,6 +388,11 @@
if (ret_val != 0) {
return ret_val;
}
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ agp_fixup_map_list(curr->pg_start, curr->page_count);
+#endif
+
curr->is_bound = FALSE;
curr->pg_start = 0;
return 0;
@@ -431,6 +458,21 @@
min((mode & 0xff000000),
min((command & 0xff000000),
(scratch & 0xff000000))));
+ /*
+ * With BigSur BIOSes older than build 99, chipset
+ * misconfiguration limits safe RQ depth to 2.
+ * Only use the expanded queue if the user says so
+ * in menuconfig. I don't think this warrants a
+ * separate agp_enable function for 460 as it is a
+ * temporary and minor change.
+ */
+#ifndef CONFIG_AGP_I460_FULLRQ
+ if(agp_bridge.type == INTEL_460GX) {
+ command = ((command & ~0xff000000) |
+ ((0x02 << 24) & 0xff000000));
+ }
+#endif
/* disable SBA if it's not supported */
if (!((command & 0x00000200) &&
@@ -603,7 +645,7 @@
for (page = virt_to_page(table); page <= virt_to_page(table_end);
page++)
set_bit(PG_reserved, &page->flags);
- agp_bridge.gatt_table_real = (unsigned long *) table;
+ agp_bridge.gatt_table_real = (u32 *) table;
CACHE_FLUSH();
agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
(PAGE_SIZE * (1 << page_order)));
@@ -809,6 +851,11 @@
agp_bridge.agp_enable(mode);
}
+static unsigned long agp_generic_unmask_memory(unsigned long addr)
+{
+ return addr & ~(0x00000fff);
+}
+
/* End - Generic Agp routines */
#ifdef CONFIG_AGP_I810
@@ -1073,6 +1120,7 @@
agp_bridge.cleanup = intel_i810_cleanup;
agp_bridge.tlb_flush = intel_i810_tlbflush;
agp_bridge.mask_memory = intel_i810_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = intel_i810_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1089,6 +1137,991 @@
#endif /* CONFIG_AGP_I810 */
+#ifdef CONFIG_AGP_I460
+
+/* BIOS configures the chipset so that one of two apbase registers is used */
+static u8 intel_i460_dynamic_apbase = 0x10;
+
+/* 460 supports multiple GART page sizes, so GART pageshift is dynamic */
+static u8 intel_i460_pageshift = 12;
+
+/* To speed mmap fixups, track the last entry used in the GATT. */
+static u32 intel_i460_tog = 0;
+
+/* Keep track of which is larger, chipset or kernel page size. */
+static u32 intel_i460_cpk = 1;
+
+/* Structure for tracking partial use of 4MB GART pages */
+static u32 **i460_pg_detail = NULL;
+static u32 *i460_pg_count = NULL;
+
+#define I460_CPAGES_PER_KPAGE (PAGE_SIZE >> intel_i460_pageshift)
+#define I460_KPAGES_PER_CPAGE ((1 << intel_i460_pageshift) >> PAGE_SHIFT)
+
+#define I460_SRAM_IO_DISABLE (1 << 4)
+#define I460_BAPBASE_ENABLE (1 << 3)
+#define I460_AGPSIZ_MASK 0x7
+#define I460_4M_PS (1 << 1)
+
+#define log2(x) ffz(~(x))
+
+static int intel_i460_fetch_size(void)
+{
+ int i;
+ u8 temp;
+ aper_size_info_8 *values;
+
+ /* Determine the GART page size */
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &temp);
+ intel_i460_pageshift = (temp & I460_4M_PS) ? 22 : 12;
+
+ values = A_SIZE_8(agp_bridge.aperture_sizes);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+
+ /* Exit now if the IO drivers for the GART SRAMS are turned off */
+ if(temp & I460_SRAM_IO_DISABLE) {
+ printk("[agpgart] GART SRAMS disabled on 460GX chipset\n");
+ printk("[agpgart] AGPGART operation not possible\n");
+ return 0;
+ }
+
+ /* Make sure we don't try to create a 2^23 entry GATT */
+ if((intel_i460_pageshift == 12) && ((temp & I460_AGPSIZ_MASK) == 4)) {
+ printk("[agpgart] We can't have a 32GB aperture with 4KB"
+ " GART pages\n");
+ return 0;
+ }
+
+ /* Determine the proper APBASE register */
+ if(temp & I460_BAPBASE_ENABLE)
+ intel_i460_dynamic_apbase = INTEL_I460_BAPBASE;
+ else intel_i460_dynamic_apbase = INTEL_I460_APBASE;
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+
+ /*
+ * Dynamically calculate the proper num_entries and
+ * page_order values for the defined aperture sizes. Take
+ * care not to shift off the end of values[i].size.
+ */
+ values[i].num_entries = (values[i].size << 8) >>
+ (intel_i460_pageshift - 12);
+ values[i].page_order = log2((sizeof(u32)*values[i].num_entries)
+ >> PAGE_SHIFT);
+ }
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+ /* Neglect control bits when matching up size_value */
+ if ((temp & I460_AGPSIZ_MASK) == values[i].size_value) {
+ agp_bridge.previous_size =
+ agp_bridge.current_size = (void *) (values + i);
+ agp_bridge.aperture_size_idx = i;
+ return values[i].size;
+ }
+ }
+
+ return 0;
+}
+
+/* There isn't anything to do here since 460 has no GART TLB. */
+static void intel_i460_tlb_flush(agp_memory * mem)
+{
+ return;
+}
+
+/*
+ * This utility function is needed to prevent corruption of the control
+ * bits which are stored along with the aperture size in 460's AGPSIZ
+ * register
+ */
+static void intel_i460_write_agpsiz(u8 size_value)
+{
+ u8 temp;
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ,
+ ((temp & ~I460_AGPSIZ_MASK) | size_value));
+}
+
+static void intel_i460_cleanup(void)
+{
+ aper_size_info_8 *previous_size;
+
+ previous_size = A_SIZE_8(agp_bridge.previous_size);
+ intel_i460_write_agpsiz(previous_size->size_value);
+
+ if(intel_i460_cpk == 0)
+ {
+ vfree(i460_pg_detail);
+ vfree(i460_pg_count);
+ }
+}
+
+
+/* Control bits for Out-Of-GART coherency and Burst Write Combining */
+#define I460_GXBCTL_OOG (1UL << 0)
+#define I460_GXBCTL_BWC (1UL << 2)
+
+static int intel_i460_configure(void)
+{
+ union {
+ u32 small[2];
+ u64 large;
+ } temp;
+ u8 scratch;
+ int i;
+
+ aper_size_info_8 *current_size;
+
+ temp.large = 0;
+
+ current_size = A_SIZE_8(agp_bridge.current_size);
+ intel_i460_write_agpsiz(current_size->size_value);
+
+ /*
+ * Do the necessary rigmarole to read all eight bytes of APBASE.
+ * This has to be done since the AGP aperture can be above 4GB on
+ * 460 based systems.
+ */
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase,
+ &(temp.small[0]));
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase + 4,
+ &(temp.small[1]));
+
+ /* Clear BAR control bits */
+ agp_bridge.gart_bus_addr = temp.large & ~((1UL << 3) - 1);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &scratch);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL,
+ (scratch & 0x02) | I460_GXBCTL_OOG |
I460_GXBCTL_BWC);
+
+ /*
+ * Initialize partial allocation trackers if a GART page is
+ * bigger than a kernel page.
+ */
+ if(I460_CPAGES_PER_KPAGE >= 1) {
+ intel_i460_cpk = 1;
+ } else {
+ intel_i460_cpk = 0;
+
+ i460_pg_detail = (void *) vmalloc(sizeof(*i460_pg_detail) *
+ current_size->num_entries);
+ i460_pg_count = (void *) vmalloc(sizeof(*i460_pg_count) *
+ current_size->num_entries);
+
+ for (i = 0; i < current_size->num_entries; i++) {
+ i460_pg_count[i] = 0;
+ i460_pg_detail[i] = NULL;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_create_gatt_table(void) {
+
+ char *table;
+ int i;
+ int page_order;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ /*
+ * Load up the fixed address of the GART SRAMS which hold our
+ * GATT table.
+ */
+ table = (char *) __va(INTEL_I460_ATTBASE);
+
+ temp = agp_bridge.current_size;
+ page_order = A_SIZE_8(temp)->page_order;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ agp_bridge.gatt_table_real = (u32 *) table;
+ agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
+ (PAGE_SIZE * (1 << page_order)));
+ agp_bridge.gatt_bus_addr = virt_to_phys(agp_bridge.gatt_table_real);
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ /* Set the Top-Of-GATT to 0 */
+ intel_i460_tog = 0;
+
+ return 0;
+}
+
+static int intel_i460_free_gatt_table(void)
+{
+ int num_entries;
+ int i;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ iounmap(agp_bridge.gatt_table);
+ intel_i460_tog = 0;
+
+ return 0;
+}
+
+/*
+ * When the Top-Of-GATT is cleared, this routine is called to roll tog back
+ * to point to the last valid GATT entry. This code is needed to traverse
+ * unused portions of the GATT, since we don't know about such gaps in
+ * intel_i460_remove_memory.
+ */
+static void intel_i460_rollback_tog(void) {
+
+ while(intel_i460_tog > 0) {
+
+ if(agp_bridge.gatt_table[intel_i460_tog - 1] &
+ INTEL_I460_GATT_VALID)
+ return;
+
+ intel_i460_tog--;
+ }
+}
+
+/* These functions are called when PAGE_SIZE exceeds the GART page size */
+
+static int intel_i460_insert_memory_cpk(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, j, k, num_entries;
+ void *temp;
+ unsigned int hold;
+ unsigned int read_back;
+
+ /*
+ * The rest of the kernel will compute page offsets in terms of
+ * PAGE_SIZE.
+ */
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ if((pg_start + I460_CPAGES_PER_KPAGE * mem->page_count) >
+ num_entries) {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ j = pg_start;
+ while (j < (pg_start + I460_CPAGES_PER_KPAGE * mem->page_count)) {
+ if (!PGE_EMPTY(agp_bridge.gatt_table[j])) {
+ return -EBUSY;
+ }
+ j++;
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for (i = 0, j = pg_start; i < mem->page_count; i++) {
+
+ hold = (unsigned int) (mem->memory[i]);
+
+ for (k = 0; k < I460_CPAGES_PER_KPAGE; k++, j++, hold++)
+ agp_bridge.gatt_table[j] = hold;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[j - 1];
+
+ if(j > intel_i460_tog)
+ intel_i460_tog = j;
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_cpk(agp_memory * mem, off_t pg_start,
+ int type)
+{
+ int i;
+ unsigned int read_back;
+
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ for (i = pg_start; i < (pg_start + I460_CPAGES_PER_KPAGE *
+ mem->page_count); i++)
+ agp_bridge.gatt_table[i] = 0;
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ if(i == intel_i460_tog)
+ intel_i460_rollback_tog();
+
+ return 0;
+}
+
+/*
+ * These functions are called when the GART page size exceeds PAGE_SIZE.
+ *
+ * This situation is interesting since AGP memory allocations that are
+ * smaller than a single GART page are possible. The structures
+ * i460_pg_count and i460_pg_detail track partial allocation of the
+ * large GART pages to work around this issue.
+ *
+ * i460_pg_count[pg_num] tracks the number of kernel pages in use within
+ * GART page pg_num. i460_pg_detail[pg_num] is an array containing a
+ * pseudo-GART entry for each of the aforementioned kernel pages. The
+ * whole of i460_pg_detail is equivalent to a giant GATT with page size
+ * equal to that of the kernel.
+ */
+
+static void *intel_i460_alloc_large_page(int pg_num)
+{
+ int i;
+ void *bp, *bp_end;
+ struct page *page;
+
+ i460_pg_detail[pg_num] = (void *) vmalloc(sizeof(u32) *
+ I460_KPAGES_PER_CPAGE);
+
+ if(i460_pg_detail[pg_num] == NULL) {
+ printk("[agpgart] Out of memory, we're in trouble...\n");
+ return NULL;
+ }
+
+ for(i = 0; i < I460_KPAGES_PER_CPAGE; i++)
+ i460_pg_detail[pg_num][i] = 0;
+
+ bp = (void *) __get_free_pages(GFP_KERNEL,
+ intel_i460_pageshift - PAGE_SHIFT);
+ bp_end = bp + ((PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT))) - 1);
+
+ for (page = virt_to_page(bp); page <= virt_to_page(bp_end); page++)
+ {
+ atomic_inc(&page->count);
+ set_bit(PG_locked, &page->flags);
+ atomic_inc(&agp_bridge.current_memory_agp);
+ }
+
+ return bp;
+}
+
+static void intel_i460_free_large_page(int pg_num, unsigned long addr)
+{
+ struct page *page;
+ void *bp, *bp_end;
+
+ bp = (void *) __va(addr);
+ bp_end = bp + (PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT)));
+
+ vfree(i460_pg_detail[pg_num]);
+ i460_pg_detail[pg_num] = NULL;
+
+ for (page = virt_to_page(bp); page < virt_to_page(bp_end); page++)
+ {
+ atomic_dec(&page->count);
+ clear_bit(PG_locked, &page->flags);
+ wake_up(&page->wait);
+ atomic_dec(&agp_bridge.current_memory_agp);
+ }
+
+ free_pages((unsigned long) bp, intel_i460_pageshift - PAGE_SHIFT);
+}
+
+static int intel_i460_insert_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ if(end_pg > num_entries)
+ {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ /* Check if the requested region of the aperture is free */
+ for(pg = start_pg; pg <= end_pg; pg++)
+ {
+ /* Allocate new GART pages if necessary */
+ if(i460_pg_detail[pg] == NULL) {
+ agp_bridge.gatt_table[pg] = agp_bridge.mask_memory(
+ (unsigned long) intel_i460_alloc_large_page(pg), 0);
+ read_back = agp_bridge.gatt_table[pg];
+ }
+
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++)
+ {
+ if(i460_pg_detail[pg][idx] != 0)
+ return -EBUSY;
+ }
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for(pg = start_pg, i = 0; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ i460_pg_detail[pg][idx] = agp_bridge.gatt_table[pg] +
+ ((idx * PAGE_SIZE) >> 12);
+ i460_pg_count[pg]++;
+
+ /* Finally we fill in mem->memory... */
+ mem->memory[i] = ((unsigned long) (0xffffff &
+ i460_pg_detail[pg][idx])) << 12;
+ }
+ }
+
+ if(end_pg > intel_i460_tog)
+ intel_i460_tog = end_pg + 1;
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries, addr;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ for(i = 0, pg = start_pg; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ mem->memory[i] = 0;
+ i460_pg_detail[pg][idx] = 0;
+ i460_pg_count[pg]--;
+ }
+
+ /* Free GART pages if they are unused */
+ if(i460_pg_count[pg] == 0) {
+ addr = (0xffffff & agp_bridge.gatt_table[pg]) << 12;
+
+ agp_bridge.gatt_table[pg] = 0;
+ read_back = agp_bridge.gatt_table[pg];
+
+ intel_i460_free_large_page(pg, addr);
+ }
+ }
+
+ if(end_pg == intel_i460_tog - 1)
+ intel_i460_rollback_tog();
+
+ return 0;
+}
+
+/* Dummy routines to call the appropriate {cpk,kpc} function */
+
+static int intel_i460_insert_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_insert_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_insert_memory_kpc(mem, pg_start, type);
+}
+
+static int intel_i460_remove_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_remove_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_remove_memory_kpc(mem, pg_start, type);
+}
+
+/*
+ * This block contains support routines which attempt to circumvent an
+ * extremely "interesting" feature of 460. On (I think) all other AGP
+ * chipsets, accesses from the processor into the AGP aperture are
+ * dutifully translated by the AGP bridge. On 460, only accesses from
+ * the graphics adapter itself are translated. Accesses into the
+ * aperture from the processor result in master aborts and return all
+ * 1's.
+ *
+ * This problem is of course not insurmountable. The GART just provides
+ * a convenient service; we can always access the physical RAM addresses
+ * that correspond to the different parts of the AGP aperture. Doing
+ * this via brute force would require graphics applications to track two
+ * different pointers for all memory in the aperture. The first would
+ * be the address in the aperture, for use by the graphics adapter. The
+ * second would be the corresponding RAM address, for use by processes
+ * running on the host processor. Making this change would require a
+ * pretty thorough rework of the AGP savvy video drivers that exist
+ * today.
+ *
+ * An alternate, more elegant solution is found when one considers using
+ * the processor's virtual addressing mechanisms to emulate GART
+ * translation. Using this technique, applications could be presented
+ * with a block of virtual memory which represents the AGP aperture.
+ * Accesses to these virtual addresses could then be routed to the
+ * appropriate physical memory through the processor's existing address
+ * translation mechanisms (i.e. page tables, TLBs, etc...).
+ *
+ * The following routines try to do what is described above. There
+ * could be some things wrong with this method (I'm not a kernel mm
+ * expert), so suggestions are very welcome.
+ */
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+
+/*
+ * agp_fixup_entry_t defines the list we'll use to track memory mappings
+ * in the AGP aperture. We have to do this because maps overlapping with
+ * the aperture must be kept in exact agreement with the current state of
+ * the GART table. Whenever this driver changes the GART, we can walk this
+ * list and keep all of the maps up to date. Currently, all maps are
+ * registered through DRM.
+ *
+ * User space maps will be tracked by their corresponding vm_area_struct,
+ * kernel space vmmap's (see vmmap.c) will be tracked by their virtual
+ * address (only one pgd, pgd_offset_k).
+ */
+typedef struct agp_fixup_entry {
+ unsigned long offset;
+ size_t size;
+ unsigned long handle;
+ struct vm_area_struct *vma;
+ struct agp_fixup_entry *next;
+} agp_fixup_entry_t;
+
+static agp_fixup_entry_t *agp_fixup_list = NULL;
+static spinlock_t agp_fixup_lock = SPIN_LOCK_UNLOCKED;
+
+#define RID(_x) ((_x) >> 61)
+
+void agp_fixup_map_list(unsigned long pg_offset, unsigned long pg_count)
+{
+ agp_fixup_entry_t *pt;
+ unsigned long offset, size;
+ unsigned long start, end;
+
+ offset = agp_bridge.gart_bus_addr + (pg_offset << PAGE_SHIFT);
+ size = pg_count << PAGE_SHIFT;
+
+ for (pt = agp_fixup_list; pt; pt = pt->next) {
+ /* Calculate the overlap */
+ start = max(offset, pt->offset);
+ end = min(offset + size, pt->offset + pt->size);
+
+ /* Perform the fixup if necessary */
+ if(start < end)
+ agp_vma_fixup(pt->vma, (unsigned long) pt->handle +
+ start - pt->offset, end - start, start);
+ }
+}
+
+void *agp_add_fixup(struct vm_area_struct *vma, unsigned long size,
+ unsigned long offset)
+{
+ agp_fixup_entry_t *entry;
+ void *handle;
+
+ if(!(entry = kmalloc(sizeof(*entry), GFP_KERNEL))) {
+ printk("[agpgart][%s] out of memory!\n", __FUNCTION__);
+ return NULL;
+ }
+ memset(entry, 0, sizeof(*entry));
+
+ if(vma == NULL) {
+ handle = agp_vmmap(offset, size);
+ } else {
+ handle = (void *) vma->vm_start;
+ }
+
+ spin_lock(&agp_fixup_lock);
+ entry->next = agp_fixup_list;
+ entry->offset = offset;
+ entry->size = size;
+ entry->handle = (unsigned long) handle;
+ entry->vma = vma;
+ agp_fixup_list = entry;
+ spin_unlock(&agp_fixup_lock);
+
+ /* Do the initial fixup */
+ agp_vma_fixup(entry->vma, entry->handle, entry->size, entry->offset);
+
+ return handle;
+}
+
+void agp_remove_fixup(struct vm_area_struct *vma, void *handle)
+{
+ agp_fixup_entry_t *pt, *prev;
+
+ if(vma == NULL) {
+ if(RID((unsigned long) handle) == 5)
+ agp_vmunmap(handle);
+ else {
+ printk("[agpgart][%s] need a vma with non-kernel"
+ "maps!!\n", __FUNCTION__);
+ return;
+ }
+ }
+
+ spin_lock(&agp_fixup_lock);
+ for (pt = agp_fixup_list, prev = NULL; pt; prev = pt, pt = pt->next)
+ {
+ if ((vma == NULL && pt->handle == ((unsigned long) handle)) ||
+ (pt->vma == vma)) {
+
+ if (prev) {
+ prev->next = pt->next;
+ } else {
+ agp_fixup_list = pt->next;
+ }
+
+ kfree(pt);
+ break;
+ }
+ }
+ spin_unlock(&agp_fixup_lock);
+}
+
+/*
+ * Look up and return the pte corresponding to addr. Take into account
+ * that addr might be part of a vmmap in the vmalloc area.
+ */
+static pte_t * agp_lookup_pte(struct vm_area_struct *vma, unsigned long addr,
+ int kernel)
+{
+
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ if(kernel) {
+ dir = pgd_offset_k(addr);
+ } else {
+ dir = pgd_offset(vma->vm_mm, addr);
+ }
+
+ pmd = pmd_offset(dir, addr);
+
+ if(pmd) {
+ pte = pte_offset(pmd, addr);
+
+ if(pte)
+ return pte;
+ else
+ return NULL;
+ } else
+ return NULL;
+}
+
+#define I460_GART_PAGE_MASK ((1UL << 24) - 1)
+#define APER_SIZE (((unsigned long)A_SIZE_8(agp_bridge.current_size) \
+ ->size) << 20)
+/*
+ * Return a GART page number (equivalent to an offset in the GATT) for
+ * address _a. Don't call this unless you know _a is in the AGP aperture.
+ */
+#define APGN(_a) (((_a) - agp_bridge.gart_bus_addr) >> \
+ intel_i460_pageshift)
+/*
+ * Return the address' offset within its GART page.
+ */
+#define APGOFF(_a) ((_a) & ((1 << intel_i460_pageshift) - 1))
+
+#define HIGH_ENOUGH(addr) ((addr) >= agp_bridge.gart_bus_addr)
+#define LOW_ENOUGH(addr) ((addr) < agp_bridge.gart_bus_addr + APER_SIZE)
+#define IN_APER(addr) (HIGH_ENOUGH(addr) && LOW_ENOUGH(addr))
+
+/*
+ * Do the page table fixup. This routine walks page by page through the
+ * block of virtual addresses in question and manually alters the
+ * corresponding page table entries so that they point to the appropriate
+ * physical addresses.
+ */
+int agp_vma_fixup(struct vm_area_struct *vma, unsigned long start,
+ size_t size, unsigned long offset)
+{
+
+ unsigned long current_addr;
+ unsigned long old_pa, new_pa;
+ pte_t *pte;
+ int kflag = 0;
+ u32 entry;
+ int num_entries;
+
+ if((start & (~PAGE_MASK)) || (size & (~PAGE_MASK)))
+ return -EINVAL;
+
+ if(!IN_APER(offset))
+ return 0;
+
+ num_entries = A_SIZE_8(agp_bridge.current_size)->num_entries;
+
+ /*
+ * Figure out if start is in the vmalloc area. This method
+ * obviously only works on IA-64.
+ */
+ if(vma == NULL) {
+ if(RID(start) == 5)
+ kflag = 1;
+ else
+ printk("[agpgart][%s] need vma for non-kernel"
+ " maps!!\n", __FUNCTION__);
+ }
+
+ lock_kernel();
+
+ for(current_addr = start;
+ ((current_addr < start + size) &&
+ (APGN(offset) < intel_i460_tog));
+ current_addr += PAGE_SIZE, offset += PAGE_SIZE)
+ {
+ pte = agp_lookup_pte(vma, current_addr, kflag);
+
+ if(!pte)
+ goto finish_fixup;
+
+ /*
+ * Look at the pte and the current GART table to determine
+ * this pte's old target and the current physical page
+ * corresponding to this offset in the aperture.
+ */
+
+ old_pa = pte_val(*pte) & _PFN_MASK;
+
+ if(intel_i460_cpk)
+ entry = agp_bridge.gatt_table[APGN(offset)];
+ else {
+ /*
+ * We might be fixing up a map that covers an unbound
+ * section of the aperture. Consequently there's no
+ * guarantee that i460_pg_detail is valid for this
+ * section.
+ */
+ if(i460_pg_detail[APGN(offset)] == NULL)
+ entry = 0;
+ else
+ entry = i460_pg_detail[APGN(offset)]
+ [APGOFF(offset) >> PAGE_SHIFT];
+ }
+
+ /*
+ * Handle a valid GATT entry at the current offset
+ */
+ if(entry & INTEL_I460_GATT_VALID) {
+ new_pa = (entry & I460_GART_PAGE_MASK) << 12;
+
+ /*
+ * Only do a fix up if the physical page
+ * addressed by the GATT entry has changed.
+ */
+ if(old_pa != new_pa) {
+ /*
+ * Only decrement reference counts on pages
+ * of physical memory which appear here due
+ * to previous fixups.
+ */
+ if(old_pa != offset)
+ atomic_dec(&virt_to_page(__va(old_pa))->count);
+ /*
+ * Replace the physical page referenced by
+ * pte with the new one.
+ */
+ *pte = mk_pte_phys(new_pa,
+ __pgprot(pte_val(*pte) & ~_PFN_MASK));
+
+ /*
+ * Indicate that we're using this page.
+ * (This really has to be done since
+ * __free_pages will be called on the page
+ * corresponding to new_pa when the mmap
+ * we're dealing with is released).
+ */
+ atomic_inc(&virt_to_page(__va(new_pa))->count);
+ }
+ /*
+ * If this pte points somewhere other than its original
+ * place in the aperture, it has already been fixed up.
+ * Therefore the reference count corresponding to old_pa's
+ * page has been incremented by this routine. Now that
+ * this GATT entry has been somehow invalidated, the map
+ * we're working with no longer needs this page of physical
+ * memory.
+ *
+ * Decrement the count for old_pa and return this pte to
+ * its original state (pointing to the aperture).
+ */
+ } else if(old_pa != offset) {
+ atomic_dec(&virt_to_page(__va(old_pa))->count);
+ *pte = mk_pte_phys(offset,
+ __pgprot(pte_val(*pte) & ~_PFN_MASK));
+ }
+
+ /*
+ * If we miss the above conditional structure, then we're
+ * at a point in the map that hasn't been fixed up and
+ * corresponds to an unbound part of the AGP aperture.
+ * Continue looping to account for holes of unused space in
+ * the aperture.
+ */
+ }
+
+finish_fixup:
+ unlock_kernel();
+ flush_tlb_all();
+ return 0;
+}
+
+#endif /* CONFIG_AGP_PTE_FIXUPS */
+
+/*
+ * If the kernel page size is smaller than the chipset page size, we don't
+ * want to allocate memory until we know where it is to be bound in the
+ * aperture (a multi-kernel-page alloc might fit inside of an already
+ * allocated GART page). Consequently, don't allocate or free anything
+ * if i460_cpk (meaning chipset pages per kernel page) isn't set.
+ *
+ * Let's just hope nobody counts on the allocated AGP memory being there
+ * before bind time (I don't think current drivers do)...
+ */
+static unsigned long intel_i460_alloc_page(void)
+{
+ if(intel_i460_cpk)
+ return agp_generic_alloc_page();
+
+ /* Returning NULL would cause problems */
+ return ((unsigned long) __va(0));
+}
+
+static void intel_i460_destroy_page(unsigned long page)
+{
+ if(intel_i460_cpk)
+ agp_generic_destroy_page(page);
+}
+
+static gatt_mask intel_i460_masks[] = {
+ {
+ INTEL_I460_GATT_COHERENT |
+ INTEL_I460_GATT_VALID,
+ 0
+ }
+};
+
+static unsigned long intel_i460_mask_memory(unsigned long addr, int type)
+{
+ /* Make sure the returned address is a valid GATT entry */
+ return (agp_bridge.masks[0].mask | (((addr &
+ ~((1 << intel_i460_pageshift) - 1)) & 0xffffff000) >> 12));
+}
+
+static unsigned long intel_i460_unmask_memory(unsigned long addr)
+{
+ /* Turn a GATT entry into a physical address */
+ return ((addr & 0xffffff) << 12);
+}
+
+static aper_size_info_8 intel_i460_sizes[3] = {
+ /*
+ * The 32GB aperture is only available with a 4M GART page size.
+ * Due to the dynamic GART page size, we can't figure out page_order
+ * or num_entries until runtime.
+ */
+ {32768, 0, 0, 4},
+ {1024, 0, 0, 2},
+ {256, 0, 0, 1}
+};
+
+static int __init intel_i460_setup (struct pci_dev *pdev)
+{
+
+ agp_bridge.masks = intel_i460_masks;
+ agp_bridge.num_of_masks = 1;
+ agp_bridge.aperture_sizes = (void *) intel_i460_sizes;
+ agp_bridge.size_type = U8_APER_SIZE;
+ agp_bridge.num_aperture_sizes = 3;
+ agp_bridge.dev_private_data = NULL;
+ agp_bridge.needs_scratch_page = FALSE;
+ agp_bridge.configure = intel_i460_configure;
+ agp_bridge.fetch_size = intel_i460_fetch_size;
+ agp_bridge.cleanup = intel_i460_cleanup;
+ agp_bridge.tlb_flush = intel_i460_tlb_flush;
+ agp_bridge.mask_memory = intel_i460_mask_memory;
+ agp_bridge.unmask_memory = intel_i460_unmask_memory;
+ agp_bridge.agp_enable = agp_generic_agp_enable;
+ agp_bridge.cache_flush = global_cache_flush;
+ agp_bridge.create_gatt_table = intel_i460_create_gatt_table;
+ agp_bridge.free_gatt_table = intel_i460_free_gatt_table;
+ agp_bridge.insert_memory = intel_i460_insert_memory;
+ agp_bridge.remove_memory = intel_i460_remove_memory;
+ agp_bridge.alloc_by_type = agp_generic_alloc_by_type;
+ agp_bridge.free_by_type = agp_generic_free_by_type;
+ agp_bridge.agp_alloc_page = intel_i460_alloc_page;
+ agp_bridge.agp_destroy_page = intel_i460_destroy_page;
+
+ (void) pdev; /* unused */
+
+ return 0;
+}
+
+#endif /* CONFIG_AGP_I460 */
+
#ifdef CONFIG_AGP_INTEL
static int intel_fetch_size(void)
@@ -1265,6 +2298,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1295,6 +2329,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1325,6 +2360,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1442,6 +2478,7 @@
agp_bridge.cleanup = via_cleanup;
agp_bridge.tlb_flush = via_tlbflush;
agp_bridge.mask_memory = via_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1553,6 +2590,7 @@
agp_bridge.cleanup = sis_cleanup;
agp_bridge.tlb_flush = sis_tlbflush;
agp_bridge.mask_memory = sis_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1928,6 +2966,7 @@
agp_bridge.cleanup = amd_irongate_cleanup;
agp_bridge.tlb_flush = amd_irongate_tlbflush;
agp_bridge.mask_memory = amd_irongate_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = amd_create_gatt_table;
@@ -2171,6 +3210,7 @@
agp_bridge.cleanup = ali_cleanup;
agp_bridge.tlb_flush = ali_tlbflush;
agp_bridge.mask_memory = ali_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = ali_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -2314,6 +3354,15 @@
intel_generic_setup },
#endif /* CONFIG_AGP_INTEL */
+#ifdef CONFIG_AGP_I460
+ { PCI_DEVICE_ID_INTEL_460GX,
+ PCI_VENDOR_ID_INTEL,
+ INTEL_460GX,
+ "Intel",
+ "460GX",
+ intel_i460_setup },
+#endif
+
#ifdef CONFIG_AGP_SIS
{ PCI_DEVICE_ID_SI_630,
PCI_VENDOR_ID_SI,
@@ -2494,6 +3543,18 @@
return -ENODEV;
}
+static int agp_check_supported_device(struct pci_dev *dev) {
+
+ int i;
+
+ for(i = 0; i < ARRAY_SIZE (agp_bridge_info); i++) {
+ if(dev->vendor == agp_bridge_info[i].vendor_id &&
+ dev->device == agp_bridge_info[i].device_id)
+ return 1;
+ }
+
+ return 0;
+}
/* Supported Device Scanning routine */
@@ -2503,8 +3564,14 @@
u8 cap_ptr = 0x00;
u32 cap_id, scratch;
- if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, NULL)) == NULL)
+ /*
+ * Some systems have multiple host bridges (i.e. BigSur), so
+ * we can't just use the first one we find.
+ */
+ do {
+ if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, dev)) == NULL)
return -ENODEV;
+ } while(!agp_check_supported_device(dev));
agp_bridge.dev = dev;
@@ -2697,7 +3764,7 @@
size_value = agp_bridge.fetch_size();
if (size_value == 0) {
- printk(KERN_ERR PFX "unable to detrimine aperture size.\n");
+ printk(KERN_ERR PFX "unable to determine aperture size.\n");
rc = -EINVAL;
goto err_out;
}
@@ -2770,7 +3837,13 @@
&agp_enable,
&agp_backend_acquire,
&agp_backend_release,
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ &agp_copy_info,
+ &agp_add_fixup,
+ &agp_remove_fixup
+#else
&agp_copy_info
+#endif
};
static int __init agp_init(void)
diff -urN --ignore-all-space linux-davidm/drivers/char/agp/vmmap.c
linux-2.4.2-lia/drivers/char/agp/vmmap.c
--- linux-davidm/drivers/char/agp/vmmap.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.2-lia/drivers/char/agp/vmmap.c Wed Mar 21 23:19:51 2001
@@ -0,0 +1,231 @@
+/*
+ * vmmap.c
+ *
+ * Hack to allow virtual addressing fixups in the kernel's
+ * vmalloc area. This is needed so that the page tables in the
+ * kernel's vmalloc area can be used to emulate GART translation
+ * (which sadly isn't done on accesses from the processor in 460GX,
+ * see the comments in the I460 section of the AGPGART driver).
+ *
+ * This file is basically a copy of mm/vmalloc.c modified to not
+ * allocate and free pages.
+ *
+ * Chris Ahna <christopher.j.ahna@intel.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/malloc.h>
+#include <linux/vmalloc.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+#include <linux/agp_backend.h>
+#include <linux/module.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+
+#include "agp.h"
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+EXPORT_SYMBOL(agp_vmmap);
+EXPORT_SYMBOL(agp_vmunmap);
+
+static inline void agp_free_area_pte(pmd_t * pmd, unsigned long address, unsigned long size)
+{
+ pte_t * pte;
+ unsigned long end;
+
+ if (pmd_none(*pmd))
+ return;
+ if (pmd_bad(*pmd)) {
+ pmd_ERROR(*pmd);
+ pmd_clear(pmd);
+ return;
+ }
+ pte = pte_offset(pmd, address);
+ address &= ~PMD_MASK;
+ end = address + size;
+ if (end > PMD_SIZE)
+ end = PMD_SIZE;
+ do {
+ pte_t page;
+ page = ptep_get_and_clear(pte);
+ address += PAGE_SIZE;
+ pte++;
+ if (pte_none(page) || pte_present(page)) {
+ continue;
+ }
+ printk(KERN_CRIT "Whee.. Swapped out page in kernel page table\n");
+ } while (address < end);
+}
+
+static inline void agp_free_area_pmd(pgd_t * dir, unsigned long address, unsigned long size)
+{
+ pmd_t * pmd;
+ unsigned long end;
+
+ if (pgd_none(*dir))
+ return;
+ if (pgd_bad(*dir)) {
+ pgd_ERROR(*dir);
+ pgd_clear(dir);
+ return;
+ }
+ pmd = pmd_offset(dir, address);
+ address &= ~PGDIR_MASK;
+ end = address + size;
+ if (end > PGDIR_SIZE)
+ end = PGDIR_SIZE;
+ do {
+ agp_free_area_pte(pmd, address, end - address);
+ address = (address + PMD_SIZE) & PMD_MASK;
+ pmd++;
+ } while (address < end);
+}
+
+void agp_vmfree_area_pages(unsigned long address, unsigned long size)
+{
+ pgd_t * dir;
+ unsigned long end = address + size;
+
+ dir = pgd_offset_k(address);
+ flush_cache_all();
+ do {
+ agp_free_area_pmd(dir, address, end - address);
+ address = (address + PGDIR_SIZE) & PGDIR_MASK;
+ dir++;
+ } while (address && (address < end));
+ flush_tlb_all();
+}
+
+static inline int agp_alloc_area_pte (pte_t * pte, unsigned long address,
+ unsigned long size, unsigned long target, pgprot_t prot)
+{
+ unsigned long end;
+
+ address &= ~PMD_MASK;
+ end = address + size;
+ if (end > PMD_SIZE)
+ end = PMD_SIZE;
+ target = (unsigned long) __va(target);
+ do {
+ struct page * page;
+ if (!pte_none(*pte))
+ printk(KERN_ERR "alloc_area_pte: page already exists\n");
+ page = virt_to_page(target);
+ if (!page)
+ return -ENOMEM;
+ set_pte(pte, mk_pte(page, prot));
+ address += PAGE_SIZE;
+ target += PAGE_SIZE;
+ pte++;
+ } while (address < end);
+ return 0;
+}
+
+static inline int agp_alloc_area_pmd(pmd_t * pmd, unsigned long address, unsigned long size, unsigned long target, pgprot_t prot)
+{
+ unsigned long end;
+
+ address &= ~PGDIR_MASK;
+ end = address + size;
+ if (end > PGDIR_SIZE)
+ end = PGDIR_SIZE;
+ do {
+ pte_t * pte = pte_alloc_kernel(pmd, address);
+ if (!pte)
+ return -ENOMEM;
+ if (agp_alloc_area_pte(pte, address, end - address, target, prot))
+ return -ENOMEM;
+ target += ((address + PMD_SIZE) & PMD_MASK) - address;
+ address = (address + PMD_SIZE) & PMD_MASK;
+ pmd++;
+ } while (address < end);
+ return 0;
+}
+
+inline int agp_vmalloc_area_pages (unsigned long address, unsigned long size,
+ unsigned long target, pgprot_t prot)
+{
+ pgd_t * dir;
+ unsigned long end = address + size;
+ int ret;
+
+ dir = pgd_offset_k(address);
+ flush_cache_all();
+ lock_kernel();
+ do {
+ pmd_t *pmd;
+
+ pmd = pmd_alloc_kernel(dir, address);
+ ret = -ENOMEM;
+ if (!pmd)
+ break;
+
+ ret = -ENOMEM;
+ if (agp_alloc_area_pmd(pmd, address, end - address, target, prot))
+ break;
+
+ target += ((address + PGDIR_SIZE) & PGDIR_MASK) - address;
+ address = (address + PGDIR_SIZE) & PGDIR_MASK;
+ dir++;
+
+ ret = 0;
+ } while (address && (address < end));
+ unlock_kernel();
+ flush_tlb_all();
+ return ret;
+}
+
+void agp_vmunmap(void *addr)
+{
+ struct vm_struct **p, *tmp;
+
+ if (!addr)
+ return;
+ if ((PAGE_SIZE-1) & (unsigned long) addr) {
+ printk(KERN_ERR "Trying to vfree() bad address (%p)\n", addr);
+ return;
+ }
+ write_lock(&vmlist_lock);
+ for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
+ if (tmp->addr == addr) {
+ *p = tmp->next;
+ agp_vmfree_area_pages(VMALLOC_VMADDR(tmp->addr), tmp->size);
+ kfree(tmp);
+ write_unlock(&vmlist_lock);
+ return;
+ }
+ }
+ write_unlock(&vmlist_lock);
+ printk(KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n", addr);
+}
+
+void *__agp_vmmap (unsigned long size, unsigned long target, pgprot_t prot)
+{
+ void * addr;
+ struct vm_struct *area;
+
+ size = PAGE_ALIGN(size);
+ if (!size) {
+ BUG();
+ return NULL;
+ }
+ area = get_vm_area(size, VM_ALLOC);
+ if (!area)
+ return NULL;
+ addr = area->addr;
+ if (agp_vmalloc_area_pages(VMALLOC_VMADDR(addr), size, target, prot)) {
+ vfree(addr);
+ return NULL;
+ }
+
+ return addr;
+}
+
+void *agp_vmmap(unsigned long offset, unsigned long size)
+{
+ return __agp_vmmap(size, offset, PAGE_KERNEL);
+}
+#endif /* CONFIG_AGP_PTE_FIXUPS */
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/agpsupport.c
linux-2.4.2-lia/drivers/char/drm/agpsupport.c
--- linux-davidm/drivers/char/drm/agpsupport.c Thu Nov 16 14:05:49 2000
+++ linux-2.4.2-lia/drivers/char/drm/agpsupport.c Wed Mar 21 23:21:29 2001
@@ -30,6 +30,7 @@
#define __NO_VERSION__
#include "drmP.h"
+#include <linux/config.h>
#include <linux/module.h>
#if LINUX_VERSION_CODE < 0x020400
#include "agpsupport-pre24.h"
@@ -40,6 +41,11 @@
static const drm_agp_t *drm_agp = NULL;
+#ifdef CONFIG_AGP_PTE_FIXUPS
+unsigned long drm_agp_aper_base = 0;
+unsigned long drm_agp_aper_size = 0;
+#endif
+
int drm_agp_info(struct inode *inode, struct file *filp, unsigned int cmd,
unsigned long arg)
{
@@ -192,6 +198,19 @@
return drm_unbind_agp(entry->memory);
}
+#ifdef CONFIG_AGP_PTE_FIXUPS
+void *drm_agp_add_fixup(struct vm_area_struct *vma, size_t size,
+ unsigned long offset)
+{
+ return drm_agp->add_fixup(vma, size, offset);
+}
+
+void drm_agp_remove_fixup(struct vm_area_struct *vma, void *handle)
+{
+ drm_agp->remove_fixup(vma, handle);
+}
+#endif /* CONFIG_AGP_PTE_FIXUPS */
+
int drm_agp_bind(struct inode *inode, struct file *filp, unsigned int cmd,
unsigned long arg)
{
@@ -264,6 +283,7 @@
#if LINUX_VERSION_CODE >= 0x020400
case INTEL_I840: head->chipset = "Intel i840";
break;
#endif
+ case INTEL_460GX: head->chipset = "Intel 460GX";
break;
case VIA_GENERIC: head->chipset = "VIA";
break;
case VIA_VP3: head->chipset = "VIA VP3";
break;
@@ -292,6 +312,12 @@
head->chipset,
head->agp_info.aper_base,
head->agp_info.aper_size);
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ drm_agp_aper_base = head->agp_info.aper_base;
+ drm_agp_aper_size = ((unsigned long) head->agp_info.aper_size) << 20;
+#endif
}
return head;
}
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/bufs.c
linux-2.4.2-lia/drivers/char/drm/bufs.c
--- linux-davidm/drivers/char/drm/bufs.c Tue Aug 29 14:09:15 2000
+++ linux-2.4.2-lia/drivers/char/drm/bufs.c Wed Mar 21 23:21:41 2001
@@ -73,7 +73,7 @@
switch (map->type) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
-#ifndef __sparc__
+#if !defined(__sparc__) && !defined(__ia64__)
if (map->offset + map->size < map->offset
|| map->offset < virt_to_phys(high_memory)) {
drm_free(map, sizeof(*map), DRM_MEM_MAPS);
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/drmP.h
linux-2.4.2-lia/drivers/char/drm/drmP.h
--- linux-davidm/drivers/char/drm/drmP.h Sun Dec 31 11:17:22 2000
+++ linux-2.4.2-lia/drivers/char/drm/drmP.h Wed Mar 21 23:21:59 2001
@@ -293,6 +293,20 @@
#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
+#ifdef CONFIG_AGP_PTE_FIXUPS
+# define HIGH_ENOUGH(addr) ((addr) >= drm_agp_aper_base)
+# define LOW_ENOUGH(addr) ((addr) < drm_agp_aper_base + drm_agp_aper_size)
+# define IN_APER(addr) (HIGH_ENOUGH(addr) && LOW_ENOUGH(addr))
+# define RID(_x) ((_x) >> 61)
+# define IOREMAP_SAFE(_x, _y) (IN_APER(_x) ? drm_agp_add_fixup(NULL, _y, _x) \
+ : ioremap(_x, _y))
+# define IOUNMAP_SAFE(_x) ((RID((unsigned long) (_x)) == 5) ? \
+ drm_agp_remove_fixup(NULL, _x) : iounmap(_x))
+#else
+# define IOREMAP_SAFE(_x, _y) ioremap(_x, _y)
+# define IOUNMAP_SAFE(_x) iounmap(_x)
+#endif
+
typedef int drm_ioctl_t(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
@@ -605,6 +619,14 @@
sigset_t sigmask;
} drm_device_t;
+#ifdef CONFIG_AGP_PTE_FIXUPS
+/*
+ * We need these to determine if addresses are in the aperture when we don't
+ * have access to the drm_device_t structure.
+ */
+extern unsigned long drm_agp_aper_base;
+extern unsigned long drm_agp_aper_size;
+#endif
/* Internal function definitions */
@@ -830,6 +852,12 @@
extern int drm_agp_free_memory(agp_memory *handle);
extern int drm_agp_bind_memory(agp_memory *handle, off_t start);
extern int drm_agp_unbind_memory(agp_memory *handle);
+#ifdef CONFIG_AGP_PTE_FIXUPS
+extern void *drm_agp_add_fixup(struct vm_area_struct *vma,
+ size_t size, unsigned long offset);
+extern void drm_agp_remove_fixup(struct vm_area_struct *vma,
+ void *handle);
+#endif
#endif
#endif
#endif
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/memory.c
linux-2.4.2-lia/drivers/char/drm/memory.c
--- linux-davidm/drivers/char/drm/memory.c Thu Nov 16 14:05:49 2000
+++ linux-2.4.2-lia/drivers/char/drm/memory.c Wed Mar 21 23:23:22 2001
@@ -306,7 +306,7 @@
return NULL;
}
- if (!(pt = ioremap(offset, size))) {
+ if (!(pt = IOREMAP_SAFE(offset, size))) {
spin_lock(&drm_mem_lock);
++drm_mem_stats[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&drm_mem_lock);
@@ -328,7 +328,7 @@
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
else
- iounmap(pt);
+ IOUNMAP_SAFE(pt);
spin_lock(&drm_mem_lock);
drm_mem_stats[DRM_MEM_MAPPINGS].bytes_freed += size;
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/mga_dma.c
linux-2.4.2-lia/drivers/char/drm/mga_dma.c
--- linux-davidm/drivers/char/drm/mga_dma.c Mon Dec 11 12:39:44 2000
+++ linux-2.4.2-lia/drivers/char/drm/mga_dma.c Wed Mar 21 23:23:33 2001
@@ -673,6 +673,8 @@
drm_mga_private_t *dev_priv;
drm_map_t *sarea_map = NULL;
+ u32 status_page_flag = 0;
+
dev_priv = drm_alloc(sizeof(drm_mga_private_t), DRM_MEM_DRIVER);
if(dev_priv == NULL) return -ENOMEM;
dev->dev_private = (void *) dev_priv;
@@ -741,10 +743,21 @@
return -ENOMEM;
}
+ /*
+ * Make sure we don't ask the G400 to do reads and writes above
+ * 4GB for status information.
+ */
+ if((sizeof(void *) == 8) &&
+ (((unsigned long) virt_to_phys(high_memory)) >= (1UL << 32))) {
+ status_page_flag = 0x00000000;
+ } else {
+ status_page_flag = 0x00000003;
+ }
+
/* Write status page when secend or softrap occurs */
MGA_WRITE(MGAREG_PRIMPTR,
- virt_to_bus((void *)dev_priv->real_status_page) | 0x00000003);
-
+ virt_to_bus((void *)dev_priv->real_status_page) |
+ status_page_flag);
/* Private is now filled in, initialize the hardware */
{
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/mga_drv.h
linux-2.4.2-lia/drivers/char/drm/mga_drv.h
--- linux-davidm/drivers/char/drm/mga_drv.h Thu Jan 4 22:40:11 2001
+++ linux-2.4.2-lia/drivers/char/drm/mga_drv.h Wed Mar 21 23:23:58 2001
@@ -295,7 +295,7 @@
num_dwords + 1 + outcount, ADRINDEX(reg), val); \
if( ++outcount == 4) { \
outcount = 0; \
- dma_ptr[0] = *(unsigned long *)tempIndex; \
+ dma_ptr[0] = *(u32 *)tempIndex; \
dma_ptr+=5; \
num_dwords += 5; \
} \
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/r128_cce.c
linux-2.4.2-lia/drivers/char/drm/r128_cce.c
--- linux-davidm/drivers/char/drm/r128_cce.c Thu Jan 4 22:40:11 2001
+++ linux-2.4.2-lia/drivers/char/drm/r128_cce.c Wed Mar 21 23:24:08 2001
@@ -229,7 +229,17 @@
int i;
for ( i = 0 ; i < dev_priv->usec_timeout ; i++ ) {
- if ( *dev_priv->ring.head == dev_priv->ring.tail ) {
+ /*
+ * XXX - this is (I think) a 460GX specific hack
+ *
+ * When doing texturing, ring.tail sometimes gets ahead of
+ * PM4_BUFFER_DL_WPTR by 2; consequently, the card processes
+ * its whole quota of instructions and *ring.head is still 2
+ * short of ring.tail. Work around this for now in lieu of
+ * a better solution.
+ */
+ if ( (*dev_priv->ring.head == dev_priv->ring.tail) ||
+ ((dev_priv->ring.tail - *dev_priv->ring.head) == 2) ) {
int pm4stat = R128_READ( R128_PM4_STAT );
if ( ( (pm4stat & R128_PM4_FIFOCNT_MASK) > dev_priv->cce_fifo_size ) &&
@@ -342,10 +352,20 @@
R128_WRITE( R128_PM4_BUFFER_DL_WPTR, 0 );
R128_WRITE( R128_PM4_BUFFER_DL_RPTR, 0 );
- /* DL_RPTR_ADDR is a physical address in AGP space. */
+ /*
+ * XXX - This is a 460GX specific hack
+ *
+ * We have to hack this right now. 460GX isn't claiming PCI writes
+ * from the card into the AGP aperture. Because of this, we have
+ * to get space outside of the aperture for RPTR_ADDR.
+ */
+ dev_priv->ring.head = (void *) __get_free_page(GFP_KERNEL);
+ atomic_inc(&virt_to_page(dev_priv->ring.head)->count);
+ set_bit(PG_locked, &virt_to_page(dev_priv->ring.head)->flags);
+ dev_priv->ring.head = __va(dev_priv->ring.head);
+
*dev_priv->ring.head = 0;
- R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
- dev_priv->ring_rptr->offset );
+ R128_WRITE(R128_PM4_BUFFER_DL_RPTR_ADDR, __pa(dev_priv->ring.head));
/* Set watermark control */
R128_WRITE( R128_PM4_BUFFER_WM_CNTL,
@@ -530,6 +550,14 @@
}
#endif
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ atomic_dec(&virt_to_page(dev_priv->ring.head)->count);
+ clear_bit(PG_locked, &virt_to_page(dev_priv->ring.head)->flags);
+ wake_up(&virt_to_page(dev_priv->ring.head)->wait);
+ free_page((unsigned long) dev_priv->ring.head);
+
drm_free( dev->dev_private, sizeof(drm_r128_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/r128_drv.h
linux-2.4.2-lia/drivers/char/drm/r128_drv.h
--- linux-davidm/drivers/char/drm/r128_drv.h Thu Jan 4 22:40:11 2001
+++ linux-2.4.2-lia/drivers/char/drm/r128_drv.h Wed Mar 21 23:24:21 2001
@@ -385,7 +385,7 @@
#define R128_MAX_VB_VERTS (0xffff)
-#define R128_BASE(reg) ((u32)(dev_priv->mmio->handle))
+#define R128_BASE(reg) ((unsigned long)(dev_priv->mmio->handle))
#define R128_ADDR(reg) (R128_BASE(reg) + reg)
#define R128_DEREF(reg) *(__volatile__ int *)R128_ADDR(reg)
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/radeon_cp.c
linux-2.4.2-lia/drivers/char/drm/radeon_cp.c
--- linux-davidm/drivers/char/drm/radeon_cp.c Tue Jan 30 10:43:27 2001
+++ linux-2.4.2-lia/drivers/char/drm/radeon_cp.c Wed Mar 21 23:24:32 2001
@@ -592,10 +592,23 @@
/* Initialize the ring buffer's read and write pointers */
cur_read_ptr = RADEON_READ( RADEON_CP_RB_RPTR );
RADEON_WRITE( RADEON_CP_RB_WPTR, cur_read_ptr );
+
+ /*
+ * XXX - This is a 460GX specific hack
+ *
+ * We have to hack this right now. The GXB isn't claiming PCI writes
+ * from the card into the AGP aperture. Because of this, we have
+ * to get space outside of the aperture for RPTR_ADDR.
+ */
+ dev_priv->ring.head = (void *) __get_free_page(GFP_KERNEL);
+ atomic_inc(&virt_to_page(dev_priv->ring.head)->count);
+ set_bit(PG_locked, &virt_to_page(dev_priv->ring.head)->flags);
+ dev_priv->ring.head = __va(dev_priv->ring.head);
+
*dev_priv->ring.head = cur_read_ptr;
dev_priv->ring.tail = cur_read_ptr;
- RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR, dev_priv->ring_rptr->offset );
+ RADEON_WRITE(RADEON_CP_RB_RPTR_ADDR, __pa(dev_priv->ring.head));
/* Set ring buffer size */
RADEON_WRITE( RADEON_CP_RB_CNTL, dev_priv->ring.size_l2qw );
@@ -837,6 +850,14 @@
}
#endif
+ /*
+ * Free the page we grabbed for RPTR_ADDR.
+ */
+ atomic_dec(&virt_to_page(dev_priv->ring.head)->count);
+ clear_bit(PG_locked, &virt_to_page(dev_priv->ring.head)->flags);
+ wake_up(&virt_to_page(dev_priv->ring.head)->wait);
+ free_page((unsigned long) dev_priv->ring.head);
+
drm_free( dev->dev_private, sizeof(drm_radeon_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/radeon_drv.h
linux-2.4.2-lia/drivers/char/drm/radeon_drv.h
--- linux-davidm/drivers/char/drm/radeon_drv.h Tue Jan 30 10:43:27 2001
+++ linux-2.4.2-lia/drivers/char/drm/radeon_drv.h Wed Mar 21 23:24:45 2001
@@ -535,7 +535,7 @@
#define RADEON_MAX_VB_VERTS (0xffff)
-#define RADEON_BASE(reg) ((u32)(dev_priv->mmio->handle))
+#define RADEON_BASE(reg) ((unsigned long)(dev_priv->mmio->handle))
#define RADEON_ADDR(reg) (RADEON_BASE(reg) + reg)
#define RADEON_DEREF(reg) *(__volatile__ u32 *)RADEON_ADDR(reg)
diff -urN --ignore-all-space linux-davidm/drivers/char/drm/vm.c
linux-2.4.2-lia/drivers/char/drm/vm.c
--- linux-davidm/drivers/char/drm/vm.c Wed Mar 21 23:46:40 2001
+++ linux-2.4.2-lia/drivers/char/drm/vm.c Wed Mar 21 23:25:22 2001
@@ -30,6 +30,7 @@
*/
#define __NO_VERSION__
+#include <linux/config.h>
#include "drmP.h"
struct vm_operations_struct drm_vm_ops = {
@@ -200,6 +201,12 @@
up(&dev->struct_sem);
}
#endif
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ if(IN_APER(VM_OFFSET(vma)))
+ drm_agp_add_fixup(vma, vma->vm_end - vma->vm_start,
+ VM_OFFSET(vma));
+#endif
}
void drm_vm_close(struct vm_area_struct *vma)
@@ -231,6 +238,11 @@
}
}
up(&dev->struct_sem);
+#endif
+
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ if(IN_APER(VM_OFFSET(vma)))
+ drm_agp_remove_fixup(vma, (void *) vma->vm_start);
#endif
}
diff -urN --ignore-all-space linux-davidm/fs/exec.c
linux-2.4.2-lia/fs/exec.c
--- linux-davidm/fs/exec.c Wed Feb 28 12:58:16 2001
+++ linux-2.4.2-lia/fs/exec.c Wed Mar 21 23:25:45 2001
@@ -150,7 +150,7 @@
}
/*
- * count() counts the number of arguments/envelopes
+ * count() counts the number of strings in array ARGV.
*/
static int count(char ** argv, int max)
{
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/asmmacro.h
linux-2.4.2-lia/include/asm-ia64/asmmacro.h
--- linux-davidm/include/asm-ia64/asmmacro.h Fri Aug 11 19:09:06 2000
+++ linux-2.4.2-lia/include/asm-ia64/asmmacro.h Wed Mar 21 23:26:04 2001
@@ -2,25 +2,9 @@
#define _ASM_IA64_ASMMACRO_H
/*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Copyright (C) 2000-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
-
-#if 1
-
-/*
- * This is a hack that's necessary as long as we support old versions
- * of gas, that have no unwind support.
- */
-#include <linux/config.h>
-
-#ifdef CONFIG_IA64_NEW_UNWIND
-# define UNW(args...) args
-#else
-# define UNW(args...)
-#endif
-
-#endif
#define ENTRY(name) \
.align 32; \
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/efi.h
linux-2.4.2-lia/include/asm-ia64/efi.h
--- linux-davidm/include/asm-ia64/efi.h Wed Mar 21 23:46:40 2001
+++ linux-2.4.2-lia/include/asm-ia64/efi.h Wed Mar 21 23:26:14 2001
@@ -20,9 +20,12 @@
#include <asm/system.h>
#define EFI_SUCCESS 0
-#define EFI_INVALID_PARAMETER 2
-#define EFI_UNSUPPORTED 3
-#define EFI_BUFFER_TOO_SMALL 4
+#define EFI_LOAD_ERROR (1L | (1L << 63))
+#define EFI_INVALID_PARAMETER (2L | (1L << 63))
+#define EFI_UNSUPPORTED (3L | (1L << 63))
+#define EFI_BAD_BUFFER_SIZE (4L | (1L << 63))
+#define EFI_BUFFER_TOO_SMALL (5L | (1L << 63))
+#define EFI_NOT_FOUND (14L | (1L << 63))
typedef unsigned long efi_status_t;
typedef u8 efi_bool_t;
@@ -234,5 +237,13 @@
extern void efi_memmap_walk (efi_freemem_callback_t callback, void *arg);
extern void efi_gettimeofday (struct timeval *tv);
extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if possible */
+
+
+/*
+ * Variable Attributes
+ */
+#define EFI_VARIABLE_NON_VOLATILE 0x0000000000000001
+#define EFI_VARIABLE_BOOTSERVICE_ACCESS 0x0000000000000002
+#define EFI_VARIABLE_RUNTIME_ACCESS 0x0000000000000004
#endif /* _ASM_IA64_EFI_H */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/module.h
linux-2.4.2-lia/include/asm-ia64/module.h
--- linux-davidm/include/asm-ia64/module.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.2-lia/include/asm-ia64/module.h Wed Mar 21 23:26:39 2001
@@ -35,7 +35,6 @@
static inline int
ia64_module_init(struct module *mod)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
struct archdata *archdata;
if (!mod_member_present(mod, archdata_start) ||
!mod->archdata_start)
@@ -79,14 +78,12 @@
(unsigned long)
archdata->segment_base,
(unsigned long)
archdata->gp,
archdata->unw_start,
archdata->unw_end);
-#endif /* CONFIG_IA64_NEW_UNWIND */
return 0;
}
static inline void
ia64_module_unmap(void * addr)
{
-#ifdef CONFIG_IA64_NEW_UNWIND
struct module *mod = (struct module *) addr;
struct archdata *archdata;
@@ -100,7 +97,6 @@
if (archdata->unw_table != NULL)
unw_remove_unwind_table((void *)
archdata->unw_table);
}
-#endif /* CONFIG_IA64_NEW_UNWIND */
vfree(addr);
}
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/ptrace.h
linux-2.4.2-lia/include/asm-ia64/ptrace.h
--- linux-davidm/include/asm-ia64/ptrace.h Thu Jan 4 22:40:20 2001
+++ linux-2.4.2-lia/include/asm-ia64/ptrace.h Wed Mar 21 23:26:51 2001
@@ -2,8 +2,8 @@
#define _ASM_IA64_PTRACE_H
/*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
*
* 12/07/98 S. Eranian added pt_regs & switch_stack
@@ -225,17 +225,10 @@
extern void ia64_flush_fph (struct task_struct *t);
extern void ia64_sync_fph (struct task_struct *t);
-#ifdef CONFIG_IA64_NEW_UNWIND
/* get nat bits for scratch registers such that bit N=1 iff scratch register rN is a NaT */
extern unsigned long ia64_get_scratch_nat_bits (struct pt_regs *pt, unsigned long scratch_unat);
/* put nat bits for scratch registers such that scratch register rN is a NaT iff bit N=1 */
extern unsigned long ia64_put_scratch_nat_bits (struct pt_regs *pt, unsigned long nat);
-#else
- /* get nat bits for r1-r31 such that bit N=1 iff rN is a NaT */
- extern long ia64_get_nat_bits (struct pt_regs *pt, struct switch_stack *sw);
- /* put nat bits for r1-r31 such that rN is a NaT iff bit N=1 */
- extern void ia64_put_nat_bits (struct pt_regs *pt, struct switch_stack *sw, unsigned long nat);
-#endif
extern void ia64_increment_ip (struct pt_regs *pt);
extern void ia64_decrement_ip (struct pt_regs *pt);
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/sigcontext.h
linux-2.4.2-lia/include/asm-ia64/sigcontext.h
--- linux-davidm/include/asm-ia64/sigcontext.h Sun Feb 6 18:42:40 2000
+++ linux-2.4.2-lia/include/asm-ia64/sigcontext.h Wed Mar 21 23:27:19 2001
@@ -36,6 +36,7 @@
unsigned long sc_ar_lc; /* loop count register */
unsigned long sc_pr; /* predicate registers */
unsigned long sc_br[8]; /* branch registers */
+ /* Note: sc_gr[0] is used as the "uc_link" member of ucontext_t */
unsigned long sc_gr[32]; /* general registers (static partition) */
struct ia64_fpreg sc_fr[128]; /* floating-point registers */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/uaccess.h
linux-2.4.2-lia/include/asm-ia64/uaccess.h
--- linux-davidm/include/asm-ia64/uaccess.h Thu Jan 4 22:40:21 2001
+++ linux-2.4.2-lia/include/asm-ia64/uaccess.h Wed Mar 21 23:27:37 2001
@@ -26,8 +26,8 @@
* associated and, if so, sets r8 to -EFAULT and clears r9 to 0 and
* then resumes execution at the continuation point.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/errno.h>
@@ -90,8 +90,8 @@
#define __get_user_nocheck(x,ptr,size) \
({ \
- register long __gu_err __asm__ ("r8") = 0; \
- register long __gu_val __asm__ ("r9") = 0; \
+ register long __gu_err asm ("r8") = 0; \
+ register long __gu_val asm ("r9") = 0; \
switch (size) { \
case 1: __get_user_8(ptr); break; \
case 2: __get_user_16(ptr); break; \
@@ -105,8 +105,8 @@
#define __get_user_check(x,ptr,size,segment) \
({ \
- register long __gu_err __asm__ ("r8") = -EFAULT; \
- register long __gu_val __asm__ ("r9") = 0; \
+ register long __gu_err asm ("r8") = -EFAULT; \
+ register long __gu_val asm ("r9") = 0; \
const __typeof__(*(ptr)) *__gu_addr = (ptr); \
if (__access_ok((long)__gu_addr,size,segment)) { \
__gu_err = 0; \
@@ -126,33 +126,47 @@
#define __m(x) (*(struct __large_struct *)(x))
/* We need to declare the __ex_table section before we can use it in .xdata. */
-__asm__ (".section \"__ex_table\", \"a\"\n\t.previous");
+asm (".section \"__ex_table\", \"a\"\n\t.previous");
+
+#if __GNUC__ >= 3
+# define GAS_HAS_LOCAL_TAGS /* define if gas supports local tags a la [1:] */
+#endif
+
+#ifdef GAS_HAS_LOCAL_TAGS
+# define _LL "[1:]"
+#else
+# define _LL "1:"
+#endif
 #define __get_user_64(addr)							\
-	__asm__ ("\n1:\tld8 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
-		 "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n"	\
+	asm ("\n"_LL"\tld8 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+	     "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n"		\
+	     _LL								\
 	     : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
 #define __get_user_32(addr)							\
-	__asm__ ("\n1:\tld4 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
-		 "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n"	\
+	asm ("\n"_LL"\tld4 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+	     "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n"		\
+	     _LL								\
 	     : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
 #define __get_user_16(addr)							\
-	__asm__ ("\n1:\tld2 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
-		 "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n"	\
+	asm ("\n"_LL"\tld2 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+	     "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n"		\
+	     _LL								\
 	     : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
 #define __get_user_8(addr)							\
-	__asm__ ("\n1:\tld1 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
-		 "2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)|1\n"	\
+	asm ("\n"_LL"\tld1 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
+	     "\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)+4\n"		\
+	     _LL								\
 	     : "=r"(__gu_val), "=r"(__gu_err) : "m"(__m(addr)), "1"(__gu_err));
extern void __put_user_unknown (void);
#define __put_user_nocheck(x,ptr,size) \
({ \
- register long __pu_err __asm__ ("r8") = 0; \
+ register long __pu_err asm ("r8") = 0; \
switch (size) { \
case 1: __put_user_8(x,ptr); break; \
case 2: __put_user_16(x,ptr); break; \
@@ -165,7 +179,7 @@
#define __put_user_check(x,ptr,size,segment) \
({ \
- register long __pu_err __asm__ ("r8") = -EFAULT; \
+ register long __pu_err asm ("r8") = -EFAULT; \
__typeof__(*(ptr)) *__pu_addr = (ptr); \
if (__access_ok((long)__pu_addr,size,segment)) { \
__pu_err = 0; \
@@ -186,27 +200,31 @@
* any memory gcc knows about, so there are no aliasing issues
*/
#define __put_user_64(x,addr)						\
-	__asm__ __volatile__ (						\
-		"\n1:\tst8 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
-		"2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n"	\
+	asm volatile (							\
+		"\n"_LL"\tst8 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+		"\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n"	\
+		_LL							\
		: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_32(x,addr)						\
-	__asm__ __volatile__ (						\
-		"\n1:\tst4 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
-		"2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n"	\
+	asm volatile (							\
+		"\n"_LL"\tst4 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+		"\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n"	\
+		_LL							\
		: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_16(x,addr)						\
-	__asm__ __volatile__ (						\
-		"\n1:\tst2 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
-		"2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n"	\
+	asm volatile (							\
+		"\n"_LL"\tst2 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+		"\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n"	\
+		_LL							\
		: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
#define __put_user_8(x,addr)						\
-	__asm__ __volatile__ (						\
-		"\n1:\tst1 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
-		"2:\n\t.xdata4 \"__ex_table\", @gprel(1b), (2b-1b)\n"	\
+	asm volatile (							\
+		"\n"_LL"\tst1 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
+		"\t.xdata4 \"__ex_table\", @gprel(1b), @gprel(1f)\n"	\
+		_LL							\
		: "=r"(__pu_err) : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
/*
@@ -293,10 +311,14 @@
struct exception_table_entry {
	int addr;	/* gp-relative address of insn this fixup is for */
-	int skip;	/* number of bytes to skip to get to the continuation point.
-			   Bit 0 tells us if r9 should be cleared to 0 */
+	int cont;	/* gp-relative continuation address; if bit 2 is set, r9 is set to 0 */
+};
+
+struct exception_fixup {
+	unsigned long cont;	/* continuation point (bit 2: clear r9 if set) */
};
-extern const struct exception_table_entry *search_exception_table (unsigned long addr);
+extern struct exception_fixup search_exception_table (unsigned long addr);
+extern void handle_exception (struct pt_regs *regs, struct exception_fixup fixup);
#endif /* _ASM_IA64_UACCESS_H */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/ucontext.h
linux-2.4.2-lia/include/asm-ia64/ucontext.h
--- linux-davidm/include/asm-ia64/ucontext.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.2-lia/include/asm-ia64/ucontext.h Wed Mar 21 23:27:56 2001
@@ -0,0 +1,12 @@
+#ifndef _ASM_IA64_UCONTEXT_H
+#define _ASM_IA64_UCONTEXT_H
+
+struct ucontext {
+ struct sigcontext uc_mcontext;
+};
+
+#define uc_link uc_mcontext.sc_gr[0] /* wrong type;
nobody cares */
+#define uc_sigmask uc_mcontext.sc_sigmask
+#define uc_stack uc_mcontext.sc_stack
+
+#endif /* _ASM_IA64_UCONTEXT_H */
diff -urN --ignore-all-space linux-davidm/include/linux/agp_backend.h
linux-2.4.2-lia/include/linux/agp_backend.h
--- linux-davidm/include/linux/agp_backend.h Wed Feb 28 12:58:27 2001
+++ linux-2.4.2-lia/include/linux/agp_backend.h Wed Mar 21 23:28:07 2001
@@ -27,6 +27,8 @@
#ifndef _AGP_BACKEND_H
#define _AGP_BACKEND_H 1
+#include <linux/config.h>
+
#ifndef TRUE
#define TRUE 1
#endif
@@ -48,6 +50,7 @@
INTEL_I815,
INTEL_I840,
INTEL_I850,
+ INTEL_460GX,
VIA_GENERIC,
VIA_VP3,
VIA_MVP3,
@@ -234,6 +237,25 @@
*
*/
+#ifdef CONFIG_AGP_PTE_FIXUPS
+extern void *agp_add_fixup(struct vm_area_struct *vma,
+ size_t size, unsigned long offset);
+/*
+ * agp_add_fixup :
+ *
+ * This function notifies AGPGART of a new virtual mapping it needs to keep
+ * in sync with the GART table.
+ */
+
+extern void agp_remove_fixup(struct vm_area_struct *vma, void *handle);
+/*
+ * agp_remove_fixup :
+ *
+ * This function tells AGPGART to forget about a previously registered
+ * fixup area.
+ */
+#endif
+
typedef struct {
void (*free_memory)(agp_memory *);
agp_memory *(*allocate_memory)(size_t, u32);
@@ -243,6 +265,11 @@
int (*acquire)(void);
void (*release)(void);
void (*copy_info)(agp_kern_info *);
+#ifdef CONFIG_AGP_PTE_FIXUPS
+ void *(*add_fixup)(struct vm_area_struct *, size_t,
+ unsigned long);
+ void (*remove_fixup)(struct vm_area_struct *, void *);
+#endif
} drm_agp_t;
extern const drm_agp_t *drm_agp_p;
diff -urN --ignore-all-space linux-davidm/kernel/printk.c
linux-2.4.2-lia/kernel/printk.c
--- linux-davidm/kernel/printk.c Wed Mar 21 23:46:40 2001
+++ linux-2.4.2-lia/kernel/printk.c Wed Mar 21 23:29:37 2001
@@ -513,8 +513,10 @@
#include <asm/io.h>
#define VGABASE ((char *)0xc0000000000b8000)
+#define VGALINES 24
+#define VGACOLS 80
-static int current_ypos = 50, current_xpos = 0;
+static int current_ypos = VGALINES, current_xpos = 0;
void
early_printk (const char *str)
@@ -523,26 +525,26 @@
int i, k, j;
while ((c = *str++) != '\0') {
- if (current_ypos >= 50) {
+ if (current_ypos >= VGALINES) {
/* scroll 1 line up */
-			for (k = 1, j = 0; k < 50; k++, j++) {
-				for (i = 0; i < 80; i++) {
-					writew(readw(VGABASE + 2*(80*k + i)),
-					       VGABASE + 2*(80*j + i));
+			for (k = 1, j = 0; k < VGALINES; k++, j++) {
+				for (i = 0; i < VGACOLS; i++) {
+					writew(readw(VGABASE + 2*(VGACOLS*k + i)),
+					       VGABASE + 2*(VGACOLS*j + i));
}
}
- for (i = 0; i < 80; i++) {
- writew(0x720, VGABASE + 2*(80*j + i));
+ for (i = 0; i < VGACOLS; i++) {
+ writew(0x720, VGABASE + 2*(VGACOLS*j + i));
}
- current_ypos = 49;
+ current_ypos = VGALINES-1;
}
		if (c == '\n') {
current_xpos = 0;
current_ypos++;
} else if (c != '\r') {
writew(((0x7 << 8) | (unsigned short) c),
-			       VGABASE + 2*(80*current_ypos + current_xpos++));
-			if (current_xpos >= 80) {
+			       VGABASE + 2*(VGACOLS*current_ypos + current_xpos++));
+			if (current_xpos >= VGACOLS) {
current_xpos = 0;
current_ypos++;
}
_______________________________________________
Linux-IA64 mailing list
Linux-IA64@linuxia64.org
http://lists.linuxia64.org/lists/listinfo/linux-ia64
* [Linux-ia64] kernel update (second patch relative to 2.4.3)
2001-03-22 8:20 [Linux-ia64] kernel update (second patch relative to 2.4.2) David Mosberger
` (6 preceding siblings ...)
2001-03-23 6:53 ` Jim Wilson
@ 2001-04-05 20:26 ` David Mosberger
2001-04-24 11:46 ` Gustavo Niemeyer
` (8 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: David Mosberger @ 2001-04-05 20:26 UTC (permalink / raw)
To: linux-ia64
The latest IA-64 patch is now available at:
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/
in file linux-2.4.3-ia64-010405.diff*
CAVEAT: You will need a NEW BOOTLOADER with this patch. More on this
below.
What this patch does:
- Switch over to new bootstrap procedure developed by
Stephane: the kernel is now booted in physical mode and the
bootloader is now completely decoupled from the kernel: it
simply loads the kernel as an ELF file and then jumps to the
entry point. It has no special knowledge of where the
ZERO_PAGE is anymore, etc. The new bootloader also supports
compressed kernels. The Makefile supports this via "make
compressed": this target will produce "vmlinux" with all
debug info in it and "vmlinux.gz", a stripped and compressed
version of the kernel.
- Sync up with Linus' 2.4.3 changes. Warning: the locking
strategy changed for page table allocation. If you have
code that calls pmd_alloc(), you'll need to update that code
to use the mm->page_table_lock (I already made these changes
for the IA-32 subsystem and for the AGP driver).
- Add McKinley support (Alex).
- The IA-32 execve() no longer has to clear r8 through r15
(it's already done in the generic execve() path).
- Use 64MB instead of 256MB pages in the identity mapped
regions and load the kernel at address 68MB. This avoids
problems with conflicting memory attributes due to the first
1MB of address space (Asit).
- Don't panic in efivars.c just because EFI variables are not
supported. Note: please do not call BUG() *UNLESS* you're
dealing with a problem that absolutely positively would
kill the kernel, delete your files, or some such. Linux is
very careful not to panic for silly reasons and we'd like to
keep it that way.
- Change ptrace.c so sync_kernel_register_backing_store()
works again. Update unaligned.c and process.c accordingly.
This hasn't been well tested yet, but strace still works and
doing an inferior call with gdb no longer seems to corrupt
the stacked registers, so there is hope. ;-) Thanks to Kevin
for tracking down this issue. Also, hopefully fix
PTRACE_GETSIGINFO, PTRACE_SETSIGINFO, and the ptrace and
core dump handling of ar.rnat.
- Fix the initialization ordering bug that caused the 2.4.2
kernels to get stuck when using a page size smaller than
16KB.
- Fix a buglet in get_unmapped_area(). Thanks to Matthew
Wilcox for reporting this. (This bug had no negative
effect on any existing IA-64 implementation.)
- Fix thinko in printk() rate limiting code (Khalid).
- Update efirtc driver to move /proc/efirtc to
/proc/driver/efirtc and to use a format more in line with
the regular RTC driver (Stephane).
- Hack fs/binfmt_elf so that the auxiliary information is
passed independent of whether the binary is statically or
dynamically linked (Rich, I haven't run this past Linus yet,
but I hope he won't have an issue with it; it really makes
no sense at all to not pass this info for static binaries:
an ELF file is an ELF file, no matter whether it's static or
dynamic).
- Fix access_ok() to reject attempts to fool the kernel into
giving access to the virtually mapped page table (this
required moving the initial stack pointer down by one page).
- Various minor clean ups.
In case it isn't clear yet: this patch has far more changes than I'd
normally feel comfortable with (we're trying to _stabilize_ things,
remember...). However, as things go, several issues cropped up all at
the same time and this pretty much forced us to adopt a new bootstrap
procedure. And since we had to update the bootloader anyhow, this
provided an opportunity to clean up some of the cruft that has
accumulated over the past couple of months. The good news is that
while this patch may cause some pain, in my opinion the compressed
kernel support alone makes it well worth it. ;-) Still, test long and
well before shipping this kernel as part of a distro...
Now, as far as the new bootloader is concerned: Stephane has done all
the heavy lifting here and he'll soon send out an announcement with
pointers to both source and binary.
Usual disclaimers:
o The patch below is for informational purposes only. Get the real
thing from ftp.kernel.org.
o The patch below has been tested on 2P Big Sur and on the HP Ski
simulator, in both cases with UP and MP. As usual, YMMV.
Enjoy,
--david
diff -urN --ignore-all-space linux-davidm/arch/ia64/Makefile lia64/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/Makefile Thu Apr 5 09:44:42 2001
@@ -20,7 +20,7 @@
CFLAGS := $(CFLAGS) -pipe $(EXTRA) -Wa,-x -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
	-funwind-tables -falign-functions=32
-# -frename-registers
+# -frename-registers (this crashes the Nov 17 compiler...)
CFLAGS_KERNEL := -mconstant-gp
ifeq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
@@ -102,6 +102,11 @@
-traditional arch/$(ARCH)/vmlinux.lds.S > $@
FORCE: ;
+
+compressed: vmlinux
+ $(OBJCOPY) --strip-all vmlinux vmlinux-tmp
+ gzip -9 vmlinux-tmp
+ mv vmlinux-tmp.gz vmlinux.gz
rawboot:
@$(MAKEBOOT) rawboot
diff -urN --ignore-all-space linux-davidm/arch/ia64/boot/bootloader.c lia64/arch/ia64/boot/bootloader.c
--- linux-davidm/arch/ia64/boot/bootloader.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/boot/bootloader.c Thu Apr 5 09:50:01 2001
@@ -65,35 +65,22 @@
}
}
-void
-enter_virtual_mode (unsigned long new_psr)
-{
- long tmp;
-
-	asm volatile ("movl %0=1f" : "=r"(tmp));
- asm volatile ("mov cr.ipsr=%0" :: "r"(new_psr));
- asm volatile ("mov cr.iip=%0" :: "r"(tmp));
- asm volatile ("mov cr.ifs=r0");
- asm volatile ("rfi;;");
- asm volatile ("1:");
-}
-
#define MAX_ARGS 32
void
_start (void)
{
- register long sp asm ("sp");
static char stack[16384] __attribute__ ((aligned (16)));
static char mem[4096];
static char buffer[1024];
- unsigned long flags, off;
+ unsigned long off;
int fd, i;
struct disk_req req;
struct disk_stat stat;
struct elfhdr *elf;
struct elf_phdr *elf_phdr; /* program header */
unsigned long e_entry, e_phoff, e_phnum;
+ register struct ia64_boot_param *bp;
char *kpath, *args;
long arglen = 0;
@@ -107,15 +94,13 @@
ssc(0, 0, 0, 0, SSC_CONSOLE_INIT);
/*
- * S.Eranian: extract the commandline argument from the
- * simulator
+ * S.Eranian: extract the commandline argument from the simulator
*
* The expected format is as follows:
*
* kernelname args...
*
- * Both are optional but you can't have the second one without the
- * first.
+ * Both are optional but you can't have the second one without the first.
*/
arglen = ssc((long) buffer, 0, 0, 0, SSC_GET_ARGS);
@@ -183,6 +168,10 @@
e_phoff += sizeof(*elf_phdr);
elf_phdr = (struct elf_phdr *) mem;
+
+ if (elf_phdr->p_type != PT_LOAD)
+ continue;
+
req.len = elf_phdr->p_filesz;
req.addr = __pa(elf_phdr->p_vaddr);
ssc(fd, 1, (long) &req, elf_phdr->p_offset, SSC_READ);
@@ -197,41 +186,12 @@
/* fake an I/O base address: */
asm volatile ("mov ar.k0=%0" :: "r"(0xffffc000000UL));
- /*
- * Install a translation register that identity maps the kernel's 256MB page.
- */
- ia64_clear_ic(flags);
- ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
- ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
- ia64_srlz_d();
- ia64_itr(0x3, IA64_TR_KERNEL, PAGE_OFFSET,
- pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
- _PAGE_SIZE_256M);
- /*
- * Map the bootloader with itr1 and dtr1; dtr1 will later be re-used for other
- * purposes, but itr1 will stick.
- */
- ia64_itr(0x3, IA64_TR_PALCODE, 1024*1024,
- pte_val(mk_pte_phys(1024*1024, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
- _PAGE_SIZE_1M);
- ia64_srlz_i();
-
- enter_virtual_mode(flags | IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT
- | IA64_PSR_DFH | IA64_PSR_BN);
-
- sys_fw_init(args, arglen);
+ bp = sys_fw_init(args, arglen);
ssc(0, (long) kpath, 0, 0, SSC_LOAD_SYMBOLS);
- /*
- * Install the kernel's command line argument on ZERO_PAGE
- * just after the botoparam structure.
- * In case we don't have any argument just put \0
- */
- memcpy(((struct ia64_boot_param *)ZERO_PAGE_ADDR) + 1, args, arglen);
- sp = __pa(&stack);
-
- asm volatile ("br.sptk.few %0" :: "b"(e_entry));
+	asm volatile ("mov sp=%2; mov r28=%1; br.sptk.few %0"
+		      :: "b"(e_entry), "r"(bp), "r"(__pa(&stack)));
cons_write("kernel returned!\n");
ssc(-1, 0, 0, 0, SSC_EXIT);
diff -urN --ignore-all-space linux-davidm/arch/ia64/config.in lia64/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/config.in Thu Apr 5 09:50:37 2001
@@ -24,6 +24,10 @@
define_bool CONFIG_MCA n
define_bool CONFIG_SBUS n
+choice 'IA-64 processor type' \
+ "Itanium CONFIG_ITANIUM \
+ McKinley CONFIG_MCKINLEY" Itanium
+
choice 'IA-64 system type' \
"generic CONFIG_IA64_GENERIC \
DIG-compliant CONFIG_IA64_DIG \
@@ -36,12 +40,8 @@
16KB CONFIG_IA64_PAGE_SIZE_16KB \
64KB CONFIG_IA64_PAGE_SIZE_64KB" 16KB
-if [ "$CONFIG_IA64_DIG" = "y" -o "$CONFIG_IA64_SGI_SN1" = "y" ]; then
- define_bool CONFIG_ITANIUM y
- define_bool CONFIG_IA64_BRL_EMU y
-fi
-
if [ "$CONFIG_ITANIUM" = "y" ]; then
+ define_bool CONFIG_IA64_BRL_EMU y
bool ' Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
@@ -63,6 +63,20 @@
else
define_bool CONFIG_ITANIUM_PTCG y
fi
+ if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data to 128 bytes
+ else
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data to 64 bytes
+ fi
+fi
+
+if [ "$CONFIG_MCKINLEY" = "y" ]; then
+ define_bool CONFIG_ITANIUM_PTCG y
+ define_int CONFIG_IA64_L1_CACHE_SHIFT 7
+ bool ' Enable McKinley A-step specific code' CONFIG_MCKINLEY_ASTEP_SPECIFIC
+ if [ "$CONFIG_MCKINLEY_ASTEP_SPECIFIC" = "y" ]; then
+ bool ' Enable McKinley A0/A1-step specific code' CONFIG_MCKINLEY_A0_SPECIFIC
+ fi
fi
if [ "$CONFIG_IA64_DIG" = "y" ]; then
@@ -75,7 +89,6 @@
define_bool CONFIG_ACPI y
define_bool CONFIG_ACPI_INTERPRETER y
fi
- define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data to 64 bytes
fi
if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
@@ -90,7 +103,6 @@
define_int CONFIG_CACHE_LINE_SHIFT 7
bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM
bool ' Enable NUMA support' CONFIG_NUMA
- define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data to 128 bytes
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
@@ -244,7 +256,6 @@
if [ "$CONFIG_SCSI" != "n" ]; then
bool 'Simulated SCSI disk' CONFIG_SCSI_SIM
fi
- define_int CONFIG_IA64_L1_CACHE_SHIFT 6 # align cache-sensitive data to 64 bytes
endmenu
fi
diff -urN --ignore-all-space linux-davidm/arch/ia64/dig/setup.c lia64/arch/ia64/dig/setup.c
--- linux-davidm/arch/ia64/dig/setup.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/dig/setup.c Thu Apr 5 09:50:52 2001
@@ -54,8 +54,8 @@
memset(&screen_info, 0, sizeof(screen_info));
- if (!ia64_boot_param.console_info.num_rows
- || !ia64_boot_param.console_info.num_cols)
+ if (!ia64_boot_param->console_info.num_rows
+ || !ia64_boot_param->console_info.num_cols)
{
printk("dig_setup: warning: invalid screen-info, guessing 80x25\n");
orig_x = 0;
@@ -64,10 +64,10 @@
num_rows = 25;
font_height = 16;
} else {
- orig_x = ia64_boot_param.console_info.orig_x;
- orig_y = ia64_boot_param.console_info.orig_y;
- num_cols = ia64_boot_param.console_info.num_cols;
- num_rows = ia64_boot_param.console_info.num_rows;
+ orig_x = ia64_boot_param->console_info.orig_x;
+ orig_y = ia64_boot_param->console_info.orig_y;
+ num_cols = ia64_boot_param->console_info.num_cols;
+ num_rows = ia64_boot_param->console_info.num_rows;
font_height = 400 / num_rows;
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/ia32/binfmt_elf32.c lia64/arch/ia64/ia32/binfmt_elf32.c
--- linux-davidm/arch/ia64/ia32/binfmt_elf32.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/ia32/binfmt_elf32.c Thu Apr 5 09:51:03 2001
@@ -57,28 +57,30 @@
if (page_count(page) != 1)
printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
+
pgd = pgd_offset(tsk->mm, address);
- pmd = pmd_alloc(pgd, address);
- if (!pmd) {
- __free_page(page);
- force_sig(SIGKILL, tsk);
- return 0;
- }
- pte = pte_alloc(pmd, address);
- if (!pte) {
- __free_page(page);
- force_sig(SIGKILL, tsk);
- return 0;
- }
- if (!pte_none(*pte)) {
- pte_ERROR(*pte);
- __free_page(page);
- return 0;
- }
+
+ spin_lock(&tsk->mm->page_table_lock);
+ {
+ pmd = pmd_alloc(tsk->mm, pgd, address);
+ if (!pmd)
+ goto out;
+ pte = pte_alloc(tsk->mm, pmd, address);
+ if (!pte)
+ goto out;
+ if (!pte_none(*pte))
+ goto out;
flush_page_to_ram(page);
set_pte(pte, pte_mkwrite(mk_pte(page, PAGE_SHARED)));
+ }
+ spin_unlock(&tsk->mm->page_table_lock);
/* no need for flush_tlb */
return page;
+
+ out:
+ spin_unlock(&tsk->mm->page_table_lock);
+ __free_page(page);
+ return 0;
}
void ia64_elf32_init(struct pt_regs *regs)
@@ -148,19 +150,6 @@
regs->cr_ipsr &= ~IA64_PSR_AC;
regs->loadrs = 0;
- /*
- * According to the ABI %edx points to an `atexit' handler.
- * Since we don't have one we'll set it to 0 and initialize
- * all the other registers just to make things more deterministic,
- * ala the i386 implementation.
- */
- regs->r8 = 0; /* %eax */
- regs->r11 = 0; /* %ebx */
- regs->r9 = 0; /* %ecx */
- regs->r10 = 0; /* %edx */
- regs->r13 = 0; /* %ebp */
- regs->r14 = 0; /* %esi */
- regs->r15 = 0; /* %edi */
}
#undef STACK_TOP
diff -urN --ignore-all-space linux-davidm/arch/ia64/ia32/ia32_entry.S lia64/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/ia32/ia32_entry.S Wed Mar 28 21:42:44 2001
@@ -4,21 +4,6 @@
#include "../kernel/entry.h"
- .section "__ex_table", "a" // declare section & section attributes
- .previous
-
-#if __GNUC__ >= 3
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- [99:] x
-#else
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- 99: x
-#endif
-
- .text
-
/*
* execve() is special because in case of success, we need to
* setup a null register window frame (in case an IA-32 process
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/efi.c lia64/arch/ia64/kernel/efi.c
--- linux-davidm/arch/ia64/kernel/efi.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/efi.c Thu Apr 5 09:51:42 2001
@@ -129,9 +129,9 @@
efi_memory_desc_t *md;
u64 efi_desc_size, start, end;
- efi_map_start = __va(ia64_boot_param.efi_memmap);
- efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
- efi_desc_size = ia64_boot_param.efi_memdesc_size;
+ efi_map_start = __va(ia64_boot_param->efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
+ efi_desc_size = ia64_boot_param->efi_memdesc_size;
for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
md = p;
@@ -204,9 +204,9 @@
u64 mask, flags;
u64 vaddr;
- efi_map_start = __va(ia64_boot_param.efi_memmap);
- efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
- efi_desc_size = ia64_boot_param.efi_memdesc_size;
+ efi_map_start = __va(ia64_boot_param->efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
+ efi_desc_size = ia64_boot_param->efi_memdesc_size;
for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
md = p;
@@ -219,47 +219,42 @@
continue;
}
/*
- * We must use the same page size as the one used
- * for the kernel region when we map the PAL code.
- * This way, we avoid overlapping TRs if code is
- * executed nearby. The Alt I-TLB installs 256MB
- * page sizes as defined for region 7.
+ * The only ITLB entry in region 7 that is used is the one installed by
+ * __start(). That entry covers a 64MB range.
*
* XXX Fixme: should be dynamic here (for page size)
*/
- mask = ~((1 << _PAGE_SIZE_256M)-1);
+ mask = ~((1 << _PAGE_SIZE_64M) - 1);
vaddr = PAGE_OFFSET + md->phys_addr;
/*
- * We must check that the PAL mapping won't overlap
- * with the kernel mapping.
+ * We must check that the PAL mapping won't overlap with the kernel
+ * mapping.
*
- * PAL code is guaranteed to be aligned on a power of 2
- * between 4k and 256KB.
- * Also from the documentation, it seems like there is an
- * implicit guarantee that you will need only ONE ITR to
- * map it. This implies that the PAL code is always aligned
- * on its size, i.e., the closest matching page size supported
- * by the TLB. Therefore PAL code is guaranteed never to cross
- * a 256MB unless it is bigger than 256MB (very unlikely!).
- * So for now the following test is enough to determine whether
- * or not we need a dedicated ITR for the PAL code.
+ * PAL code is guaranteed to be aligned on a power of 2 between 4k and
+ * 256KB. Also from the documentation, it seems like there is an implicit
+ * guarantee that you will need only ONE ITR to map it. This implies that
+ * the PAL code is always aligned on its size, i.e., the closest matching
+ * page size supported by the TLB. Therefore PAL code is guaranteed never
+ * to cross a 64MB unless it is bigger than 64MB (very unlikely!). So for
+ * now the following test is enough to determine whether or not we need a
+ * dedicated ITR for the PAL code.
*/
-	if ((vaddr & mask) == (PAGE_OFFSET & mask)) {
- printk(__FUNCTION__ " : no need to install ITR for PAL Code\n");
+	if ((vaddr & mask) == (KERNEL_START & mask)) {
+ printk(__FUNCTION__ " : no need to install ITR for PAL code\n");
continue;
}
printk("CPU %d: mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
smp_processor_id(), md->phys_addr, md->phys_addr + (md->num_pages << 12),
- vaddr & mask, (vaddr & mask) + 256*1024*1024);
+ vaddr & mask, (vaddr & mask) + 64*1024*1024);
/*
* Cannot write to CRx with PSR.ic=1
*/
ia64_clear_ic(flags);
ia64_itr(0x1, IA64_TR_PALCODE, vaddr & mask,
- pte_val(mk_pte_phys(md->phys_addr, PAGE_KERNEL)), _PAGE_SIZE_256M);
+ pte_val(mk_pte_phys(md->phys_addr, PAGE_KERNEL)), _PAGE_SIZE_64M);
local_irq_restore(flags);
ia64_srlz_i();
}
@@ -294,7 +289,7 @@
if (mem_limit != ~0UL)
printk("Ignoring memory above %luMB\n", mem_limit >> 20);
- efi.systab = __va(ia64_boot_param.efi_systab);
+ efi.systab = __va(ia64_boot_param->efi_systab);
/*
* Verify the EFI Table
@@ -353,9 +348,9 @@
efi.get_next_high_mono_count = phys_get_next_high_mono_count;
efi.reset_system = phys_reset_system;
- efi_map_start = __va(ia64_boot_param.efi_memmap);
- efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
- efi_desc_size = ia64_boot_param.efi_memdesc_size;
+ efi_map_start = __va(ia64_boot_param->efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
+ efi_desc_size = ia64_boot_param->efi_memdesc_size;
#if EFI_DEBUG
/* print EFI memory map: */
@@ -384,9 +379,9 @@
efi_status_t status;
u64 efi_desc_size;
- efi_map_start = __va(ia64_boot_param.efi_memmap);
- efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
- efi_desc_size = ia64_boot_param.efi_memdesc_size;
+ efi_map_start = __va(ia64_boot_param->efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
+ efi_desc_size = ia64_boot_param->efi_memdesc_size;
for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
md = p;
@@ -425,9 +420,9 @@
}
status = efi_call_phys(__va(runtime->set_virtual_address_map),
- ia64_boot_param.efi_memmap_size,
- efi_desc_size, ia64_boot_param.efi_memdesc_version,
- ia64_boot_param.efi_memmap);
+ ia64_boot_param->efi_memmap_size,
+ efi_desc_size, ia64_boot_param->efi_memdesc_version,
+ ia64_boot_param->efi_memmap);
if (status != EFI_SUCCESS) {
printk("Warning: unable to switch EFI into virtual mode (status=%lu)\n", status);
return;
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/efi_stub.S lia64/arch/ia64/kernel/efi_stub.S
--- linux-davidm/arch/ia64/kernel/efi_stub.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/efi_stub.S Wed Mar 28 21:43:23 2001
@@ -33,13 +33,6 @@
#include <asm/processor.h>
#include <asm/asmmacro.h>
- .text
- .psr abi64
- .psr lsb
- .lsb
-
- .text
-
/*
* Inputs:
* in0 = address of function descriptor of EFI routine to call
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/efivars.c lia64/arch/ia64/kernel/efivars.c
--- linux-davidm/arch/ia64/kernel/efivars.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/efivars.c Thu Apr 5 09:52:02 2001
@@ -379,9 +379,7 @@
case EFI_NOT_FOUND:
break;
default:
- printk(KERN_WARNING "get_next_variable() status=%lx\n",
- status);
- BUG();
+ printk(KERN_WARNING "get_next_variable: status=%lx\n", status);
status = EFI_NOT_FOUND;
break;
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/entry.S lia64/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/entry.S Thu Apr 5 09:52:18 2001
@@ -41,11 +41,6 @@
#include "minstate.h"
- .text
- .psr abi64
- .psr lsb
- .lsb
-
/*
* execve() is special because in case of success, we need to
* setup a null register window frame.
@@ -145,9 +140,10 @@
dep r20=0,in0,61,3 // physical address of "current"
;;
st8 [r22]=sp // save kernel stack pointer of old task
- shr.u r26=r20,_PAGE_SIZE_256M
+ shr.u r26=r20,_PAGE_SIZE_64M
+ mov r16=1
;;
- cmp.eq p7,p6=r26,r0 // check < 256M
+ cmp.ne p6,p7=r26,r16 // check >= 64M && < 128M
adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
;;
/*
@@ -175,11 +171,11 @@
.map:
rsm psr.i | psr.ic
- movl r25=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX
+ movl r25=PAGE_KERNEL
;;
srlz.d
or r23=r25,r20 // construct PA | page properties
- mov r25=_PAGE_SIZE_256M<<2
+ mov r25=_PAGE_SIZE_64M<<2
;;
mov cr.itir=r25
mov cr.ifa=in0 // VA of next task...
@@ -189,7 +185,6 @@
;;
itr.d dtr[r25]=r23 // wire in new mapping...
br.cond.sptk.many .done
- ;;
END(ia64_switch_to)
/*
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/fw-emu.c lia64/arch/ia64/kernel/fw-emu.c
--- linux-davidm/arch/ia64/kernel/fw-emu.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/fw-emu.c Thu Apr 5 09:53:15 2001
@@ -22,7 +22,8 @@
#define NUM_MEM_DESCS 2
-static char fw_mem[( sizeof(efi_system_table_t)
+static char fw_mem[( sizeof(struct ia64_boot_param)
+ + sizeof(efi_system_table_t)
+ sizeof(efi_runtime_services_t)
+ 1*sizeof(efi_config_table_t)
+ sizeof(struct ia64_sal_systab)
@@ -333,7 +334,7 @@
return (void *) addr;
}
-void
+struct ia64_boot_param *
sys_fw_init (const char *args, int arglen)
{
efi_system_table_t *efi_systab;
@@ -359,6 +360,7 @@
sal_systab = (void *) cp; cp += sizeof(*sal_systab);
sal_ed = (void *) cp; cp += sizeof(*sal_ed);
efi_memmap = (void *) cp; cp += NUM_MEM_DESCS*sizeof(*efi_memmap);
+ bp = (void *) cp; cp += sizeof(*bp);
cmd_line = (void *) cp;
if (args) {
@@ -441,7 +443,7 @@
md->pad = 0;
md->phys_addr = 2*MB;
md->virt_addr = 0;
- md->num_pages = (64*MB) >> 12; /* 64MB (in 4KB pages) */
+ md->num_pages = (128*MB) >> 12; /* 128MB (in 4KB pages) */
md->attribute = EFI_MEMORY_WB;
/* descriptor for firmware emulator: */
@@ -469,7 +471,6 @@
md->attribute = EFI_MEMORY_WB;
#endif
- bp = id(ZERO_PAGE_ADDR);
bp->efi_systab = __pa(&fw_mem);
bp->efi_memmap = __pa(efi_memmap);
bp->efi_memmap_size = NUM_MEM_DESCS*sizeof(efi_memory_desc_t);
@@ -480,6 +481,7 @@
bp->console_info.num_rows = 25;
bp->console_info.orig_x = 0;
bp->console_info.orig_y = 24;
- bp->num_pci_vectors = 0;
bp->fpswa = 0;
+
+ return bp;
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/gate.S lia64/arch/ia64/kernel/gate.S
--- linux-davidm/arch/ia64/kernel/gate.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/gate.S Wed Mar 28 21:43:58 2001
@@ -13,10 +13,6 @@
#include <asm/unistd.h>
#include <asm/page.h>
- .psr abi64
- .psr lsb
- .lsb
-
.section .text.gate,"ax"
.align PAGE_SIZE
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/head.S lia64/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/head.S Thu Apr 5 09:53:28 2001
@@ -5,8 +5,9 @@
* to set up the kernel's global pointer and jump to the kernel
* entry point.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2001 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Intel Corp.
@@ -18,17 +19,15 @@
#include <asm/asmmacro.h>
#include <asm/fpu.h>
+#include <asm/kregs.h>
+#include <asm/mmu_context.h>
+#include <asm/offsets.h>
#include <asm/pal.h>
#include <asm/pgtable.h>
-#include <asm/offsets.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/system.h>
- .psr abi64
- .psr lsb
- .lsb
-
.section __special_page_section,"ax"
.global empty_zero_page
@@ -39,29 +38,66 @@
swapper_pg_dir:
.skip PAGE_SIZE
- .global empty_bad_page
-empty_bad_page:
- .skip PAGE_SIZE
-
- .global empty_bad_pte_table
-empty_bad_pte_table:
- .skip PAGE_SIZE
-
- .global empty_bad_pmd_table
-empty_bad_pmd_table:
- .skip PAGE_SIZE
-
.rodata
halt_msg:
stringz "Halting kernel\n"
.text
+ .global start_ap
+
+ /*
+ * Start the kernel. When the bootloader passes control to _start(), r28
+ * points to the address of the boot parameter area. Execution reaches
+ * here in physical mode.
+ */
GLOBAL_ENTRY(_start)
+start_ap:
.prologue
.save rp, r4 // terminate unwind chain with a NULL rp
mov r4=r0
.body
+
+ /*
+ * Initialize the region register for region 7 and install a translation register
+ * that maps the kernel's text and data:
+ */
+ rsm psr.i | psr.ic
+ mov r16=((ia64_rid(IA64_REGION_ID_KERNEL, PAGE_OFFSET) << 8) | (_PAGE_SIZE_64M << 2))
+ ;;
+ srlz.i
+ mov r18=_PAGE_SIZE_64M<<2
+ movl r17=PAGE_OFFSET + 64*1024*1024
+ ;;
+ mov rr[r17]=r16
+ mov cr.itir=r18
+ mov cr.ifa=r17
+ mov r16=IA64_TR_KERNEL
+ movl r18=(64*1024*1024 | PAGE_KERNEL)
+ ;;
+ srlz.i
+ ;;
+ itr.i itr[r16]=r18
+ ;;
+ itr.d dtr[r16]=r18
+ ;;
+ srlz.i
+
+ /*
+ * Switch into virtual mode:
+ */
+ movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN)
+ ;;
+ mov cr.ipsr=r16
+ movl r17=1f
+ ;;
+ mov cr.iip=r17
+ mov cr.ifs=r0
+ ;;
+ rfi
+ ;;
+1: // now we are in virtual mode
+
// set IVT entry point---can't access I/O ports without it
movl r3=ia64_ivt
;;
@@ -75,7 +111,7 @@
;;
#ifdef CONFIG_IA64_EARLY_PRINTK
- mov r3=(6<<8) | (28<<2)
+ mov r3=(6<<8) | (_PAGE_SIZE_64M<<2)
movl r2=6<<61
;;
mov rr[r2]=r3
@@ -84,7 +120,8 @@
;;
#endif
-#define isAP p2 // are we booting an Application Processor (not the BSP)?
+#define isAP p2 // are we an Application Processor?
+#define isBP p3 // are we the Bootstrap Processor?
/*
* Find the init_task for the currently booting CPU. At poweron, and in
@@ -98,14 +135,17 @@
shladd r2=r3,3,r2
;;
ld8 r2=[r2]
- cmp4.ne isAP,p0=r3,r0 // p9 = true if this is an application processor (ap)
+ cmp4.ne isAP,isBP=r3,r0
;; // RAW on r2
extr r3=r2,0,61 // r3 = phys addr of task struct
;;
// load the "current" pointer (r13) and ar.k6 with the current task
mov r13=r2
- mov ar.k6=r3 // Physical address
+ mov IA64_KR(CURRENT)=r3 // Physical address
+
+ // initialize k4 to a safe value (64-128MB is mapped by TR_KERNEL)
+ mov IA64_KR(CURRENT_STACK)=1
/*
* Reserve space at the top of the stack for "struct pt_regs". Kernel threads
* don't store interesting values in that structure, but the space still needs
@@ -117,10 +157,17 @@
addl r2=IA64_RBS_OFFSET,r2 // initialize the RSE
mov ar.rsc=0 // place RSE in enforced lazy mode
;;
+ loadrs // clear the dirty partition
+ ;;
mov ar.bspstore=r2 // establish the new RSE stack
;;
mov ar.rsc=0x3 // place RSE in eager mode
+
+(isBP) dep r28=-1,r28,61,3 // make address virtual
+(isBP) movl r2=ia64_boot_param
;;
+(isBP) st8 [r2]=r28 // save the address of the boot param area passed by the bootloader
+
#ifdef CONFIG_IA64_EARLY_PRINTK
.rodata
alive_msg:
@@ -134,16 +181,12 @@
1: // force new bundle
#endif /* CONFIG_IA64_EARLY_PRINTK */
- alloc r2=ar.pfs,8,0,2,0
- ;;
#ifdef CONFIG_SMP
(isAP) br.call.sptk.few rp=smp_callin
.ret0:
(isAP) br.cond.sptk.few self
#endif
-#undef isAP
-
// This is executed by the bootstrap processor (bsp) only:
#ifdef CONFIG_IA64_FW_EMU
@@ -152,9 +195,11 @@
.ret1:
#endif
br.call.sptk.few rp=start_kernel
-.ret2: addl r2=@ltoff(halt_msg),gp
+.ret2: addl r3=@ltoff(halt_msg),gp
+ ;;
+ alloc r2=ar.pfs,8,0,2,0
;;
- ld8 out0=[r2]
+ ld8 out0=[r3]
br.call.sptk.few b0=console_print
self: br.sptk.few self // endless loop
END(_start)
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ia64_ksyms.c lia64/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/ia64_ksyms.c Thu Apr 5 09:53:49 2001
@@ -52,11 +52,6 @@
EXPORT_SYMBOL_NOVERS(__down_write_failed);
EXPORT_SYMBOL_NOVERS(__rwsem_wake);
-#ifdef CONFIG_SMP
-#include <asm/pgalloc.h>
-EXPORT_SYMBOL(smp_flush_tlb_all);
-#endif
-
#include <asm/page.h>
EXPORT_SYMBOL(clear_page);
@@ -69,8 +64,12 @@
EXPORT_SYMBOL(last_cli_ip);
#endif
+#include <asm/pgalloc.h>
+
#ifdef CONFIG_SMP
+EXPORT_SYMBOL(smp_flush_tlb_all);
+
#include <asm/current.h>
#include <asm/hardirq.h>
EXPORT_SYMBOL(synchronize_irq);
@@ -92,7 +91,11 @@
EXPORT_SYMBOL(__global_save_flags);
EXPORT_SYMBOL(__global_restore_flags);
-#endif
+#else /* !CONFIG_SMP */
+
+EXPORT_SYMBOL(__flush_tlb_all);
+
+#endif /* !CONFIG_SMP */
#include <asm/uaccess.h>
EXPORT_SYMBOL(__copy_user);
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/iosapic.c lia64/arch/ia64/kernel/iosapic.c
--- linux-davidm/arch/ia64/kernel/iosapic.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/iosapic.c Thu Apr 5 09:53:59 2001
@@ -352,8 +352,8 @@
acpi_cf_get_pci_vectors(&pci_irq.route, &pci_irq.num_routes);
#else
- pci_irq.route = (struct pci_vector_struct *) __va(ia64_boot_param.pci_vectors);
- pci_irq.num_routes = ia64_boot_param.num_pci_vectors;
+ pci_irq.route = (struct pci_vector_struct *) __va(ia64_boot_param->pci_vectors);
+ pci_irq.num_routes = ia64_boot_param->num_pci_vectors;
#endif
}
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/irq_ia64.c lia64/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/irq_ia64.c Thu Apr 5 09:54:20 2001
@@ -61,6 +61,8 @@
return next_irq++;
}
+extern unsigned int do_IRQ(unsigned long irq, struct pt_regs *regs);
+
/*
* That's where the IVT branches when we get an external
* interrupt. This branches to the correct hardware IRQ handler via
@@ -89,7 +91,7 @@
static unsigned char count;
static long last_time;
- if (count > 5 && jiffies - last_time > 5*HZ)
+ if (jiffies - last_time > 5*HZ)
count = 0;
if (++count < 5) {
last_time = jiffies;
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ivt.S lia64/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/ivt.S Thu Apr 5 09:54:33 2001
@@ -53,6 +53,16 @@
# define PSR_DEFAULT_BITS 0
#endif
+#if 0
+ /*
+ * This lets you track the last eight faults that occurred on the CPU. Make sure ar.k2 isn't
+ * needed for something else before enabling this...
+ */
+# define DBG_FAULT(i) mov r16=ar.k2;; shl r16=r16,8;; add r16=(i),r16;;mov ar.k2=r16
+#else
+# define DBG_FAULT(i)
+#endif
+
#define MINSTATE_VIRT /* needed by minstate.h */
#include "minstate.h"
@@ -79,10 +89,6 @@
*/
#define BREAK_BUNDLE8(a); BREAK_BUNDLE4(a); BREAK_BUNDLE4(a)
- .psr abi64
- .psr lsb
- .lsb
-
.section .text.ivt,"ax"
.align 32768 // align on 32KB boundary
@@ -91,6 +97,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0000 Entry 0 (size 64 bundles) VHPT Translation (8,20,47)
ENTRY(vhpt_miss)
+ DBG_FAULT(0)
/*
* The VHPT vector is invoked when the TLB entry for the virtual page table
* is missing. This happens only as a result of a previous
@@ -190,6 +197,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0400 Entry 1 (size 64 bundles) ITLB (21)
ENTRY(itlb_miss)
+ DBG_FAULT(1)
/*
* The ITLB handler accesses the L3 PTE via the virtually mapped linear
* page table. If a nested TLB miss occurs, we switch into physical
@@ -227,6 +235,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0800 Entry 2 (size 64 bundles) DTLB (9,48)
ENTRY(dtlb_miss)
+ DBG_FAULT(2)
/*
* The DTLB handler accesses the L3 PTE via the virtually mapped linear
* page table. If a nested TLB miss occurs, we switch into physical
@@ -264,6 +273,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
ENTRY(alt_itlb_miss)
+ DBG_FAULT(3)
mov r16=cr.ifa // get address that caused the TLB miss
movl r17=PAGE_KERNEL
mov r21=cr.ipsr
@@ -300,6 +310,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1000 Entry 4 (size 64 bundles) Alt DTLB (7,46)
ENTRY(alt_dtlb_miss)
+ DBG_FAULT(4)
mov r16=cr.ifa // get address that caused the TLB miss
movl r17=PAGE_KERNEL
mov r20=cr.isr
@@ -429,6 +440,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1800 Entry 6 (size 64 bundles) Instruction Key Miss (24)
ENTRY(ikey_miss)
+ DBG_FAULT(6)
FAULT(6)
END(ikey_miss)
@@ -436,6 +448,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x1c00 Entry 7 (size 64 bundles) Data Key Miss (12,51)
ENTRY(dkey_miss)
+ DBG_FAULT(7)
FAULT(7)
END(dkey_miss)
@@ -443,6 +456,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2000 Entry 8 (size 64 bundles) Dirty-bit (54)
ENTRY(dirty_bit)
+ DBG_FAULT(8)
/*
* What we do here is to simply turn on the dirty bit in the PTE. We need to
* update both the page-table and the TLB entry. To efficiently access the PTE,
@@ -498,6 +512,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2400 Entry 9 (size 64 bundles) Instruction Access-bit (27)
ENTRY(iaccess_bit)
+ DBG_FAULT(9)
// Like Entry 8, except for instruction access
mov r16=cr.ifa // get the address that caused the fault
movl r30=1f // load continuation point in case of nested fault
@@ -576,6 +591,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2800 Entry 10 (size 64 bundles) Data Access-bit (15,55)
ENTRY(daccess_bit)
+ DBG_FAULT(10)
// Like Entry 8, except for data access
mov r16=cr.ifa // get the address that caused the fault
movl r30=1f // load continuation point in case of nested fault
@@ -622,6 +638,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2c00 Entry 11 (size 64 bundles) Break instruction (33)
ENTRY(break_fault)
+ DBG_FAULT(11)
mov r16=cr.iim
mov r17=__IA64_BREAK_SYSCALL
mov r31=pr // prepare to save predicates
@@ -719,6 +736,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
ENTRY(interrupt)
+ DBG_FAULT(12)
mov r31=pr // prepare to save predicates
;;
@@ -744,16 +762,19 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3400 Entry 13 (size 64 bundles) Reserved
+ DBG_FAULT(13)
FAULT(13)
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3800 Entry 14 (size 64 bundles) Reserved
+ DBG_FAULT(14)
FAULT(14)
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x3c00 Entry 15 (size 64 bundles) Reserved
+ DBG_FAULT(15)
FAULT(15)
/*
@@ -798,6 +819,7 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4000 Entry 16 (size 64 bundles) Reserved
+ DBG_FAULT(16)
FAULT(16)
#ifdef CONFIG_IA32_SUPPORT
@@ -888,6 +910,7 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4400 Entry 17 (size 64 bundles) Reserved
+ DBG_FAULT(17)
FAULT(17)
ENTRY(non_syscall)
@@ -919,6 +942,7 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4800 Entry 18 (size 64 bundles) Reserved
+ DBG_FAULT(18)
FAULT(18)
/*
@@ -952,6 +976,7 @@
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x4c00 Entry 19 (size 64 bundles) Reserved
+ DBG_FAULT(19)
FAULT(19)
/*
@@ -998,13 +1023,14 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5000 Entry 20 (size 16 bundles) Page Not Present (10,22,49)
ENTRY(page_not_present)
+ DBG_FAULT(20)
mov r16=cr.ifa
rsm psr.dt
/*
* The Linux page fault handler doesn't expect non-present pages to be in
* the TLB. Flush the existing entry now, so we meet that expectation.
*/
- mov r17=_PAGE_SIZE_4K<<2
+ mov r17=PAGE_SHIFT<<2
;;
ptc.l r16,r17
;;
@@ -1017,6 +1043,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5100 Entry 21 (size 16 bundles) Key Permission (13,25,52)
ENTRY(key_permission)
+ DBG_FAULT(21)
mov r16=cr.ifa
rsm psr.dt
mov r31=pr
@@ -1029,6 +1056,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5200 Entry 22 (size 16 bundles) Instruction Access Rights (26)
ENTRY(iaccess_rights)
+ DBG_FAULT(22)
mov r16=cr.ifa
rsm psr.dt
mov r31=pr
@@ -1041,6 +1069,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5300 Entry 23 (size 16 bundles) Data Access Rights (14,53)
ENTRY(daccess_rights)
+ DBG_FAULT(23)
mov r16=cr.ifa
rsm psr.dt
mov r31=pr
@@ -1053,6 +1082,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5400 Entry 24 (size 16 bundles) General Exception (5,32,34,36,38,39)
ENTRY(general_exception)
+ DBG_FAULT(24)
mov r16=cr.isr
mov r31=pr
;;
@@ -1067,6 +1097,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5500 Entry 25 (size 16 bundles) Disabled FP-Register (35)
ENTRY(disabled_fp_reg)
+ DBG_FAULT(25)
rsm psr.dfh // ensure we can access fph
;;
srlz.d
@@ -1079,6 +1110,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5600 Entry 26 (size 16 bundles) Nat Consumption (11,23,37,50)
ENTRY(nat_consumption)
+ DBG_FAULT(26)
FAULT(26)
END(nat_consumption)
@@ -1086,6 +1118,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5700 Entry 27 (size 16 bundles) Speculation (40)
ENTRY(speculation_vector)
+ DBG_FAULT(27)
/*
* A [f]chk.[as] instruction needs to take the branch to the recovery code but
* this part of the architecture is not implemented in hardware on some CPUs, such
@@ -1121,12 +1154,14 @@
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5800 Entry 28 (size 16 bundles) Reserved
+ DBG_FAULT(28)
FAULT(28)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5900 Entry 29 (size 16 bundles) Debug (16,28,56)
ENTRY(debug_vector)
+ DBG_FAULT(29)
FAULT(29)
END(debug_vector)
@@ -1134,6 +1169,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5a00 Entry 30 (size 16 bundles) Unaligned Reference (57)
ENTRY(unaligned_access)
+ DBG_FAULT(30)
mov r16=cr.ipsr
mov r31=pr // prepare to save predicates
;;
@@ -1143,77 +1179,92 @@
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5b00 Entry 31 (size 16 bundles) Unsupported Data Reference (57)
+ DBG_FAULT(31)
FAULT(31)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5c00 Entry 32 (size 16 bundles) Floating-Point Fault (64)
+ DBG_FAULT(32)
FAULT(32)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5d00 Entry 33 (size 16 bundles) Floating Point Trap (66)
+ DBG_FAULT(33)
FAULT(33)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5e00 Entry 34 (size 16 bundles) Lower Privilege Tranfer Trap (66)
+ DBG_FAULT(34)
FAULT(34)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x5f00 Entry 35 (size 16 bundles) Taken Branch Trap (68)
+ DBG_FAULT(35)
FAULT(35)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6000 Entry 36 (size 16 bundles) Single Step Trap (69)
+ DBG_FAULT(36)
FAULT(36)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6100 Entry 37 (size 16 bundles) Reserved
+ DBG_FAULT(37)
FAULT(37)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6200 Entry 38 (size 16 bundles) Reserved
+ DBG_FAULT(38)
FAULT(38)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6300 Entry 39 (size 16 bundles) Reserved
+ DBG_FAULT(39)
FAULT(39)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6400 Entry 40 (size 16 bundles) Reserved
+ DBG_FAULT(40)
FAULT(40)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6500 Entry 41 (size 16 bundles) Reserved
+ DBG_FAULT(41)
FAULT(41)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6600 Entry 42 (size 16 bundles) Reserved
+ DBG_FAULT(42)
FAULT(42)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6700 Entry 43 (size 16 bundles) Reserved
+ DBG_FAULT(43)
FAULT(43)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6800 Entry 44 (size 16 bundles) Reserved
+ DBG_FAULT(44)
FAULT(44)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6900 Entry 45 (size 16 bundles) IA-32 Exeception (17,18,29,41,42,43,44,58,60,61,62,72,73,75,76,77)
ENTRY(ia32_exception)
+ DBG_FAULT(45)
FAULT(45)
END(ia32_exception)
@@ -1221,6 +1272,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6a00 Entry 46 (size 16 bundles) IA-32 Intercept (30,31,59,70,71)
ENTRY(ia32_intercept)
+ DBG_FAULT(46)
#ifdef CONFIG_IA32_SUPPORT
mov r31=pr
mov r16=cr.isr
@@ -1250,6 +1302,7 @@
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6b00 Entry 47 (size 16 bundles) IA-32 Interrupt (74)
ENTRY(ia32_interrupt)
+ DBG_FAULT(47)
#ifdef CONFIG_IA32_SUPPORT
mov r31=pr
br.sptk.many dispatch_to_ia32_handler
@@ -1261,99 +1314,119 @@
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6c00 Entry 48 (size 16 bundles) Reserved
+ DBG_FAULT(48)
FAULT(48)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6d00 Entry 49 (size 16 bundles) Reserved
+ DBG_FAULT(49)
FAULT(49)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6e00 Entry 50 (size 16 bundles) Reserved
+ DBG_FAULT(50)
FAULT(50)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x6f00 Entry 51 (size 16 bundles) Reserved
+ DBG_FAULT(51)
FAULT(51)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7000 Entry 52 (size 16 bundles) Reserved
+ DBG_FAULT(52)
FAULT(52)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7100 Entry 53 (size 16 bundles) Reserved
+ DBG_FAULT(53)
FAULT(53)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7200 Entry 54 (size 16 bundles) Reserved
+ DBG_FAULT(54)
FAULT(54)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7300 Entry 55 (size 16 bundles) Reserved
+ DBG_FAULT(55)
FAULT(55)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7400 Entry 56 (size 16 bundles) Reserved
+ DBG_FAULT(56)
FAULT(56)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7500 Entry 57 (size 16 bundles) Reserved
+ DBG_FAULT(57)
FAULT(57)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7600 Entry 58 (size 16 bundles) Reserved
+ DBG_FAULT(58)
FAULT(58)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7700 Entry 59 (size 16 bundles) Reserved
+ DBG_FAULT(59)
FAULT(59)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7800 Entry 60 (size 16 bundles) Reserved
+ DBG_FAULT(60)
FAULT(60)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7900 Entry 61 (size 16 bundles) Reserved
+ DBG_FAULT(61)
FAULT(61)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7a00 Entry 62 (size 16 bundles) Reserved
+ DBG_FAULT(62)
FAULT(62)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7b00 Entry 63 (size 16 bundles) Reserved
+ DBG_FAULT(63)
FAULT(63)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7c00 Entry 64 (size 16 bundles) Reserved
+ DBG_FAULT(64)
FAULT(64)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7d00 Entry 65 (size 16 bundles) Reserved
+ DBG_FAULT(65)
FAULT(65)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7e00 Entry 66 (size 16 bundles) Reserved
+ DBG_FAULT(66)
FAULT(66)
.align 256
/////////////////////////////////////////////////////////////////////////////////////////
// 0x7f00 Entry 67 (size 16 bundles) Reserved
+ DBG_FAULT(67)
FAULT(67)
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/mca_asm.S lia64/arch/ia64/kernel/mca_asm.S
--- linux-davidm/arch/ia64/kernel/mca_asm.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/mca_asm.S Wed Mar 28 21:44:55 2001
@@ -22,10 +22,6 @@
#include "minstate.h"
- .psr abi64
- .psr lsb
- .lsb
-
/*
* SAL_TO_OS_MCA_HANDOFF_STATE
* 1. GR1 = OS GP
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/pal.S lia64/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/pal.S Wed Mar 28 21:45:02 2001
@@ -14,11 +14,6 @@
#include <asm/asmmacro.h>
#include <asm/processor.h>
- .text
- .psr abi64
- .psr lsb
- .lsb
-
.data
pal_entry_point:
data8 ia64_pal_default_handler
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/process.c lia64/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/process.c Thu Apr 5 09:55:58 2001
@@ -294,7 +294,7 @@
void
do_copy_regs (struct unw_frame_info *info, void *arg)
{
- unsigned long ar_bsp, ndirty, *krbs, addr, mask, sp, nat_bits = 0, ip;
+ unsigned long ar_bsp, addr, mask, sp, nat_bits = 0, ip, ar_rnat;
elf_greg_t *dst = arg;
struct pt_regs *pt;
char nat;
@@ -309,18 +309,18 @@
unw_get_sp(info, &sp);
pt = (struct pt_regs *) (sp + 16);
- krbs = (unsigned long *) current + IA64_RBS_OFFSET/8;
- ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 19));
- ar_bsp = (unsigned long) ia64_rse_skip_regs((long *) pt->ar_bspstore, ndirty);
+ ar_bsp = ia64_get_user_bsp(current, pt);
/*
- * Write portion of RSE backing store living on the kernel
- * stack to the VM of the process.
+ * Write portion of RSE backing store living on the kernel stack to the VM of the
+ * process.
*/
for (addr = pt->ar_bspstore; addr < ar_bsp; addr += 8)
- if (ia64_peek(pt, current, addr, &val) == 0)
+ if (ia64_peek(current, ar_bsp, addr, &val) == 0)
access_process_vm(current, addr, &val, sizeof(val), 1);
+ ia64_peek(current, ar_bsp, (long) ia64_rse_rnat_addr((long *) addr - 1), &ar_rnat);
+
/*
* coredump format:
* r0-r31
@@ -357,7 +357,7 @@
*/
dst[46] = ar_bsp;
dst[47] = pt->ar_bspstore;
- unw_get_ar(info, UNW_AR_RNAT, &dst[48]);
+ dst[48] = ar_rnat;
unw_get_ar(info, UNW_AR_CCV, &dst[49]);
unw_get_ar(info, UNW_AR_UNAT, &dst[50]);
unw_get_ar(info, UNW_AR_FPSR, &dst[51]);
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/ptrace.c lia64/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/ptrace.c Thu Apr 5 09:56:12 2001
@@ -1,8 +1,8 @@
/*
* Kernel support for the ptrace() and syscall tracing interfaces.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from the x86 and Alpha versions. Most of the code in here
* could actually be factored into a common set of routines.
@@ -290,9 +290,9 @@
}
long
-ia64_peek (struct pt_regs *regs, struct task_struct *child, unsigned long addr, long *val)
+ia64_peek (struct task_struct *child, unsigned long user_bsp, unsigned long addr, long *val)
{
- unsigned long *bspstore, *krbs, krbs_num_regs, regnum, *rbs_end, *laddr;
+ unsigned long *bspstore, *krbs, regnum, *laddr, *ubsp = (long *) user_bsp;
struct switch_stack *child_stack;
struct pt_regs *child_regs;
size_t copied;
@@ -303,28 +303,19 @@
child_stack = (struct switch_stack *) (child->thread.ksp + 16);
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
- krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
- rbs_end = ia64_rse_skip_regs(bspstore, krbs_num_regs);
- if (laddr >= bspstore && laddr <= ia64_rse_rnat_addr(rbs_end)) {
- /*
- * Attempt to read the RBS in an area that's actually
- * on the kernel RBS => read the corresponding bits in
- * the kernel RBS.
+ if (laddr >= bspstore && laddr <= ia64_rse_rnat_addr(ubsp)) {
+ /*
+ * Attempt to read the RBS in an area that's actually on the kernel RBS =>
+ * read the corresponding bits in the kernel RBS.
*/
if (ia64_rse_is_rnat_slot(laddr))
ret = get_rnat(child_regs, child_stack, krbs, laddr);
else {
- regnum = ia64_rse_num_regs(bspstore, laddr);
- laddr = ia64_rse_skip_regs(krbs, regnum);
- if (regnum >= krbs_num_regs) {
+ if (laddr >= ubsp)
ret = 0;
- } else {
- if ((unsigned long) laddr >= (unsigned long) high_memory) {
- printk("yikes: trying to access long at %p\n",
- (void *) laddr);
- return -EIO;
- }
- ret = *laddr;
+ else {
+ regnum = ia64_rse_num_regs(bspstore, laddr);
+ ret = *ia64_rse_skip_regs(krbs, regnum);
}
}
} else {
@@ -337,9 +328,9 @@
}
long
-ia64_poke (struct pt_regs *regs, struct task_struct *child, unsigned long addr, long val)
+ia64_poke (struct task_struct *child, unsigned long user_bsp, unsigned long addr, long val)
{
- unsigned long *bspstore, *krbs, krbs_num_regs, regnum, *rbs_end, *laddr;
+ unsigned long *bspstore, *krbs, regnum, *laddr, *ubsp = (long *) user_bsp;
struct switch_stack *child_stack;
struct pt_regs *child_regs;
@@ -348,21 +339,17 @@
child_stack = (struct switch_stack *) (child->thread.ksp + 16);
bspstore = (unsigned long *) child_regs->ar_bspstore;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
- krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
- rbs_end = ia64_rse_skip_regs(bspstore, krbs_num_regs);
- if (laddr >= bspstore && laddr <= ia64_rse_rnat_addr(rbs_end)) {
- /*
- * Attempt to write the RBS in an area that's actually
- * on the kernel RBS => write the corresponding bits
- * in the kernel RBS.
+ if (laddr >= bspstore && laddr <= ia64_rse_rnat_addr(ubsp)) {
+ /*
+ * Attempt to write the RBS in an area that's actually on the kernel RBS
+ * => write the corresponding bits in the kernel RBS.
*/
if (ia64_rse_is_rnat_slot(laddr))
put_rnat(child_regs, child_stack, krbs, laddr, val);
else {
+ if (laddr < ubsp) {
regnum = ia64_rse_num_regs(bspstore, laddr);
- laddr = ia64_rse_skip_regs(krbs, regnum);
- if (regnum < krbs_num_regs) {
- *laddr = val;
+ *ia64_rse_skip_regs(krbs, regnum) = val;
}
}
} else if (access_process_vm(child, addr, &val, sizeof(val), 1) != sizeof(val)) {
@@ -372,69 +359,76 @@
}
/*
- * Synchronize (i.e, write) the RSE backing store living in kernel
- * space to the VM of the indicated child process.
- *
- * If new_bsp is non-zero, the bsp will (effectively) be updated to
- * the new value upon resumption of the child process. This is
- * accomplished by setting the loadrs value to zero and the bspstore
- * value to the new bsp value.
- *
- * When new_bsp and force_loadrs_to_zero are both 0, the register
- * backing store in kernel space is written to user space and the
- * loadrs and bspstore values are left alone.
- *
- * When new_bsp is zero and force_loadrs_to_zero is 1 (non-zero),
- * loadrs is set to 0, and the bspstore value is set to the old bsp
- * value. This will cause the stacked registers (r32 and up) to be
- * obtained entirely from the child's memory space rather than
- * from the kernel. (This makes it easier to write code for
- * modifying the stacked registers in multi-threaded programs.)
- *
- * Note: I had originally written this function without the
- * force_loadrs_to_zero parameter; it was written so that loadrs would
- * always be set to zero. But I had problems with certain system
- * calls apparently causing a portion of the RBS to be zeroed. (I
- * still don't understand why this was happening.) Anyway, it'd
- * definitely less intrusive to leave loadrs and bspstore alone if
- * possible.
+ * Calculate the user-level address that would have been in ar.bsp had the user executed a
+ * "cover" instruction right before entering the kernel.
*/
-static long
-sync_kernel_register_backing_store (struct task_struct *child,
- long new_bsp,
- int force_loadrs_to_zero)
+unsigned long
+ia64_get_user_bsp (struct task_struct *child, struct pt_regs *pt)
{
- unsigned long *krbs, bspstore, *kbspstore, bsp, rbs_end, addr, val;
- long ndirty, ret = 0;
- struct pt_regs *child_regs = ia64_task_regs(child);
-
+ unsigned long *krbs, *bspstore, cfm;
struct unw_frame_info info;
- unsigned long cfm, sof;
-
- unw_init_from_blocked_task(&info, child);
- if (unw_unwind_to_user(&info) < 0)
- return -1;
-
- unw_get_bsp(&info, (unsigned long *) &kbspstore);
+ long ndirty;
krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
- ndirty = ia64_rse_num_regs(krbs, krbs + (child_regs->loadrs >> 19));
- bspstore = child_regs->ar_bspstore;
- bsp = (long) ia64_rse_skip_regs((long *)bspstore, ndirty);
+ bspstore = (unsigned long *) pt->ar_bspstore;
+ ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 19));
- cfm = child_regs->cr_ifs;
- if (!(cfm & (1UL << 63)))
+ if ((long) pt->cr_ifs >= 0) {
+ /*
+ * If bit 63 of cr.ifs is cleared, the kernel was entered via a system
+ * call and we need to recover the CFM that existed on entry to the
+ * kernel by unwinding the kernel stack.
+ */
+ unw_init_from_blocked_task(&info, child);
+ if (unw_unwind_to_user(&info) == 0) {
unw_get_cfm(&info, &cfm);
- sof = (cfm & 0x7f);
- rbs_end = (long) ia64_rse_skip_regs((long *)bspstore, sof);
+ ndirty += (cfm & 0x7f);
+ }
+ }
+ return (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
+}
+
+/*
+ * Synchronize (i.e, write) the RSE backing store living in kernel space to the VM of the
+ * indicated child process.
+ *
+ * If new_bsp is non-zero, the bsp will (effectively) be updated to the new value upon
+ * resumption of the child process. This is accomplished by setting the loadrs value to
+ * zero and the bspstore value to the new bsp value.
+ *
+ * When new_bsp and flush_user_rbs are both 0, the register backing store in kernel space
+ * is written to user space and the loadrs and bspstore values are left alone.
+ *
+ * When new_bsp is zero and flush_user_rbs is 1 (non-zero), loadrs is set to 0, and the
+ * bspstore value is set to the old bsp value. This will cause the stacked registers (r32
+ * and up) to be obtained entirely from the child's memory space rather than from the
+ * kernel. (This makes it easier to write code for modifying the stacked registers in
+ * multi-threaded programs.)
+ *
+ * Note: I had originally written this function without the flush_user_rbs parameter; it
+ * was written so that loadrs would always be set to zero. But I had problems with
+ * certain system calls apparently causing a portion of the RBS to be zeroed. (I still
+ * don't understand why this was happening.) Anyway, it'd definitely be less intrusive to
+ * leave loadrs and bspstore alone if possible.
+ */
+static long
+sync_kernel_register_backing_store (struct task_struct *child, long user_bsp, long new_bsp,
+ int flush_user_rbs)
+{
+ struct pt_regs *child_regs = ia64_task_regs(child);
+ unsigned long addr, val;
+ long ret;
- /* Return early if nothing to do */
- if (bsp == new_bsp)
+ /*
+ * Return early if nothing to do. Note that new_bsp will be zero if the caller
+ * wants to force synchronization without changing bsp.
+ */
+ if (user_bsp == new_bsp)
return 0;
/* Write portion of backing store living on kernel stack to the child's VM. */
- for (addr = bspstore; addr < rbs_end; addr += 8) {
- ret = ia64_peek(child_regs, child, addr, &val);
+ for (addr = child_regs->ar_bspstore; addr < user_bsp; addr += 8) {
+ ret = ia64_peek(child, user_bsp, addr, &val);
if (ret != 0)
return ret;
if (access_process_vm(child, addr, &val, sizeof(val), 1) != sizeof(val))
@@ -442,27 +436,26 @@
}
if (new_bsp != 0) {
- force_loadrs_to_zero = 1;
- bsp = new_bsp;
+ flush_user_rbs = 1;
+ user_bsp = new_bsp;
}
- if (force_loadrs_to_zero) {
+ if (flush_user_rbs) {
child_regs->loadrs = 0;
- child_regs->ar_bspstore = bsp;
+ child_regs->ar_bspstore = user_bsp;
}
-
- return ret;
+ return 0;
}
static void
-sync_thread_rbs (struct task_struct *child, struct mm_struct *mm, int make_writable)
+sync_thread_rbs (struct task_struct *child, long bsp, struct mm_struct *mm, int make_writable)
{
struct task_struct *p;
read_lock(&tasklist_lock);
{
for_each_task(p) {
if (p->mm == mm && p->state != TASK_RUNNING)
- sync_kernel_register_backing_store(p, 0, make_writable);
+ sync_kernel_register_backing_store(p, bsp, 0, make_writable);
}
}
read_unlock(&tasklist_lock);
@@ -535,7 +528,7 @@
static int
access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
{
- unsigned long *ptr, *rbs, *bspstore, ndirty, regnum;
+ unsigned long *ptr, regnum, bsp, rnat_addr;
struct switch_stack *sw;
struct unw_frame_info info;
struct pt_regs *pt;
@@ -632,36 +625,16 @@
/* scratch state */
switch (addr) {
case PT_AR_BSP:
+ bsp = ia64_get_user_bsp(child, pt);
if (write_access)
- /* FIXME? Account for lack of ``cover'' in the syscall case */
- return sync_kernel_register_backing_store(child, *data, 1);
+ return sync_kernel_register_backing_store(child, bsp, *data, 1);
else {
- rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
- bspstore = (unsigned long *) pt->ar_bspstore;
- ndirty = ia64_rse_num_regs(rbs, rbs + (pt->loadrs >> 19));
-
- /*
- * If we're in a system call, no ``cover'' was done. So to
- * make things uniform, we'll add the appropriate displacement
- * onto bsp if we're in a system call.
- */
- if (!(pt->cr_ifs & (1UL << 63))) {
- struct unw_frame_info info;
- unsigned long cfm;
-
- unw_init_from_blocked_task(&info, child);
- if (unw_unwind_to_user(&info) < 0)
- return -1;
-
- unw_get_cfm(&info, &cfm);
- ndirty += cfm & 0x7f;
- }
- *data = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
+ *data = bsp;
return 0;
}
case PT_CFM:
- if (pt->cr_ifs & (1UL << 63)) {
+ if ((long) pt->cr_ifs < 0) {
if (write_access)
pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
| (*data & 0x3fffffffffUL));
@@ -692,6 +665,14 @@
*data = (pt->cr_ipsr & IPSR_READ_MASK);
return 0;
+ case PT_AR_RNAT:
+ bsp = ia64_get_user_bsp(child, pt);
+ rnat_addr = (long) ia64_rse_rnat_addr((long *) bsp - 1);
+ if (write_access)
+ return ia64_poke(child, bsp, rnat_addr, *data);
+ else
+ return ia64_peek(child, bsp, rnat_addr, data);
+
case PT_R1: case PT_R2: case PT_R3:
case PT_R8: case PT_R9: case PT_R10: case PT_R11:
case PT_R12: case PT_R13: case PT_R14: case PT_R15:
@@ -703,7 +684,7 @@
case PT_F6: case PT_F6+8: case PT_F7: case PT_F7+8:
case PT_F8: case PT_F8+8: case PT_F9: case PT_F9+8:
case PT_AR_BSPSTORE:
- case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
+ case PT_AR_RSC: case PT_AR_UNAT: case PT_AR_PFS:
case PT_AR_CCV: case PT_AR_FPSR: case PT_CR_IIP: case PT_PR:
/* scratch register */
ptr = (unsigned long *) ((long) pt + addr - PT_CR_IPSR);
@@ -756,9 +737,9 @@
sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data,
long arg4, long arg5, long arg6, long arg7, long stack)
{
- struct pt_regs *regs = (struct pt_regs *) &stack;
+ struct pt_regs *pt, *regs = (struct pt_regs *) &stack;
struct task_struct *child;
- unsigned long flags;
+ unsigned long flags, bsp;
long ret;
lock_kernel();
@@ -827,9 +808,12 @@
if (child->p_pptr != current)
goto out_tsk;
+ pt = ia64_task_regs(child);
+
switch (request) {
case PTRACE_PEEKTEXT:
case PTRACE_PEEKDATA: /* read word at location addr */
+ bsp = ia64_get_user_bsp(child, pt);
if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)) {
struct mm_struct *mm;
long do_sync;
@@ -841,9 +825,9 @@
}
task_unlock(child);
if (do_sync)
- sync_thread_rbs(child, mm, 0);
+ sync_thread_rbs(child, bsp, mm, 0);
}
- ret = ia64_peek(regs, child, addr, &data);
+ ret = ia64_peek(child, bsp, addr, &data);
if (ret == 0) {
ret = data;
regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
@@ -852,6 +836,7 @@
case PTRACE_POKETEXT:
case PTRACE_POKEDATA: /* write the word at location addr */
+ bsp = ia64_get_user_bsp(child, pt);
if (!(child->thread.flags & IA64_THREAD_KRBS_SYNCED)) {
struct mm_struct *mm;
long do_sync;
@@ -863,9 +848,9 @@
}
task_unlock(child);
if (do_sync)
- sync_thread_rbs(child, mm, 1);
+ sync_thread_rbs(child, bsp, mm, 1);
}
- ret = ia64_poke(regs, child, addr, data);
+ ret = ia64_poke(child, bsp, addr, data);
goto out_tsk;
case PTRACE_PEEKUSR: /* read the word at addr in the USER area */
@@ -887,21 +872,19 @@
case PTRACE_GETSIGINFO:
ret = -EIO;
- if (!access_ok(VERIFY_WRITE, data, sizeof (siginfo_t))
- || child->thread.siginfo == 0)
+ if (!access_ok(VERIFY_WRITE, data, sizeof (siginfo_t)) || !child->thread.siginfo)
goto out_tsk;
- copy_to_user((siginfo_t *) data, child->thread.siginfo, sizeof (siginfo_t));
- ret = 0;
+ ret = copy_siginfo_to_user((siginfo_t *) data, child->thread.siginfo);
goto out_tsk;
- break;
+
case PTRACE_SETSIGINFO:
ret = -EIO;
if (!access_ok(VERIFY_READ, data, sizeof (siginfo_t))
|| child->thread.siginfo == 0)
goto out_tsk;
- copy_from_user(child->thread.siginfo, (siginfo_t *) data, sizeof (siginfo_t));
- ret = 0;
+ ret = copy_siginfo_from_user(child->thread.siginfo, (siginfo_t *) data);
goto out_tsk;
+
case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
case PTRACE_CONT: /* restart after signal. */
ret = -EIO;
@@ -914,8 +897,8 @@
child->exit_code = data;
/* make sure the single step/take-branch trap bits are not set: */
- ia64_psr(ia64_task_regs(child))->ss = 0;
- ia64_psr(ia64_task_regs(child))->tb = 0;
+ ia64_psr(pt)->ss = 0;
+ ia64_psr(pt)->tb = 0;
/* Turn off flag indicating that the KRBS is sync'd with child's VM: */
child->thread.flags &= ~IA64_THREAD_KRBS_SYNCED;
@@ -935,8 +918,8 @@
child->exit_code = SIGKILL;
/* make sure the single step/take-branch trap bits are not set: */
- ia64_psr(ia64_task_regs(child))->ss = 0;
- ia64_psr(ia64_task_regs(child))->tb = 0;
+ ia64_psr(pt)->ss = 0;
+ ia64_psr(pt)->tb = 0;
/* Turn off flag indicating that the KRBS is sync'd with child's VM: */
child->thread.flags &= ~IA64_THREAD_KRBS_SYNCED;
@@ -953,9 +936,9 @@
child->ptrace &= ~PT_TRACESYS;
if (request == PTRACE_SINGLESTEP) {
- ia64_psr(ia64_task_regs(child))->ss = 1;
+ ia64_psr(pt)->ss = 1;
} else {
- ia64_psr(ia64_task_regs(child))->tb = 1;
+ ia64_psr(pt)->tb = 1;
}
child->exit_code = data;
@@ -981,8 +964,8 @@
write_unlock_irqrestore(&tasklist_lock, flags);
/* make sure the single step/take-branch trap bits are not set: */
- ia64_psr(ia64_task_regs(child))->ss = 0;
- ia64_psr(ia64_task_regs(child))->tb = 0;
+ ia64_psr(pt)->ss = 0;
+ ia64_psr(pt)->tb = 0;
/* Turn off flag indicating that the KRBS is sync'd with child's VM: */
child->thread.flags &= ~IA64_THREAD_KRBS_SYNCED;
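The ptrace.c changes above lean on the RSE address arithmetic from asm/rse.h (ia64_rse_skip_regs() and friends). For readers following along, here is a stand-alone user-space re-statement of that arithmetic, not the kernel source itself; the 0x10000 base address in the example below is made up purely for illustration. The key fact: every 64th 8-byte slot of the register backing store is an RNaT collection slot, so advancing by N stacked registers can move the pointer by more than N words.

```c
#include <assert.h>

/* Which of the 64 slots in the current RNaT group this address is in. */
static unsigned long ia64_rse_slot_num(unsigned long *addr)
{
	return ((unsigned long) addr >> 3) & 0x3f;
}

/* Advance addr by num_regs stacked registers, accounting for the RNaT
 * collection slot that occupies every 64th backing-store word. */
static unsigned long *ia64_rse_skip_regs(unsigned long *addr, long num_regs)
{
	long delta = ia64_rse_slot_num(addr) + num_regs;

	if (num_regs < 0)
		delta -= 0x3e;
	return addr + num_regs + delta / 0x3f;
}
```

So with bspstore sitting at slot 0, skipping 62 dirty registers advances 62 words, but skipping 63 advances 64, because one RNaT slot is crossed on the way; this is exactly what makes computing the user-visible bsp from bspstore + ndirty non-trivial.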
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/setup.c lia64/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/setup.c Thu Apr 5 09:56:52 2001
@@ -52,7 +52,7 @@
struct cpuinfo_ia64 cpu_data[NR_CPUS] __attribute__ ((section ("__special_page_section")));
unsigned long ia64_cycles_per_usec;
-struct ia64_boot_param ia64_boot_param;
+struct ia64_boot_param *ia64_boot_param;
struct screen_info screen_info;
/* This tells _start which CPU is booting. */
int cpu_now_booting;
@@ -123,14 +123,7 @@
unw_init();
- /*
- * The secondary bootstrap loader passes us the boot
- * parameters at the beginning of the ZERO_PAGE, so let's
- * stash away those values before ZERO_PAGE gets cleared out.
- */
- memcpy(&ia64_boot_param, (void *) ZERO_PAGE_ADDR, sizeof(ia64_boot_param));
-
- *cmdline_p = __va(ia64_boot_param.command_line);
+ *cmdline_p = __va(ia64_boot_param->command_line);
strncpy(saved_command_line, *cmdline_p, sizeof(saved_command_line));
saved_command_line[COMMAND_LINE_SIZE-1] = '\0'; /* for safety */
@@ -144,9 +137,8 @@
* change APIs, they'd do things for the better. Grumble...
*/
bootmap_start = PAGE_ALIGN(__pa(&_end));
- if (ia64_boot_param.initrd_size)
- bootmap_start = PAGE_ALIGN(bootmap_start
- + ia64_boot_param.initrd_size);
+ if (ia64_boot_param->initrd_size)
+ bootmap_start = PAGE_ALIGN(bootmap_start + ia64_boot_param->initrd_size);
bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn);
efi_memmap_walk(free_available_memory, 0);
@@ -154,7 +146,7 @@
reserve_bootmem(bootmap_start, bootmap_size);
#ifdef CONFIG_BLK_DEV_INITRD
- initrd_start = ia64_boot_param.initrd_start;
+ initrd_start = ia64_boot_param->initrd_start;
if (initrd_start) {
u64 start, size;
@@ -171,12 +163,12 @@
* The loader ONLY passes physical addresses
*/
initrd_start = (unsigned long)__va(initrd_start);
- initrd_end = initrd_start+ia64_boot_param.initrd_size;
+ initrd_end = initrd_start+ia64_boot_param->initrd_size;
start = initrd_start;
- size = ia64_boot_param.initrd_size;
+ size = ia64_boot_param->initrd_size;
printk("Initial ramdisk at: 0x%p (%lu bytes)\n",
- (void *) initrd_start, ia64_boot_param.initrd_size);
+ (void *) initrd_start, ia64_boot_param->initrd_size);
/*
* The kernel end and the beginning of initrd can be
@@ -398,6 +390,14 @@
pal_vm_info_2_u_t vmi;
unsigned int max_ctx;
+ /*
+ * We can't pass "local_cpu_data" to identify_cpu() because we haven't called
+ * ia64_mmu_init() yet. And we can't call ia64_mmu_init() first because it
+ * depends on the data returned by identify_cpu(). We break the dependency by
+ * accessing cpu_data[] the old way, through identity mapped space.
+ */
+ identify_cpu(&cpu_data[smp_processor_id()]);
+
/* Clear the stack memory reserved for pt_regs: */
memset(ia64_task_regs(current), 0, sizeof(struct pt_regs));
@@ -417,8 +417,6 @@
ia64_mmu_init();
- identify_cpu(local_cpu_data);
-
#ifdef CONFIG_IA32_SUPPORT
/* initialize global ia32 state - CR0 and CR4 */
__asm__("mov ar.cflg = %0"
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/signal.c lia64/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/signal.c Thu Apr 5 09:57:11 2001
@@ -194,6 +194,43 @@
}
return err;
}
+}
+
+int
+copy_siginfo_from_user (siginfo_t *to, siginfo_t *from)
+{
+ if (!access_ok(VERIFY_READ, from, sizeof(siginfo_t)))
+ return -EFAULT;
+ if (__copy_from_user(to, from, sizeof(siginfo_t)) != 0)
+ return -EFAULT;
+
+ if (SI_FROMUSER(to))
+ return 0;
+
+ to->si_code &= ~__SI_MASK;
+ if (to->si_code != 0) {
+ switch (to->si_signo) {
+ case SIGILL: case SIGFPE: case SIGSEGV: case SIGBUS: case SIGTRAP:
+ to->si_code |= __SI_FAULT;
+ break;
+
+ case SIGCHLD:
+ to->si_code |= __SI_CHLD;
+ break;
+
+ case SIGPOLL:
+ to->si_code |= __SI_POLL;
+ break;
+
+ case SIGPROF:
+ to->si_code |= __SI_PROF;
+ break;
+
+ default:
+ break;
+ }
+ }
+ return 0;
}
long
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/smpboot.c lia64/arch/ia64/kernel/smpboot.c
--- linux-davidm/arch/ia64/kernel/smpboot.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/smpboot.c Thu Apr 5 09:57:30 2001
@@ -1,52 +1,4 @@
/*
- * Application processor startup code, moved from smp.c to better support kernel profile
- *
- * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
*/
-#include <asm/kregs.h>
-#include <asm/pgtable.h>
-#include <asm/processor.h>
-
-/*
- * SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
- * no RRs set, better than even chance that psr is bogus. Fix all that and
- * call _start. In effect, pretend to be lilo.
- *
- * Stolen from lilo_start.c. Thanks David!
- */
-void
-start_ap (void)
-{
- extern void _start (void);
- unsigned long flags;
-
- /*
- * Install a translation register that identity maps the kernel's 256MB page(s).
- */
- ia64_clear_ic(flags);
- ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
- ia64_srlz_d();
- ia64_itr(0x3, IA64_TR_KERNEL, PAGE_OFFSET,
- pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
- _PAGE_SIZE_256M);
- ia64_srlz_i();
-
- flags = (IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_DFH |
- IA64_PSR_BN);
-
- asm volatile ("movl r8 = 1f\n"
- ";;\n"
- "mov cr.ipsr=%0\n"
- "mov cr.iip=r8\n"
- "mov cr.ifs=r0\n"
- ";;\n"
- "rfi;;"
- "1:\n"
- "movl r1 = __gp" :: "r"(flags) : "r8");
- _start();
-}
-
-
+/* place holder... */
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/sys_ia64.c lia64/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c Mon Apr 2 19:00:18 2001
+++ lia64/arch/ia64/kernel/sys_ia64.c Thu Apr 5 09:57:58 2001
@@ -25,13 +25,14 @@
get_unmapped_area (unsigned long addr, unsigned long len)
{
struct vm_area_struct * vmm;
+ long map_shared = (current->thread.flags & IA64_THREAD_MAP_SHARED) != 0;
if (len > RGN_MAP_LIMIT)
return 0;
if (!addr)
addr = TASK_UNMAPPED_BASE;
- if (current->thread.flags & IA64_THREAD_MAP_SHARED)
+ if (map_shared)
addr = COLOR_ALIGN(addr);
else
addr = PAGE_ALIGN(addr);
@@ -45,6 +46,8 @@
if (!vmm || addr + len <= vmm->vm_start)
return addr;
addr = vmm->vm_end;
+ if (map_shared)
+ addr = COLOR_ALIGN(addr);
}
}
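The sys_ia64.c hunk above makes get_unmapped_area() re-apply cache-color alignment after every failed candidate, not just to the initial hint. A hedged sketch of what a COLOR_ALIGN()-style macro does follows; the 1 MB SHMLBA value is an assumption for illustration only (the real granule comes from the architecture headers).

```c
#include <assert.h>

/* Round addr up to the next shared-mapping color boundary.  SHMLBA is
 * assumed to be 1 MB here purely for illustration. */
#define SHMLBA		(1024UL * 1024UL)
#define COLOR_ALIGN(addr)	(((addr) + SHMLBA - 1) & ~(SHMLBA - 1))
```

Without re-aligning inside the search loop, `addr = vmm->vm_end` could hand back a candidate with the wrong color for a MAP_SHARED mapping, which is the bug the added two lines close.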
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/traps.c lia64/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/traps.c Thu Apr 5 09:59:25 2001
@@ -45,31 +45,10 @@
void __init
trap_init (void)
{
- printk("fpswa interface at %lx\n", ia64_boot_param.fpswa);
- if (ia64_boot_param.fpswa) {
-#define OLD_FIRMWARE
-#ifdef OLD_FIRMWARE
- /*
- * HACK to work around broken firmware. This code
- * applies the label fixup to the FPSWA interface and
- * works both with old and new (fixed) firmware.
- */
- unsigned long addr = (unsigned long) __va(ia64_boot_param.fpswa);
- unsigned long gp_val = *(unsigned long *)(addr + 8);
-
- /* go indirect and indexed to get table address */
- addr = gp_val;
- gp_val = *(unsigned long *)(addr + 8);
-
- while (gp_val == *(unsigned long *)(addr + 8)) {
- *(unsigned long *)addr |= PAGE_OFFSET;
- *(unsigned long *)(addr + 8) |= PAGE_OFFSET;
- addr += 16;
- }
-#endif
+ printk("fpswa interface at %lx\n", ia64_boot_param->fpswa);
+ if (ia64_boot_param->fpswa)
/* FPSWA fixup: make the interface pointer a kernel virtual address: */
- fpswa_interface = __va(ia64_boot_param.fpswa);
- }
+ fpswa_interface = __va(ia64_boot_param->fpswa);
}
void
@@ -238,6 +217,7 @@
{
fp_state_t fp_state;
fpswa_ret_t ret;
+#define FPSWA_BUG
#ifdef FPSWA_BUG
struct ia64_fpreg f6_15[10];
#endif
@@ -317,7 +297,7 @@
if (copy_from_user(bundle, (void *) fault_ip, sizeof(bundle)))
return -1;
- if (fpu_swa_count > 5 && jiffies - last_time > 5*HZ)
+ if (jiffies - last_time > 5*HZ)
fpu_swa_count = 0;
if (++fpu_swa_count < 5) {
last_time = jiffies;
@@ -441,7 +421,7 @@
unsigned long n = vector;
char buf[32], *cp;
- if (count > 5 && jiffies - last_time > 5*HZ)
+ if (jiffies - last_time > 5*HZ)
count = 0;
if (count++ < 5) {
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/unaligned.c lia64/arch/ia64/kernel/unaligned.c
--- linux-davidm/arch/ia64/kernel/unaligned.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/unaligned.c Thu Apr 5 09:59:47 2001
@@ -10,6 +10,7 @@
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
+
#include <asm/uaccess.h>
#include <asm/rse.h>
#include <asm/processor.h>
@@ -324,11 +325,11 @@
DPRINT("ubs_end=%p bsp=%p addr=%px\n", (void *) ubs_end, (void *) bsp, (void *) addr);
- ia64_poke(regs, current, (unsigned long) addr, val);
+ ia64_poke(current, (unsigned long) ubs_end, (unsigned long) addr, val);
rnat_addr = ia64_rse_rnat_addr(addr);
- ia64_peek(regs, current, (unsigned long) rnat_addr, &rnats);
+ ia64_peek(current, (unsigned long) ubs_end, (unsigned long) rnat_addr, &rnats);
DPRINT("rnat @%p = 0x%lx nat=%d old nat=%ld\n",
(void *) rnat_addr, rnats, nat, (rnats >> ia64_rse_slot_num(addr)) & 1);
@@ -337,7 +338,7 @@
rnats |= nat_mask;
else
rnats &= ~nat_mask;
- ia64_poke(regs, current, (unsigned long) rnat_addr, rnats);
+ ia64_poke(current, (unsigned long) ubs_end, (unsigned long) rnat_addr, rnats);
DPRINT("rnat changed to @%p = 0x%lx\n", (void *) rnat_addr, rnats);
}
@@ -393,7 +394,7 @@
DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
- ia64_peek(regs, current, (unsigned long) addr, val);
+ ia64_peek(current, (unsigned long) ubs_end, (unsigned long) addr, val);
if (nat) {
rnat_addr = ia64_rse_rnat_addr(addr);
@@ -401,7 +402,7 @@
DPRINT("rnat @%p = 0x%lx\n", (void *) rnat_addr, rnats);
- ia64_peek(regs, current, (unsigned long) rnat_addr, &rnats);
+ ia64_peek(current, (unsigned long) ubs_end, (unsigned long) rnat_addr, &rnats);
*nat = (rnats & nat_mask) != 0;
}
}
@@ -424,8 +425,8 @@
}
/*
- * Using r0 as a target raises a General Exception fault which has
- * higher priority than the Unaligned Reference fault.
+ * Using r0 as a target raises a General Exception fault which has higher priority
+ * than the Unaligned Reference fault.
*/
/*
@@ -1242,7 +1243,7 @@
{
static unsigned long count, last_time;
- if (count > 5 && jiffies - last_time > 5*HZ)
+ if (jiffies - last_time > 5*HZ)
count = 0;
if (++count < 5) {
last_time = jiffies;
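The traps.c and unaligned.c hunks all fix the same rate-limiter bug: the old `count > 5 &&` guard meant the counter could never be reset once it passed 5, silencing the message forever. A stand-alone sketch of the corrected pattern, with jiffies simulated as a plain parameter and HZ=100 assumed for illustration:

```c
#include <assert.h>

#define HZ 100

static unsigned long count, last_time;

/* Corrected rate limiter: allow a short burst of messages, then stay
 * quiet until 5 seconds have elapsed since the last printed one.  Note
 * that the window check now runs unconditionally, so a long quiet
 * period always resets the counter. */
static int should_print(unsigned long jiffies)
{
	if (jiffies - last_time > 5*HZ)
		count = 0;	/* window expired: start a fresh burst */
	if (++count < 5) {
		last_time = jiffies;
		return 1;	/* caller may printk() */
	}
	return 0;		/* suppressed */
}
```

With the old guard, once `count` exceeded 5 the reset branch could only fire if it was already going to fire, so a flood of faults permanently muted the diagnostic; the patched form recovers after the window passes.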
diff -urN --ignore-all-space linux-davidm/arch/ia64/kernel/unwind.c lia64/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/kernel/unwind.c Wed Mar 28 21:45:47 2001
@@ -1835,16 +1835,6 @@
unw_init_frame_info(info, t, sw);
}
-void
-unw_init_from_current (struct unw_frame_info *info, struct pt_regs *regs)
-{
- struct switch_stack *sw = (struct switch_stack *) regs - 1;
-
- unw_init_frame_info(info, current, sw);
- /* skip over interrupt frame: */
- unw_unwind(info);
-}
-
static void
init_unwind_table (struct unw_table *table, const char *name, unsigned long segment_base,
unsigned long gp, void *table_start, void *table_end)
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/clear_user.S lia64/arch/ia64/lib/clear_user.S
--- linux-davidm/arch/ia64/lib/clear_user.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/lib/clear_user.S Wed Mar 28 21:46:07 2001
@@ -51,22 +51,6 @@
// have side effects (same thing for writing).
//
- .section "__ex_table", "a" // declare section & section attributes
- .previous
-
-// The label comes first because our store instruction contains a comma
-// and confuse the preprocessor otherwise
-
-#if __GNUC__ >= 3
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- [99:] x
-#else
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- 99: x
-#endif
-
GLOBAL_ENTRY(__do_clear_user)
.prologue
.save ar.pfs, saved_pfs
@@ -160,9 +144,7 @@
// (unlikely) error recovery code
//
-2:
-
- EX(.Lexit3, st8 [buf]=r0,16 )
+2: EX(.Lexit3, st8 [buf]=r0,16 )
;; // needed to get len correct when error
st8 [buf2]=r0,16
adds len=-16,len
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/copy_user.S lia64/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/lib/copy_user.S Thu Apr 5 09:14:26 2001
@@ -30,22 +30,6 @@
*/
#include <asm/asmmacro.h>
-
-// The label comes first because our store instruction contains a comma
-// and confuse the preprocessor otherwise
-//
-#undef DEBUG
-#ifdef DEBUG
-#define EX(y,x...) \
-99: x
-#else
-#define EX(y,x...) \
- .section __ex_table,"a"; \
- data4 @gprel(99f); \
- data4 y-99f; \
- .previous; \
-99: x
-#endif
//
// Tuneable parameters
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strlen_user.S lia64/arch/ia64/lib/strlen_user.S
--- linux-davidm/arch/ia64/lib/strlen_user.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/lib/strlen_user.S Thu Apr 5 09:14:26 2001
@@ -69,19 +69,6 @@
// - Clearly performance tuning is required.
//
- .section "__ex_table", "a" // declare section & section attributes
- .previous
-
-#if __GNUC__ >= 3
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- [99:] x
-#else
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- 99: x
-#endif
-
#define saved_pfs r11
#define tmp r10
#define base r16
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strncpy_from_user.S lia64/arch/ia64/lib/strncpy_from_user.S
--- linux-davidm/arch/ia64/lib/strncpy_from_user.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/lib/strncpy_from_user.S Wed Mar 28 21:46:42 2001
@@ -18,19 +18,6 @@
#include <asm/asmmacro.h>
- .section "__ex_table", "a" // declare section & section attributes
- .previous
-
-#if __GNUC__ >= 3
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- [99:] x
-#else
-# define EX(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
- 99: x
-#endif
-
GLOBAL_ENTRY(__strncpy_from_user)
alloc r2=ar.pfs,3,0,0,0
mov r8=0
@@ -52,13 +39,6 @@
;;
(p6) mov r8=in2 // buffer filled up---return buffer length
(p7) sub r8=in1,r9,1 // return string length (excluding NUL character)
-#if __GNUC__ >= 3
[.Lexit:]
br.ret.sptk.few rp
-#else
- br.ret.sptk.few rp
-
-.Lexit:
- br.ret.sptk.few rp
-#endif
END(__strncpy_from_user)
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/strnlen_user.S lia64/arch/ia64/lib/strnlen_user.S
--- linux-davidm/arch/ia64/lib/strnlen_user.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/lib/strnlen_user.S Wed Mar 28 21:46:53 2001
@@ -14,20 +14,6 @@
#include <asm/asmmacro.h>
- .section "__ex_table", "a" // declare section & section attributes
- .previous
-
-/* If a fault occurs, r8 gets set to -EFAULT and r9 gets cleared. */
-#if __GNUC__ >= 3
-# define EXC(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
- [99:] x
-#else
-# define EXC(y,x...) \
- .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
- 99: x
-#endif
-
GLOBAL_ENTRY(__strnlen_user)
.prologue
alloc r2=ar.pfs,2,0,0,0
@@ -43,7 +29,7 @@
;;
// XXX braindead strlen loop---this needs to be optimized
.Loop1:
- EXC(.Lexit, ld1 r8=[in0],1)
+ EXCLR(.Lexit, ld1 r8=[in0],1)
add r9=1,r9
;;
cmp.eq p6,p0=r8,r0
diff -urN --ignore-all-space linux-davidm/arch/ia64/lib/swiotlb.c lia64/arch/ia64/lib/swiotlb.c
--- linux-davidm/arch/ia64/lib/swiotlb.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/lib/swiotlb.c Thu Apr 5 10:01:35 2001
@@ -169,8 +169,8 @@
* sleep because we are called from with in interrupts!
*/
panic("map_single: could not allocate software IO TLB (%ld bytes)", size);
-found:
}
+ found:
spin_unlock_irqrestore(&io_tlb_lock, flags);
/*
diff -urN --ignore-all-space linux-davidm/arch/ia64/mm/init.c lia64/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/mm/init.c Thu Apr 5 10:02:20 2001
@@ -28,98 +28,12 @@
/* References to section boundaries: */
extern char _stext, _etext, _edata, __init_begin, __init_end;
-/*
- * These are allocated in head.S so that we get proper page alignment.
- * If you change the size of these then change head.S as well.
- */
-extern char empty_bad_page[PAGE_SIZE];
-extern pmd_t empty_bad_pmd_table[PTRS_PER_PMD];
-extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
-
extern void ia64_tlb_init (void);
unsigned long MAX_DMA_ADDRESS = PAGE_OFFSET + 0x100000000UL;
static unsigned long totalram_pages;
-/*
- * Fill in empty_bad_pmd_table with entries pointing to
- * empty_bad_pte_table and return the address of this PMD table.
- */
-static pmd_t *
-get_bad_pmd_table (void)
-{
- pmd_t v;
- int i;
-
- pmd_set(&v, empty_bad_pte_table);
-
- for (i = 0; i < PTRS_PER_PMD; ++i)
- empty_bad_pmd_table[i] = v;
-
- return empty_bad_pmd_table;
-}
-
-/*
- * Fill in empty_bad_pte_table with PTEs pointing to empty_bad_page
- * and return the address of this PTE table.
- */
-static pte_t *
-get_bad_pte_table (void)
-{
- pte_t v;
- int i;
-
- set_pte(&v, pte_mkdirty(mk_pte_phys(__pa(empty_bad_page), PAGE_SHARED)));
-
- for (i = 0; i < PTRS_PER_PTE; ++i)
- empty_bad_pte_table[i] = v;
-
- return empty_bad_pte_table;
-}
-
-void
-__handle_bad_pgd (pgd_t *pgd)
-{
- pgd_ERROR(*pgd);
- pgd_set(pgd, get_bad_pmd_table());
-}
-
-void
-__handle_bad_pmd (pmd_t *pmd)
-{
- pmd_ERROR(*pmd);
- pmd_set(pmd, get_bad_pte_table());
-}
-
-/*
- * Allocate and initialize an L3 directory page and set
- * the L2 directory entry PMD to the newly allocated page.
- */
-pte_t*
-get_pte_slow (pmd_t *pmd, unsigned long offset)
-{
- pte_t *pte;
-
- pte = (pte_t *) __get_free_page(GFP_KERNEL);
- if (pmd_none(*pmd)) {
- if (pte) {
- /* everything A-OK */
- clear_page(pte);
- pmd_set(pmd, pte);
- return pte + offset;
- }
- pmd_set(pmd, get_bad_pte_table());
- return NULL;
- }
- free_page((unsigned long) pte);
- if (pmd_bad(*pmd)) {
- __handle_bad_pmd(pmd);
- return NULL;
- }
- return (pte_t *) pmd_page(*pmd) + offset;
-}
-
int
do_check_pgt_cache (int low, int high)
{
@@ -128,11 +42,11 @@
if (pgtable_cache_size > high) {
do {
if (pgd_quicklist)
- free_page((unsigned long)get_pgd_fast()), ++freed;
+ free_page((unsigned long)pgd_alloc_one_fast()), ++freed;
if (pmd_quicklist)
- free_page((unsigned long)get_pmd_fast()), ++freed;
+ free_page((unsigned long)pmd_alloc_one_fast(0, 0)), ++freed;
if (pte_quicklist)
- free_page((unsigned long)get_pte_fast()), ++freed;
+ free_page((unsigned long)pte_alloc_one_fast(0, 0)), ++freed;
} while (pgtable_cache_size > low);
}
return freed;
@@ -289,25 +203,23 @@
page_address(page));
pgd = pgd_offset_k(address); /* note: this is NOT pgd_offset()! */
- pmd = pmd_alloc(pgd, address);
- if (!pmd) {
- __free_page(page);
- panic("Out of memory.");
- return 0;
- }
- pte = pte_alloc(pmd, address);
- if (!pte) {
- __free_page(page);
- panic("Out of memory.");
- return 0;
- }
+
+ spin_lock(&init_mm.page_table_lock);
+ {
+ pmd = pmd_alloc(&init_mm, pgd, address);
+ if (!pmd)
+ goto out;
+ pte = pte_alloc(&init_mm, pmd, address);
+ if (!pte)
+ goto out;
if (!pte_none(*pte)) {
pte_ERROR(*pte);
- __free_page(page);
- return 0;
+ goto out;
}
flush_page_to_ram(page);
set_pte(pte, mk_pte(page, PAGE_GATE));
+ }
+ out: spin_unlock(&init_mm.page_table_lock);
/* no need for flush_tlb */
return page;
}
@@ -323,14 +235,14 @@
# define VHPT_ENABLE_BIT 1
#endif
- /* Set up the kernel identity mappings (regions 6 & 7) and the vmalloc area (region 5): */
+ /*
+ * Set up the kernel identity mapping for regions 6 and 5. The mapping for region
+ * 7 is set up in _start().
+ */
ia64_clear_ic(flags);
rid = ia64_rid(IA64_REGION_ID_KERNEL, __IA64_UNCACHED_OFFSET);
- ia64_set_rr(__IA64_UNCACHED_OFFSET, (rid << 8) | (_PAGE_SIZE_256M << 2));
-
- rid = ia64_rid(IA64_REGION_ID_KERNEL, PAGE_OFFSET);
- ia64_set_rr(PAGE_OFFSET, (rid << 8) | (_PAGE_SIZE_256M << 2));
+ ia64_set_rr(__IA64_UNCACHED_OFFSET, (rid << 8) | (_PAGE_SIZE_64M << 2));
rid = ia64_rid(IA64_REGION_ID_KERNEL, VMALLOC_START);
ia64_set_rr(VMALLOC_START, (rid << 8) | (PAGE_SHIFT << 2) | 1);
diff -urN --ignore-all-space linux-davidm/arch/ia64/vmlinux.lds.S lia64/arch/ia64/vmlinux.lds.S
--- linux-davidm/arch/ia64/vmlinux.lds.S Thu Apr 5 12:02:10 2001
+++ lia64/arch/ia64/vmlinux.lds.S Thu Apr 5 10:02:57 2001
@@ -5,7 +5,7 @@
OUTPUT_FORMAT("elf64-ia64-little")
OUTPUT_ARCH(ia64)
-ENTRY(_start)
+ENTRY(phys_start)
SECTIONS
{
/* Sections to be discarded */
@@ -16,6 +16,7 @@
}
v = PAGE_OFFSET; /* this symbol is here to make debugging easier... */
+ phys_start = _start - PAGE_OFFSET;
. = KERNEL_START;
@@ -41,7 +42,7 @@
/* Read-only data */
- __gp = ALIGN(8) + 0x200000;
+ __gp = ALIGN(16) + 0x200000; /* gp must be 16-byte aligned for exc. table */
/* Global data */
_data = .;
@@ -67,9 +68,15 @@
{ *(__ksymtab) }
__stop___ksymtab = .;
+ __start___kallsyms = .; /* All kernel symbols for debugging */
+ __kallsyms : AT(ADDR(__kallsyms) - PAGE_OFFSET)
+ { *(__kallsyms) }
+ __stop___kallsyms = .;
+
/* Unwind info & table: */
.IA_64.unwind_info : AT(ADDR(.IA_64.unwind_info) - PAGE_OFFSET)
{ *(.IA_64.unwind_info*) }
+ . = ALIGN(8);
ia64_unw_start = .;
.IA_64.unwind : AT(ADDR(.IA_64.unwind) - PAGE_OFFSET)
{ *(.IA_64.unwind*) }
diff -urN --ignore-all-space linux-davidm/drivers/char/agp/agpgart_be.c lia64/drivers/char/agp/agpgart_be.c
--- linux-davidm/drivers/char/agp/agpgart_be.c Thu Apr 5 12:02:10 2001
+++ lia64/drivers/char/agp/agpgart_be.c Thu Apr 5 10:03:27 2001
@@ -1774,8 +1774,7 @@
}
}
-void *agp_add_fixup(struct vm_area_struct *vma, unsigned long size,
- unsigned long offset)
+void *agp_add_fixup(struct vm_area_struct *vma, unsigned long size, unsigned long offset)
{
agp_fixup_entry_t *entry;
void *handle;
@@ -1822,10 +1821,8 @@
}
spin_lock(&agp_fixup_lock);
- for (pt = agp_fixup_list, prev = NULL; pt; prev = pt, pt = pt->next)
- {
- if ((vma == NULL && pt->handle == ((unsigned long) handle)) ||
- (pt->vma == vma)) {
+ for (pt = agp_fixup_list, prev = NULL; pt; prev = pt, pt = pt->next) {
+ if ((vma == NULL && pt->handle == ((unsigned long) handle)) || (pt->vma == vma)) {
if (prev) {
prev->next = pt->next;
@@ -1844,9 +1841,8 @@
* Look up and return the pte corresponding to addr. Take into account that
* addr might be part of a vmmap in the vmalloc area.
*/
-static pte_t * agp_lookup_pte(struct vm_area_struct *vma, unsigned long addr,
- int kernel) {
-
+static pte_t * agp_lookup_pte(struct vm_area_struct *vma, unsigned long addr, int kernel)
+{
pgd_t *dir;
pmd_t *pmd;
pte_t *pte;
@@ -1977,14 +1973,12 @@
* to previous fixups.
*/
if(old_pa != offset)
- atomic_dec(&virt_to_page(__va(old_pa))
- ->count);
+ atomic_dec(&virt_to_page(__va(old_pa))->count);
/*
* Replace the physical page referenced by pte
* with the new one.
*/
- *pte = mk_pte_phys(new_pa,
- __pgprot(pte_val(*pte) & ~_PFN_MASK));
+ *pte = mk_pte_phys(new_pa, __pgprot(pte_val(*pte) & ~_PFN_MASK));
/*
* Indicate that we're using this page. (This
@@ -2008,8 +2002,7 @@
*/
} else if(old_pa != offset) {
atomic_dec(&virt_to_page(__va(old_pa))->count);
- *pte = mk_pte_phys(offset,
- __pgprot(pte_val(*pte) & ~_PFN_MASK));
+ *pte = mk_pte_phys(offset, __pgprot(pte_val(*pte) & ~_PFN_MASK));
}
/*
diff -urN --ignore-all-space linux-davidm/drivers/char/agp/vmmap.c lia64/drivers/char/agp/vmmap.c
--- linux-davidm/drivers/char/agp/vmmap.c Thu Apr 5 12:02:10 2001
+++ lia64/drivers/char/agp/vmmap.c Thu Apr 5 10:03:51 2001
@@ -133,7 +133,7 @@
if (end > PGDIR_SIZE)
end = PGDIR_SIZE;
do {
- pte_t * pte = pte_alloc_kernel(pmd, address);
+ pte_t * pte = pte_alloc(&init_mm, pmd, address);
if (!pte)
return -ENOMEM;
if (agp_alloc_area_pte(pte, address, end - address, target, prot))
@@ -154,11 +154,11 @@
dir = pgd_offset_k(address);
flush_cache_all();
- lock_kernel();
+ spin_lock(&init_mm.page_table_lock);
do {
pmd_t *pmd;
- pmd = pmd_alloc_kernel(dir, address);
+ pmd = pmd_alloc(&init_mm, dir, address);
ret = -ENOMEM;
if (!pmd)
break;
@@ -173,7 +173,7 @@
ret = 0;
} while (address && (address < end));
- unlock_kernel();
+ spin_unlock(&init_mm.page_table_lock);
flush_tlb_all();
return ret;
}
@@ -193,8 +193,8 @@
if (tmp->addr == addr) {
*p = tmp->next;
agp_vmfree_area_pages(VMALLOC_VMADDR(tmp->addr), tmp->size);
- kfree(tmp);
write_unlock(&vmlist_lock);
+ kfree(tmp);
return;
}
}
diff -urN --ignore-all-space linux-davidm/drivers/char/efirtc.c lia64/drivers/char/efirtc.c
--- linux-davidm/drivers/char/efirtc.c Mon Sep 18 14:57:01 2000
+++ lia64/drivers/char/efirtc.c Thu Apr 5 10:04:04 2001
@@ -40,7 +40,7 @@
#include <asm/uaccess.h>
#include <asm/system.h>
-#define EFI_RTC_VERSION "0.2"
+#define EFI_RTC_VERSION "0.3"
#define EFI_ISDST (EFI_TIME_ADJUST_DAYLIGHT|EFI_TIME_IN_DAYLIGHT)
/*
@@ -315,17 +315,12 @@
spin_unlock_irqrestore(&efi_rtc_lock,flags);
p += sprintf(p,
- "Time :\n"
- "Year : %u\n"
- "Month : %u\n"
- "Day : %u\n"
- "Hour : %u\n"
- "Minute : %u\n"
- "Second : %u\n"
- "Nanosecond: %u\n"
+ "Time : %u:%u:%u.%09u\n"
+ "Date : %u-%u-%u\n"
"Daylight : %u\n",
- eft.year, eft.month, eft.day, eft.hour, eft.minute,
- eft.second, eft.nanosecond, eft.daylight);
+ eft.hour, eft.minute, eft.second, eft.nanosecond,
+ eft.year, eft.month, eft.day,
+ eft.daylight);
if ( eft.timezone == EFI_UNSPECIFIED_TIMEZONE)
p += sprintf(p, "Timezone : unspecified\n");
@@ -335,33 +330,27 @@
p += sprintf(p,
- "\nWakeup Alm:\n"
+ "Alarm Time : %u:%u:%u.%09u\n"
+ "Alarm Date : %u-%u-%u\n"
+ "Alarm Daylight : %u\n"
"Enabled : %s\n"
- "Pending : %s\n"
- "Year : %u\n"
- "Month : %u\n"
- "Day : %u\n"
- "Hour : %u\n"
- "Minute : %u\n"
- "Second : %u\n"
- "Nanosecond: %u\n"
- "Daylight : %u\n",
- enabled == 1 ? "Yes" : "No",
- pending == 1 ? "Yes" : "No",
- alm.year, alm.month, alm.day, alm.hour, alm.minute,
- alm.second, alm.nanosecond, alm.daylight);
+ "Pending : %s\n",
+ alm.hour, alm.minute, alm.second, alm.nanosecond,
+ alm.year, alm.month, alm.day,
+ alm.daylight,
+ enabled == 1 ? "yes" : "no",
+ pending == 1 ? "yes" : "no");
if ( eft.timezone == EFI_UNSPECIFIED_TIMEZONE)
p += sprintf(p, "Timezone : unspecified\n");
else
/* XXX fixme: convert to string? */
- p += sprintf(p, "Timezone : %u\n", eft.timezone);
+ p += sprintf(p, "Timezone : %u\n", alm.timezone);
/*
* now prints the capabilities
*/
p += sprintf(p,
- "\nClock Cap :\n"
"Resolution: %u\n"
"Accuracy : %u\n"
"SetstoZero: %u\n",
@@ -390,7 +379,7 @@
misc_register(&efi_rtc_dev);
- create_proc_read_entry ("efirtc", 0, NULL, efi_rtc_read_proc, NULL);
+ create_proc_read_entry ("driver/efirtc", 0, NULL, efi_rtc_read_proc, NULL);
return 0;
}
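The efirtc.c hunks collapse the per-field /proc output into one-line "H:M:S.ns" entries. A hedged sketch of the new formatting (the struct and function names here are made up; only the format string mirrors the patch, and the archive may have collapsed its column padding):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical struct mirroring the efi_time fields the hunk prints. */
struct efi_time {
	unsigned year, month, day, hour, minute, second, nanosecond;
};

/* Format one line the way the patched read_proc does:
 * "Time : %u:%u:%u.%09u\n" -- nanoseconds zero-padded to 9 digits. */
static int
format_time (char *buf, size_t len, const struct efi_time *t)
{
	return snprintf(buf, len, "Time : %u:%u:%u.%09u\n",
			t->hour, t->minute, t->second, t->nanosecond);
}
```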
diff -urN --ignore-all-space linux-davidm/fs/binfmt_elf.c lia64/fs/binfmt_elf.c
--- linux-davidm/fs/binfmt_elf.c Mon Apr 2 19:01:51 2001
+++ lia64/fs/binfmt_elf.c Thu Apr 5 10:05:23 2001
@@ -140,7 +140,7 @@
*/
sp = (elf_addr_t *)((~15UL & (unsigned long)(u_platform)) - 16UL);
csp = sp;
- csp -= ((exec ? DLINFO_ITEMS*2 : 4) + (k_platform ? 2 : 0));
+ csp -= DLINFO_ITEMS*2 + (k_platform ? 2 : 0);
csp -= envc+1;
csp -= argc+1;
csp -= (!ibcs ? 3 : 1); /* argc itself */
@@ -160,25 +160,20 @@
sp -= 2;
NEW_AUX_ENT(0, AT_PLATFORM, (elf_addr_t)(unsigned long) u_platform);
}
- sp -= 3*2;
+ sp -= DLINFO_ITEMS*2;
NEW_AUX_ENT(0, AT_HWCAP, hwcap);
NEW_AUX_ENT(1, AT_PAGESZ, ELF_EXEC_PAGESIZE);
NEW_AUX_ENT(2, AT_CLKTCK, CLOCKS_PER_SEC);
-
- if (exec) {
- sp -= 10*2;
-
- NEW_AUX_ENT(0, AT_PHDR, load_addr + exec->e_phoff);
- NEW_AUX_ENT(1, AT_PHENT, sizeof (struct elf_phdr));
- NEW_AUX_ENT(2, AT_PHNUM, exec->e_phnum);
- NEW_AUX_ENT(3, AT_BASE, interp_load_addr);
- NEW_AUX_ENT(4, AT_FLAGS, 0);
- NEW_AUX_ENT(5, AT_ENTRY, load_bias + exec->e_entry);
- NEW_AUX_ENT(6, AT_UID, (elf_addr_t) current->uid);
- NEW_AUX_ENT(7, AT_EUID, (elf_addr_t) current->euid);
- NEW_AUX_ENT(8, AT_GID, (elf_addr_t) current->gid);
- NEW_AUX_ENT(9, AT_EGID, (elf_addr_t) current->egid);
- }
+ NEW_AUX_ENT( 3, AT_PHDR, load_addr + exec->e_phoff);
+ NEW_AUX_ENT( 4, AT_PHENT, sizeof (struct elf_phdr));
+ NEW_AUX_ENT( 5, AT_PHNUM, exec->e_phnum);
+ NEW_AUX_ENT( 6, AT_BASE, interp_load_addr);
+ NEW_AUX_ENT( 7, AT_FLAGS, 0);
+ NEW_AUX_ENT( 8, AT_ENTRY, load_bias + exec->e_entry);
+ NEW_AUX_ENT( 9, AT_UID, (elf_addr_t) current->uid);
+ NEW_AUX_ENT(10, AT_EUID, (elf_addr_t) current->euid);
+ NEW_AUX_ENT(11, AT_GID, (elf_addr_t) current->gid);
+ NEW_AUX_ENT(12, AT_EGID, (elf_addr_t) current->egid);
#undef NEW_AUX_ENT
sp -= envc+1;
@@ -694,7 +689,7 @@
create_elf_tables((char *)bprm->p,
bprm->argc,
bprm->envc,
- (interpreter_type == INTERPRETER_ELF ? &elf_ex : NULL),
+ &elf_ex,
load_addr, load_bias,
interp_load_addr,
(interpreter_type == INTERPRETER_AOUT ? 0 : 1));
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/a.out.h lia64/include/asm-ia64/a.out.h
--- linux-davidm/include/asm-ia64/a.out.h Thu Jan 4 22:40:20 2001
+++ lia64/include/asm-ia64/a.out.h Thu Apr 5 11:51:41 2001
@@ -30,7 +30,8 @@
#define N_TXTOFF(x) 0
#ifdef __KERNEL__
-# define STACK_TOP (0x8000000000000000UL + (1UL << (4*PAGE_SHIFT - 12)))
+# include <asm/page.h>
+# define STACK_TOP (0x8000000000000000UL + (1UL << (4*PAGE_SHIFT - 12)) - PAGE_SIZE)
# define IA64_RBS_BOT (STACK_TOP - 0x80000000L) /* bottom of register backing store */
#endif
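As a worked example of the a.out.h change: subtracting PAGE_SIZE keeps STACK_TOP one page below the region's mappable limit. Assuming 16KB pages (PAGE_SHIFT = 14; pick your configured value):

```c
#include <assert.h>

/* Assumed page size for illustration only. */
#define PAGE_SHIFT	14
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* The patched definition from include/asm-ia64/a.out.h above. */
#define STACK_TOP	(0x8000000000000000UL \
			 + (1UL << (4*PAGE_SHIFT - 12)) - PAGE_SIZE)
```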
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/asmmacro.h lia64/include/asm-ia64/asmmacro.h
--- linux-davidm/include/asm-ia64/asmmacro.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/asmmacro.h Wed Mar 28 21:47:53 2001
@@ -29,4 +29,27 @@
#define ASM_UNW_PRLG_PR 0x1
#define ASM_UNW_PRLG_GRSAVE(ninputs) (32+(ninputs))
+/*
+ * Helper macros for accessing user memory.
+ */
+
+ .section "__ex_table", "a" // declare section & section attributes
+ .previous
+
+#if __GNUC__ >= 3
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ [99:] x
+# define EXCLR(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
+ [99:] x
+#else
+# define EX(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y); \
+ 99: x
+# define EXCLR(y,x...) \
+ .xdata4 "__ex_table", @gprel(99f), @gprel(y)+4; \
+ 99: x
+#endif
+
#endif /* _ASM_IA64_ASMMACRO_H */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/io.h lia64/include/asm-ia64/io.h
--- linux-davidm/include/asm-ia64/io.h Thu Jan 4 22:40:20 2001
+++ lia64/include/asm-ia64/io.h Thu Apr 5 11:51:41 2001
@@ -13,8 +13,8 @@
* over and over again with slight variations and possibly making a
* mistake somewhere.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*/
@@ -82,21 +82,11 @@
}
/*
- * For the in/out instructions, we need to do:
- *
- * o "mf" _before_ doing the I/O access to ensure that all prior
- * accesses to memory occur before the I/O access
- * o "mf.a" _after_ doing the I/O access to ensure that the access
- * has completed before we're doing any other I/O accesses
- *
- * The former is necessary because we might be doing normal (cached) memory
- * accesses, e.g., to set up a DMA descriptor table and then do an "outX()"
- * to tell the DMA controller to start the DMA operation. The "mf" ahead
- * of the I/O operation ensures that the DMA table is correct when the I/O
- * access occurs.
- *
- * The mf.a is necessary to ensure that all I/O access occur in program
- * order. --davidm 99/12/07
+ * For the in/out routines, we need to do "mf.a" _after_ doing the I/O access to ensure
+ * that the access has completed before executing other I/O accesses. Since we're doing
+ * the accesses through an uncachable (UC) translation, the CPU will execute them in
+ * program order. However, we still need to tell the compiler not to shuffle them around
+ * during optimization, which is why we use "volatile" pointers.
*/
static inline unsigned int
@@ -378,11 +368,10 @@
#endif
/*
- * An "address" in IO memory space is not clearly either an integer
- * or a pointer. We will accept both, thus the casts.
+ * An "address" in IO memory space is not clearly either an integer or a pointer. We will
+ * accept both, thus the casts.
*
- * On ia-64, we access the physical I/O memory space through the
- * uncached kernel region.
+ * On ia-64, we access the physical I/O memory space through the uncached kernel region.
*/
static inline void *
ioremap (unsigned long offset, unsigned long size)
@@ -412,75 +401,6 @@
__ia64_memcpy_toio((unsigned long)(to),(from),(len))
#define memset_io(addr,c,len) \
__ia64_memset_c_io((unsigned long)(addr),0x0101010101010101UL*(u8)(c),(len))
-
-#define __HAVE_ARCH_MEMSETW_IO
-#define memsetw_io(addr,c,len) \
- _memset_c_io((unsigned long)(addr),0x0001000100010001UL*(u16)(c),(len))
-
-/*
- * XXX - We don't have csum_partial_copy_fromio() yet, so we cheat here and
- * just copy it. The net code will then do the checksum later. Presently
- * only used by some shared memory 8390 Ethernet cards anyway.
- */
-
-#define eth_io_copy_and_sum(skb,src,len,unused) memcpy_fromio((skb)->data,(src),(len))
-
-#if 0
-
-/*
- * XXX this is the kind of legacy stuff we want to get rid of with IA-64... --davidm 99/12/02
- */
-
-/*
- * This is used for checking BIOS signatures. It's not clear at all
- * why this is here. This implementation seems to be the same on
- * all architectures. Strange.
- */
-static inline int
-check_signature (unsigned long io_addr, const unsigned char *signature, int length)
-{
- int retval = 0;
- do {
- if (readb(io_addr) != *signature)
- goto out;
- io_addr++;
- signature++;
- length--;
- } while (length);
- retval = 1;
-out:
- return retval;
-}
-
-#define RTC_PORT(x) (0x70 + (x))
-#define RTC_ALWAYS_BCD 0
-
-#endif
-
-/*
- * The caches on some architectures aren't DMA-coherent and have need
- * to handle this in software. There are two types of operations that
- * can be applied to dma buffers.
- *
- * - dma_cache_inv(start, size) invalidates the affected parts of the
- * caches. Dirty lines of the caches may be written back or simply
- * be discarded. This operation is necessary before dma operations
- * to the memory.
- *
- * - dma_cache_wback(start, size) makes caches and memory coherent
- * by writing the content of the caches back to memory, if necessary
- * (cache flush).
- *
- * - dma_cache_wback_inv(start, size) Like dma_cache_wback() but the
- * function also invalidates the affected part of the caches as
- * necessary before DMA transfers from outside to memory.
- *
- * Fortunately, the IA-64 architecture mandates cache-coherent DMA, so
- * these functions can be implemented as no-ops.
- */
-#define dma_cache_inv(_start,_size) do { } while (0)
-#define dma_cache_wback(_start,_size) do { } while (0)
-#define dma_cache_wback_inv(_start,_size) do { } while (0)
# endif /* __KERNEL__ */
#endif /* _ASM_IA64_IO_H */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/mmu_context.h lia64/include/asm-ia64/mmu_context.h
--- linux-davidm/include/asm-ia64/mmu_context.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/mmu_context.h Thu Apr 5 11:51:42 2001
@@ -6,11 +6,6 @@
* Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
*/
-#include <linux/sched.h>
-#include <linux/spinlock.h>
-
-#include <asm/processor.h>
-
/*
* Routines to manage the allocation of task context numbers. Task context numbers are
* used to reduce or eliminate the need to perform TLB flushes due to context switches.
@@ -24,6 +19,15 @@
#define IA64_REGION_ID_KERNEL 0 /* the kernel's region id (tlb.c depends on this being 0) */
+#define ia64_rid(ctx,addr) (((ctx) << 3) | (addr >> 61))
+
+# ifndef __ASSEMBLY__
+
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+
+#include <asm/processor.h>
+
struct ia64_ctx {
spinlock_t lock;
unsigned int next; /* next context number to use */
@@ -40,12 +44,6 @@
{
}
-static inline unsigned long
-ia64_rid (unsigned long context, unsigned long region_addr)
-{
- return context << 3 | (region_addr >> 61);
-}
-
static inline void
get_new_mmu_context (struct mm_struct *mm)
{
@@ -123,4 +121,5 @@
#define switch_mm(prev_mm,next_mm,next_task,cpu) activate_mm(prev_mm, next_mm)
+# endif /* ! __ASSEMBLY__ */
#endif /* _ASM_IA64_MMU_CONTEXT_H */
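The mmu_context.h hunk turns the old inline ia64_rid() into a macro so assembly code can use it too. Its arithmetic, written back out as a C function for illustration:

```c
#include <assert.h>

/* C rendition of the ia64_rid() macro introduced above: the region id
 * is the context number shifted left by 3, OR'ed with the 3-bit region
 * number taken from bits 63..61 of the virtual address. */
static unsigned long
ia64_rid (unsigned long ctx, unsigned long addr)
{
	return (ctx << 3) | (addr >> 61);
}
```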
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/pgalloc.h lia64/include/asm-ia64/pgalloc.h
--- linux-davidm/include/asm-ia64/pgalloc.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/pgalloc.h Thu Apr 5 11:52:05 2001
@@ -33,63 +33,55 @@
#define pte_quicklist (local_cpu_data->pte_quick)
#define pgtable_cache_size (local_cpu_data->pgtable_cache_sz)
-static __inline__ pgd_t*
-get_pgd_slow (void)
-{
- pgd_t *ret = (pgd_t *)__get_free_page(GFP_KERNEL);
- if (ret)
- clear_page(ret);
- return ret;
-}
-
-static __inline__ pgd_t*
-get_pgd_fast (void)
+static inline pgd_t*
+pgd_alloc_one_fast (void)
{
unsigned long *ret = pgd_quicklist;
- if (ret != NULL) {
+ if (__builtin_expect(ret != NULL, 1)) {
pgd_quicklist = (unsigned long *)(*ret);
ret[0] = 0;
--pgtable_cache_size;
- }
+ } else
+ ret = NULL;
return (pgd_t *)ret;
}
-static __inline__ pgd_t*
+static inline pgd_t*
pgd_alloc (void)
{
- pgd_t *pgd;
+ /* the VM system never calls pgd_alloc_one_fast(), so we do it here. */
+ pgd_t *pgd = pgd_alloc_one_fast();
- pgd = get_pgd_fast();
- if (!pgd)
- pgd = get_pgd_slow();
+ if (__builtin_expect(pgd == NULL, 0)) {
+ pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+ if (__builtin_expect(pgd != NULL, 1))
+ clear_page(pgd);
+ }
return pgd;
}
-static __inline__ void
-free_pgd_fast (pgd_t *pgd)
+static inline void
+pgd_free (pgd_t *pgd)
{
*(unsigned long *)pgd = (unsigned long) pgd_quicklist;
pgd_quicklist = (unsigned long *) pgd;
++pgtable_cache_size;
}
-static __inline__ pmd_t *
-get_pmd_slow (void)
+static inline void
+pgd_populate (struct mm_struct *mm, pgd_t *pgd_entry, pmd_t *pmd)
{
- pmd_t *pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
-
- if (pmd)
- clear_page(pmd);
- return pmd;
+ pgd_val(*pgd_entry) = __pa(pmd);
}
-static __inline__ pmd_t *
-get_pmd_fast (void)
+
+static inline pmd_t*
+pmd_alloc_one_fast (struct mm_struct *mm, unsigned long addr)
{
unsigned long *ret = (unsigned long *)pmd_quicklist;
- if (ret != NULL) {
+ if (__builtin_expect(ret != NULL, 1)) {
pmd_quicklist = (unsigned long *)(*ret);
ret[0] = 0;
--pgtable_cache_size;
@@ -97,28 +89,36 @@
return (pmd_t *)ret;
}
-static __inline__ void
-free_pmd_fast (pmd_t *pmd)
+static inline pmd_t*
+pmd_alloc_one (struct mm_struct *mm, unsigned long addr)
+{
+ pmd_t *pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
+
+ if (__builtin_expect(pmd != NULL, 1))
+ clear_page(pmd);
+ return pmd;
+}
+
+static inline void
+pmd_free (pmd_t *pmd)
{
*(unsigned long *)pmd = (unsigned long) pmd_quicklist;
pmd_quicklist = (unsigned long *) pmd;
++pgtable_cache_size;
}
-static __inline__ void
-free_pmd_slow (pmd_t *pmd)
+static inline void
+pmd_populate (struct mm_struct *mm, pmd_t *pmd_entry, pte_t *pte)
{
- free_page((unsigned long)pmd);
+ pmd_val(*pmd_entry) = __pa(pte);
}
-extern pte_t *get_pte_slow (pmd_t *pmd, unsigned long address_preadjusted);
-
-static __inline__ pte_t *
-get_pte_fast (void)
+static inline pte_t*
+pte_alloc_one_fast (struct mm_struct *mm, unsigned long addr)
{
unsigned long *ret = (unsigned long *)pte_quicklist;
- if (ret != NULL) {
+ if (__builtin_expect(ret != NULL, 1)) {
pte_quicklist = (unsigned long *)(*ret);
ret[0] = 0;
--pgtable_cache_size;
@@ -126,71 +126,25 @@
return (pte_t *)ret;
}
-static __inline__ void
-free_pte_fast (pte_t *pte)
-{
- *(unsigned long *)pte = (unsigned long) pte_quicklist;
- pte_quicklist = (unsigned long *) pte;
- ++pgtable_cache_size;
-}
-#define pte_free_kernel(pte) free_pte_fast(pte)
-#define pte_free(pte) free_pte_fast(pte)
-#define pmd_free_kernel(pmd) free_pmd_fast(pmd)
-#define pmd_free(pmd) free_pmd_fast(pmd)
-#define pgd_free(pgd) free_pgd_fast(pgd)
-
-extern void __handle_bad_pgd (pgd_t *pgd);
-extern void __handle_bad_pmd (pmd_t *pmd);
-
-static __inline__ pte_t*
-pte_alloc (pmd_t *pmd, unsigned long vmaddr)
+static inline pte_t*
+pte_alloc_one (struct mm_struct *mm, unsigned long addr)
{
- unsigned long offset;
-
- offset = (vmaddr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
- if (pmd_none(*pmd)) {
- pte_t *pte_page = get_pte_fast();
+ pte_t *pte = (pte_t *) __get_free_page(GFP_KERNEL);
- if (!pte_page)
- return get_pte_slow(pmd, offset);
- pmd_set(pmd, pte_page);
- return pte_page + offset;
- }
- if (pmd_bad(*pmd)) {
- __handle_bad_pmd(pmd);
- return NULL;
- }
- return (pte_t *) pmd_page(*pmd) + offset;
+ if (__builtin_expect(pte != NULL, 1))
+ clear_page(pte);
+ return pte;
}
-static __inline__ pmd_t*
-pmd_alloc (pgd_t *pgd, unsigned long vmaddr)
+static inline void
+pte_free (pte_t *pte)
{
- unsigned long offset;
-
- offset = (vmaddr >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
- if (pgd_none(*pgd)) {
- pmd_t *pmd_page = get_pmd_fast();
-
- if (!pmd_page)
- pmd_page = get_pmd_slow();
- if (pmd_page) {
- pgd_set(pgd, pmd_page);
- return pmd_page + offset;
- } else
- return NULL;
- }
- if (pgd_bad(*pgd)) {
- __handle_bad_pgd(pgd);
- return NULL;
- }
- return (pmd_t *) pgd_page(*pgd) + offset;
+ *(unsigned long *)pte = (unsigned long) pte_quicklist;
+ pte_quicklist = (unsigned long *) pte;
+ ++pgtable_cache_size;
}
-#define pte_alloc_kernel(pmd, addr) pte_alloc(pmd, addr)
-#define pmd_alloc_kernel(pgd, addr) pmd_alloc(pgd, addr)
-
extern int do_check_pgt_cache (int, int);
/*
@@ -219,7 +173,7 @@
/*
* Flush a specified user mapping
*/
-static __inline__ void
+static inline void
flush_tlb_mm (struct mm_struct *mm)
{
if (mm) {
@@ -237,7 +191,7 @@
/*
* Page-granular tlb flush.
*/
-static __inline__ void
+static inline void
flush_tlb_page (struct vm_area_struct *vma, unsigned long addr)
{
#ifdef CONFIG_SMP
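The pgalloc.h rework above keeps the quicklist scheme: a freed page-table page is chained through its first word, so the fast-path allocation is just a list pop, with __get_free_page() as the slow path. A user-space sketch of that pattern (malloc/calloc stand in for the page allocator; names are made up):

```c
#include <assert.h>
#include <stdlib.h>

static unsigned long *quicklist;	/* head of the free-page chain */
static int cache_size;			/* mirrors pgtable_cache_size */

static void *
alloc_one_fast (void)
{
	unsigned long *ret = quicklist;

	if (ret != NULL) {
		quicklist = (unsigned long *) *ret; /* pop: next ptr in word 0 */
		ret[0] = 0;
		--cache_size;
	}
	return ret;
}

static void *
alloc_one (void)
{
	void *page = alloc_one_fast();

	if (page == NULL)
		/* slow path; stands in for __get_free_page + clear_page */
		page = calloc(512, sizeof(unsigned long));
	return page;
}

static void
free_one (void *page)
{
	*(unsigned long *) page = (unsigned long) quicklist; /* push */
	quicklist = page;
	++cache_size;
}

static int
demo (void)
{
	void *p = alloc_one();
	void *q;

	free_one(p);
	if (cache_size != 1)
		return 0;
	q = alloc_one();	/* fast path pops the same page back */
	return q == p && cache_size == 0;
}
```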
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/pgtable.h lia64/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/pgtable.h Thu Apr 5 11:51:44 2001
@@ -205,25 +205,12 @@
#define set_pte(ptep, pteval) (*(ptep) = (pteval))
#define RGN_SIZE (1UL << 61)
-#define RGN_MAP_LIMIT (1UL << (4*PAGE_SHIFT - 12)) /* limit of mappable area in region */
+#define RGN_MAP_LIMIT ((1UL << (4*PAGE_SHIFT - 12)) - PAGE_SIZE) /* per region addr limit */
#define RGN_KERNEL 7
#define VMALLOC_START (0xa000000000000000 + 3*PAGE_SIZE)
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
-#define VMALLOC_END (0xa000000000000000 + RGN_MAP_LIMIT)
-
-/*
- * BAD_PAGETABLE is used when we need a bogus page-table, while
- * BAD_PAGE is used for a bogus page.
- *
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern pte_t ia64_bad_page (void);
-extern pmd_t *ia64_bad_pagetable (void);
-
-#define BAD_PAGETABLE ia64_bad_pagetable()
-#define BAD_PAGE ia64_bad_page()
+#define VMALLOC_END (0xa000000000000000 + (1UL << (4*PAGE_SHIFT - 9)))
/*
* Conversion functions: convert a page and protection to a page entry,
@@ -253,14 +240,12 @@
/* pte_page() returns the "struct page *" corresponding to the PTE: */
#define pte_page(pte) (mem_map + (unsigned long) ((pte_val(pte) & _PFN_MASK) >> PAGE_SHIFT))
-#define pmd_set(pmdp, ptep) (pmd_val(*(pmdp)) = __pa(ptep))
#define pmd_none(pmd) (!pmd_val(pmd))
#define pmd_bad(pmd) (!ia64_phys_addr_valid(pmd_val(pmd)))
#define pmd_present(pmd) (pmd_val(pmd) != 0UL)
#define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0UL)
#define pmd_page(pmd) ((unsigned long) __va(pmd_val(pmd) & _PFN_MASK))
-#define pgd_set(pgdp, pmdp) (pgd_val(*(pgdp)) = __pa(pmdp))
#define pgd_none(pgd) (!pgd_val(pgd))
#define pgd_bad(pgd) (!ia64_phys_addr_valid(pgd_val(pgd)))
#define pgd_present(pgd) (pgd_val(pgd) != 0UL)
@@ -303,7 +288,11 @@
* works bypasses the caches, but does allow for consecutive writes to
* be combined into single (but larger) write transactions.
*/
+#ifdef CONFIG_MCKINLEY_A0_SPECIFIC
+# define pgprot_writecombine(prot) prot
+#else
#define pgprot_writecombine(prot) __pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_WC)
+#endif
/*
* Return the region index for virtual address ADDRESS.
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/ptrace.h lia64/include/asm-ia64/ptrace.h
--- linux-davidm/include/asm-ia64/ptrace.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/ptrace.h Thu Apr 5 11:51:41 2001
@@ -220,10 +220,11 @@
struct task_struct; /* forward decl */
extern void show_regs (struct pt_regs *);
- extern long ia64_peek (struct pt_regs *, struct task_struct *, unsigned long addr, long *val);
- extern long ia64_poke (struct pt_regs *, struct task_struct *, unsigned long addr, long val);
- extern void ia64_flush_fph (struct task_struct *t);
- extern void ia64_sync_fph (struct task_struct *t);
+ extern unsigned long ia64_get_user_bsp (struct task_struct *, struct pt_regs *);
+ extern long ia64_peek (struct task_struct *, unsigned long, unsigned long, long *);
+ extern long ia64_poke (struct task_struct *, unsigned long, unsigned long, long);
+ extern void ia64_flush_fph (struct task_struct *);
+ extern void ia64_sync_fph (struct task_struct *);
/* get nat bits for scratch registers such that bit N=1 iff scratch register rN is a NaT */
extern unsigned long ia64_get_scratch_nat_bits (struct pt_regs *pt, unsigned long scratch_unat);
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/siginfo.h lia64/include/asm-ia64/siginfo.h
--- linux-davidm/include/asm-ia64/siginfo.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/siginfo.h Thu Apr 5 11:51:41 2001
@@ -216,10 +216,9 @@
/*
* sigevent definitions
*
- * It seems likely that SIGEV_THREAD will have to be handled from
- * userspace, libpthread transmuting it to SIGEV_SIGNAL, which the
- * thread manager then catches and does the appropriate nonsense.
- * However, everything is written out here so as to not get lost.
+ * It seems likely that SIGEV_THREAD will have to be handled from userspace, libpthread
+ * transmuting it to SIGEV_SIGNAL, which the thread manager then catches and does the
+ * appropriate nonsense. However, everything is written out here so as to not get lost.
*/
#define SIGEV_SIGNAL 0 /* notify via signal */
#define SIGEV_NONE 1 /* other notification: meaningless */
@@ -259,6 +258,7 @@
}
extern int copy_siginfo_to_user(siginfo_t *to, siginfo_t *from);
+extern int copy_siginfo_from_user(siginfo_t *to, siginfo_t *from);
#endif /* __KERNEL__ */
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/system.h lia64/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/system.h Thu Apr 5 11:51:41 2001
@@ -16,14 +16,15 @@
#include <asm/page.h>
-#define KERNEL_START (PAGE_OFFSET + 0x500000)
+#define KERNEL_START (PAGE_OFFSET + 68*1024*1024)
/*
* The following #defines must match with vmlinux.lds.S:
*/
+#define IVT_ADDR (KERNEL_START)
#define IVT_END_ADDR (KERNEL_START + 0x8000)
-#define ZERO_PAGE_ADDR (IVT_END_ADDR + 0*PAGE_SIZE)
-#define SWAPPER_PGD_ADDR (IVT_END_ADDR + 1*PAGE_SIZE)
+#define ZERO_PAGE_ADDR PAGE_ALIGN(IVT_END_ADDR)
+#define SWAPPER_PGD_ADDR (ZERO_PAGE_ADDR + 1*PAGE_SIZE)
#define GATE_ADDR (0xa000000000000000 + PAGE_SIZE)
#define PERCPU_ADDR (0xa000000000000000 + 2*PAGE_SIZE)
@@ -63,12 +64,10 @@
__u16 orig_x; /* cursor's x position */
__u16 orig_y; /* cursor's y position */
} console_info;
- __u16 num_pci_vectors; /* number of ACPI derived PCI IRQ's*/
- __u64 pci_vectors; /* physical address of PCI data (pci_vector_struct)*/
__u64 fpswa; /* physical address of the fpswa interface */
__u64 initrd_start;
__u64 initrd_size;
-} ia64_boot_param;
+} *ia64_boot_param;
static inline void
ia64_insn_group_barrier (void)
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/uaccess.h lia64/include/asm-ia64/uaccess.h
--- linux-davidm/include/asm-ia64/uaccess.h Thu Apr 5 12:02:11 2001
+++ lia64/include/asm-ia64/uaccess.h Thu Apr 5 11:51:48 2001
@@ -33,6 +33,8 @@
#include <linux/errno.h>
#include <linux/sched.h>
+#include <asm/pgtable.h>
+
/*
* For historical reasons, the following macros are grossly misnamed:
*/
@@ -49,16 +51,13 @@
#define segment_eq(a,b) ((a).seg == (b).seg)
/*
- * When accessing user memory, we need to make sure the entire area
- * really is in user-level space. In order to do this efficiently, we
- * make sure that the page at address TASK_SIZE is never valid (we do
- * this by selecting VMALLOC_START as TASK_SIZE+PAGE_SIZE). This way,
- * we can simply check whether the starting address is < TASK_SIZE
- * and, if so, start accessing the memory. If the user specified bad
- * length, we will fault on the NaT page and then return the
- * appropriate error.
+ * When accessing user memory, we need to make sure the entire area really is in
+ * user-level space. In order to do this efficiently, we make sure that the page at
+ * address TASK_SIZE is never valid. We also need to make sure that the address doesn't
+ * point inside the virtually mapped linear page table.
*/
-#define __access_ok(addr,size,segment) (((unsigned long) (addr)) <= (segment).seg)
+#define __access_ok(addr,size,segment) (((unsigned long) (addr)) <= (segment).seg \
+ && ((segment).seg == KERNEL_DS.seg || rgn_offset((unsigned long) (addr)) < RGN_MAP_LIMIT))
#define access_ok(type,addr,size) __access_ok((addr),(size),get_fs())
static inline int
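The tightened __access_ok() above adds a second condition: unless the segment is KERNEL_DS, the region-local offset of the address must stay under RGN_MAP_LIMIT, so a "user" pointer can't reach the virtually mapped linear page table. A sketch with made-up constants (PAGE_SHIFT and the segment limits are assumptions, not the kernel's real values):

```c
#include <assert.h>

/* Assumed: 8KB pages and illustrative segment limits. */
#define PAGE_SHIFT	13
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define RGN_MAP_LIMIT	((1UL << (4*PAGE_SHIFT - 12)) - PAGE_SIZE)
#define KERNEL_SEG	(~0UL)					/* ~ KERNEL_DS.seg */
#define USER_SEG	(0x8000000000000000UL + RGN_MAP_LIMIT)	/* ~ USER_DS.seg */

static unsigned long
rgn_offset (unsigned long addr)
{
	return addr & ((1UL << 61) - 1);	/* strip the 3 region bits */
}

static int
access_ok_sketch (unsigned long addr, unsigned long seg)
{
	return addr <= seg
		&& (seg == KERNEL_SEG || rgn_offset(addr) < RGN_MAP_LIMIT);
}
```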
diff -urN --ignore-all-space linux-davidm/include/asm-ia64/unwind.h lia64/include/asm-ia64/unwind.h
--- linux-davidm/include/asm-ia64/unwind.h Mon Oct 9 17:55:01 2000
+++ lia64/include/asm-ia64/unwind.h Thu Apr 5 10:10:04 2001
@@ -109,22 +109,6 @@
struct switch_stack *sw);
/*
- * Prepare to unwind the current task. For this to work, the kernel
- * stack identified by REGS must look like this:
- *
- * // //
- * | |
- * | kernel stack |
- * | |
- * +===========+
- * | struct pt_regs |
- * +---------------------+ <--- REGS
- * | struct switch_stack |
- * +---------------------+
- */
-extern void unw_init_from_current (struct unw_frame_info *info, struct pt_regs *regs);
-
-/*
* Prepare to unwind the currently running thread.
*/
extern void unw_init_running (void (*callback)(struct unw_frame_info *info, void *arg), void *arg);
@@ -144,42 +128,42 @@
#define unw_is_intr_frame(info) (((info)->flags & UNW_FLAG_INTERRUPT_FRAME) != 0)
-static inline unsigned long
+static inline int
unw_get_ip (struct unw_frame_info *info, unsigned long *valp)
{
*valp = (info)->ip;
return 0;
}
-static inline unsigned long
+static inline int
unw_get_sp (struct unw_frame_info *info, unsigned long *valp)
{
*valp = (info)->sp;
return 0;
}
-static inline unsigned long
+static inline int
unw_get_psp (struct unw_frame_info *info, unsigned long *valp)
{
*valp = (info)->psp;
return 0;
}
-static inline unsigned long
+static inline int
unw_get_bsp (struct unw_frame_info *info, unsigned long *valp)
{
*valp = (info)->bsp;
return 0;
}
-static inline unsigned long
+static inline int
unw_get_cfm (struct unw_frame_info *info, unsigned long *valp)
{
*valp = *(info)->cfm_loc;
return 0;
}
-static inline unsigned long
+static inline int
unw_set_cfm (struct unw_frame_info *info, unsigned long val)
{
*(info)->cfm_loc = val;