From: David Mosberger <davidm@hpl.hp.com>
To: linux-ia64@vger.kernel.org
Subject: [Linux-ia64] kernel update (relative to 2.4.13)
Date: Thu, 25 Oct 2001 04:27:42 +0000
Message-ID: <marc-linux-ia64-105590698805620@msgid-missing>
In-Reply-To: <marc-linux-ia64-105590678205111@msgid-missing>
An updated ia64 patch for 2.4.13 is now available at
ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/ in file:
linux-2.4.13-ia64-011024.diff*
change log:
- support readahead() syscall added by 2.4.13 (both for ia64 and ia32)
- console log level fix by Jesper Juhl
- half-hearted attempt at supporting reads of the "default LDT entry" in the
ia32 modify_ldt() syscall; someone who understands what this is supposed to
do should take a look at it...
- palinfo update by Stephane Eranian
- die() fix by Keith Owens
- unaligned handler fix for rotating fp regs by Tony Luck
- ACPI fix to get AGP bus scanned again by Chris Ahna
- implemented an ia64 version of wbinvd() for ACPI; this hasn't been tested
and may not work; it shouldn't be an issue at the moment, as it is needed only
for ACPI functionality that is not supported on Itanium; still, someone who
knows ACPI better may want to take a look at this
- update PCI DMA interface to support page-based mapping/unmapping and
the optional DAC interface
This kernel has been tested with gcc-3.0 on Big Sur, Lion, and the HP
simulator. Both UP and MP configurations seem to compile fine. As usual,
your mileage may vary.
Enjoy,
--david
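For readers tracking the PCI DMA change in the log above: the page-based calls let a driver map a struct page plus offset instead of a kernel-virtual address. A non-compilable sketch (names follow the 2.4 DMA-mapping conventions; the pdev/page/offset/len variables are hypothetical):

```c
/* sketch only -- assumes the 2.4 pci_map_page()/pci_unmap_page() interface */
dma_addr_t bus;

bus = pci_map_page(pdev, page, offset, len, PCI_DMA_TODEVICE);
/* ... point the device at `bus' and let it DMA from the buffer ... */
pci_unmap_page(pdev, bus, len, PCI_DMA_TODEVICE);
```

The DAC (dual-address-cycle) interface is optional because only some platforms and devices can issue 64-bit bus addresses.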
diff -urN linux-2.4.13/Documentation/Configure.help linux-2.4.13-lia/Documentation/Configure.help
--- linux-2.4.13/Documentation/Configure.help Wed Oct 24 10:17:38 2001
+++ linux-2.4.13-lia/Documentation/Configure.help Wed Oct 24 10:21:05 2001
@@ -2632,6 +2632,16 @@
the GLX component for XFree86 3.3.6, which can be downloaded from
http://utah-glx.sourceforge.net/ .
+Intel 460GX support
+CONFIG_AGP_I460
+ This option gives you AGP support for the Intel 460GX chipset. This
+ chipset, the first to support Intel Itanium processors, is new and
+ this option is correspondingly a little experimental.
+
+ If you don't have a 460GX based machine (such as BigSur) with an AGP
+ slot then this option isn't going to do you much good. If you're
+ dying to do Direct Rendering on IA-64, this is what you're looking for.
+
Intel I810/I810 DC100/I810e support
CONFIG_AGP_I810
This option gives you AGP support for the Xserver on the Intel 810,
@@ -12846,6 +12856,18 @@
Say Y here if you would like to be able to read the hard disk
partition table format used by SGI machines.
+Intel EFI GUID partition support
+CONFIG_EFI_PARTITION
+ Say Y here if you would like to use hard disks under Linux which
+ were partitioned using EFI GPT. Presently only useful on the
+ IA-64 platform.
+
+/dev/guid support (EXPERIMENTAL)
+CONFIG_DEVFS_GUID
+ Say Y here if you would like to access disks and partitions by
+ their Globally Unique Identifiers (GUIDs) which will appear as
+ symbolic links in /dev/guid.
+
Ultrix partition support
CONFIG_ULTRIX_PARTITION
Say Y here if you would like to be able to read the hard disk
@@ -18964,11 +18986,22 @@
so the "DIG-compliant" option is usually the right choice.
HP-simulator For the HP simulator (http://software.hp.com/ia64linux/).
- SN1-simulator For the SGI SN1 simulator.
+ SGI-SN1 For SGI SN1 Platforms.
+ SGI-SN2 For SGI SN2 Platforms.
DIG-compliant For DIG ("Developer's Interface Guide") compliant system.
If you don't know what to do, choose "generic".
+CONFIG_IA64_SGI_SN_SIM
+ Build a kernel that runs on both the SGI simulator AND on hardware.
+ There is a very slight performance penalty on hardware for including this
+ option.
+
+CONFIG_IA64_SGI_SN_DEBUG
+ This enables additional debug code that helps isolate
+ platform/kernel bugs. There is a small but measurable performance
+ degradation when this option is enabled.
+
Kernel page size
CONFIG_IA64_PAGE_SIZE_4KB
@@ -18986,56 +19019,13 @@
If you don't know what to do, choose 8KB.
-Enable Itanium A-step specific code
-CONFIG_ITANIUM_ASTEP_SPECIFIC
- Select this option to build a kernel for an Itanium prototype system
- with an A-step CPU. You have an A-step CPU if the "revision" field in
- /proc/cpuinfo is 0.
-
Enable Itanium B-step specific code
CONFIG_ITANIUM_BSTEP_SPECIFIC
Select this option to build a kernel for an Itanium prototype system
- with a B-step CPU. You have a B-step CPU if the "revision" field in
- /proc/cpuinfo has a value in the range from 1 to 4.
-
-Enable Itanium B0-step specific code
-CONFIG_ITANIUM_B0_SPECIFIC
- Select this option to bild a kernel for an Itanium prototype system
- with a B0-step CPU. You have a B0-step CPU if the "revision" field in
- /proc/cpuinfo is 1.
-
-Force interrupt redirection
-CONFIG_IA64_HAVE_IRQREDIR
- Select this option if you know that your system has the ability to
- redirect interrupts to different CPUs. Select N here if you're
- unsure.
-
-Enable use of global TLB purge instruction (ptc.g)
-CONFIG_ITANIUM_PTCG
- Say Y here if you want the kernel to use the IA-64 "ptc.g"
- instruction to flush the TLB on all CPUs. Select N here if
- you're unsure.
-
-Enable SoftSDV hacks
-CONFIG_IA64_SOFTSDV_HACKS
- Say Y here to enable hacks to make the kernel work on the Intel
- SoftSDV simulator. Select N here if you're unsure.
-
-Enable AzusA hacks
-CONFIG_IA64_AZUSA_HACKS
- Say Y here to enable hacks to make the kernel work on the NEC
- AzusA platform. Select N here if you're unsure.
-
-Force socket buffers below 4GB?
-CONFIG_SKB_BELOW_4GB
- Most of today's network interface cards (NICs) support DMA to
- the low 32 bits of the address space only. On machines with
- more then 4GB of memory, this can cause the system to slow
- down if there is no I/O TLB hardware. Turning this option on
- avoids the slow-down by forcing socket buffers to be allocated
- from memory below 4GB. The downside is that your system could
- run out of memory below 4GB before all memory has been used up.
- If you're unsure how to answer this question, answer Y.
+ with a B-step CPU. Only B3 step CPUs are supported. You have a B3-step
+ CPU if the "revision" field in /proc/cpuinfo is equal to 4. If the
+ "revision" field shows a number bigger than 4, you do not have to turn
+ on this option.
Enable IA-64 Machine Check Abort
CONFIG_IA64_MCA
@@ -19055,6 +19045,15 @@
Layer) information in /proc/pal. This contains useful information
about the processors in your systems, such as cache and TLB sizes
and the PAL firmware version in use.
+
+ To use this option, you have to check that the "/proc file system
+ support" (CONFIG_PROC_FS) is enabled, too.
+
+/proc/efi/vars support
+CONFIG_EFI_VARS
+ If you say Y here, you are able to get EFI (Extensible Firmware
+ Interface) variable information in /proc/efi/vars. You may read,
+ write, create, and destroy EFI variables through this interface.
To use this option, you have to check that the "/proc file system
support" (CONFIG_PROC_FS) is enabled, too.
diff -urN linux-2.4.13/Documentation/kernel-parameters.txt linux-2.4.13-lia/Documentation/kernel-parameters.txt
--- linux-2.4.13/Documentation/kernel-parameters.txt Wed Jun 20 11:21:33 2001
+++ linux-2.4.13-lia/Documentation/kernel-parameters.txt Wed Oct 10 17:33:26 2001
@@ -17,6 +17,7 @@
CD Appropriate CD support is enabled.
DEVFS devfs support is enabled.
DRM Direct Rendering Management support is enabled.
+ EFI EFI Partitioning (GPT) is enabled
EIDE EIDE/ATAPI support is enabled.
FB The frame buffer device is enabled.
HW Appropriate hardware is enabled.
@@ -211,6 +212,9 @@
gc_3= [HW,JOY]
gdth= [HW,SCSI]
+
+ gpt [EFI] Forces disk with valid GPT signature but
+ invalid Protective MBR to be treated as GPT.
gscd= [HW,CD]
diff -urN linux-2.4.13/Makefile linux-2.4.13-lia/Makefile
--- linux-2.4.13/Makefile Wed Oct 24 10:17:41 2001
+++ linux-2.4.13-lia/Makefile Wed Oct 24 10:21:05 2001
@@ -88,7 +88,7 @@
CPPFLAGS := -D__KERNEL__ -I$(HPATH)
-CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -O2 \
+CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 \
-fomit-frame-pointer -fno-strict-aliasing -fno-common
AFLAGS := -D__ASSEMBLY__ $(CPPFLAGS)
@@ -137,7 +137,8 @@
drivers/net/net.o \
drivers/media/media.o
DRIVERS-$(CONFIG_AGP) += drivers/char/agp/agp.o
-DRIVERS-$(CONFIG_DRM) += drivers/char/drm/drm.o
+DRIVERS-$(CONFIG_DRM_NEW) += drivers/char/drm/drm.o
+DRIVERS-$(CONFIG_DRM_OLD) += drivers/char/drm-4.0/drm.o
DRIVERS-$(CONFIG_NUBUS) += drivers/nubus/nubus.a
DRIVERS-$(CONFIG_ISDN) += drivers/isdn/isdn.a
DRIVERS-$(CONFIG_NET_FC) += drivers/net/fc/fc.o
@@ -241,14 +242,14 @@
include arch/$(ARCH)/Makefile
-export CPPFLAGS CFLAGS AFLAGS
+export CPPFLAGS CFLAGS CFLAGS_KERNEL AFLAGS AFLAGS_KERNEL
export NETWORKS DRIVERS LIBS HEAD LDFLAGS LINKFLAGS MAKEBOOT ASFLAGS
.S.s:
- $(CPP) $(AFLAGS) -traditional -o $*.s $<
+ $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -o $*.s $<
.S.o:
- $(CC) $(AFLAGS) -traditional -c -o $*.o $<
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -c -o $*.o $<
Version: dummy
@rm -f include/linux/compile.h
diff -urN linux-2.4.13/arch/i386/lib/usercopy.c linux-2.4.13-lia/arch/i386/lib/usercopy.c
--- linux-2.4.13/arch/i386/lib/usercopy.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/i386/lib/usercopy.c Thu Oct 4 00:21:39 2001
@@ -14,6 +14,7 @@
unsigned long
__generic_copy_to_user(void *to, const void *from, unsigned long n)
{
+ prefetch(from);
if (access_ok(VERIFY_WRITE, to, n))
{
if(n<512)
@@ -27,6 +28,7 @@
unsigned long
__generic_copy_from_user(void *to, const void *from, unsigned long n)
{
+ prefetchw(to);
if (access_ok(VERIFY_READ, from, n))
{
if(n<512)
diff -urN linux-2.4.13/arch/i386/mm/fault.c linux-2.4.13-lia/arch/i386/mm/fault.c
--- linux-2.4.13/arch/i386/mm/fault.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/i386/mm/fault.c Wed Oct 24 18:11:25 2001
@@ -27,8 +27,6 @@
extern void die(const char *,struct pt_regs *,long);
-extern int console_loglevel;
-
/*
* Ugly, ugly, but the goto's result in better assembly..
*/
diff -urN linux-2.4.13/arch/ia64/Makefile linux-2.4.13-lia/arch/ia64/Makefile
--- linux-2.4.13/arch/ia64/Makefile Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/Makefile Thu Oct 4 00:21:52 2001
@@ -17,13 +17,15 @@
AFLAGS_KERNEL := -mconstant-gp
EXTRA =
-CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 -falign-functions=32
+CFLAGS := $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
+	   -falign-functions=32
+# -ffunction-sections
CFLAGS_KERNEL := -mconstant-gp
GCC_VERSION=$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
ifneq ($(GCC_VERSION),2)
- CFLAGS += -frename-registers
+ CFLAGS += -frename-registers --param max-inline-insns=400
endif
ifeq ($(CONFIG_ITANIUM_BSTEP_SPECIFIC),y)
@@ -32,7 +34,7 @@
ifdef CONFIG_IA64_GENERIC
CORE_FILES := arch/$(ARCH)/hp/hp.a \
- arch/$(ARCH)/sn/sn.a \
+ arch/$(ARCH)/sn/sn.o \
arch/$(ARCH)/dig/dig.a \
arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
@@ -52,15 +54,14 @@
$(CORE_FILES)
endif
-ifdef CONFIG_IA64_SGI_SN1
+ifdef CONFIG_IA64_SGI_SN
CFLAGS += -DBRINGUP
- SUBDIRS := arch/$(ARCH)/sn/sn1 \
- arch/$(ARCH)/sn \
+ SUBDIRS := arch/$(ARCH)/sn/kernel \
arch/$(ARCH)/sn/io \
arch/$(ARCH)/sn/fprom \
$(SUBDIRS)
- CORE_FILES := arch/$(ARCH)/sn/sn.a \
- arch/$(ARCH)/sn/io/sgiio.o\
+ CORE_FILES := arch/$(ARCH)/sn/kernel/sn.o \
+ arch/$(ARCH)/sn/io/sgiio.o \
$(CORE_FILES)
endif
@@ -105,7 +106,7 @@
compressed: vmlinux
$(OBJCOPY) --strip-all vmlinux vmlinux-tmp
- gzip -9 vmlinux-tmp
+ gzip vmlinux-tmp
mv vmlinux-tmp.gz vmlinux.gz
rawboot:
diff -urN linux-2.4.13/arch/ia64/config.in linux-2.4.13-lia/arch/ia64/config.in
--- linux-2.4.13/arch/ia64/config.in Wed Oct 24 10:17:42 2001
+++ linux-2.4.13-lia/arch/ia64/config.in Wed Oct 24 10:21:06 2001
@@ -28,6 +28,7 @@
if [ "$CONFIG_IA64_HP_SIM" = "n" ]; then
define_bool CONFIG_ACPI y
+ define_bool CONFIG_ACPI_EFI y
define_bool CONFIG_ACPI_INTERPRETER y
define_bool CONFIG_ACPI_KERNEL_CONFIG y
fi
@@ -40,7 +41,8 @@
"generic CONFIG_IA64_GENERIC \
DIG-compliant CONFIG_IA64_DIG \
HP-simulator CONFIG_IA64_HP_SIM \
- SGI-SN1 CONFIG_IA64_SGI_SN1" generic
+ SGI-SN1 CONFIG_IA64_SGI_SN1 \
+ SGI-SN2 CONFIG_IA64_SGI_SN2" generic
choice 'Kernel page size' \
"4KB CONFIG_IA64_PAGE_SIZE_4KB \
@@ -51,25 +53,6 @@
if [ "$CONFIG_ITANIUM" = "y" ]; then
define_bool CONFIG_IA64_BRL_EMU y
bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
- fi
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIFIC
- fi
- if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIFIC
- fi
- bool ' Enable Itanium C-step specific code' CONFIG_ITANIUM_CSTEP_SPECIFIC
- if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" = "y" ]; then
- bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIFIC
- fi
- if [ "$CONFIG_ITANIUM_B0_SPECIFIC" = "y" \
- -o "$CONFIG_ITANIUM_B1_SPECIFIC" = "y" -o "$CONFIG_ITANIUM_B2_SPECIFIC" = "y" ]; then
- define_bool CONFIG_ITANIUM_PTCG n
- else
- define_bool CONFIG_ITANIUM_PTCG y
- fi
if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data to 128 bytes
else
@@ -78,7 +61,6 @@
fi
if [ "$CONFIG_MCKINLEY" = "y" ]; then
- define_bool CONFIG_ITANIUM_PTCG y
define_int CONFIG_IA64_L1_CACHE_SHIFT 7
bool ' Enable McKinley A-step specific code' CONFIG_MCKINLEY_ASTEP_SPECIFIC
if [ "$CONFIG_MCKINLEY_ASTEP_SPECIFIC" = "y" ]; then
@@ -87,28 +69,32 @@
fi
if [ "$CONFIG_IA64_DIG" = "y" ]; then
- bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
define_bool CONFIG_PM y
fi
-if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
- bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM
- define_bool CONFIG_DEVFS_DEBUG y
+if [ "$CONFIG_IA64_SGI_SN1" = "y" ] || [ "$CONFIG_IA64_SGI_SN2" = "y" ]; then
+ define_bool CONFIG_IA64_SGI_SN y
+ bool ' Enable extra debugging code' CONFIG_IA64_SGI_SN_DEBUG n
+ bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN_SIM
+ bool ' Enable autotest (llsc). Option to run cache test instead of booting' \
+ CONFIG_IA64_SGI_AUTOTEST n
define_bool CONFIG_DEVFS_FS y
- define_bool CONFIG_IA64_BRL_EMU y
+ if [ "$CONFIG_DEVFS_FS" = "y" ]; then
+ bool ' Enable DEVFS Debug Code' CONFIG_DEVFS_DEBUG n
+ fi
+ bool ' Enable protocol mode for the L1 console' CONFIG_SERIAL_SGI_L1_PROTOCOL y
+ define_bool CONFIG_DISCONTIGMEM y
define_bool CONFIG_IA64_MCA y
- define_bool CONFIG_ITANIUM y
- define_bool CONFIG_SGI_IOC3_ETH y
+ define_bool CONFIG_NUMA y
define_bool CONFIG_PERCPU_IRQ y
- define_int CONFIG_CACHE_LINE_SHIFT 7
- bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM
- bool ' Enable NUMA support' CONFIG_NUMA
+ tristate ' PCIBA support' CONFIG_PCIBA
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
bool 'SMP support' CONFIG_SMP
+tristate 'Support running of Linux/x86 binaries' CONFIG_IA32_SUPPORT
bool 'Performance monitor support' CONFIG_PERFMON
tristate '/proc/pal support' CONFIG_IA64_PALINFO
tristate '/proc/efi/vars support' CONFIG_EFI_VARS
@@ -270,19 +256,19 @@
mainmenu_option next_comment
comment 'Kernel hacking'
-#bool 'Debug kmalloc/kfree' CONFIG_DEBUG_MALLOC
-if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- tristate 'Kernel support for IA-32 emulation' CONFIG_IA32_SUPPORT
- tristate 'Kernel FP software completion' CONFIG_MATHEMU
-else
- define_bool CONFIG_MATHEMU y
+bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
+if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
+ bool ' Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
+ bool ' Disable VHPT' CONFIG_DISABLE_VHPT
+ bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
+
+# early printk is currently broken for SMP: the secondary processors get stuck...
+# bool ' Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK
+
+ bool ' Debug memory allocations' CONFIG_DEBUG_SLAB
+ bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK
+ bool ' Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
+ bool ' Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
fi
-
-bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
-bool 'Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK
-bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
-bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
-bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
-bool 'Disable VHPT' CONFIG_DISABLE_VHPT
endmenu
diff -urN linux-2.4.13/arch/ia64/defconfig linux-2.4.13-lia/arch/ia64/defconfig
--- linux-2.4.13/arch/ia64/defconfig Thu Jun 22 07:09:44 2000
+++ linux-2.4.13-lia/arch/ia64/defconfig Thu Oct 4 00:21:39 2001
@@ -3,53 +3,131 @@
#
#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODVERSIONS=y
+# CONFIG_KMOD is not set
+
+#
# General setup
#
CONFIG_IA64=y
# CONFIG_ISA is not set
+# CONFIG_EISA is not set
+# CONFIG_MCA is not set
# CONFIG_SBUS is not set
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_ACPI=y
+CONFIG_ACPI_EFI=y
+CONFIG_ACPI_INTERPRETER=y
+CONFIG_ACPI_KERNEL_CONFIG=y
+CONFIG_ITANIUM=y
+# CONFIG_MCKINLEY is not set
# CONFIG_IA64_GENERIC is not set
-CONFIG_IA64_HP_SIM=y
-# CONFIG_IA64_SGI_SN1_SIM is not set
-# CONFIG_IA64_DIG is not set
+CONFIG_IA64_DIG=y
+# CONFIG_IA64_HP_SIM is not set
+# CONFIG_IA64_SGI_SN1 is not set
+# CONFIG_IA64_SGI_SN2 is not set
# CONFIG_IA64_PAGE_SIZE_4KB is not set
# CONFIG_IA64_PAGE_SIZE_8KB is not set
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
+CONFIG_IA64_BRL_EMU=y
+CONFIG_ITANIUM_BSTEP_SPECIFIC=y
+CONFIG_IA64_L1_CACHE_SHIFT=6
+CONFIG_IA64_MCA=y
+CONFIG_PM=y
CONFIG_KCORE_ELF=y
-# CONFIG_SMP is not set
-# CONFIG_PERFMON is not set
-# CONFIG_NET is not set
-# CONFIG_SYSVIPC is not set
+CONFIG_SMP=y
+CONFIG_IA32_SUPPORT=y
+CONFIG_PERFMON=y
+CONFIG_IA64_PALINFO=y
+CONFIG_EFI_VARS=y
+CONFIG_NET=y
+CONFIG_SYSVIPC=y
# CONFIG_BSD_PROCESS_ACCT is not set
-# CONFIG_SYSCTL is not set
-# CONFIG_BINFMT_ELF is not set
+CONFIG_SYSCTL=y
+CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
+# CONFIG_ACPI_DEBUG is not set
+# CONFIG_ACPI_BUSMGR is not set
+# CONFIG_ACPI_SYS is not set
+# CONFIG_ACPI_CPU is not set
+# CONFIG_ACPI_BUTTON is not set
+# CONFIG_ACPI_AC is not set
+# CONFIG_ACPI_EC is not set
+# CONFIG_ACPI_CMBATT is not set
+# CONFIG_ACPI_THERMAL is not set
CONFIG_PCI=y
CONFIG_PCI_NAMES=y
# CONFIG_HOTPLUG is not set
# CONFIG_PCMCIA is not set
#
-# Code maturity level options
+# Parallel port support
#
-CONFIG_EXPERIMENTAL=y
+# CONFIG_PARPORT is not set
#
-# Loadable module support
+# Networking options
#
-# CONFIG_MODULES is not set
+CONFIG_PACKET=y
+CONFIG_PACKET_MMAP=y
+# CONFIG_NETLINK is not set
+# CONFIG_NETFILTER is not set
+CONFIG_FILTER=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+# CONFIG_IP_PNP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_INET_ECN is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_IPV6 is not set
+# CONFIG_KHTTPD is not set
+# CONFIG_ATM is not set
+
+#
+#
+#
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_DECNET is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_LLC is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
#
-# Parallel port support
+# QoS and/or fair queueing
#
-# CONFIG_PARPORT is not set
+# CONFIG_NET_SCHED is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
#
# Plug and Play configuration
#
# CONFIG_PNP is not set
# CONFIG_ISAPNP is not set
+# CONFIG_PNPBIOS is not set
#
# Block devices
@@ -58,14 +136,12 @@
# CONFIG_BLK_DEV_XD is not set
# CONFIG_PARIDE is not set
# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
-
-#
-# Additional Block Devices
-#
-# CONFIG_BLK_DEV_LOOP is not set
-# CONFIG_BLK_DEV_MD is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
+# CONFIG_BLK_DEV_INITRD is not set
#
# I2O device support
@@ -73,10 +149,23 @@
# CONFIG_I2O is not set
# CONFIG_I2O_PCI is not set
# CONFIG_I2O_BLOCK is not set
+# CONFIG_I2O_LAN is not set
# CONFIG_I2O_SCSI is not set
# CONFIG_I2O_PROC is not set
#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+# CONFIG_BLK_DEV_MD is not set
+# CONFIG_MD_LINEAR is not set
+# CONFIG_MD_RAID0 is not set
+# CONFIG_MD_RAID1 is not set
+# CONFIG_MD_RAID5 is not set
+# CONFIG_MD_MULTIPATH is not set
+# CONFIG_BLK_DEV_LVM is not set
+
+#
# ATA/IDE/MFM/RLL support
#
CONFIG_IDE=y
@@ -92,12 +181,21 @@
# CONFIG_BLK_DEV_HD_IDE is not set
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_IDEDISK=y
-# CONFIG_IDEDISK_MULTI_MODE is not set
+CONFIG_IDEDISK_MULTI_MODE=y
+# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set
+# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set
+# CONFIG_BLK_DEV_IDEDISK_IBM is not set
+# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set
+# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set
+# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set
+# CONFIG_BLK_DEV_IDEDISK_WD is not set
+# CONFIG_BLK_DEV_COMMERIAL is not set
+# CONFIG_BLK_DEV_TIVO is not set
# CONFIG_BLK_DEV_IDECS is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
-# CONFIG_BLK_DEV_IDEFLOPPY is not set
-# CONFIG_BLK_DEV_IDESCSI is not set
+CONFIG_BLK_DEV_IDEFLOPPY=y
+CONFIG_BLK_DEV_IDESCSI=y
#
# IDE chipset support/bugfixes
@@ -109,45 +207,209 @@
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_IDEPCI_SHARE_IRQ=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
+CONFIG_BLK_DEV_ADMA=y
# CONFIG_BLK_DEV_OFFBOARD is not set
-CONFIG_IDEDMA_PCI_AUTO=y
+# CONFIG_IDEDMA_PCI_AUTO is not set
CONFIG_BLK_DEV_IDEDMA=y
-CONFIG_IDEDMA_PCI_EXPERIMENTAL=y
# CONFIG_IDEDMA_PCI_WIP is not set
# CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set
-# CONFIG_BLK_DEV_AEC6210 is not set
-# CONFIG_AEC6210_TUNING is not set
+# CONFIG_BLK_DEV_AEC62XX is not set
+# CONFIG_AEC62XX_TUNING is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_WDC_ALI15X3 is not set
-# CONFIG_BLK_DEV_AMD7409 is not set
-# CONFIG_AMD7409_OVERRIDE is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+# CONFIG_AMD74XX_OVERRIDE is not set
# CONFIG_BLK_DEV_CMD64X is not set
-# CONFIG_CMD64X_RAID is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5530 is not set
# CONFIG_BLK_DEV_HPT34X is not set
# CONFIG_HPT34X_AUTODMA is not set
# CONFIG_BLK_DEV_HPT366 is not set
-# CONFIG_HPT366_FIP is not set
-# CONFIG_HPT366_MODE3 is not set
CONFIG_BLK_DEV_PIIX=y
-CONFIG_PIIX_TUNING=y
+# CONFIG_PIIX_TUNING is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX is not set
# CONFIG_PDC202XX_BURST is not set
-# CONFIG_PDC202XX_MASTER is not set
+# CONFIG_PDC202XX_FORCE is not set
+# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIS5513 is not set
+# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_IDE_CHIPSETS is not set
-CONFIG_IDEDMA_AUTO=y
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_IDEDMA_IVB is not set
+# CONFIG_DMA_NONPCI is not set
CONFIG_BLK_DEV_IDE_MODES=y
+# CONFIG_BLK_DEV_ATARAID is not set
+# CONFIG_BLK_DEV_ATARAID_PDC is not set
+# CONFIG_BLK_DEV_ATARAID_HPT is not set
#
# SCSI support
#
-# CONFIG_SCSI is not set
+CONFIG_SCSI=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_SD_EXTRA_DEVS=40
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+# CONFIG_CHR_DEV_SG is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+CONFIG_SCSI_DEBUG_QUEUES=y
+# CONFIG_SCSI_MULTI_LUN is not set
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_7000FASST is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AHA152X is not set
+# CONFIG_SCSI_AHA1542 is not set
+# CONFIG_SCSI_AHA1740 is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_DPT_I2O is not set
+# CONFIG_SCSI_ADVANSYS is not set
+# CONFIG_SCSI_IN2000 is not set
+# CONFIG_SCSI_AM53C974 is not set
+# CONFIG_SCSI_MEGARAID is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_SCSI_CPQFCTS is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_DTC3280 is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_EATA_DMA is not set
+# CONFIG_SCSI_EATA_PIO is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_GENERIC_NCR5380 is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_NCR53C406A is not set
+# CONFIG_SCSI_NCR_D700 is not set
+# CONFIG_SCSI_NCR53C7xx is not set
+# CONFIG_SCSI_NCR53C8XX is not set
+# CONFIG_SCSI_SYM53C8XX is not set
+# CONFIG_SCSI_PAS16 is not set
+# CONFIG_SCSI_PCI2000 is not set
+# CONFIG_SCSI_PCI2220I is not set
+# CONFIG_SCSI_PSI240I is not set
+# CONFIG_SCSI_QLOGIC_FAS is not set
+# CONFIG_SCSI_QLOGIC_ISP is not set
+# CONFIG_SCSI_QLOGIC_FC is not set
+CONFIG_SCSI_QLOGIC_1280=y
+# CONFIG_SCSI_QLOGIC_QLA2100 is not set
+# CONFIG_SCSI_SIM710 is not set
+# CONFIG_SCSI_SYM53C416 is not set
+# CONFIG_SCSI_DC390T is not set
+# CONFIG_SCSI_T128 is not set
+# CONFIG_SCSI_U14_34F is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Network device support
+#
+CONFIG_NETDEVICES=y
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+CONFIG_DUMMY=y
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_SUNLANCE is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNBMAC is not set
+# CONFIG_SUNQE is not set
+# CONFIG_SUNLANCE is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+# CONFIG_LANCE is not set
+# CONFIG_NET_VENDOR_SMC is not set
+# CONFIG_NET_VENDOR_RACAL is not set
+# CONFIG_HP100 is not set
+# CONFIG_NET_ISA is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_APRICOT is not set
+# CONFIG_CS89x0 is not set
+# CONFIG_TULIP is not set
+# CONFIG_DE4X5 is not set
+# CONFIG_DGRS is not set
+# CONFIG_DM9102 is not set
+CONFIG_EEPRO100=y
+# CONFIG_LNE390 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_NE3210 is not set
+# CONFIG_ES3210 is not set
+# CONFIG_8139TOO is not set
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+# CONFIG_8139TOO_8129 is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+# CONFIG_WINBOND_840 is not set
+# CONFIG_LAN_SAA9730 is not set
+# CONFIG_NET_POCKET is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_MYRI_SBUS is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PLIP is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+# CONFIG_NET_FC is not set
+# CONFIG_RCPCI is not set
+# CONFIG_SHAPER is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
#
# Amateur Radio support
@@ -165,13 +427,27 @@
# CONFIG_CD_NO_IDESCSI is not set
#
+# Input core support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_KEYBDEV=y
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_EVDEV=y
+
+#
# Character devices
#
-# CONFIG_VT is not set
-# CONFIG_SERIAL is not set
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_SERIAL=y
+CONFIG_SERIAL_CONSOLE=y
# CONFIG_SERIAL_EXTENDED is not set
# CONFIG_SERIAL_NONSTANDARD is not set
-# CONFIG_UNIX98_PTYS is not set
+CONFIG_UNIX98_PTYS=y
+CONFIG_UNIX98_PTY_COUNT=256
#
# I2C support
@@ -182,97 +458,382 @@
# Mice
#
# CONFIG_BUSMOUSE is not set
-# CONFIG_MOUSE is not set
+CONFIG_MOUSE=y
+CONFIG_PSMOUSE=y
+# CONFIG_82C710_MOUSE is not set
+# CONFIG_PC110_PAD is not set
+
+#
+# Joysticks
+#
+# CONFIG_INPUT_GAMEPORT is not set
+# CONFIG_INPUT_NS558 is not set
+# CONFIG_INPUT_LIGHTNING is not set
+# CONFIG_INPUT_PCIGAME is not set
+# CONFIG_INPUT_CS461X is not set
+# CONFIG_INPUT_EMU10K1 is not set
+CONFIG_INPUT_SERIO=y
+CONFIG_INPUT_SERPORT=y
#
# Joysticks
#
-# CONFIG_JOYSTICK is not set
+# CONFIG_INPUT_ANALOG is not set
+# CONFIG_INPUT_A3D is not set
+# CONFIG_INPUT_ADI is not set
+# CONFIG_INPUT_COBRA is not set
+# CONFIG_INPUT_GF2K is not set
+# CONFIG_INPUT_GRIP is not set
+# CONFIG_INPUT_INTERACT is not set
+# CONFIG_INPUT_TMDC is not set
+# CONFIG_INPUT_SIDEWINDER is not set
+# CONFIG_INPUT_IFORCE_USB is not set
+# CONFIG_INPUT_IFORCE_232 is not set
+# CONFIG_INPUT_WARRIOR is not set
+# CONFIG_INPUT_MAGELLAN is not set
+# CONFIG_INPUT_SPACEORB is not set
+# CONFIG_INPUT_SPACEBALL is not set
+# CONFIG_INPUT_STINGER is not set
+# CONFIG_INPUT_DB9 is not set
+# CONFIG_INPUT_GAMECON is not set
+# CONFIG_INPUT_TURBOGRAFX is not set
# CONFIG_QIC02_TAPE is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
+# CONFIG_INTEL_RNG is not set
# CONFIG_NVRAM is not set
# CONFIG_RTC is not set
CONFIG_EFI_RTC=y
-
-#
-# Video For Linux
-#
-# CONFIG_VIDEO_DEV is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
+# CONFIG_SONYPI is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_FTAPE is not set
-# CONFIG_DRM is not set
-# CONFIG_DRM_TDFX is not set
-# CONFIG_AGP is not set
+CONFIG_AGP=y
+# CONFIG_AGP_INTEL is not set
+CONFIG_AGP_I460=y
+# CONFIG_AGP_I810 is not set
+# CONFIG_AGP_VIA is not set
+# CONFIG_AGP_AMD is not set
+# CONFIG_AGP_SIS is not set
+# CONFIG_AGP_ALI is not set
+# CONFIG_AGP_SWORKS is not set
+CONFIG_DRM=y
+# CONFIG_DRM_NEW is not set
+CONFIG_DRM_OLD=y
+CONFIG_DRM40_TDFX=y
+# CONFIG_DRM40_GAMMA is not set
+# CONFIG_DRM40_R128 is not set
+# CONFIG_DRM40_RADEON is not set
+# CONFIG_DRM40_I810 is not set
+# CONFIG_DRM40_MGA is not set
#
-# USB support
+# Multimedia devices
+#
+CONFIG_VIDEO_DEV=y
+
+#
+# Video For Linux
#
-# CONFIG_USB is not set
+CONFIG_VIDEO_PROC_FS=y
+# CONFIG_I2C_PARPORT is not set
+
+#
+# Video Adapters
+#
+# CONFIG_VIDEO_PMS is not set
+# CONFIG_VIDEO_CPIA is not set
+# CONFIG_VIDEO_SAA5249 is not set
+# CONFIG_TUNER_3036 is not set
+# CONFIG_VIDEO_STRADIS is not set
+# CONFIG_VIDEO_ZORAN is not set
+# CONFIG_VIDEO_ZR36120 is not set
+# CONFIG_VIDEO_MEYE is not set
+
+#
+# Radio Adapters
+#
+# CONFIG_RADIO_CADET is not set
+# CONFIG_RADIO_RTRACK is not set
+# CONFIG_RADIO_RTRACK2 is not set
+# CONFIG_RADIO_AZTECH is not set
+# CONFIG_RADIO_GEMTEK is not set
+# CONFIG_RADIO_GEMTEK_PCI is not set
+# CONFIG_RADIO_MAXIRADIO is not set
+# CONFIG_RADIO_MAESTRO is not set
+# CONFIG_RADIO_MIROPCM20 is not set
+# CONFIG_RADIO_MIROPCM20_RDS is not set
+# CONFIG_RADIO_SF16FMI is not set
+# CONFIG_RADIO_TERRATEC is not set
+# CONFIG_RADIO_TRUST is not set
+# CONFIG_RADIO_TYPHOON is not set
+# CONFIG_RADIO_ZOLTRIX is not set
#
# File systems
#
# CONFIG_QUOTA is not set
-# CONFIG_AUTOFS_FS is not set
+CONFIG_AUTOFS_FS=y
# CONFIG_AUTOFS4_FS is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_REISERFS_CHECK is not set
# CONFIG_ADFS_FS is not set
+# CONFIG_ADFS_FS_RW is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_BFS_FS is not set
-# CONFIG_FAT_FS is not set
-# CONFIG_MSDOS_FS is not set
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
# CONFIG_UMSDOS_FS is not set
-# CONFIG_VFAT_FS is not set
+CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
# CONFIG_CRAMFS is not set
-# CONFIG_ISO9660_FS is not set
+# CONFIG_TMPFS is not set
+# CONFIG_RAMFS is not set
+CONFIG_ISO9660_FS=y
# CONFIG_JOLIET is not set
# CONFIG_MINIX_FS is not set
+# CONFIG_VXFS_FS is not set
# CONFIG_NTFS_FS is not set
+# CONFIG_NTFS_RW is not set
# CONFIG_HPFS_FS is not set
-# CONFIG_PROC_FS is not set
+CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVFS_MOUNT is not set
# CONFIG_DEVFS_DEBUG is not set
-# CONFIG_DEVPTS_FS is not set
+CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX4FS_RW is not set
# CONFIG_ROMFS_FS is not set
-# CONFIG_EXT2_FS is not set
+CONFIG_EXT2_FS=y
# CONFIG_SYSV_FS is not set
# CONFIG_UDF_FS is not set
+# CONFIG_UDF_RW is not set
# CONFIG_UFS_FS is not set
+# CONFIG_UFS_FS_WRITE is not set
+
+#
+# Network File Systems
+#
+# CONFIG_CODA_FS is not set
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_ROOT_NFS is not set
+CONFIG_NFSD=y
+CONFIG_NFSD_V3=y
+CONFIG_SUNRPC=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_SMB_FS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_NCPFS_PACKET_SIGNING is not set
+# CONFIG_NCPFS_IOCTL_LOCKING is not set
+# CONFIG_NCPFS_STRONG is not set
+# CONFIG_NCPFS_NFS_NS is not set
+# CONFIG_NCPFS_OS2_NS is not set
+# CONFIG_NCPFS_SMALLDOS is not set
+# CONFIG_NCPFS_NLS is not set
+# CONFIG_NCPFS_EXTRAS is not set
#
# Partition Types
#
-# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
-# CONFIG_NLS is not set
-# CONFIG_NLS is not set
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+CONFIG_EFI_PARTITION=y
+# CONFIG_DEVFS_GUID is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_SMB_NLS is not set
+CONFIG_NLS=y
+
+#
+# Native Language Support
+#
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Console drivers
+#
+CONFIG_VGA_CONSOLE=y
+
+#
+# Frame-buffer support
+#
+# CONFIG_FB is not set
#
# Sound
#
-# CONFIG_SOUND is not set
+CONFIG_SOUND=y
+# CONFIG_SOUND_BT878 is not set
+# CONFIG_SOUND_CMPCI is not set
+# CONFIG_SOUND_EMU10K1 is not set
+# CONFIG_MIDI_EMU10K1 is not set
+# CONFIG_SOUND_FUSION is not set
+CONFIG_SOUND_CS4281=y
+# CONFIG_SOUND_ES1370 is not set
+# CONFIG_SOUND_ES1371 is not set
+# CONFIG_SOUND_ESSSOLO1 is not set
+# CONFIG_SOUND_MAESTRO is not set
+# CONFIG_SOUND_MAESTRO3 is not set
+# CONFIG_SOUND_ICH is not set
+# CONFIG_SOUND_RME96XX is not set
+# CONFIG_SOUND_SONICVIBES is not set
+# CONFIG_SOUND_TRIDENT is not set
+# CONFIG_SOUND_MSNDCLAS is not set
+# CONFIG_SOUND_MSNDPIN is not set
+# CONFIG_SOUND_VIA82CXXX is not set
+# CONFIG_MIDI_VIA82CXXX is not set
+# CONFIG_SOUND_OSS is not set
+# CONFIG_SOUND_TVMIXER is not set
+
+#
+# USB support
+#
+CONFIG_USB=y
+# CONFIG_USB_DEBUG is not set
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_BANDWIDTH is not set
+
+#
+# USB Controllers
+#
+CONFIG_USB_UHCI_ALT=y
+CONFIG_USB_OHCI=y
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_AUDIO is not set
+# CONFIG_USB_BLUETOOTH is not set
+# CONFIG_USB_STORAGE is not set
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+
+#
+# USB Human Interface Devices (HID)
+#
+# CONFIG_USB_HID is not set
+CONFIG_USB_KBD=y
+CONFIG_USB_MOUSE=y
+# CONFIG_USB_WACOM is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_DC2XX is not set
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_SCANNER is not set
+# CONFIG_USB_MICROTEK is not set
+
+#
+# USB Multimedia devices
+#
+CONFIG_USB_IBMCAM=y
+# CONFIG_USB_OV511 is not set
+# CONFIG_USB_PWC is not set
+# CONFIG_USB_SE401 is not set
+# CONFIG_USB_DSBR is not set
+# CONFIG_USB_DABUSB is not set
+
+#
+# USB Network adaptors
+#
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_CDCETHER is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_USBNET is not set
+
+#
+# USB port drivers
+#
+# CONFIG_USB_USS720 is not set
+
+#
+# USB Serial Converter support
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB misc drivers
+#
+# CONFIG_USB_RIO500 is not set
+
+#
+# Bluetooth support
+#
+# CONFIG_BLUEZ is not set
#
# Kernel hacking
#
-# CONFIG_IA32_SUPPORT is not set
-# CONFIG_MATHEMU is not set
-# CONFIG_MAGIC_SYSRQ is not set
-# CONFIG_IA64_EARLY_PRINTK is not set
+CONFIG_DEBUG_KERNEL=y
+CONFIG_IA64_PRINT_HAZARDS=y
+# CONFIG_DISABLE_VHPT is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
-# CONFIG_IA64_PRINT_HAZARDS is not set
-# CONFIG_KDB is not set
diff -urN linux-2.4.13/arch/ia64/ia32/binfmt_elf32.c linux-2.4.13-lia/arch/ia64/ia32/binfmt_elf32.c
--- linux-2.4.13/arch/ia64/ia32/binfmt_elf32.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/binfmt_elf32.c Thu Oct 4 00:21:52 2001
@@ -3,10 +3,11 @@
*
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 06/16/00 A. Mallick initialize csd/ssd/tssd/cflg for ia32_load_state
* 04/13/01 D. Mosberger dropped saving tssd in ar.k1---it's not needed
+ * 09/14/01 D. Mosberger fixed memory management for gdt/tss page
*/
#include <linux/config.h>
@@ -41,65 +42,59 @@
extern void ia64_elf32_init (struct pt_regs *regs);
extern void put_dirty_page (struct task_struct * tsk, struct page *page, unsigned long address);
+static void elf32_set_personality (void);
+
#define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
#define setup_arg_pages(bprm) ia32_setup_arg_pages(bprm)
-#define elf_map elf_map32
+#define elf_map elf32_map
+#define SET_PERSONALITY(ex, ibcs2) elf32_set_personality()
/* Ugly but avoids duplication */
#include "../../../fs/binfmt_elf.c"
-/* Global descriptor table */
-unsigned long *ia32_gdt_table, *ia32_tss;
+extern struct page *ia32_shared_page[];
+extern unsigned long *ia32_gdt;
struct page *
-put_shared_page (struct task_struct * tsk, struct page *page, unsigned long address)
+ia32_install_shared_page (struct vm_area_struct *vma, unsigned long address, int no_share)
{
- pgd_t * pgd;
- pmd_t * pmd;
- pte_t * pte;
-
- if (page_count(page) != 1)
- printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
+ struct page *pg = ia32_shared_page[(address - vma->vm_start)/PAGE_SIZE];
- pgd = pgd_offset(tsk->mm, address);
-
- spin_lock(&tsk->mm->page_table_lock);
- {
- pmd = pmd_alloc(tsk->mm, pgd, address);
- if (!pmd)
- goto out;
- pte = pte_alloc(tsk->mm, pmd, address);
- if (!pte)
- goto out;
- if (!pte_none(*pte))
- goto out;
- flush_page_to_ram(page);
- set_pte(pte, pte_mkwrite(mk_pte(page, PAGE_SHARED)));
- }
- spin_unlock(&tsk->mm->page_table_lock);
- /* no need for flush_tlb */
- return page;
-
- out:
- spin_unlock(&tsk->mm->page_table_lock);
- __free_page(page);
- return 0;
+ get_page(pg);
+ return pg;
}
+static struct vm_operations_struct ia32_shared_page_vm_ops = {
+ nopage: ia32_install_shared_page
+};
+
void
ia64_elf32_init (struct pt_regs *regs)
{
struct vm_area_struct *vma;
- int nr;
/*
* Map GDT and TSS below 4GB, where the processor can find them. We need to map
* it with privilege level 3 because the IVE uses non-privileged accesses to these
* tables. IA-32 segmentation is used to protect against IA-32 accesses to them.
*/
- put_shared_page(current, virt_to_page(ia32_gdt_table), IA32_GDT_OFFSET);
- if (PAGE_SHIFT <= IA32_PAGE_SHIFT)
- put_shared_page(current, virt_to_page(ia32_tss), IA32_TSS_OFFSET);
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (vma) {
+ vma->vm_mm = current->mm;
+ vma->vm_start = IA32_GDT_OFFSET;
+ vma->vm_end = vma->vm_start + max(PAGE_SIZE, 2*IA32_PAGE_SIZE);
+ vma->vm_page_prot = PAGE_SHARED;
+ vma->vm_flags = VM_READ|VM_MAYREAD;
+ vma->vm_ops = &ia32_shared_page_vm_ops;
+ vma->vm_pgoff = 0;
+ vma->vm_file = NULL;
+ vma->vm_private_data = NULL;
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
+ }
/*
* Install LDT as anonymous memory. This gives us all-zero segment descriptors
@@ -116,34 +111,13 @@
vma->vm_pgoff = 0;
vma->vm_file = NULL;
vma->vm_private_data = NULL;
- insert_vm_struct(current->mm, vma);
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
}
- nr = smp_processor_id();
-
- current->thread.map_base = IA32_PAGE_OFFSET/3;
- current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
- set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
-
- /* Setup the segment selectors */
- regs->r16 = (__USER_DS << 16) | __USER_DS; /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
-
- /* Setup the segment descriptors */
- regs->r24 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* ESD */
- regs->r27 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* DSD */
- regs->r28 = 0; /* FSD (null) */
- regs->r29 = 0; /* GSD (null) */
- regs->r30 = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_LDT(nr)]); /* LDTD */
-
- /*
- * Setup GDTD. Note: GDTD is the descrambled version of the pseudo-descriptor
- * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32
- * architecture manual.
- */
- regs->r31 = IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0,
- 0, 0, 0, 0, 0, 0));
-
ia64_psr(regs)->ac = 0; /* turn off alignment checking */
regs->loadrs = 0;
/*
@@ -164,10 +138,19 @@
current->thread.fcr = IA32_FCR_DEFAULT;
current->thread.fir = 0;
current->thread.fdr = 0;
- current->thread.csd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_CS >> 3]);
- current->thread.ssd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]);
- current->thread.tssd = IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_TSS(nr)]);
+ /*
+ * Setup GDTD. Note: GDTD is the descrambled version of the pseudo-descriptor
+ * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32
+ * architecture manual.
+ */
+ regs->r31 = IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0,
+ 0, 0, 0, 0, 0, 0));
+ /* Setup the segment selectors */
+ regs->r16 = (__USER_DS << 16) | __USER_DS; /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
+
+ ia32_load_segment_descriptors(current);
ia32_load_state(current);
}
@@ -189,6 +172,7 @@
if (!mpnt)
return -ENOMEM;
+ down_write(&current->mm->mmap_sem);
{
mpnt->vm_mm = current->mm;
mpnt->vm_start = PAGE_MASK & (unsigned long) bprm->p;
@@ -204,54 +188,32 @@
}
for (i = 0 ; i < MAX_ARG_PAGES ; i++) {
- if (bprm->page[i]) {
- put_dirty_page(current,bprm->page[i],stack_base);
+ struct page *page = bprm->page[i];
+ if (page) {
+ bprm->page[i] = NULL;
+ put_dirty_page(current, page, stack_base);
}
stack_base += PAGE_SIZE;
}
+ up_write(&current->mm->mmap_sem);
return 0;
}
-static unsigned long
-ia32_mm_addr (unsigned long addr)
+static void
+elf32_set_personality (void)
{
- struct vm_area_struct *vma;
-
- if ((vma = find_vma(current->mm, addr)) == NULL)
- return ELF_PAGESTART(addr);
- if (vma->vm_start > addr)
- return ELF_PAGESTART(addr);
- return ELF_PAGEALIGN(addr);
+ set_personality(PER_LINUX32);
+ current->thread.map_base = IA32_PAGE_OFFSET/3;
+ current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
+ set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
}
-/*
- * Normally we would do an `mmap' to map in the process's text section.
- * This doesn't work with IA32 processes as the ELF file might specify
- * a non page size aligned address. Instead we will just allocate
- * memory and read the data in from the file. Slightly less efficient
- * but it works.
- */
-extern long ia32_do_mmap (struct file *filep, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset);
-
static unsigned long
-elf_map32 (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
+elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
{
- unsigned long retval;
+ unsigned long pgoff = (eppnt->p_vaddr) & ~IA32_PAGE_MASK;
- if (eppnt->p_memsz >= (1UL<<32) || addr > (1UL<<32) - eppnt->p_memsz)
- return -EINVAL;
-
- /*
- * Make sure the elf interpreter doesn't get loaded at location 0
- * so that NULL pointers correctly cause segfaults.
- */
- if (addr == 0)
- addr += PAGE_SIZE;
- set_brk(ia32_mm_addr(addr), addr + eppnt->p_memsz);
- memset((char *) addr + eppnt->p_filesz, 0, eppnt->p_memsz - eppnt->p_filesz);
- kernel_read(filep, eppnt->p_offset, (char *) addr, eppnt->p_filesz);
- retval = (unsigned long) addr;
- return retval;
+ return ia32_do_mmap(filep, (addr & IA32_PAGE_MASK), eppnt->p_filesz + pgoff, prot, type,
+ eppnt->p_offset - pgoff);
}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_entry.S linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S
--- linux-2.4.13/arch/ia64/ia32/ia32_entry.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S Wed Oct 24 18:11:48 2001
@@ -2,7 +2,7 @@
#include <asm/offsets.h>
#include <asm/signal.h>
-#include "../kernel/entry.h"
+#include "../kernel/minstate.h"
/*
* execve() is special because in case of success, we need to
@@ -14,13 +14,13 @@
alloc loc1=ar.pfs,3,2,4,0
mov loc0=rp
.body
- mov out0=in0 // filename
+ zxt4 out0=in0 // filename
;; // stop bit between alloc and call
- mov out1=in1 // argv
- mov out2=in2 // envp
+ zxt4 out1=in1 // argv
+ zxt4 out2=in2 // envp
add out3=16,sp // regs
br.call.sptk.few rp=sys32_execve
-1: cmp4.ge p6,p0=r8,r0
+1: cmp.ge p6,p0=r8,r0
mov ar.pfs=loc1 // restore ar.pfs
;;
(p6) mov ar.pfs=r0 // clear ar.pfs in case of success
@@ -29,31 +29,80 @@
br.ret.sptk.few rp
END(ia32_execve)
- //
- // Get possibly unaligned sigmask argument into an aligned
- // kernel buffer
-GLOBAL_ENTRY(ia32_rt_sigsuspend)
- // We'll cheat and not do an alloc here since we are ultimately
- // going to do a simple branch to the IA64 sys_rt_sigsuspend.
- // r32 is still the first argument which is the signal mask.
- // We copy this 4-byte aligned value to an 8-byte aligned buffer
- // in the task structure and then jump to the IA64 code.
+ENTRY(ia32_clone)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
+ alloc r16=ar.pfs,2,2,4,0
+ DO_SAVE_SWITCH_STACK
+ mov loc0=rp
+ mov loc1=r16 // save ar.pfs across do_fork
+ .body
+ zxt4 out1=in1 // newsp
+ mov out3=0 // stacksize
+ adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
+ zxt4 out0=in0 // out0 = clone_flags
+ br.call.sptk.many rp=do_fork
+.ret0: .restore sp
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
+ mov ar.pfs=loc1
+ mov rp=loc0
+ br.ret.sptk.many rp
+END(ia32_clone)
- EX(.Lfail, ld4 r2=[r32],4) // load low part of sigmask
- ;;
- EX(.Lfail, ld4 r3=[r32]) // load high part of sigmask
- adds r32=IA64_TASK_THREAD_SIGMASK_OFFSET,r13
- ;;
- st8 [r32]=r2
- adds r10=IA64_TASK_THREAD_SIGMASK_OFFSET+4,r13
+ENTRY(sys32_rt_sigsuspend)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs
+ mov loc0=rp
+ mov out0=in0 // mask
+ mov out1=in1 // sigsetsize
+ mov out2=sp // out2 = &sigscratch
+ .fframe 16
+ adds sp=-16,sp // allocate dummy "sigscratch"
;;
+ .body
+ br.call.sptk.many rp=ia32_rt_sigsuspend
+1: .restore sp
+ adds sp=16,sp
+ mov rp=loc0
+ mov ar.pfs=loc1
+ br.ret.sptk.many rp
+END(sys32_rt_sigsuspend)
- st4 [r10]=r3
- br.cond.sptk.many sys_rt_sigsuspend
-
-.Lfail: br.ret.sptk.many rp // failed to read sigmask
-END(ia32_rt_sigsuspend)
+ENTRY(sys32_sigsuspend)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs
+ mov loc0=rp
+ mov out0=in2 // mask (first two args are ignored)
+ ;;
+ mov out1=sp // out1 = &sigscratch
+ .fframe 16
+ adds sp=-16,sp // allocate dummy "sigscratch"
+ .body
+ br.call.sptk.many rp=ia32_sigsuspend
+1: .restore sp
+ adds sp=16,sp
+ mov rp=loc0
+ mov ar.pfs=loc1
+ br.ret.sptk.many rp
+END(sys32_sigsuspend)
+GLOBAL_ENTRY(ia32_ret_from_clone)
+ PT_REGS_UNWIND_INFO(0)
+ /*
+ * We need to call schedule_tail() to complete the scheduling process.
+ * Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
+ * address of the previously executing task.
+ */
+ br.call.sptk.many rp=ia64_invoke_schedule_tail
+.ret1: adds r2=IA64_TASK_PTRACE_OFFSET,r13
+ ;;
+ ld8 r2=[r2]
+ ;;
+ mov r8=0
+ tbit.nz p6,p0=r2,PT_TRACESYS_BIT
+(p6) br.cond.spnt .ia32_strace_check_retval
+ ;; // prevent RAW on r8
+END(ia32_ret_from_clone)
+ // fall through
GLOBAL_ENTRY(ia32_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
@@ -72,20 +121,25 @@
// manipulate ar.pfs.
//
// Input:
- // r15 = syscall number
- // b6 = syscall entry point
+ // r8 = syscall number
+ // b6 = syscall entry point
//
GLOBAL_ENTRY(ia32_trace_syscall)
PT_REGS_UNWIND_INFO(0)
+ mov r3=-38
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp
+ ;;
+ st8 [r2]=r3 // initialize return code to -ENOSYS
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
-.ret0: br.call.sptk.few rp=b6 // do the syscall
-.ret1: cmp.lt p6,p0=r8,r0 // syscall failed?
+.ret2: br.call.sptk.few rp=b6 // do the syscall
+.ia32_strace_check_retval:
+ cmp.lt p6,p0=r8,r0 // syscall failed?
adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
;;
st8.spill [r2]=r8 // store return value in slot for r8
br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.ret2: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
- br.cond.sptk.many ia64_leave_kernel // rp MUST be != ia64_leave_kernel!
+.ret4: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
+ br.cond.sptk.many ia64_leave_kernel
END(ia32_trace_syscall)
GLOBAL_ENTRY(sys32_vfork)
@@ -110,7 +164,7 @@
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
br.call.sptk.few rp=do_fork
-.ret3: mov ar.pfs=loc1
+.ret5: mov ar.pfs=loc1
.restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov rp=loc0
@@ -137,21 +191,21 @@
data8 sys32_time
data8 sys_mknod
data8 sys_chmod /* 15 */
- data8 sys_lchown
+ data8 sys_lchown /* 16-bit version */
data8 sys32_ni_syscall /* old break syscall holder */
data8 sys32_ni_syscall
data8 sys32_lseek
data8 sys_getpid /* 20 */
data8 sys_mount
data8 sys_oldumount
- data8 sys_setuid
- data8 sys_getuid
+ data8 sys_setuid /* 16-bit version */
+ data8 sys_getuid /* 16-bit version */
data8 sys32_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
data8 sys32_ptrace
data8 sys32_alarm
data8 sys32_ni_syscall
- data8 sys_pause
- data8 ia32_utime /* 30 */
+ data8 sys32_pause
+ data8 sys32_utime /* 30 */
data8 sys32_ni_syscall /* old stty syscall holder */
data8 sys32_ni_syscall /* old gtty syscall holder */
data8 sys_access
@@ -167,15 +221,15 @@
data8 sys32_times
data8 sys32_ni_syscall /* old prof syscall holder */
data8 sys_brk /* 45 */
- data8 sys_setgid
- data8 sys_getgid
+ data8 sys_setgid /* 16-bit version */
+ data8 sys_getgid /* 16-bit version */
data8 sys32_signal
- data8 sys_geteuid
- data8 sys_getegid /* 50 */
+ data8 sys_geteuid /* 16-bit version */
+ data8 sys_getegid /* 16-bit version */ /* 50 */
data8 sys_acct
data8 sys_umount /* recycled never used phys() */
data8 sys32_ni_syscall /* old lock syscall holder */
- data8 ia32_ioctl
+ data8 sys32_ioctl
data8 sys32_fcntl /* 55 */
data8 sys32_ni_syscall /* old mpx syscall holder */
data8 sys_setpgid
@@ -191,19 +245,19 @@
data8 sys32_sigaction
data8 sys32_ni_syscall
data8 sys32_ni_syscall
- data8 sys_setreuid /* 70 */
- data8 sys_setregid
- data8 sys32_ni_syscall
- data8 sys_sigpending
+ data8 sys_setreuid /* 16-bit version */ /* 70 */
+ data8 sys_setregid /* 16-bit version */
+ data8 sys32_sigsuspend
+ data8 sys32_sigpending
data8 sys_sethostname
data8 sys32_setrlimit /* 75 */
- data8 sys32_getrlimit
+ data8 sys32_old_getrlimit
data8 sys32_getrusage
data8 sys32_gettimeofday
data8 sys32_settimeofday
- data8 sys_getgroups /* 80 */
- data8 sys_setgroups
- data8 old_select
+ data8 sys32_getgroups16 /* 80 */
+ data8 sys32_setgroups16
+ data8 sys32_old_select
data8 sys_symlink
data8 sys32_ni_syscall
data8 sys_readlink /* 85 */
@@ -212,17 +266,17 @@
data8 sys_reboot
data8 sys32_readdir
data8 sys32_mmap /* 90 */
- data8 sys_munmap
+ data8 sys32_munmap
data8 sys_truncate
data8 sys_ftruncate
data8 sys_fchmod
- data8 sys_fchown /* 95 */
+ data8 sys_fchown /* 16-bit version */ /* 95 */
data8 sys_getpriority
data8 sys_setpriority
data8 sys32_ni_syscall /* old profil syscall holder */
data8 sys32_statfs
data8 sys32_fstatfs /* 100 */
- data8 sys_ioperm
+ data8 sys32_ioperm
data8 sys32_socketcall
data8 sys_syslog
data8 sys32_setitimer
@@ -231,36 +285,36 @@
data8 sys32_newlstat
data8 sys32_newfstat
data8 sys32_ni_syscall
- data8 sys_iopl /* 110 */
+ data8 sys32_iopl /* 110 */
data8 sys_vhangup
data8 sys32_ni_syscall /* used to be sys_idle */
data8 sys32_ni_syscall
data8 sys32_wait4
data8 sys_swapoff /* 115 */
- data8 sys_sysinfo
+ data8 sys32_sysinfo
data8 sys32_ipc
data8 sys_fsync
data8 sys32_sigreturn
- data8 sys_clone /* 120 */
+ data8 ia32_clone /* 120 */
data8 sys_setdomainname
data8 sys32_newuname
data8 sys32_modify_ldt
- data8 sys_adjtimex
+ data8 sys32_ni_syscall /* adjtimex */
data8 sys32_mprotect /* 125 */
- data8 sys_sigprocmask
- data8 sys_create_module
- data8 sys_init_module
- data8 sys_delete_module
- data8 sys_get_kernel_syms /* 130 */
- data8 sys_quotactl
+ data8 sys32_sigprocmask
+ data8 sys32_ni_syscall /* create_module */
+ data8 sys32_ni_syscall /* init_module */
+ data8 sys32_ni_syscall /* delete_module */
+ data8 sys32_ni_syscall /* get_kernel_syms */ /* 130 */
+ data8 sys32_quotactl
data8 sys_getpgid
data8 sys_fchdir
- data8 sys_bdflush
- data8 sys_sysfs /* 135 */
- data8 sys_personality
+ data8 sys32_ni_syscall /* sys_bdflush */
+ data8 sys_sysfs /* 135 */
+ data8 sys32_personality
data8 sys32_ni_syscall /* for afs_syscall */
- data8 sys_setfsuid
- data8 sys_setfsgid
+ data8 sys_setfsuid /* 16-bit version */
+ data8 sys_setfsgid /* 16-bit version */
data8 sys_llseek /* 140 */
data8 sys32_getdents
data8 sys32_select
@@ -282,66 +336,73 @@
data8 sys_sched_yield
data8 sys_sched_get_priority_max
data8 sys_sched_get_priority_min /* 160 */
- data8 sys_sched_rr_get_interval
+ data8 sys32_sched_rr_get_interval
data8 sys32_nanosleep
data8 sys_mremap
- data8 sys_setresuid
- data8 sys32_getresuid /* 165 */
- data8 sys_vm86
- data8 sys_query_module
+ data8 sys_setresuid /* 16-bit version */
+ data8 sys32_getresuid16 /* 16-bit version */ /* 165 */
+ data8 sys32_ni_syscall /* vm86 */
+ data8 sys32_ni_syscall /* sys_query_module */
data8 sys_poll
- data8 sys_nfsservctl
+ data8 sys32_ni_syscall /* nfsservctl */
data8 sys_setresgid /* 170 */
- data8 sys32_getresgid
+ data8 sys32_getresgid16
data8 sys_prctl
data8 sys32_rt_sigreturn
data8 sys32_rt_sigaction
data8 sys32_rt_sigprocmask /* 175 */
data8 sys_rt_sigpending
- data8 sys_rt_sigtimedwait
- data8 sys_rt_sigqueueinfo
- data8 ia32_rt_sigsuspend
- data8 sys_pread /* 180 */
- data8 sys_pwrite
- data8 sys_chown
+ data8 sys32_rt_sigtimedwait
+ data8 sys32_rt_sigqueueinfo
+ data8 sys32_rt_sigsuspend
+ data8 sys32_pread /* 180 */
+ data8 sys32_pwrite
+ data8 sys_chown /* 16-bit version */
data8 sys_getcwd
data8 sys_capget
data8 sys_capset /* 185 */
data8 sys32_sigaltstack
- data8 sys_sendfile
+ data8 sys32_sendfile
data8 sys32_ni_syscall /* streams1 */
data8 sys32_ni_syscall /* streams2 */
data8 sys32_vfork /* 190 */
+ data8 sys32_getrlimit
+ data8 sys32_mmap2
+ data8 sys32_truncate64
+ data8 sys32_ftruncate64
+ data8 sys32_stat64 /* 195 */
+ data8 sys32_lstat64
+ data8 sys32_fstat64
+ data8 sys_lchown
+ data8 sys_getuid
+ data8 sys_getgid /* 200 */
+ data8 sys_geteuid
+ data8 sys_getegid
+ data8 sys_setreuid
+ data8 sys_setregid
+ data8 sys_getgroups /* 205 */
+ data8 sys_setgroups
+ data8 sys_fchown
+ data8 sys_setresuid
+ data8 sys_getresuid
+ data8 sys_setresgid /* 210 */
+ data8 sys_getresgid
+ data8 sys_chown
+ data8 sys_setuid
+ data8 sys_setgid
+ data8 sys_setfsuid /* 215 */
+ data8 sys_setfsgid
+ data8 sys_pivot_root
+ data8 sys_mincore
+ data8 sys_madvise
+ data8 sys_getdents64 /* 220 */
+ data8 sys32_fcntl64
+ data8 sys_ni_syscall /* reserved for TUX */
+ data8 sys_ni_syscall /* reserved for Security */
+ data8 sys_gettid
+ data8 sys_readahead /* 225 */
data8 sys_ni_syscall
data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 195 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 200 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 205 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 210 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 215 */
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall
- data8 sys_ni_syscall /* 220 */
data8 sys_ni_syscall
data8 sys_ni_syscall
/*
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_ioctl.c linux-2.4.13-lia/arch/ia64/ia32/ia32_ioctl.c
--- linux-2.4.13/arch/ia64/ia32/ia32_ioctl.c Thu Jan 4 12:50:17 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ioctl.c Thu Oct 4 00:21:52 2001
@@ -3,6 +3,8 @@
*
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
+ * Copyright (C) 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/types.h>
@@ -22,8 +24,12 @@
#include <linux/if_ppp.h>
#include <linux/ixjuser.h>
#include <linux/i2o-dev.h>
+
+#include <asm/ia32.h>
+
#include <../drivers/char/drm/drm.h>
+
#define IOCTL_NR(a) ((a) & ~(_IOC_SIZEMASK << _IOC_SIZESHIFT))
#define DO_IOCTL(fd, cmd, arg) ({ \
@@ -36,179 +42,200 @@
_ret; \
})
-#define P(i) ((void *)(long)(i))
-
+#define P(i) ((void *)(unsigned long)(i))
asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg);
-asmlinkage long ia32_ioctl(unsigned int fd, unsigned int cmd, unsigned int arg)
+static long
+put_dirent32 (struct dirent *d, struct linux32_dirent *d32)
+{
+ size_t namelen = strlen(d->d_name);
+
+ return (put_user(d->d_ino, &d32->d_ino)
+ || put_user(d->d_off, &d32->d_off)
+ || put_user(d->d_reclen, &d32->d_reclen)
+ || copy_to_user(d32->d_name, d->d_name, namelen + 1));
+}
+
+asmlinkage long
+sys32_ioctl (unsigned int fd, unsigned int cmd, unsigned int arg)
{
long ret;
switch (IOCTL_NR(cmd)) {
-
- case IOCTL_NR(DRM_IOCTL_VERSION):
- {
- drm_version_t ver;
- struct {
- int version_major;
- int version_minor;
- int version_patchlevel;
- unsigned int name_len;
- unsigned int name; /* pointer */
- unsigned int date_len;
- unsigned int date; /* pointer */
- unsigned int desc_len;
- unsigned int desc; /* pointer */
- } ver32;
-
- if (copy_from_user(&ver32, P(arg), sizeof(ver32)))
- return -EFAULT;
- ver.name_len = ver32.name_len;
- ver.name = P(ver32.name);
- ver.date_len = ver32.date_len;
- ver.date = P(ver32.date);
- ver.desc_len = ver32.desc_len;
- ver.desc = P(ver32.desc);
- ret = DO_IOCTL(fd, cmd, &ver);
- if (ret >= 0) {
- ver32.version_major = ver.version_major;
- ver32.version_minor = ver.version_minor;
- ver32.version_patchlevel = ver.version_patchlevel;
- ver32.name_len = ver.name_len;
- ver32.date_len = ver.date_len;
- ver32.desc_len = ver.desc_len;
- if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
- return -EFAULT;
- }
- return(ret);
- }
-
- case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
- {
- drm_unique_t un;
- struct {
- unsigned int unique_len;
- unsigned int unique;
- } un32;
-
- if (copy_from_user(&un32, P(arg), sizeof(un32)))
- return -EFAULT;
- un.unique_len = un32.unique_len;
- un.unique = P(un32.unique);
- ret = DO_IOCTL(fd, cmd, &un);
- if (ret >= 0) {
- un32.unique_len = un.unique_len;
- if (copy_to_user(P(arg), &un32, sizeof(un32)))
- return -EFAULT;
- }
- return(ret);
- }
- case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
- case IOCTL_NR(DRM_IOCTL_ADD_MAP):
- case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
- case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
- case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
- case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
- case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
- case IOCTL_NR(DRM_IOCTL_ADD_CTX):
- case IOCTL_NR(DRM_IOCTL_RM_CTX):
- case IOCTL_NR(DRM_IOCTL_MOD_CTX):
- case IOCTL_NR(DRM_IOCTL_GET_CTX):
- case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
- case IOCTL_NR(DRM_IOCTL_NEW_CTX):
- case IOCTL_NR(DRM_IOCTL_RES_CTX):
-
- case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
- case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
- case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
- case IOCTL_NR(DRM_IOCTL_AGP_INFO):
- case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
- case IOCTL_NR(DRM_IOCTL_AGP_FREE):
- case IOCTL_NR(DRM_IOCTL_AGP_BIND):
- case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
-
- /* Mga specific ioctls */
-
- case IOCTL_NR(DRM_IOCTL_MGA_INIT):
-
- /* I810 specific ioctls */
-
- case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
- case IOCTL_NR(DRM_IOCTL_I810_COPY):
-
- /* Rage 128 specific ioctls */
-
- case IOCTL_NR(DRM_IOCTL_R128_PACKET):
-
- case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH):
- case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
- case IOCTL_NR(MTIOCGET):
- case IOCTL_NR(MTIOCPOS):
- case IOCTL_NR(MTIOCGETCONFIG):
- case IOCTL_NR(MTIOCSETCONFIG):
- case IOCTL_NR(PPPIOCSCOMPRESS):
- case IOCTL_NR(PPPIOCGIDLE):
- case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
- case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
- case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
- case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
- case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
- case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
- case IOCTL_NR(CAPI_MANUFACTURER_CMD):
- case IOCTL_NR(VIDIOCGTUNER):
- case IOCTL_NR(VIDIOCSTUNER):
- case IOCTL_NR(VIDIOCGWIN):
- case IOCTL_NR(VIDIOCSWIN):
- case IOCTL_NR(VIDIOCGFBUF):
- case IOCTL_NR(VIDIOCSFBUF):
- case IOCTL_NR(MGSL_IOCSPARAMS):
- case IOCTL_NR(MGSL_IOCGPARAMS):
- case IOCTL_NR(ATM_GETNAMES):
- case IOCTL_NR(ATM_GETLINKRATE):
- case IOCTL_NR(ATM_GETTYPE):
- case IOCTL_NR(ATM_GETESI):
- case IOCTL_NR(ATM_GETADDR):
- case IOCTL_NR(ATM_RSTADDR):
- case IOCTL_NR(ATM_ADDADDR):
- case IOCTL_NR(ATM_DELADDR):
- case IOCTL_NR(ATM_GETCIRANGE):
- case IOCTL_NR(ATM_SETCIRANGE):
- case IOCTL_NR(ATM_SETESI):
- case IOCTL_NR(ATM_SETESIF):
- case IOCTL_NR(ATM_GETSTAT):
- case IOCTL_NR(ATM_GETSTATZ):
- case IOCTL_NR(ATM_GETLOOP):
- case IOCTL_NR(ATM_SETLOOP):
- case IOCTL_NR(ATM_QUERYLOOP):
- case IOCTL_NR(ENI_SETMULT):
- case IOCTL_NR(NS_GETPSTAT):
- /* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
- case IOCTL_NR(ZATM_GETPOOLZ):
- case IOCTL_NR(ZATM_GETPOOL):
- case IOCTL_NR(ZATM_SETPOOL):
- case IOCTL_NR(ZATM_GETTHIST):
- case IOCTL_NR(IDT77105_GETSTAT):
- case IOCTL_NR(IDT77105_GETSTATZ):
- case IOCTL_NR(IXJCTL_TONE_CADENCE):
- case IOCTL_NR(IXJCTL_FRAMES_READ):
- case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
- case IOCTL_NR(IXJCTL_READ_WAIT):
- case IOCTL_NR(IXJCTL_WRITE_WAIT):
- case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
- case IOCTL_NR(I2OHRTGET):
- case IOCTL_NR(I2OLCTGET):
- case IOCTL_NR(I2OPARMSET):
- case IOCTL_NR(I2OPARMGET):
- case IOCTL_NR(I2OSWDL):
- case IOCTL_NR(I2OSWUL):
- case IOCTL_NR(I2OSWDEL):
- case IOCTL_NR(I2OHTML):
+ case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
+ case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH): {
+ struct linux32_dirent *d32 = P(arg);
+ struct dirent d[2];
+
+ ret = DO_IOCTL(fd, _IOR('r', _IOC_NR(cmd),
+ struct dirent [2]),
+ (unsigned long) d);
+ if (ret < 0)
+ return ret;
+
+ if (put_dirent32(d, d32) || put_dirent32(d + 1, d32 + 1))
+ return -EFAULT;
+
+ return ret;
+ }
+
+ case IOCTL_NR(DRM_IOCTL_VERSION):
+ {
+ drm_version_t ver;
+ struct {
+ int version_major;
+ int version_minor;
+ int version_patchlevel;
+ unsigned int name_len;
+ unsigned int name; /* pointer */
+ unsigned int date_len;
+ unsigned int date; /* pointer */
+ unsigned int desc_len;
+ unsigned int desc; /* pointer */
+ } ver32;
+
+ if (copy_from_user(&ver32, P(arg), sizeof(ver32)))
+ return -EFAULT;
+ ver.name_len = ver32.name_len;
+ ver.name = P(ver32.name);
+ ver.date_len = ver32.date_len;
+ ver.date = P(ver32.date);
+ ver.desc_len = ver32.desc_len;
+ ver.desc = P(ver32.desc);
+ ret = DO_IOCTL(fd, DRM_IOCTL_VERSION, &ver);
+ if (ret >= 0) {
+ ver32.version_major = ver.version_major;
+ ver32.version_minor = ver.version_minor;
+ ver32.version_patchlevel = ver.version_patchlevel;
+ ver32.name_len = ver.name_len;
+ ver32.date_len = ver.date_len;
+ ver32.desc_len = ver.desc_len;
+ if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
+ return -EFAULT;
+ }
+ return ret;
+ }
+
+ case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
+ {
+ drm_unique_t un;
+ struct {
+ unsigned int unique_len;
+ unsigned int unique;
+ } un32;
+
+ if (copy_from_user(&un32, P(arg), sizeof(un32)))
+ return -EFAULT;
+ un.unique_len = un32.unique_len;
+ un.unique = P(un32.unique);
+ ret = DO_IOCTL(fd, DRM_IOCTL_GET_UNIQUE, &un);
+ if (ret >= 0) {
+ un32.unique_len = un.unique_len;
+ if (copy_to_user(P(arg), &un32, sizeof(un32)))
+ return -EFAULT;
+ }
+ return ret;
+ }
+ case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
+ case IOCTL_NR(DRM_IOCTL_ADD_MAP):
+ case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
+ case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
+ case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
+ case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
+ case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
+ case IOCTL_NR(DRM_IOCTL_ADD_CTX):
+ case IOCTL_NR(DRM_IOCTL_RM_CTX):
+ case IOCTL_NR(DRM_IOCTL_MOD_CTX):
+ case IOCTL_NR(DRM_IOCTL_GET_CTX):
+ case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
+ case IOCTL_NR(DRM_IOCTL_NEW_CTX):
+ case IOCTL_NR(DRM_IOCTL_RES_CTX):
+
+ case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
+ case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
+ case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
+ case IOCTL_NR(DRM_IOCTL_AGP_INFO):
+ case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
+ case IOCTL_NR(DRM_IOCTL_AGP_FREE):
+ case IOCTL_NR(DRM_IOCTL_AGP_BIND):
+ case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
+
+ /* Mga specific ioctls */
+
+ case IOCTL_NR(DRM_IOCTL_MGA_INIT):
+
+ /* I810 specific ioctls */
+
+ case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
+ case IOCTL_NR(DRM_IOCTL_I810_COPY):
+
+ case IOCTL_NR(MTIOCGET):
+ case IOCTL_NR(MTIOCPOS):
+ case IOCTL_NR(MTIOCGETCONFIG):
+ case IOCTL_NR(MTIOCSETCONFIG):
+ case IOCTL_NR(PPPIOCSCOMPRESS):
+ case IOCTL_NR(PPPIOCGIDLE):
+ case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
+ case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
+ case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
+ case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
+ case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
+ case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
+ case IOCTL_NR(CAPI_MANUFACTURER_CMD):
+ case IOCTL_NR(VIDIOCGTUNER):
+ case IOCTL_NR(VIDIOCSTUNER):
+ case IOCTL_NR(VIDIOCGWIN):
+ case IOCTL_NR(VIDIOCSWIN):
+ case IOCTL_NR(VIDIOCGFBUF):
+ case IOCTL_NR(VIDIOCSFBUF):
+ case IOCTL_NR(MGSL_IOCSPARAMS):
+ case IOCTL_NR(MGSL_IOCGPARAMS):
+ case IOCTL_NR(ATM_GETNAMES):
+ case IOCTL_NR(ATM_GETLINKRATE):
+ case IOCTL_NR(ATM_GETTYPE):
+ case IOCTL_NR(ATM_GETESI):
+ case IOCTL_NR(ATM_GETADDR):
+ case IOCTL_NR(ATM_RSTADDR):
+ case IOCTL_NR(ATM_ADDADDR):
+ case IOCTL_NR(ATM_DELADDR):
+ case IOCTL_NR(ATM_GETCIRANGE):
+ case IOCTL_NR(ATM_SETCIRANGE):
+ case IOCTL_NR(ATM_SETESI):
+ case IOCTL_NR(ATM_SETESIF):
+ case IOCTL_NR(ATM_GETSTAT):
+ case IOCTL_NR(ATM_GETSTATZ):
+ case IOCTL_NR(ATM_GETLOOP):
+ case IOCTL_NR(ATM_SETLOOP):
+ case IOCTL_NR(ATM_QUERYLOOP):
+ case IOCTL_NR(ENI_SETMULT):
+ case IOCTL_NR(NS_GETPSTAT):
+ /* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
+ case IOCTL_NR(ZATM_GETPOOLZ):
+ case IOCTL_NR(ZATM_GETPOOL):
+ case IOCTL_NR(ZATM_SETPOOL):
+ case IOCTL_NR(ZATM_GETTHIST):
+ case IOCTL_NR(IDT77105_GETSTAT):
+ case IOCTL_NR(IDT77105_GETSTATZ):
+ case IOCTL_NR(IXJCTL_TONE_CADENCE):
+ case IOCTL_NR(IXJCTL_FRAMES_READ):
+ case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
+ case IOCTL_NR(IXJCTL_READ_WAIT):
+ case IOCTL_NR(IXJCTL_WRITE_WAIT):
+ case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
+ case IOCTL_NR(I2OHRTGET):
+ case IOCTL_NR(I2OLCTGET):
+ case IOCTL_NR(I2OPARMSET):
+ case IOCTL_NR(I2OPARMGET):
+ case IOCTL_NR(I2OSWDL):
+ case IOCTL_NR(I2OSWUL):
+ case IOCTL_NR(I2OSWDEL):
+ case IOCTL_NR(I2OHTML):
break;
- default:
- return(sys_ioctl(fd, cmd, (unsigned long)arg));
+ default:
+ return sys_ioctl(fd, cmd, (unsigned long)arg);
}
printk("%x:unimplemented IA32 ioctl system call\n", cmd);
- return(-EINVAL);
+ return -EINVAL;
}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_ldt.c linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c
--- linux-2.4.13/arch/ia64/ia32/ia32_ldt.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c Wed Oct 24 18:12:38 2001
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Adapted from arch/i386/kernel/ldt.c
*/
@@ -16,6 +16,8 @@
#include <asm/uaccess.h>
#include <asm/ia32.h>
+#define P(p) ((void *) (unsigned long) (p))
+
/*
* read_ldt() is not really atomic - this is not a problem since synchronization of reads
* and writes done to the LDT has to be assured by user-space anyway. Writes are atomic,
@@ -58,10 +60,30 @@
}
static int
+read_default_ldt (void * ptr, unsigned long bytecount)
+{
+ unsigned long size;
+ int err;
+
+ /* XXX fix me: should return equivalent of default_ldt[0] */
+ err = 0;
+ size = 8;
+ if (size > bytecount)
+ size = bytecount;
+
+ err = size;
+ if (clear_user(ptr, size))
+ err = -EFAULT;
+
+ return err;
+}
+
+static int
write_ldt (void * ptr, unsigned long bytecount, int oldmode)
{
struct ia32_modify_ldt_ldt_s ldt_info;
__u64 entry;
+ int ret;
if (bytecount != sizeof(ldt_info))
return -EINVAL;
@@ -97,23 +119,28 @@
* memory, but we still need to guard against out-of-memory, hence we must use
* put_user().
*/
- return __put_user(entry, (__u64 *) IA32_LDT_OFFSET + ldt_info.entry_number);
+ ret = __put_user(entry, (__u64 *) IA32_LDT_OFFSET + ldt_info.entry_number);
+ ia32_load_segment_descriptors(current);
+ return ret;
}
asmlinkage int
-sys32_modify_ldt (int func, void *ptr, unsigned int bytecount)
+sys32_modify_ldt (int func, unsigned int ptr, unsigned int bytecount)
{
int ret = -ENOSYS;
switch (func) {
case 0:
- ret = read_ldt(ptr, bytecount);
+ ret = read_ldt(P(ptr), bytecount);
break;
case 1:
- ret = write_ldt(ptr, bytecount, 1);
+ ret = write_ldt(P(ptr), bytecount, 1);
+ break;
+ case 2:
+ ret = read_default_ldt(P(ptr), bytecount);
break;
case 0x11:
- ret = write_ldt(ptr, bytecount, 0);
+ ret = write_ldt(P(ptr), bytecount, 0);
break;
}
return ret;
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_signal.c linux-2.4.13-lia/arch/ia64/ia32/ia32_signal.c
--- linux-2.4.13/arch/ia64/ia32/ia32_signal.c Mon Oct 9 17:54:53 2000
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_signal.c Wed Oct 10 17:38:49 2001
@@ -1,8 +1,8 @@
/*
* IA32 Architecture-specific signal handling support.
*
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
@@ -13,6 +13,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/personality.h>
#include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/signal.h>
@@ -28,9 +29,15 @@
#include <asm/segment.h>
#include <asm/ia32.h>
+#include "../kernel/sigframe.h"
+
+#define A(__x) ((unsigned long)(__x))
+
#define DEBUG_SIG 0
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+#define __IA32_NR_sigreturn 119
+#define __IA32_NR_rt_sigreturn 173
struct sigframe_ia32
{
@@ -54,12 +61,51 @@
char retcode[8];
};
-static int
+int
+copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from)
+{
+ unsigned long tmp;
+ int err;
+
+ if (!access_ok(VERIFY_READ, from, sizeof(siginfo_t32)))
+ return -EFAULT;
+
+ err = __get_user(to->si_signo, &from->si_signo);
+ err |= __get_user(to->si_errno, &from->si_errno);
+ err |= __get_user(to->si_code, &from->si_code);
+
+ if (from->si_code < 0)
+ err |= __copy_from_user(&to->_sifields._pad, &from->_sifields._pad, SI_PAD_SIZE);
+ else {
+ switch (from->si_code >> 16) {
+ case __SI_CHLD >> 16:
+ err |= __get_user(to->si_utime, &from->si_utime);
+ err |= __get_user(to->si_stime, &from->si_stime);
+ err |= __get_user(to->si_status, &from->si_status);
+ default:
+ err |= __get_user(to->si_pid, &from->si_pid);
+ err |= __get_user(to->si_uid, &from->si_uid);
+ break;
+ case __SI_FAULT >> 16:
+ err |= __get_user(tmp, &from->si_addr);
+ to->si_addr = (void *) tmp;
+ break;
+ case __SI_POLL >> 16:
+ err |= __get_user(to->si_band, &from->si_band);
+ err |= __get_user(to->si_fd, &from->si_fd);
+ break;
+ /* case __SI_RT: This is not generated by the kernel as of now. */
+ }
+ }
+ return err;
+}
+
+int
copy_siginfo_to_user32 (siginfo_t32 *to, siginfo_t *from)
{
int err;
- if (!access_ok (VERIFY_WRITE, to, sizeof(siginfo_t32)))
+ if (!access_ok(VERIFY_WRITE, to, sizeof(siginfo_t32)))
return -EFAULT;
/* If you change siginfo_t structure, please be sure
@@ -97,110 +143,329 @@
return err;
}
+static inline void
+sigact_set_handler (struct k_sigaction *sa, unsigned int handler, unsigned int restorer)
+{
+ if (handler + 1 <= 2)
+ /* SIG_DFL, SIG_IGN, or SIG_ERR: must sign-extend to 64-bits */
+ sa->sa.sa_handler = (__sighandler_t) A((int) handler);
+ else
+ sa->sa.sa_handler = (__sighandler_t) (((unsigned long) restorer << 32) | handler);
+}
+asmlinkage long
+ia32_rt_sigsuspend (sigset32_t *uset, unsigned int sigsetsize, struct sigscratch *scr)
+{
+ extern long ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall);
+ sigset_t oldset, set;
-static int
-setup_sigcontext_ia32(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
- struct pt_regs *regs, unsigned long mask)
+ scr->scratch_unat = 0; /* avoid leaking kernel bits to user level */
+	memset(&set, 0, sizeof(set));
+
+ if (sigsetsize > sizeof(sigset_t))
+ return -EINVAL;
+
+ if (copy_from_user(&set.sig, &uset->sig, sigsetsize))
+ return -EFAULT;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+	spin_lock_irq(&current->sigmask_lock);
+ {
+ oldset = current->blocked;
+ current->blocked = set;
+ recalc_sigpending(current);
+ }
+	spin_unlock_irq(&current->sigmask_lock);
+
+ /*
+ * The return below usually returns to the signal handler. We need to pre-set the
+ * correct error code here to ensure that the right values get saved in sigcontext
+ * by ia64_do_signal.
+ */
+ scr->pt.r8 = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ if (ia64_do_signal(&oldset, scr, 1))
+ return -EINTR;
+ }
+}
+
+asmlinkage long
+ia32_sigsuspend (unsigned int mask, struct sigscratch *scr)
+{
+ return ia32_rt_sigsuspend((sigset32_t *)&mask, sizeof(mask), scr);
+}
+
+asmlinkage long
+sys32_signal (int sig, unsigned int handler)
+{
+ struct k_sigaction new_sa, old_sa;
+ int ret;
+
+ sigact_set_handler(&new_sa, handler, 0);
+ new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
+
+ ret = do_sigaction(sig, &new_sa, &old_sa);
+
+ return ret ? ret : IA32_SA_HANDLER(&old_sa);
+}
+
+asmlinkage long
+sys32_rt_sigaction (int sig, struct sigaction32 *act,
+ struct sigaction32 *oact, unsigned int sigsetsize)
+{
+ struct k_sigaction new_ka, old_ka;
+ unsigned int handler, restorer;
+ int ret;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset32_t))
+ return -EINVAL;
+
+ if (act) {
+ ret = get_user(handler, &act->sa_handler);
+ ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= get_user(restorer, &act->sa_restorer);
+ ret |= copy_from_user(&new_ka.sa.sa_mask, &act->sa_mask, sizeof(sigset32_t));
+ if (ret)
+ return -EFAULT;
+
+ sigact_set_handler(&new_ka, handler, restorer);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+ ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+ ret |= copy_to_user(&oact->sa_mask, &old_ka.sa.sa_mask, sizeof(sigset32_t));
+ }
+ return ret;
+}
+
+
+extern asmlinkage long sys_rt_sigprocmask (int how, sigset_t *set, sigset_t *oset,
+ size_t sigsetsize);
+
+asmlinkage long
+sys32_rt_sigprocmask (int how, sigset32_t *set, sigset32_t *oset, unsigned int sigsetsize)
+{
+ mm_segment_t old_fs = get_fs();
+ sigset_t s;
+ long ret;
+
+ if (sigsetsize > sizeof(s))
+ return -EINVAL;
+
+ if (set) {
+ memset(&s, 0, sizeof(s));
+ if (copy_from_user(&s.sig, set, sigsetsize))
+ return -EFAULT;
+ }
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL, sizeof(s));
+ set_fs(old_fs);
+ if (ret)
+ return ret;
+ if (oset) {
+ if (copy_to_user(oset, &s.sig, sigsetsize))
+ return -EFAULT;
+ }
+ return 0;
+}
+
+asmlinkage long
+sys32_sigprocmask (int how, unsigned int *set, unsigned int *oset)
{
- int err = 0;
- unsigned long flag;
+ return sys32_rt_sigprocmask(how, (sigset32_t *) set, (sigset32_t *) oset, sizeof(*set));
+}
- err |= __put_user((regs->r16 >> 32) & 0xffff , (unsigned int *)&sc->fs);
- err |= __put_user((regs->r16 >> 48) & 0xffff , (unsigned int *)&sc->gs);
+asmlinkage long
+sys32_rt_sigtimedwait (sigset32_t *uthese, siginfo_t32 *uinfo, struct timespec32 *uts,
+ unsigned int sigsetsize)
+{
+ extern asmlinkage long sys_rt_sigtimedwait (const sigset_t *, siginfo_t *,
+ const struct timespec *, size_t);
+ extern int copy_siginfo_to_user32 (siginfo_t32 *, siginfo_t *);
+ mm_segment_t old_fs = get_fs();
+ struct timespec t;
+ siginfo_t info;
+ sigset_t s;
+ int ret;
- err |= __put_user((regs->r16 >> 56) & 0xffff, (unsigned int *)&sc->es);
- err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
- err |= __put_user(regs->r15, &sc->edi);
- err |= __put_user(regs->r14, &sc->esi);
- err |= __put_user(regs->r13, &sc->ebp);
- err |= __put_user(regs->r12, &sc->esp);
- err |= __put_user(regs->r11, &sc->ebx);
- err |= __put_user(regs->r10, &sc->edx);
- err |= __put_user(regs->r9, &sc->ecx);
- err |= __put_user(regs->r8, &sc->eax);
+ if (copy_from_user(&s.sig, uthese, sizeof(sigset32_t)))
+ return -EFAULT;
+ if (uts) {
+ ret = get_user(t.tv_sec, &uts->tv_sec);
+ ret |= get_user(t.tv_nsec, &uts->tv_nsec);
+ if (ret)
+ return -EFAULT;
+ }
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
+ set_fs(old_fs);
+ if (ret >= 0 && uinfo) {
+ if (copy_siginfo_to_user32(uinfo, &info))
+ return -EFAULT;
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_rt_sigqueueinfo (int pid, int sig, siginfo_t32 *uinfo)
+{
+ extern asmlinkage long sys_rt_sigqueueinfo (int, int, siginfo_t *);
+ extern int copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from);
+ mm_segment_t old_fs = get_fs();
+ siginfo_t info;
+ int ret;
+
+ if (copy_siginfo_from_user32(&info, uinfo))
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_rt_sigqueueinfo(pid, sig, &info);
+ set_fs(old_fs);
+ return ret;
+}
+
+asmlinkage long
+sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ unsigned int handler, restorer;
+ int ret;
+
+ if (act) {
+ old_sigset32_t mask;
+
+ ret = get_user(handler, &act->sa_handler);
+ ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= get_user(restorer, &act->sa_restorer);
+ ret |= get_user(mask, &act->sa_mask);
+ if (ret)
+ return ret;
+
+ sigact_set_handler(&new_ka, handler, restorer);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+ ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+ ret |= put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+static int
+setup_sigcontext_ia32 (struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
+ struct pt_regs *regs, unsigned long mask)
+{
+ int err = 0;
+ unsigned long flag;
+
+ err |= __put_user((regs->r16 >> 32) & 0xffff, (unsigned int *)&sc->fs);
+ err |= __put_user((regs->r16 >> 48) & 0xffff, (unsigned int *)&sc->gs);
+ err |= __put_user((regs->r16 >> 16) & 0xffff, (unsigned int *)&sc->es);
+ err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
+ err |= __put_user(regs->r15, &sc->edi);
+ err |= __put_user(regs->r14, &sc->esi);
+ err |= __put_user(regs->r13, &sc->ebp);
+ err |= __put_user(regs->r12, &sc->esp);
+ err |= __put_user(regs->r11, &sc->ebx);
+ err |= __put_user(regs->r10, &sc->edx);
+ err |= __put_user(regs->r9, &sc->ecx);
+ err |= __put_user(regs->r8, &sc->eax);
#if 0
- err |= __put_user(current->tss.trap_no, &sc->trapno);
- err |= __put_user(current->tss.error_code, &sc->err);
+ err |= __put_user(current->tss.trap_no, &sc->trapno);
+ err |= __put_user(current->tss.error_code, &sc->err);
#endif
- err |= __put_user(regs->cr_iip, &sc->eip);
- err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
- /*
- * `eflags' is in an ar register for this context
- */
- asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
- err |= __put_user((unsigned int)flag, &sc->eflags);
-
- err |= __put_user(regs->r12, &sc->esp_at_signal);
- err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
+ err |= __put_user(regs->cr_iip, &sc->eip);
+ err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
+ /*
+ * `eflags' is in an ar register for this context
+ */
+ asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
+ err |= __put_user((unsigned int)flag, &sc->eflags);
+ err |= __put_user(regs->r12, &sc->esp_at_signal);
+ err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
#if 0
- tmp = save_i387(fpstate);
- if (tmp < 0)
- err = 1;
- else
- err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
+ tmp = save_i387(fpstate);
+ if (tmp < 0)
+ err = 1;
+ else
+ err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
- /* non-iBCS2 extensions.. */
+ /* non-iBCS2 extensions.. */
#endif
- err |= __put_user(mask, &sc->oldmask);
+ err |= __put_user(mask, &sc->oldmask);
#if 0
- err |= __put_user(current->tss.cr2, &sc->cr2);
+ err |= __put_user(current->tss.cr2, &sc->cr2);
#endif
-
- return err;
+ return err;
}
static int
-restore_sigcontext_ia32(struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
+restore_sigcontext_ia32 (struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
{
- unsigned int err = 0;
+ unsigned int err = 0;
+
+#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
-#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
+#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
+#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
+#define copyseg_cs(tmp) (regs->r17 |= tmp)
+#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
+#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
+#define copyseg_ds(tmp) (regs->r16 |= tmp)
+
+#define COPY_SEG(seg) \
+ { \
+ unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp); \
+ }
+#define COPY_SEG_STRICT(seg) \
+ { \
+ unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp|3); \
+ }
-#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
-#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
-#define copyseg_cs(tmp) (regs->r17 |= tmp)
-#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
-#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
-#define copyseg_ds(tmp) (regs->r16 |= tmp)
-
-#define COPY_SEG(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
- copyseg_##seg(tmp); }
-
-#define COPY_SEG_STRICT(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
- copyseg_##seg(tmp|3); }
-
- /* To make COPY_SEGs easier, we zero r16, r17 */
- regs->r16 = 0;
- regs->r17 = 0;
-
- COPY_SEG(gs);
- COPY_SEG(fs);
- COPY_SEG(es);
- COPY_SEG(ds);
- COPY(r15, edi);
- COPY(r14, esi);
- COPY(r13, ebp);
- COPY(r12, esp);
- COPY(r11, ebx);
- COPY(r10, edx);
- COPY(r9, ecx);
- COPY(cr_iip, eip);
- COPY_SEG_STRICT(cs);
- COPY_SEG_STRICT(ss);
- {
+ /* To make COPY_SEGs easier, we zero r16, r17 */
+ regs->r16 = 0;
+ regs->r17 = 0;
+
+ COPY_SEG(gs);
+ COPY_SEG(fs);
+ COPY_SEG(es);
+ COPY_SEG(ds);
+ COPY(r15, edi);
+ COPY(r14, esi);
+ COPY(r13, ebp);
+ COPY(r12, esp);
+ COPY(r11, ebx);
+ COPY(r10, edx);
+ COPY(r9, ecx);
+ COPY(cr_iip, eip);
+ COPY_SEG_STRICT(cs);
+ COPY_SEG_STRICT(ss);
+ ia32_load_segment_descriptors(current);
+ {
unsigned int tmpflags;
unsigned long flag;
/*
- * IA32 `eflags' is not part of `pt_regs', it's
- * in an ar register which is part of the thread
- * context. Fortunately, we are executing in the
+ * IA32 `eflags' is not part of `pt_regs', it's in an ar register which
+ * is part of the thread context. Fortunately, we are executing in the
* IA32 process's context.
*/
err |= __get_user(tmpflags, &sc->eflags);
@@ -210,186 +475,191 @@
asm volatile ("mov ar.eflag=%0 ;;" :: "r"(flag));
regs->r1 = -1; /* disable syscall checks, r1 is orig_eax */
- }
+ }
#if 0
- {
- struct _fpstate * buf;
- err |= __get_user(buf, &sc->fpstate);
- if (buf) {
- if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
- goto badframe;
- err |= restore_i387(buf);
- }
- }
+ {
+ struct _fpstate * buf;
+ err |= __get_user(buf, &sc->fpstate);
+ if (buf) {
+ if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
+ goto badframe;
+ err |= restore_i387(buf);
+ }
+ }
#endif
- err |= __get_user(*peax, &sc->eax);
- return err;
+ err |= __get_user(*peax, &sc->eax);
+ return err;
-#if 0
-badframe:
- return 1;
+#if 0
+ badframe:
+ return 1;
#endif
-
}
/*
* Determine which stack to use..
*/
static inline void *
-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+get_sigframe (struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
{
- unsigned long esp;
- unsigned int xss;
+ unsigned long esp;
- /* Default to using normal stack */
- esp = regs->r12;
- xss = regs->r16 >> 16;
-
- /* This is the X/Open sanctioned signal stack switching. */
- if (ka->sa.sa_flags & SA_ONSTACK) {
- if (! on_sig_stack(esp))
- esp = current->sas_ss_sp + current->sas_ss_size;
- }
- /* Legacy stack switching not supported */
-
- return (void *)((esp - frame_size) & -8ul);
+ /* Default to using normal stack (truncate off sign-extension of bit 31: */
+ esp = (unsigned int) regs->r12;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+ if (!on_sig_stack(esp))
+ esp = current->sas_ss_sp + current->sas_ss_size;
+ }
+ /* Legacy stack switching not supported */
+
+ return (void *)((esp - frame_size) & -8ul);
}
static int
-setup_frame_ia32(int sig, struct k_sigaction *ka, sigset_t *set,
- struct pt_regs * regs)
-{
- struct sigframe_ia32 *frame;
- int err = 0;
-
- frame = get_sigframe(ka, regs, sizeof(*frame));
-
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- err |= __put_user((current->exec_domain
- && current->exec_domain->signal_invmap
- && sig < 32
- ? (int)(current->exec_domain->signal_invmap[sig])
- : sig),
- &frame->sig);
-
- err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
-
- if (_IA32_NSIG_WORDS > 1) {
- err |= __copy_to_user(frame->extramask, &set->sig[1],
- sizeof(frame->extramask));
- }
-
- /* Set up to return from userspace. If provided, use a stub
- already in userspace. */
- err |= __put_user((long)frame->retcode, &frame->pretcode);
- /* This is popl %eax ; movl $,%eax ; int $0x80 */
- err |= __put_user(0xb858, (short *)(frame->retcode+0));
-#define __IA32_NR_sigreturn 119
- err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
- err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
- err |= __put_user(0x80cd, (short *)(frame->retcode+6));
-
- if (err)
- goto give_sigsegv;
-
- /* Set up registers for signal handler */
- regs->r12 = (unsigned long) frame;
- regs->cr_iip = (unsigned long) ka->sa.sa_handler;
-
- set_fs(USER_DS);
- regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS;
+setup_frame_ia32 (int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs)
+{
+ struct sigframe_ia32 *frame;
+ int err = 0;
+
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? (int)(current->exec_domain->signal_invmap[sig])
+ : sig),
+ &frame->sig);
+
+ err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
+
+ if (_IA32_NSIG_WORDS > 1)
+ err |= __copy_to_user(frame->extramask, (char *) &set->sig + 4,
+ sizeof(frame->extramask));
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ unsigned int restorer = IA32_SA_RESTORER(ka);
+ err |= __put_user(restorer, &frame->pretcode);
+ } else {
+ err |= __put_user((long)frame->retcode, &frame->pretcode);
+ /* This is popl %eax ; movl $,%eax ; int $0x80 */
+ err |= __put_user(0xb858, (short *)(frame->retcode+0));
+ err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
+ err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+6));
+ }
+
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = IA32_SA_HANDLER(ka);
+
+ set_fs(USER_DS);
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
#if 0
- regs->eflags &= ~TF_MASK;
+ regs->eflags &= ~TF_MASK;
#endif
#if 0
- printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
+ printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
current->comm, current->pid, sig, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
- return 1;
+ return 1;
-give_sigsegv:
-	if (sig == SIGSEGV)
- ka->sa.sa_handler = SIG_DFL;
- force_sig(SIGSEGV, current);
- return 0;
+ give_sigsegv:
+	if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+ return 0;
}
static int
-setup_rt_frame_ia32(int sig, struct k_sigaction *ka, siginfo_t *info,
- sigset_t *set, struct pt_regs * regs)
+setup_rt_frame_ia32 (int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs * regs)
{
- struct rt_sigframe_ia32 *frame;
- int err = 0;
+ struct rt_sigframe_ia32 *frame;
+ int err = 0;
- frame = get_sigframe(ka, regs, sizeof(*frame));
+ frame = get_sigframe(ka, regs, sizeof(*frame));
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- err |= __put_user((current->exec_domain
- && current->exec_domain->signal_invmap
- && sig < 32
- ? current->exec_domain->signal_invmap[sig]
- : sig),
- &frame->sig);
- err |= __put_user((long)&frame->info, &frame->pinfo);
- err |= __put_user((long)&frame->uc, &frame->puc);
- err |= copy_siginfo_to_user32(&frame->info, info);
-
- /* Create the ucontext. */
- err |= __put_user(0, &frame->uc.uc_flags);
- err |= __put_user(0, &frame->uc.uc_link);
- err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
- err |= __put_user(sas_ss_flags(regs->r12),
- &frame->uc.uc_stack.ss_flags);
- err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
- err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate,
- regs, set->sig[0]);
- err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
-
- err |= __put_user((long)frame->retcode, &frame->pretcode);
- /* This is movl $,%eax ; int $0x80 */
- err |= __put_user(0xb8, (char *)(frame->retcode+0));
-#define __IA32_NR_rt_sigreturn 173
- err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
- err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? current->exec_domain->signal_invmap[sig]
+ : sig),
+ &frame->sig);
+ err |= __put_user((long)&frame->info, &frame->pinfo);
+ err |= __put_user((long)&frame->uc, &frame->puc);
+ err |= copy_siginfo_to_user32(&frame->info, info);
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(regs->r12), &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate, regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ unsigned int restorer = IA32_SA_RESTORER(ka);
+ err |= __put_user(restorer, &frame->pretcode);
+ } else {
+ err |= __put_user((long)frame->retcode, &frame->pretcode);
+ /* This is movl $,%eax ; int $0x80 */
+ err |= __put_user(0xb8, (char *)(frame->retcode+0));
+ err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+ }
- if (err)
- goto give_sigsegv;
+ if (err)
+ goto give_sigsegv;
- /* Set up registers for signal handler */
- regs->r12 = (unsigned long) frame;
- regs->cr_iip = (unsigned long) ka->sa.sa_handler;
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = IA32_SA_HANDLER(ka);
- set_fs(USER_DS);
+ set_fs(USER_DS);
- regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
- regs->r17 = (__USER_DS << 16) | __USER_CS;
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES = DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
#if 0
- regs->eflags &= ~TF_MASK;
+ regs->eflags &= ~TF_MASK;
#endif
#if 0
- printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
+ printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
current->comm, current->pid, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
- return 1;
+ return 1;
give_sigsegv:
-	if (sig == SIGSEGV)
- ka->sa.sa_handler = SIG_DFL;
- force_sig(SIGSEGV, current);
- return 0;
+	if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+ return 0;
}
int
@@ -398,95 +668,78 @@
{
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
- return(setup_rt_frame_ia32(sig, ka, info, set, regs));
+ return setup_rt_frame_ia32(sig, ka, info, set, regs);
else
- return(setup_frame_ia32(sig, ka, set, regs));
+ return setup_frame_ia32(sig, ka, set, regs);
}
-asmlinkage int
-sys32_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
- struct pt_regs *regs = (struct pt_regs *) &stack;
-	struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(regs->r12 - 8);
- sigset_t set;
- int eax;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
-
- if (__get_user(set.sig[0], &frame->sc.oldmask)
- || (_IA32_NSIG_WORDS > 1
- && __copy_from_user((((char *) &set.sig) + 4),
- &frame->extramask,
- sizeof(frame->extramask))))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sigmask_lock);
- current->blocked = (sigset_t) set;
- recalc_sigpending(current);
-	spin_unlock_irq(&current->sigmask_lock);
-
- if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
- goto badframe;
- return eax;
-
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
-
-asmlinkage int
-sys32_rt_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
- struct pt_regs *regs = (struct pt_regs *) &stack;
- struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(regs->r12 - 4);
- sigset_t set;
- stack_t st;
- int eax;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
- if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sigmask_lock);
- current->blocked = set;
- recalc_sigpending(current);
-	spin_unlock_irq(&current->sigmask_lock);
-
- if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
- goto badframe;
-
- if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
- goto badframe;
- /* It is more difficult to avoid calling this function than to
- call it and ignore errors. */
- do_sigaltstack(&st, NULL, regs->r12);
-
- return eax;
-
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
+asmlinkage long
+sys32_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long esp = (unsigned int) regs->r12;
+ struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(esp - 8);
+ sigset_t set;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_IA32_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sigmask_lock);
+ current->blocked = (sigset_t) set;
+ recalc_sigpending(current);
+	spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
+ goto badframe;
+ return eax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+asmlinkage long
+sys32_rt_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long esp = (unsigned int) regs->r12;
+ struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(esp - 4);
+ sigset_t set;
+ stack_t st;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sigmask_lock);
+ current->blocked = set;
+ recalc_sigpending(current);
+	spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
+ goto badframe;
+
+ if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+ goto badframe;
+ /* It is more difficult to avoid calling this function than to
+ call it and ignore errors. */
+ do_sigaltstack(&st, NULL, esp);
+
+ return eax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_support.c linux-2.4.13-lia/arch/ia64/ia32/ia32_support.c
--- linux-2.4.13/arch/ia64/ia32/ia32_support.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_support.c Wed Oct 10 17:39:02 2001
@@ -4,15 +4,18 @@
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 2000 Asit K. Mallick <asit.k.mallick@intel.com>
* Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 06/16/00 A. Mallick added csd/ssd/tssd for ia32 thread context
* 02/19/01 D. Mosberger dropped tssd; it's not needed
+ * 09/14/01 D. Mosberger fixed memory management for gdt/tss page
+ * 09/29/01 D. Mosberger added ia32_load_segment_descriptors()
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>
+#include <linux/personality.h>
#include <linux/sched.h>
#include <asm/page.h>
@@ -21,10 +24,46 @@
#include <asm/processor.h>
#include <asm/ia32.h>
-extern unsigned long *ia32_gdt_table, *ia32_tss;
-
extern void die_if_kernel (char *str, struct pt_regs *regs, long err);
+struct exec_domain ia32_exec_domain;
+struct page *ia32_shared_page[(2*IA32_PAGE_SIZE + PAGE_SIZE - 1)/PAGE_SIZE];
+unsigned long *ia32_gdt;
+
+static unsigned long
+load_desc (u16 selector)
+{
+ unsigned long *table, limit, index;
+
+ if (!selector)
+ return 0;
+ if (selector & IA32_SEGSEL_TI) {
+ table = (unsigned long *) IA32_LDT_OFFSET;
+ limit = IA32_LDT_ENTRIES;
+ } else {
+ table = ia32_gdt;
+ limit = IA32_PAGE_SIZE / sizeof(ia32_gdt[0]);
+ }
+ index = selector >> IA32_SEGSEL_INDEX_SHIFT;
+ if (index >= limit)
+ return 0;
+ return IA32_SEG_UNSCRAMBLE(table[index]);
+}
+
+void
+ia32_load_segment_descriptors (struct task_struct *task)
+{
+ struct pt_regs *regs = ia64_task_regs(task);
+
+ /* Setup the segment descriptors */
+ regs->r24 = load_desc(regs->r16 >> 16); /* ESD */
+ regs->r27 = load_desc(regs->r16 >> 0); /* DSD */
+ regs->r28 = load_desc(regs->r16 >> 32); /* FSD */
+ regs->r29 = load_desc(regs->r16 >> 48); /* GSD */
+ task->thread.csd = load_desc(regs->r17 >> 0); /* CSD */
+ task->thread.ssd = load_desc(regs->r17 >> 16); /* SSD */
+}
+
void
ia32_save_state (struct task_struct *t)
{
@@ -46,14 +85,17 @@
t->thread.csd = csd;
t->thread.ssd = ssd;
ia64_set_kr(IA64_KR_IO_BASE, t->thread.old_iob);
+ ia64_set_kr(IA64_KR_TSSD, t->thread.old_k1);
}
void
ia32_load_state (struct task_struct *t)
{
- unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd;
+ unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd, tssd;
struct pt_regs *regs = ia64_task_regs(t);
- int nr;
+	int nr = smp_processor_id();	/* LDT and TSS depend on CPU number: */
+
eflag = t->thread.eflag;
fsr = t->thread.fsr;
@@ -62,6 +104,7 @@
fdr = t->thread.fdr;
csd = t->thread.csd;
ssd = t->thread.ssd;
+ tssd = load_desc(_TSS(nr)); /* TSSD */
asm volatile ("mov ar.eflag=%0;"
"mov ar.fsr=%1;"
@@ -72,11 +115,12 @@
"mov ar.ssd=%6;"
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr), "r"(csd), "r"(ssd));
current->thread.old_iob = ia64_get_kr(IA64_KR_IO_BASE);
+ current->thread.old_k1 = ia64_get_kr(IA64_KR_TSSD);
ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE);
+ ia64_set_kr(IA64_KR_TSSD, tssd);
- /* load TSS and LDT while preserving SS and CS: */
- nr = smp_processor_id();
regs->r17 = (_TSS(nr) << 48) | (_LDT(nr) << 32) | (__u32) regs->r17;
+ regs->r30 = load_desc(_LDT(nr)); /* LDTD */
}
/*
@@ -85,36 +129,34 @@
void
ia32_gdt_init (void)
{
- unsigned long gdt_and_tss_page, ldt_size;
+ unsigned long *tss;
+ unsigned long ldt_size;
int nr;
- /* allocate two IA-32 pages of memory: */
- gdt_and_tss_page = __get_free_pages(GFP_KERNEL,
- (IA32_PAGE_SHIFT < PAGE_SHIFT)
- ? 0 : (IA32_PAGE_SHIFT + 1) - PAGE_SHIFT);
- ia32_gdt_table = (unsigned long *) gdt_and_tss_page;
- ia32_tss = (unsigned long *) (gdt_and_tss_page + IA32_PAGE_SIZE);
-
- /* Zero the gdt and tss */
- memset((void *) gdt_and_tss_page, 0, 2*IA32_PAGE_SIZE);
+ ia32_shared_page[0] = alloc_page(GFP_KERNEL);
+ ia32_gdt = page_address(ia32_shared_page[0]);
+ tss = ia32_gdt + IA32_PAGE_SIZE/sizeof(ia32_gdt[0]);
+
+	if (IA32_PAGE_SIZE == PAGE_SIZE) {
+ ia32_shared_page[1] = alloc_page(GFP_KERNEL);
+ tss = page_address(ia32_shared_page[1]);
+ }
/* CS descriptor in IA-32 (scrambled) format */
-	ia32_gdt_table[__USER_CS >> 3] =
-		IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET - 1) >> IA32_PAGE_SHIFT,
- 0xb, 1, 3, 1, 1, 1, 1);
+ ia32_gdt[__USER_CS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) >> IA32_PAGE_SHIFT,
+ 0xb, 1, 3, 1, 1, 1, 1);
/* DS descriptor in IA-32 (scrambled) format */
-	ia32_gdt_table[__USER_DS >> 3] =
-		IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET - 1) >> IA32_PAGE_SHIFT,
- 0x3, 1, 3, 1, 1, 1, 1);
+ ia32_gdt[__USER_DS >> 3] = IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) >> IA32_PAGE_SHIFT,
+ 0x3, 1, 3, 1, 1, 1, 1);
/* We never change the TSS and LDT descriptors, so we can share them across all CPUs. */
ldt_size = PAGE_ALIGN(IA32_LDT_ENTRIES*IA32_LDT_ENTRY_SIZE);
for (nr = 0; nr < NR_CPUS; ++nr) {
- ia32_gdt_table[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235,
- 0xb, 0, 3, 1, 1, 1, 0);
- ia32_gdt_table[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1,
- 0x2, 0, 3, 1, 1, 1, 0);
+ ia32_gdt[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235,
+ 0xb, 0, 3, 1, 1, 1, 0);
+ ia32_gdt[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1,
+ 0x2, 0, 3, 1, 1, 1, 0);
}
}
@@ -133,3 +175,18 @@
siginfo.si_code = TRAP_BRKPT;
force_sig_info(SIGTRAP, &siginfo, current);
}
+
+static int __init
+ia32_init (void)
+{
+ ia32_exec_domain.name = "Linux/x86";
+ ia32_exec_domain.handler = NULL;
+ ia32_exec_domain.pers_low = PER_LINUX32;
+ ia32_exec_domain.pers_high = PER_LINUX32;
+ ia32_exec_domain.signal_map = default_exec_domain.signal_map;
+ ia32_exec_domain.signal_invmap = default_exec_domain.signal_invmap;
+ register_exec_domain(&ia32_exec_domain);
+ return 0;
+}
+
+__initcall(ia32_init);
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_traps.c linux-2.4.13-lia/arch/ia64/ia32/ia32_traps.c
--- linux-2.4.13/arch/ia64/ia32/ia32_traps.c Thu Jan 4 12:50:17 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_traps.c Thu Oct 4 00:21:52 2001
@@ -1,7 +1,12 @@
/*
- * IA32 exceptions handler
+ * IA-32 exception handlers
*
+ * Copyright (C) 2000 Asit K. Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ *
* 06/16/00 A. Mallick added siginfo for most cases (close to IA32)
+ * 09/29/00 D. Mosberger added ia32_intercept()
*/
#include <linux/kernel.h>
@@ -9,6 +14,26 @@
#include <asm/ia32.h>
#include <asm/ptrace.h>
+
+int
+ia32_intercept (struct pt_regs *regs, unsigned long isr)
+{
+ switch ((isr >> 16) & 0xff) {
+ case 0: /* Instruction intercept fault */
+ case 3: /* Locked Data reference fault */
+ case 1: /* Gate intercept trap */
+ return -1;
+
+ case 2: /* System flag trap */
+ if (((isr >> 14) & 0x3) >= 2) {
+ /* MOV SS, POP SS instructions */
+ ia64_psr(regs)->id = 1;
+ return 0;
+ } else
+ return -1;
+ }
+ return -1;
+}
int
ia32_exception (struct pt_regs *regs, unsigned long isr)
diff -urN linux-2.4.13/arch/ia64/ia32/sys_ia32.c linux-2.4.13-lia/arch/ia64/ia32/sys_ia32.c
--- linux-2.4.13/arch/ia64/ia32/sys_ia32.c Mon Aug 20 10:18:26 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/sys_ia32.c Wed Oct 10 17:39:17 2001
@@ -1,14 +1,13 @@
/*
- * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on
- * sys_sparc32
+ * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Derived from sys_sparc32.c.
*
* Copyright (C) 2000 VA Linux Co
* Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
* Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
- * Copyright (C) 2000 Hewlett-Packard Co.
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* These routines maintain argument size conversion between 32bit and 64bit
* environment.
@@ -53,31 +52,56 @@
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
-#include <asm/ipc.h>
#include <net/scm.h>
#include <net/sock.h>
#include <asm/ia32.h>
+#define DEBUG 0
+
+#if DEBUG
+# define DBG(fmt...) printk(KERN_DEBUG fmt)
+#else
+# define DBG(fmt...)
+#endif
+
#define A(__x) ((unsigned long)(__x))
#define AA(__x) ((unsigned long)(__x))
#define ROUND_UP(x,a) ((__typeof__(x))(((unsigned long)(x) + ((a) - 1)) & ~((a) - 1)))
#define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de)))
+#define OFFSET4K(a) ((a) & 0xfff)
+#define PAGE_START(addr) ((addr) & PAGE_MASK)
+#define PAGE_OFF(addr) ((addr) & ~PAGE_MASK)
+
extern asmlinkage long sys_execve (char *, char **, char **, struct pt_regs *);
extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long);
+extern asmlinkage long sys_munmap (unsigned long, size_t);
+extern unsigned long arch_get_unmapped_area (struct file *, unsigned long, unsigned long,
+ unsigned long, unsigned long);
+
+/* forward declaration: */
+asmlinkage long sys32_mprotect (unsigned int, unsigned int, int);
+
+/*
+ * Anything that modifies or inspects ia32 user virtual memory must hold this semaphore
+ * while doing so.
+ */
+/* XXX make per-mm: */
+static DECLARE_MUTEX(ia32_mmap_sem);
static int
nargs (unsigned int arg, char **ap)
{
- int n, err, addr;
+ unsigned int addr;
+ int n, err;
if (!arg)
return 0;
n = 0;
do {
- err = get_user(addr, (int *)A(arg));
+ err = get_user(addr, (unsigned int *)A(arg));
if (err)
return err;
if (ap)
@@ -94,7 +118,7 @@
int stack)
{
struct pt_regs *regs = (struct pt_regs *)&stack;
- unsigned long old_map_base, old_task_size;
+ unsigned long old_map_base, old_task_size, tssd;
char **av, **ae;
int na, ne, len;
long r;
@@ -123,15 +147,20 @@
old_map_base = current->thread.map_base;
old_task_size = current->thread.task_size;
+ tssd = ia64_get_kr(IA64_KR_TSSD);
- /* we may be exec'ing a 64-bit process: reset map base & task-size: */
+ /* we may be exec'ing a 64-bit process: reset map base, task-size, and io-base: */
current->thread.map_base = DEFAULT_MAP_BASE;
current->thread.task_size = DEFAULT_TASK_SIZE;
+ ia64_set_kr(IA64_KR_IO_BASE, current->thread.old_iob);
+ ia64_set_kr(IA64_KR_TSSD, current->thread.old_k1);
set_fs(KERNEL_DS);
r = sys_execve(filename, av, ae, regs);
if (r < 0) {
- /* oops, execve failed, switch back to old map base & task-size: */
+ /* oops, execve failed, switch back to old values... */
+ ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE);
+ ia64_set_kr(IA64_KR_TSSD, tssd);
current->thread.map_base = old_map_base;
current->thread.task_size = old_task_size;
set_fs(USER_DS); /* establish new task-size as the address-limit */
@@ -142,30 +171,33 @@
}
static inline int
-putstat(struct stat32 *ubuf, struct stat *kbuf)
+putstat (struct stat32 *ubuf, struct stat *kbuf)
{
int err;
- err = put_user (kbuf->st_dev, &ubuf->st_dev);
- err |= __put_user (kbuf->st_ino, &ubuf->st_ino);
- err |= __put_user (kbuf->st_mode, &ubuf->st_mode);
- err |= __put_user (kbuf->st_nlink, &ubuf->st_nlink);
- err |= __put_user (kbuf->st_uid, &ubuf->st_uid);
- err |= __put_user (kbuf->st_gid, &ubuf->st_gid);
- err |= __put_user (kbuf->st_rdev, &ubuf->st_rdev);
- err |= __put_user (kbuf->st_size, &ubuf->st_size);
- err |= __put_user (kbuf->st_atime, &ubuf->st_atime);
- err |= __put_user (kbuf->st_mtime, &ubuf->st_mtime);
- err |= __put_user (kbuf->st_ctime, &ubuf->st_ctime);
- err |= __put_user (kbuf->st_blksize, &ubuf->st_blksize);
- err |= __put_user (kbuf->st_blocks, &ubuf->st_blocks);
+ if (clear_user(ubuf, sizeof(*ubuf)))
+ return 1;
+
+ err = __put_user(kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user(kbuf->st_ino, &ubuf->st_ino);
+ err |= __put_user(kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user(kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user(kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user(kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user(kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user(kbuf->st_size, &ubuf->st_size);
+ err |= __put_user(kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user(kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user(kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user(kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user(kbuf->st_blocks, &ubuf->st_blocks);
return err;
}
-extern asmlinkage long sys_newstat(char * filename, struct stat * statbuf);
+extern asmlinkage long sys_newstat (char * filename, struct stat * statbuf);
asmlinkage long
-sys32_newstat(char * filename, struct stat32 *statbuf)
+sys32_newstat (char *filename, struct stat32 *statbuf)
{
int ret;
struct stat s;
@@ -173,8 +205,8 @@
set_fs(KERNEL_DS);
ret = sys_newstat(filename, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
@@ -182,16 +214,16 @@
extern asmlinkage long sys_newlstat(char * filename, struct stat * statbuf);
asmlinkage long
-sys32_newlstat(char * filename, struct stat32 *statbuf)
+sys32_newlstat (char *filename, struct stat32 *statbuf)
{
- int ret;
- struct stat s;
mm_segment_t old_fs = get_fs();
+ struct stat s;
+ int ret;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newlstat(filename, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
@@ -199,112 +231,249 @@
extern asmlinkage long sys_newfstat(unsigned int fd, struct stat * statbuf);
asmlinkage long
-sys32_newfstat(unsigned int fd, struct stat32 *statbuf)
+sys32_newfstat (unsigned int fd, struct stat32 *statbuf)
{
- int ret;
- struct stat s;
mm_segment_t old_fs = get_fs();
+ struct stat s;
+ int ret;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_newfstat(fd, &s);
- set_fs (old_fs);
- if (putstat (statbuf, &s))
+ set_fs(old_fs);
+ if (putstat(statbuf, &s))
return -EFAULT;
return ret;
}
-#define OFFSET4K(a) ((a) & 0xfff)
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
-unsigned long
-do_mmap_fake(struct file *file, unsigned long addr, unsigned long len,
- unsigned long prot, unsigned long flags, loff_t off)
+
+static int
+get_page_prot (unsigned long addr)
+{
+ struct vm_area_struct *vma = find_vma(current->mm, addr);
+ int prot = 0;
+
+ if (!vma || vma->vm_start > addr)
+ return 0;
+
+ if (vma->vm_flags & VM_READ)
+ prot |= PROT_READ;
+ if (vma->vm_flags & VM_WRITE)
+ prot |= PROT_WRITE;
+ if (vma->vm_flags & VM_EXEC)
+ prot |= PROT_EXEC;
+ return prot;
+}
+
+/*
+ * Map a subpage by creating an anonymous page that contains the union of the old page and
+ * the subpage.
+ */
+static unsigned long
+mmap_subpage (struct file *file, unsigned long start, unsigned long end, int prot, int flags,
+ loff_t off)
{
+ void *page = (void *) get_zeroed_page(GFP_KERNEL);
struct inode *inode;
- void *front, *back;
- unsigned long baddr;
- int r;
- char c;
+ unsigned long ret;
+ int old_prot = get_page_prot(start);
- if (OFFSET4K(addr) || OFFSET4K(off))
- return -EINVAL;
- prot |= PROT_WRITE;
- front = NULL;
- back = NULL;
-	if ((baddr = (addr & PAGE_MASK)) != addr && get_user(c, (char *)baddr) == 0) {
- front = kmalloc(addr - baddr, GFP_KERNEL);
- if (!front)
- return -ENOMEM;
- __copy_user(front, (void *)baddr, addr - baddr);
+ DBG("mmap_subpage(file=%p,start=0x%lx,end=0x%lx,prot=%x,flags=%x,off=0x%llx)\n",
+ file, start, end, prot, flags, off);
+
+ if (!page)
+ return -ENOMEM;
+
+ if (old_prot)
+ copy_from_user(page, (void *) PAGE_START(start), PAGE_SIZE);
+
+	down_write(&current->mm->mmap_sem);
+ {
+ ret = do_mmap(0, PAGE_START(start), PAGE_SIZE, prot | PROT_WRITE,
+ flags | MAP_FIXED | MAP_ANONYMOUS, 0);
}
-	if (addr && ((addr + len) & ~PAGE_MASK) && get_user(c, (char *)(addr + len)) == 0) {
- back = kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL);
- if (!back) {
- if (front)
- kfree(front);
- return -ENOMEM;
+	up_write(&current->mm->mmap_sem);
+
+ if (IS_ERR((void *) ret))
+ goto out;
+
+ if (old_prot) {
+ /* copy back the old page contents. */
+ if (PAGE_OFF(start))
+ copy_to_user((void *) PAGE_START(start), page, PAGE_OFF(start));
+ if (PAGE_OFF(end))
+ copy_to_user((void *) end, page + PAGE_OFF(end),
+ PAGE_SIZE - PAGE_OFF(end));
+ }
+ if (!(flags & MAP_ANONYMOUS)) {
+ /* read the file contents */
+ inode = file->f_dentry->d_inode;
+ if (!inode->i_fop || !file->f_op->read
+ || ((*file->f_op->read)(file, (char *) start, end - start, &off) < 0))
+ {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+ if (!(prot & PROT_WRITE))
+ ret = sys_mprotect(PAGE_START(start), PAGE_SIZE, prot | old_prot);
+ out:
+ free_page((unsigned long) page);
+ return ret;
+}
+
+static unsigned long
+emulate_mmap (struct file *file, unsigned long start, unsigned long len, int prot, int flags,
+ loff_t off)
+{
+ unsigned long tmp, end, pend, pstart, ret, is_congruent, fudge = 0;
+ struct inode *inode;
+ loff_t poff;
+
+ end = start + len;
+ pstart = PAGE_START(start);
+ pend = PAGE_ALIGN(end);
+
+ if (flags & MAP_FIXED) {
+ if (start > pstart) {
+ if (flags & MAP_SHARED)
+ printk(KERN_INFO
+ "%s(%d): emulate_mmap() can't share head (addr=0x%lx)\n",
+ current->comm, current->pid, start);
+ ret = mmap_subpage(file, start, min(PAGE_ALIGN(start), end), prot, flags,
+ off);
+ if (IS_ERR((void *) ret))
+ return ret;
+ pstart += PAGE_SIZE;
+ if (pstart >= pend)
+ return start; /* done */
+ }
+ if (end < pend) {
+ if (flags & MAP_SHARED)
+ printk(KERN_INFO
+ "%s(%d): emulate_mmap() can't share tail (end=0x%lx)\n",
+ current->comm, current->pid, end);
+ ret = mmap_subpage(file, max(start, PAGE_START(end)), end, prot, flags,
+ (off + len) - PAGE_OFF(end));
+ if (IS_ERR((void *) ret))
+ return ret;
+ pend -= PAGE_SIZE;
+ if (pstart >= pend)
+ return start; /* done */
+ }
+ } else {
+ /*
+ * If a start address was specified, use it if the entire rounded out area
+ * is available.
+ */
+ if (start && !pstart)
+ fudge = 1; /* handle case of mapping to range (0,PAGE_SIZE) */
+ tmp = arch_get_unmapped_area(file, pstart - fudge, pend - pstart, 0, flags);
+ if (tmp != pstart) {
+ pstart = tmp;
+ start = pstart + PAGE_OFF(off); /* make start congruent with off */
+ end = start + len;
+ pend = PAGE_ALIGN(end);
}
- __copy_user(back, (char *)addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
}
+
+ poff = off + (pstart - start); /* note: (pstart - start) may be negative */
+	is_congruent = (flags & MAP_ANONYMOUS) || (PAGE_OFF(poff) == 0);
+
+ if ((flags & MAP_SHARED) && !is_congruent)
+ printk(KERN_INFO "%s(%d): emulate_mmap() can't share contents of incongruent mmap "
+ "(addr=0x%lx,off=0x%llx)\n", current->comm, current->pid, start, off);
+
+ DBG("mmap_body: mapping [0x%lx-0x%lx) %s with poff 0x%llx\n", pstart, pend,
+ is_congruent ? "congruent" : "not congruent", poff);
+
 	down_write(&current->mm->mmap_sem);
- r = do_mmap(0, baddr, len + (addr - baddr), prot, flags | MAP_ANONYMOUS, 0);
+ {
+ if (!(flags & MAP_ANONYMOUS) && is_congruent)
+ ret = do_mmap(file, pstart, pend - pstart, prot, flags | MAP_FIXED, poff);
+ else
+ ret = do_mmap(0, pstart, pend - pstart,
+ prot | ((flags & MAP_ANONYMOUS) ? 0 : PROT_WRITE),
+ flags | MAP_FIXED | MAP_ANONYMOUS, 0);
+ }
 	up_write(&current->mm->mmap_sem);
- if (r < 0)
- return(r);
-	if (addr == 0)
- addr = r;
- if (back) {
- __copy_user((char *)addr + len, back, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
- kfree(back);
- }
- if (front) {
- __copy_user((void *)baddr, front, addr - baddr);
- kfree(front);
- }
- if (flags & MAP_ANONYMOUS) {
- clear_user((char *)addr, len);
- return(addr);
+
+ if (IS_ERR((void *) ret))
+ return ret;
+
+ if (!is_congruent) {
+ /* read the file contents */
+ inode = file->f_dentry->d_inode;
+ if (!inode->i_fop || !file->f_op->read
+ || ((*file->f_op->read)(file, (char *) pstart, pend - pstart, &poff) < 0))
+ {
+ sys_munmap(pstart, pend - pstart);
+ return -EINVAL;
+ }
+ if (!(prot & PROT_WRITE) && sys_mprotect(pstart, pend - pstart, prot) < 0)
+			return -EINVAL;
}
- if (!file)
- return -EINVAL;
- inode = file->f_dentry->d_inode;
- if (!inode->i_fop)
- return -EINVAL;
- if (!file->f_op->read)
- return -EINVAL;
- r = file->f_op->read(file, (char *)addr, len, &off);
- return (r < 0) ? -EINVAL : addr;
+ return start;
}
-long
-ia32_do_mmap (struct file *file, unsigned int addr, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset)
+#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */
+
+static inline unsigned int
+get_prot32 (unsigned int prot)
{
- long error = -EFAULT;
- unsigned int poff;
+ if (prot & PROT_WRITE)
+		/* on x86, PROT_WRITE implies PROT_READ which implies PROT_EXEC */
+ prot |= PROT_READ | PROT_WRITE | PROT_EXEC;
+ else if (prot & (PROT_READ | PROT_EXEC))
+ /* on x86, there is no distinction between PROT_READ and PROT_EXEC */
+ prot |= (PROT_READ | PROT_EXEC);
- flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
- prot |= PROT_EXEC;
+ return prot;
+}
- if ((flags & MAP_FIXED) && ((addr & ~PAGE_MASK) || (offset & ~PAGE_MASK)))
- error = do_mmap_fake(file, addr, len, prot, flags, (loff_t)offset);
- else {
- poff = offset & PAGE_MASK;
- len += offset - poff;
+unsigned long
+ia32_do_mmap (struct file *file, unsigned long addr, unsigned long len, int prot, int flags,
+ loff_t offset)
+{
+ DBG("ia32_do_mmap(file=%p,addr=0x%lx,len=0x%lx,prot=%x,flags=%x,offset=0x%llx)\n",
+ file, addr, len, prot, flags, offset);
+
+ if (file && (!file->f_op || !file->f_op->mmap))
+ return -ENODEV;
+
+ len = IA32_PAGE_ALIGN(len);
+	if (len == 0)
+ return addr;
+
+ if (len > IA32_PAGE_OFFSET || addr > IA32_PAGE_OFFSET - len)
+ return -EINVAL;
+
+ if (OFFSET4K(offset))
+ return -EINVAL;
-	down_write(&current->mm->mmap_sem);
- error = do_mmap_pgoff(file, addr, len, prot, flags, poff >> PAGE_SHIFT);
-	up_write(&current->mm->mmap_sem);
+ prot = get_prot32(prot);
- if (!IS_ERR((void *) error))
- error += offset - poff;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ down(&ia32_mmap_sem);
+ {
+ addr = emulate_mmap(file, addr, len, prot, flags, offset);
}
- return error;
+ up(&ia32_mmap_sem);
+#else
+	down_write(&current->mm->mmap_sem);
+ {
+ addr = do_mmap(file, addr, len, prot, flags, offset);
+ }
+	up_write(&current->mm->mmap_sem);
+#endif
+ DBG("ia32_do_mmap: returning 0x%lx\n", addr);
+ return addr;
}
/*
- * Linux/i386 didn't use to be able to handle more than
- * 4 system call parameters, so these system calls used a memory
- * block for parameter passing..
+ * Linux/i386 didn't use to be able to handle more than 4 system call parameters, so these
+ * system calls used a memory block for parameter passing..
*/
struct mmap_arg_struct {
@@ -317,180 +486,166 @@
};
asmlinkage long
-sys32_mmap(struct mmap_arg_struct *arg)
+sys32_mmap (struct mmap_arg_struct *arg)
{
struct mmap_arg_struct a;
struct file *file = NULL;
- long retval;
+ unsigned long addr;
+ int flags;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
-	if (PAGE_ALIGN(a.len) == 0)
- return a.addr;
+ if (OFFSET4K(a.offset))
+ return -EINVAL;
+
+ flags = a.flags;
- if (!(a.flags & MAP_ANONYMOUS)) {
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
file = fget(a.fd);
if (!file)
return -EBADF;
}
-#ifdef CONFIG_IA64_PAGE_SIZE_4KB
- if ((a.offset & ~PAGE_MASK) != 0)
- return -EINVAL;
-	down_write(&current->mm->mmap_sem);
- retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, a.offset >> PAGE_SHIFT);
-	up_write(&current->mm->mmap_sem);
-#else
- retval = ia32_do_mmap(file, a.addr, a.len, a.prot, a.flags, a.fd, a.offset);
-#endif
+ addr = ia32_do_mmap(file, a.addr, a.len, a.prot, flags, a.offset);
+
if (file)
fput(file);
- return retval;
+ return addr;
}
asmlinkage long
-sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
+sys32_mmap2 (unsigned int addr, unsigned int len, unsigned int prot, unsigned int flags,
+ unsigned int fd, unsigned int pgoff)
{
+ struct file *file = NULL;
+ unsigned long retval;
-#ifdef CONFIG_IA64_PAGE_SIZE_4KB
- return(sys_mprotect(start, len, prot));
-#else // CONFIG_IA64_PAGE_SIZE_4KB
-	if (prot == 0)
- return(0);
- len += start & ~PAGE_MASK;
- if ((start & ~PAGE_MASK) && (prot & PROT_WRITE))
- prot |= PROT_EXEC;
- return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
-#endif // CONFIG_IA64_PAGE_SIZE_4KB
-}
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ return -EBADF;
+ }
-asmlinkage long
-sys32_pipe(int *fd)
-{
- int retval;
- int fds[2];
+ retval = ia32_do_mmap(file, addr, len, prot, flags,
+ (unsigned long) pgoff << IA32_PAGE_SHIFT);
- retval = do_pipe(fds);
- if (retval)
- goto out;
- if (copy_to_user(fd, fds, sizeof(fds)))
- retval = -EFAULT;
- out:
+ if (file)
+ fput(file);
return retval;
}
asmlinkage long
-sys32_signal (int sig, unsigned int handler)
+sys32_munmap (unsigned int start, unsigned int len)
{
- struct k_sigaction new_sa, old_sa;
- int ret;
+ unsigned int end = start + len;
+ long ret;
+
+#if PAGE_SHIFT <= IA32_PAGE_SHIFT
+ ret = sys_munmap(start, end - start);
+#else
+ if (start > end)
+ return -EINVAL;
+
+ start = PAGE_ALIGN(start);
+ end = PAGE_START(end);
+
+ if (start >= end)
+ return 0;
+
+ down(&ia32_mmap_sem);
+ {
+ ret = sys_munmap(start, end - start);
+ }
+ up(&ia32_mmap_sem);
+#endif
+ return ret;
+}
- new_sa.sa.sa_handler = (__sighandler_t) A(handler);
- new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+
+/*
+ * When mprotect()ing a partial page, we set the permission to the union of the old
+ * settings and the new settings. In other words, it's only possible to make access to a
+ * partial page less restrictive.
+ */
+static long
+mprotect_subpage (unsigned long address, int new_prot)
+{
+ int old_prot;
- ret = do_sigaction(sig, &new_sa, &old_sa);
+	if (new_prot == PROT_NONE)
+ return 0; /* optimize case where nothing changes... */
- return ret ? ret : (unsigned long)old_sa.sa.sa_handler;
+ old_prot = get_page_prot(address);
+ return sys_mprotect(address, PAGE_SIZE, new_prot | old_prot);
}
+#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */
+
asmlinkage long
-sys32_rt_sigaction(int sig, struct sigaction32 *act,
- struct sigaction32 *oact, unsigned int sigsetsize)
+sys32_mprotect (unsigned int start, unsigned int len, int prot)
{
- struct k_sigaction new_ka, old_ka;
- int ret;
- sigset32_t set32;
+ unsigned long end = start + len;
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ long retval = 0;
+#endif
+
+ prot = get_prot32(prot);
- /* XXX: Don't preclude handling different sized sigset_t's. */
- if (sigsetsize != sizeof(sigset32_t))
+#if PAGE_SHIFT <= IA32_PAGE_SHIFT
+ return sys_mprotect(start, end - start, prot);
+#else
+ if (OFFSET4K(start))
return -EINVAL;
- if (act) {
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __copy_from_user(&set32, &act->sa_mask,
- sizeof(sigset32_t));
- switch (_NSIG_WORDS) {
- case 4: new_ka.sa.sa_mask.sig[3] = set32.sig[6]
- | (((long)set32.sig[7]) << 32);
- case 3: new_ka.sa.sa_mask.sig[2] = set32.sig[4]
- | (((long)set32.sig[5]) << 32);
- case 2: new_ka.sa.sa_mask.sig[1] = set32.sig[2]
- | (((long)set32.sig[3]) << 32);
- case 1: new_ka.sa.sa_mask.sig[0] = set32.sig[0]
- | (((long)set32.sig[1]) << 32);
- }
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ end = IA32_PAGE_ALIGN(end);
+ if (end < start)
+ return -EINVAL;
- if (ret)
- return -EFAULT;
- }
+ down(&ia32_mmap_sem);
+ {
+ if (PAGE_OFF(start)) {
+ /* start address is 4KB aligned but not page aligned. */
+ retval = mprotect_subpage(PAGE_START(start), prot);
+ if (retval < 0)
+ goto out;
- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+ start = PAGE_ALIGN(start);
+ if (start >= end)
+ goto out; /* retval is already zero... */
+ }
- if (!ret && oact) {
- switch (_NSIG_WORDS) {
- case 4:
- set32.sig[7] = (old_ka.sa.sa_mask.sig[3] >> 32);
- set32.sig[6] = old_ka.sa.sa_mask.sig[3];
- case 3:
- set32.sig[5] = (old_ka.sa.sa_mask.sig[2] >> 32);
- set32.sig[4] = old_ka.sa.sa_mask.sig[2];
- case 2:
- set32.sig[3] = (old_ka.sa.sa_mask.sig[1] >> 32);
- set32.sig[2] = old_ka.sa.sa_mask.sig[1];
- case 1:
- set32.sig[1] = (old_ka.sa.sa_mask.sig[0] >> 32);
- set32.sig[0] = old_ka.sa.sa_mask.sig[0];
+ if (PAGE_OFF(end)) {
+ /* end address is 4KB aligned but not page aligned. */
+ retval = mprotect_subpage(PAGE_START(end), prot);
+ if (retval < 0)
+ goto out;
+ end = PAGE_START(end);
}
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __copy_to_user(&oact->sa_mask, &set32,
- sizeof(sigset32_t));
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ retval = sys_mprotect(start, end - start, prot);
}
-
- return ret;
+ out:
+ up(&ia32_mmap_sem);
+ return retval;
+#endif
}
-
-extern asmlinkage long sys_rt_sigprocmask(int how, sigset_t *set, sigset_t *oset,
- size_t sigsetsize);
-
asmlinkage long
-sys32_rt_sigprocmask(int how, sigset32_t *set, sigset32_t *oset,
- unsigned int sigsetsize)
+sys32_pipe (int *fd)
{
- sigset_t s;
- sigset32_t s32;
- int ret;
- mm_segment_t old_fs = get_fs();
+ int retval;
+ int fds[2];
- if (set) {
- if (copy_from_user (&s32, set, sizeof(sigset32_t)))
- return -EFAULT;
- switch (_NSIG_WORDS) {
- case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
- case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
- case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
- case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
- }
- }
- set_fs (KERNEL_DS);
- ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL,
- sigsetsize);
- set_fs (old_fs);
- if (ret) return ret;
- if (oset) {
- switch (_NSIG_WORDS) {
- case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
- case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
- case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
- case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
- }
- if (copy_to_user (oset, &s32, sizeof(sigset32_t)))
- return -EFAULT;
- }
- return 0;
+ retval = do_pipe(fds);
+ if (retval)
+ goto out;
+ if (copy_to_user(fd, fds, sizeof(fds)))
+ retval = -EFAULT;
+ out:
+ return retval;
}
static inline int
@@ -498,31 +653,34 @@
{
int err;
- err = put_user (kbuf->f_type, &ubuf->f_type);
- err |= __put_user (kbuf->f_bsize, &ubuf->f_bsize);
- err |= __put_user (kbuf->f_blocks, &ubuf->f_blocks);
- err |= __put_user (kbuf->f_bfree, &ubuf->f_bfree);
- err |= __put_user (kbuf->f_bavail, &ubuf->f_bavail);
- err |= __put_user (kbuf->f_files, &ubuf->f_files);
- err |= __put_user (kbuf->f_ffree, &ubuf->f_ffree);
- err |= __put_user (kbuf->f_namelen, &ubuf->f_namelen);
- err |= __put_user (kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
- err |= __put_user (kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
+ if (!access_ok(VERIFY_WRITE, ubuf, sizeof(*ubuf)))
+ return -EFAULT;
+
+ err = __put_user(kbuf->f_type, &ubuf->f_type);
+ err |= __put_user(kbuf->f_bsize, &ubuf->f_bsize);
+ err |= __put_user(kbuf->f_blocks, &ubuf->f_blocks);
+ err |= __put_user(kbuf->f_bfree, &ubuf->f_bfree);
+ err |= __put_user(kbuf->f_bavail, &ubuf->f_bavail);
+ err |= __put_user(kbuf->f_files, &ubuf->f_files);
+ err |= __put_user(kbuf->f_ffree, &ubuf->f_ffree);
+ err |= __put_user(kbuf->f_namelen, &ubuf->f_namelen);
+ err |= __put_user(kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
+ err |= __put_user(kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
return err;
}
extern asmlinkage long sys_statfs(const char * path, struct statfs * buf);
asmlinkage long
-sys32_statfs(const char * path, struct statfs32 *buf)
+sys32_statfs (const char *path, struct statfs32 *buf)
{
int ret;
struct statfs s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
- ret = sys_statfs((const char *)path, &s);
- set_fs (old_fs);
+ set_fs(KERNEL_DS);
+ ret = sys_statfs(path, &s);
+ set_fs(old_fs);
if (put_statfs(buf, &s))
return -EFAULT;
return ret;
@@ -531,15 +689,15 @@
extern asmlinkage long sys_fstatfs(unsigned int fd, struct statfs * buf);
asmlinkage long
-sys32_fstatfs(unsigned int fd, struct statfs32 *buf)
+sys32_fstatfs (unsigned int fd, struct statfs32 *buf)
{
int ret;
struct statfs s;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_fstatfs(fd, &s);
- set_fs (old_fs);
+ set_fs(old_fs);
if (put_statfs(buf, &s))
return -EFAULT;
return ret;
@@ -557,23 +715,21 @@
};
static inline long
-get_tv32(struct timeval *o, struct timeval32 *i)
+get_tv32 (struct timeval *o, struct timeval32 *i)
{
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
- (__get_user(o->tv_sec, &i->tv_sec) |
- __get_user(o->tv_usec, &i->tv_usec)));
+ (__get_user(o->tv_sec, &i->tv_sec) | __get_user(o->tv_usec, &i->tv_usec)));
}
static inline long
-put_tv32(struct timeval32 *o, struct timeval *i)
+put_tv32 (struct timeval32 *o, struct timeval *i)
{
return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
- (__put_user(i->tv_sec, &o->tv_sec) |
- __put_user(i->tv_usec, &o->tv_usec)));
+ (__put_user(i->tv_sec, &o->tv_sec) | __put_user(i->tv_usec, &o->tv_usec)));
}
static inline long
-get_it32(struct itimerval *o, struct itimerval32 *i)
+get_it32 (struct itimerval *o, struct itimerval32 *i)
{
return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
(__get_user(o->it_interval.tv_sec, &i->it_interval.tv_sec) |
@@ -583,7 +739,7 @@
}
static inline long
-put_it32(struct itimerval32 *o, struct itimerval *i)
+put_it32 (struct itimerval32 *o, struct itimerval *i)
{
return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
(__put_user(i->it_interval.tv_sec, &o->it_interval.tv_sec) |
@@ -592,10 +748,10 @@
__put_user(i->it_value.tv_usec, &o->it_value.tv_usec)));
}
-extern int do_getitimer(int which, struct itimerval *value);
+extern int do_getitimer (int which, struct itimerval *value);
asmlinkage long
-sys32_getitimer(int which, struct itimerval32 *it)
+sys32_getitimer (int which, struct itimerval32 *it)
{
struct itimerval kit;
int error;
@@ -607,10 +763,10 @@
return error;
}
-extern int do_setitimer(int which, struct itimerval *, struct itimerval *);
+extern int do_setitimer (int which, struct itimerval *, struct itimerval *);
asmlinkage long
-sys32_setitimer(int which, struct itimerval32 *in, struct itimerval32 *out)
+sys32_setitimer (int which, struct itimerval32 *in, struct itimerval32 *out)
{
struct itimerval kin, kout;
int error;
@@ -630,8 +786,9 @@
return 0;
}
+
asmlinkage unsigned long
-sys32_alarm(unsigned int seconds)
+sys32_alarm (unsigned int seconds)
{
struct itimerval it_new, it_old;
unsigned int oldalarm;
@@ -660,7 +817,7 @@
extern asmlinkage long sys_gettimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
-ia32_utime(char * filename, struct utimbuf_32 *times32)
+sys32_utime (char *filename, struct utimbuf_32 *times32)
{
mm_segment_t old_fs = get_fs();
struct timeval tv[2], *tvp;
@@ -673,20 +830,20 @@
 if (get_user(tv[1].tv_sec, &times32->mtime))
return -EFAULT;
tv[1].tv_usec = 0;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
tvp = tv;
} else
tvp = NULL;
ret = sys_utimes(filename, tvp);
- set_fs (old_fs);
+ set_fs(old_fs);
return ret;
}
extern struct timezone sys_tz;
-extern int do_sys_settimeofday(struct timeval *tv, struct timezone *tz);
+extern int do_sys_settimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
-sys32_gettimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_gettimeofday (struct timeval32 *tv, struct timezone *tz)
{
if (tv) {
struct timeval ktv;
@@ -702,7 +859,7 @@
}
asmlinkage long
-sys32_settimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_settimeofday (struct timeval32 *tv, struct timezone *tz)
{
struct timeval ktv;
struct timezone ktz;
@@ -719,20 +876,6 @@
return do_sys_settimeofday(tv ? &ktv : NULL, tz ? &ktz : NULL);
}
-struct linux32_dirent {
- u32 d_ino;
- u32 d_off;
- u16 d_reclen;
- char d_name[1];
-};
-
-struct old_linux32_dirent {
- u32 d_ino;
- u32 d_offset;
- u16 d_namlen;
- char d_name[1];
-};
-
struct getdents32_callback {
struct linux32_dirent * current_dir;
struct linux32_dirent * previous;
@@ -775,7 +918,7 @@
}
asmlinkage long
-sys32_getdents (unsigned int fd, void * dirent, unsigned int count)
+sys32_getdents (unsigned int fd, struct linux32_dirent *dirent, unsigned int count)
{
struct file * file;
struct linux32_dirent * lastdirent;
@@ -787,7 +930,7 @@
if (!file)
goto out;
- buf.current_dir = (struct linux32_dirent *) dirent;
+ buf.current_dir = dirent;
buf.previous = NULL;
buf.count = count;
buf.error = 0;
@@ -831,7 +974,7 @@
}
asmlinkage long
-sys32_readdir (unsigned int fd, void * dirent, unsigned int count)
+sys32_readdir (unsigned int fd, void *dirent, unsigned int count)
{
int error;
struct file * file;
@@ -866,7 +1009,7 @@
#define ROUND_UP_TIME(x,y) (((x)+(y)-1)/(y))
asmlinkage long
-sys32_select(int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
+sys32_select (int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
{
fd_set_bits fds;
char *bits;
@@ -878,8 +1021,7 @@
time_t sec, usec;
ret = -EFAULT;
- if (get_user(sec, &tvp32->tv_sec)
- || get_user(usec, &tvp32->tv_usec))
+ if (get_user(sec, &tvp32->tv_sec) || get_user(usec, &tvp32->tv_usec))
goto out_nofds;
ret = -EINVAL;
@@ -933,9 +1075,7 @@
usec = timeout % HZ;
usec *= (1000000/HZ);
}
- if (put_user(sec, (int *)&tvp32->tv_sec)
- || put_user(usec, (int *)&tvp32->tv_usec))
- {
+ if (put_user(sec, &tvp32->tv_sec) || put_user(usec, &tvp32->tv_usec)) {
ret = -EFAULT;
goto out;
}
@@ -969,50 +1109,43 @@
};
asmlinkage long
-old_select(struct sel_arg_struct *arg)
+sys32_old_select (struct sel_arg_struct *arg)
{
struct sel_arg_struct a;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
- return sys32_select(a.n, (fd_set *)A(a.inp), (fd_set *)A(a.outp), (fd_set *)A(a.exp),
- (struct timeval32 *)A(a.tvp));
+ return sys32_select(a.n, (fd_set *) A(a.inp), (fd_set *) A(a.outp), (fd_set *) A(a.exp),
+ (struct timeval32 *) A(a.tvp));
}
-struct timespec32 {
- int tv_sec;
- int tv_nsec;
-};
-
-extern asmlinkage long sys_nanosleep(struct timespec *rqtp, struct timespec *rmtp);
+extern asmlinkage long sys_nanosleep (struct timespec *rqtp, struct timespec *rmtp);
asmlinkage long
-sys32_nanosleep(struct timespec32 *rqtp, struct timespec32 *rmtp)
+sys32_nanosleep (struct timespec32 *rqtp, struct timespec32 *rmtp)
{
struct timespec t;
int ret;
- mm_segment_t old_fs = get_fs ();
+ mm_segment_t old_fs = get_fs();
- if (get_user (t.tv_sec, &rqtp->tv_sec) ||
- __get_user (t.tv_nsec, &rqtp->tv_nsec))
+ if (get_user(t.tv_sec, &rqtp->tv_sec) || get_user(t.tv_nsec, &rqtp->tv_nsec))
return -EFAULT;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_nanosleep(&t, rmtp ? &t : NULL);
- set_fs (old_fs);
+ set_fs(old_fs);
 if (rmtp && ret == -EINTR) {
- if (__put_user (t.tv_sec, &rmtp->tv_sec) ||
- __put_user (t.tv_nsec, &rmtp->tv_nsec))
+ if (put_user(t.tv_sec, &rmtp->tv_sec) || put_user(t.tv_nsec, &rmtp->tv_nsec))
return -EFAULT;
}
return ret;
}
struct iovec32 { unsigned int iov_base; int iov_len; };
-asmlinkage ssize_t sys_readv(unsigned long,const struct iovec *,unsigned long);
-asmlinkage ssize_t sys_writev(unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_readv (unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_writev (unsigned long,const struct iovec *,unsigned long);
static struct iovec *
-get_iovec32(struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
+get_iovec32 (struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
{
int i;
u32 buf, len;
@@ -1022,24 +1155,23 @@
if (!count)
return 0;
- if(verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
- return(struct iovec *)0;
+ if (verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
+ return NULL;
if (count > UIO_MAXIOV)
- return(struct iovec *)0;
+ return NULL;
if (count > UIO_FASTIOV) {
iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL);
if (!iov)
- return((struct iovec *)0);
+ return NULL;
} else
iov = iov_buf;
ivp = iov;
for (i = 0; i < count; i++) {
- if (__get_user(len, &iov32->iov_len) ||
- __get_user(buf, &iov32->iov_base)) {
+ if (__get_user(len, &iov32->iov_len) || __get_user(buf, &iov32->iov_base)) {
if (iov != iov_buf)
kfree(iov);
- return((struct iovec *)0);
+ return NULL;
}
if (verify_area(type, (void *)A(buf), len)) {
if (iov != iov_buf)
@@ -1047,22 +1179,23 @@
return((struct iovec *)0);
}
ivp->iov_base = (void *)A(buf);
- ivp->iov_len = (__kernel_size_t)len;
+ ivp->iov_len = (__kernel_size_t) len;
iov32++;
ivp++;
}
- return(iov);
+ return iov;
}
asmlinkage long
-sys32_readv(int fd, struct iovec32 *vector, u32 count)
+sys32_readv (int fd, struct iovec32 *vector, u32 count)
{
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
- int ret;
+ long ret;
mm_segment_t old_fs = get_fs();
- if ((iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE)) == (struct iovec *)0)
+ iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE);
+ if (!iov)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_readv(fd, iov, count);
@@ -1073,14 +1206,15 @@
}
asmlinkage long
-sys32_writev(int fd, struct iovec32 *vector, u32 count)
+sys32_writev (int fd, struct iovec32 *vector, u32 count)
{
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
- int ret;
+ long ret;
mm_segment_t old_fs = get_fs();
- if ((iov = get_iovec32(vector, iovstack, count, VERIFY_READ)) == (struct iovec *)0)
+ iov = get_iovec32(vector, iovstack, count, VERIFY_READ);
+ if (!iov)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_writev(fd, iov, count);
@@ -1098,45 +1232,66 @@
int rlim_max;
};
-extern asmlinkage long sys_getrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_getrlimit (unsigned int resource, struct rlimit *rlim);
asmlinkage long
-sys32_getrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_old_getrlimit (unsigned int resource, struct rlimit32 *rlim)
{
+ mm_segment_t old_fs = get_fs();
+ struct rlimit r;
+ int ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_getrlimit(resource, &r);
+ set_fs(old_fs);
+ if (!ret) {
+ ret = put_user(RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
+ ret |= put_user(RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_getrlimit (unsigned int resource, struct rlimit32 *rlim)
+{
+ mm_segment_t old_fs = get_fs();
struct rlimit r;
int ret;
- mm_segment_t old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_getrlimit(resource, &r);
- set_fs (old_fs);
+ set_fs(old_fs);
if (!ret) {
- ret = put_user (RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
- ret |= __put_user (RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ if (r.rlim_cur >= 0xffffffff)
+ r.rlim_cur = 0xffffffff;
+ if (r.rlim_max >= 0xffffffff)
+ r.rlim_max = 0xffffffff;
+ ret = put_user(r.rlim_cur, &rlim->rlim_cur);
+ ret |= put_user(r.rlim_max, &rlim->rlim_max);
}
return ret;
}
-extern asmlinkage long sys_setrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_setrlimit (unsigned int resource, struct rlimit *rlim);
asmlinkage long
-sys32_setrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_setrlimit (unsigned int resource, struct rlimit32 *rlim)
{
struct rlimit r;
int ret;
- mm_segment_t old_fs = get_fs ();
+ mm_segment_t old_fs = get_fs();
- if (resource >= RLIM_NLIMITS) return -EINVAL;
- if (get_user (r.rlim_cur, &rlim->rlim_cur) ||
- __get_user (r.rlim_max, &rlim->rlim_max))
+ if (resource >= RLIM_NLIMITS)
+ return -EINVAL;
+ if (get_user(r.rlim_cur, &rlim->rlim_cur) || get_user(r.rlim_max, &rlim->rlim_max))
return -EFAULT;
 if (r.rlim_cur == RLIM_INFINITY32)
 r.rlim_cur = RLIM_INFINITY;
 if (r.rlim_max == RLIM_INFINITY32)
 r.rlim_max = RLIM_INFINITY;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_setrlimit(resource, &r);
- set_fs (old_fs);
+ set_fs(old_fs);
return ret;
}
@@ -1154,25 +1309,141 @@
unsigned msg_flags;
};
-static inline int
-shape_msg(struct msghdr *mp, struct msghdr32 *mp32)
-{
- int ret;
- unsigned int i;
+struct cmsghdr32 {
+ __kernel_size_t32 cmsg_len;
+ int cmsg_level;
+ int cmsg_type;
+};
- if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
- return(-EFAULT);
- ret = __get_user(i, &mp32->msg_name);
- mp->msg_name = (void *)A(i);
- ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
- ret |= __get_user(i, &mp32->msg_iov);
+/* Bleech... */
+#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
+#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
+#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
+#define CMSG32_DATA(cmsg) \
+ ((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
+#define CMSG32_SPACE(len) \
+ (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
+#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
+#define __CMSG32_FIRSTHDR(ctl,len) \
+ ((len) >= sizeof(struct cmsghdr32) ? (struct cmsghdr32 *)(ctl) : (struct cmsghdr32 *)NULL)
+#define CMSG32_FIRSTHDR(msg) __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
+
+static inline struct cmsghdr32 *
+__cmsg32_nxthdr (void *ctl, __kernel_size_t size, struct cmsghdr32 *cmsg, int cmsg_len)
+{
+ struct cmsghdr32 * ptr;
+
+ ptr = (struct cmsghdr32 *)(((unsigned char *) cmsg) + CMSG32_ALIGN(cmsg_len));
+ if ((unsigned long)((char*)(ptr+1) - (char *) ctl) > size)
+ return NULL;
+ return ptr;
+}
+
+static inline struct cmsghdr32 *
+cmsg32_nxthdr (struct msghdr *msg, struct cmsghdr32 *cmsg, int cmsg_len)
+{
+ return __cmsg32_nxthdr(msg->msg_control, msg->msg_controllen, cmsg, cmsg_len);
+}
+
+static inline int
+get_msghdr32 (struct msghdr *mp, struct msghdr32 *mp32)
+{
+ int ret;
+ unsigned int i;
+
+ if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
+ return -EFAULT;
+ ret = __get_user(i, &mp32->msg_name);
+ mp->msg_name = (void *)A(i);
+ ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
+ ret |= __get_user(i, &mp32->msg_iov);
mp->msg_iov = (struct iovec *)A(i);
ret |= __get_user(mp->msg_iovlen, &mp32->msg_iovlen);
ret |= __get_user(i, &mp32->msg_control);
mp->msg_control = (void *)A(i);
ret |= __get_user(mp->msg_controllen, &mp32->msg_controllen);
ret |= __get_user(mp->msg_flags, &mp32->msg_flags);
- return(ret ? -EFAULT : 0);
+ return ret ? -EFAULT : 0;
+}
+
+/*
+ * There is a lot of hair here because the alignment rules (and thus placement) of cmsg
+ * headers and length are different for 32-bit apps. -DaveM
+ */
+static int
+get_cmsghdr32 (struct msghdr *kmsg, unsigned char *stackbuf, struct sock *sk, size_t *bufsize)
+{
+ struct cmsghdr *kcmsg, *kcmsg_base;
+ __kernel_size_t kcmlen, tmp;
+ __kernel_size_t32 ucmlen;
+ struct cmsghdr32 *ucmsg;
+ long err;
+
+ kcmlen = 0;
+ kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
+ ucmsg = CMSG32_FIRSTHDR(kmsg);
+ while (ucmsg != NULL) {
+ if (get_user(ucmlen, &ucmsg->cmsg_len))
+ return -EFAULT;
+
+ /* Catch bogons. */
+ if (CMSG32_ALIGN(ucmlen) < CMSG32_ALIGN(sizeof(struct cmsghdr32)))
+ return -EINVAL;
+ if ((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control) + ucmlen)
+ > kmsg->msg_controllen)
+ return -EINVAL;
+
+ tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+ CMSG_ALIGN(sizeof(struct cmsghdr)));
+ kcmlen += tmp;
+ ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+ }
+ if (kcmlen == 0)
+ return -EINVAL;
+
+ /*
+ * The kcmlen holds the 64-bit version of the control length. It may not be
+ * modified as we do not stick it into the kmsg until we have successfully copied
+ * over all of the data from the user.
+ */
+ if (kcmlen > *bufsize) {
+ *bufsize = kcmlen;
+ kcmsg_base = kcmsg = sock_kmalloc(sk, kcmlen, GFP_KERNEL);
+ }
+ if (kcmsg == NULL)
+ return -ENOBUFS;
+
+ /* Now copy them over neatly. */
+ memset(kcmsg, 0, kcmlen);
+ ucmsg = CMSG32_FIRSTHDR(kmsg);
+ while (ucmsg != NULL) {
+ err = get_user(ucmlen, &ucmsg->cmsg_len);
+ tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+ CMSG_ALIGN(sizeof(struct cmsghdr)));
+ kcmsg->cmsg_len = tmp;
+ err |= get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
+ err |= get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
+
+ /* Copy over the data. */
+ err |= copy_from_user(CMSG_DATA(kcmsg), CMSG32_DATA(ucmsg),
+ (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))));
+ if (err)
+ goto out_free_efault;
+
+ /* Advance. */
+ kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
+ ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+ }
+
+ /* Ok, looks like we made it. Hook it up and return success. */
+ kmsg->msg_control = kcmsg_base;
+ kmsg->msg_controllen = kcmlen;
+ return 0;
+
+out_free_efault:
+ if (kcmsg_base != (struct cmsghdr *)stackbuf)
+ sock_kfree_s(sk, kcmsg_base, kcmlen);
+ return -EFAULT;
}
/*
@@ -1187,20 +1458,17 @@
*/
static inline int
-verify_iovec32(struct msghdr *m, struct iovec *iov, char *address, int mode)
+verify_iovec32 (struct msghdr *m, struct iovec *iov, char *address, int mode)
{
int size, err, ct;
struct iovec32 *iov32;
- if(m->msg_namelen)
- {
- if(mode==VERIFY_READ)
- {
- err=move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
- if(err<0)
+ if (m->msg_namelen) {
+ if (mode == VERIFY_READ) {
+ err = move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
+ if (err < 0)
goto out;
}
-
m->msg_name = address;
} else
m->msg_name = NULL;
@@ -1209,7 +1477,7 @@
size = m->msg_iovlen * sizeof(struct iovec32);
if (copy_from_user(iov, m->msg_iov, size))
goto out;
- m->msg_iov=iov;
+ m->msg_iov = iov;
err = 0;
iov32 = (struct iovec32 *)iov;
@@ -1222,8 +1490,188 @@
return err;
}
-extern __inline__ void
-sockfd_put(struct socket *sock)
+static void
+put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
+{
+ struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+ struct cmsghdr32 cmhdr;
+ int cmlen = CMSG32_LEN(len);
+
+ if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
+ kmsg->msg_flags |= MSG_CTRUNC;
+ return;
+ }
+
+ if(kmsg->msg_controllen < cmlen) {
+ kmsg->msg_flags |= MSG_CTRUNC;
+ cmlen = kmsg->msg_controllen;
+ }
+ cmhdr.cmsg_level = level;
+ cmhdr.cmsg_type = type;
+ cmhdr.cmsg_len = cmlen;
+
+ if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
+ return;
+ if(copy_to_user(CMSG32_DATA(cm), data,
+ cmlen - sizeof(struct cmsghdr32)))
+ return;
+ cmlen = CMSG32_SPACE(len);
+ kmsg->msg_control += cmlen;
+ kmsg->msg_controllen -= cmlen;
+}
+
+static void
+scm_detach_fds32 (struct msghdr *kmsg, struct scm_cookie *scm)
+{
+ struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+ int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
+ / sizeof(int);
+ int fdnum = scm->fp->count;
+ struct file **fp = scm->fp->fp;
+ int *cmfptr;
+ int err = 0, i;
+
+ if (fdnum < fdmax)
+ fdmax = fdnum;
+
+ for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
+ i < fdmax;
+ i++, cmfptr++) {
+ int new_fd;
+ err = get_unused_fd();
+ if (err < 0)
+ break;
+ new_fd = err;
+ err = put_user(new_fd, cmfptr);
+ if (err) {
+ put_unused_fd(new_fd);
+ break;
+ }
+ /* Bump the usage count and install the file. */
+ get_file(fp[i]);
+ current->files->fd[new_fd] = fp[i];
+ }
+
+ if (i > 0) {
+ int cmlen = CMSG32_LEN(i * sizeof(int));
+ if (!err)
+ err = put_user(SOL_SOCKET, &cm->cmsg_level);
+ if (!err)
+ err = put_user(SCM_RIGHTS, &cm->cmsg_type);
+ if (!err)
+ err = put_user(cmlen, &cm->cmsg_len);
+ if (!err) {
+ cmlen = CMSG32_SPACE(i * sizeof(int));
+ kmsg->msg_control += cmlen;
+ kmsg->msg_controllen -= cmlen;
+ }
+ }
+ if (i < fdnum)
+ kmsg->msg_flags |= MSG_CTRUNC;
+
+ /*
+ * All of the files that fit in the message have had their
+ * usage counts incremented, so we just free the list.
+ */
+ __scm_destroy(scm);
+}
+
+/*
+ * In these cases we (currently) can just copy the data over verbatim because all CMSGs
+ * created by the kernel have well defined types which have the same layout in both the
+ * 32-bit and 64-bit API. One must add some special cased conversions here if we start
+ * sending control messages with incompatible types.
+ *
+ * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
+ * we do our work. The remaining cases are:
+ *
+ * SOL_IP IP_PKTINFO struct in_pktinfo 32-bit clean
+ * IP_TTL int 32-bit clean
+ * IP_TOS __u8 32-bit clean
+ * IP_RECVOPTS variable length 32-bit clean
+ * IP_RETOPTS variable length 32-bit clean
+ * (these last two are clean because the types are defined
+ * by the IPv4 protocol)
+ * IP_RECVERR struct sock_extended_err +
+ * struct sockaddr_in 32-bit clean
+ * SOL_IPV6 IPV6_RECVERR struct sock_extended_err +
+ * struct sockaddr_in6 32-bit clean
+ * IPV6_PKTINFO struct in6_pktinfo 32-bit clean
+ * IPV6_HOPLIMIT int 32-bit clean
+ * IPV6_FLOWINFO u32 32-bit clean
+ * IPV6_HOPOPTS ipv6 hop exthdr 32-bit clean
+ * IPV6_DSTOPTS ipv6 dst exthdr(s) 32-bit clean
+ * IPV6_RTHDR ipv6 routing exthdr 32-bit clean
+ * IPV6_AUTHHDR ipv6 auth exthdr 32-bit clean
+ */
+static void
+cmsg32_recvmsg_fixup (struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
+{
+ unsigned char *workbuf, *wp;
+ unsigned long bufsz, space_avail;
+ struct cmsghdr *ucmsg;
+ long err;
+
+ bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
+ space_avail = kmsg->msg_controllen + bufsz;
+ wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
+ if (workbuf == NULL)
+ goto fail;
+
+ /* To make this more sane we assume the kernel sends back properly
+ * formatted control messages. Because of how the kernel will truncate
+ * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
+ */
+ ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
+ while (((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
+ struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
+ int clen64, clen32;
+
+ /*
+ * UCMSG is the 64-bit format CMSG entry in user-space. KCMSG32 is within
+ * the kernel space temporary buffer we use to convert into a 32-bit style
+ * CMSG.
+ */
+ err = get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
+ err |= get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
+ err |= get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
+ if (err)
+ goto fail2;
+
+ clen64 = kcmsg32->cmsg_len;
+ copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
+ clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
+ clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
+ CMSG32_ALIGN(sizeof(struct cmsghdr32)));
+ kcmsg32->cmsg_len = clen32;
+
+ ucmsg = (struct cmsghdr *) (((char *)ucmsg) + CMSG_ALIGN(clen64));
+ wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
+ }
+
+ /* Copy back fixed up data, and adjust pointers. */
+ bufsz = (wp - workbuf);
+ if (copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz))
+ goto fail2;
+
+ kmsg->msg_control = (struct cmsghdr *) (((char *)orig_cmsg_uptr) + bufsz);
+ kmsg->msg_controllen = space_avail - bufsz;
+ kfree(workbuf);
+ return;
+
+ fail2:
+ kfree(workbuf);
+ fail:
+ /*
+ * If we leave the 64-bit format CMSG chunks in there, the application could get
+ * confused and crash. So to ensure greater recovery, we report no CMSGs.
+ */
+ kmsg->msg_controllen += bufsz;
+ kmsg->msg_control = (void *) orig_cmsg_uptr;
+}
+
+static inline void
+sockfd_put (struct socket *sock)
{
fput(sock->file);
}
@@ -1234,13 +1682,14 @@
24 for IPv6,
about 80 for AX.25 */
-extern struct socket *sockfd_lookup(int fd, int *err);
+extern struct socket *sockfd_lookup (int fd, int *err);
/*
* BSD sendmsg interface
*/
-int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
+int
+sys32_sendmsg (int fd, struct msghdr32 *msg, unsigned flags)
{
struct socket *sock;
char address[MAX_SOCK_ADDR];
@@ -1248,10 +1697,11 @@
unsigned char ctl[sizeof(struct cmsghdr) + 20]; /* 20 is size of ipv6_pktinfo */
unsigned char *ctl_buf = ctl;
struct msghdr msg_sys;
- int err, ctl_len, iov_size, total_len;
+ int err, iov_size, total_len;
+ size_t ctl_len;
err = -EFAULT;
- if (shape_msg(&msg_sys, msg))
+ if (get_msghdr32(&msg_sys, msg))
goto out;
sock = sockfd_lookup(fd, &err);
@@ -1282,20 +1732,12 @@
if (msg_sys.msg_controllen > INT_MAX)
goto out_freeiov;
- ctl_len = msg_sys.msg_controllen;
- if (ctl_len)
- {
- if (ctl_len > sizeof(ctl))
- {
- err = -ENOBUFS;
- ctl_buf = sock_kmalloc(sock->sk, ctl_len, GFP_KERNEL);
- if (ctl_buf == NULL)
- goto out_freeiov;
- }
- err = -EFAULT;
- if (copy_from_user(ctl_buf, msg_sys.msg_control, ctl_len))
- goto out_freectl;
- msg_sys.msg_control = ctl_buf;
+ if (msg_sys.msg_controllen) {
+ ctl_len = sizeof(ctl);
+ err = get_cmsghdr32(&msg_sys, ctl_buf, sock->sk, &ctl_len);
+ if (err)
+ goto out_freeiov;
+ ctl_buf = msg_sys.msg_control;
}
msg_sys.msg_flags = flags;
@@ -1303,7 +1745,6 @@
msg_sys.msg_flags |= MSG_DONTWAIT;
err = sock_sendmsg(sock, &msg_sys, total_len);
-out_freectl:
if (ctl_buf != ctl)
sock_kfree_s(sock->sk, ctl_buf, ctl_len);
out_freeiov:
@@ -1328,6 +1769,7 @@
struct msghdr msg_sys;
unsigned long cmsg_ptr;
int err, iov_size, total_len, len;
+ struct scm_cookie scm;
/* kernel mode address */
char addr[MAX_SOCK_ADDR];
@@ -1336,8 +1778,8 @@
struct sockaddr *uaddr;
int *uaddr_len;
- err=-EFAULT;
- if (shape_msg(&msg_sys, msg))
+ err = -EFAULT;
+ if (get_msghdr32(&msg_sys, msg))
goto out;
sock = sockfd_lookup(fd, &err);
@@ -1374,13 +1816,42 @@
if (sock->file->f_flags & O_NONBLOCK)
flags |= MSG_DONTWAIT;
- err = sock_recvmsg(sock, &msg_sys, total_len, flags);
- if (err < 0)
- goto out_freeiov;
- len = err;
- if (uaddr != NULL) {
- err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
+ memset(&scm, 0, sizeof(scm));
+
+ lock_kernel();
+ {
+ err = sock->ops->recvmsg(sock, &msg_sys, total_len, flags, &scm);
+ if (err < 0)
+ goto out_unlock_freeiov;
+
+ len = err;
+ if (!msg_sys.msg_control) {
+ if (sock->passcred || scm.fp)
+ msg_sys.msg_flags |= MSG_CTRUNC;
+ if (scm.fp)
+ __scm_destroy(&scm);
+ } else {
+ /*
+ * If recvmsg processing itself placed some control messages into
+ * user space, it is using 64-bit CMSG processing, so we need to
+ * fix it up before we tack on more stuff.
+ */
+ if ((unsigned long) msg_sys.msg_control != cmsg_ptr)
+ cmsg32_recvmsg_fixup(&msg_sys, cmsg_ptr);
+
+ /* Wheee... */
+ if (sock->passcred)
+ put_cmsg32(&msg_sys, SOL_SOCKET, SCM_CREDENTIALS,
+ sizeof(scm.creds), &scm.creds);
+ if (scm.fp != NULL)
+ scm_detach_fds32(&msg_sys, &scm);
+ }
+ }
+ unlock_kernel();
+
+ if (uaddr != NULL) {
+ err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
if (err < 0)
goto out_freeiov;
}
@@ -1393,20 +1864,23 @@
goto out_freeiov;
err = len;
-out_freeiov:
+ out_freeiov:
if (iov != iovstack)
sock_kfree_s(sock->sk, iov, iov_size);
-out_put:
+ out_put:
sockfd_put(sock);
-out:
+ out:
return err;
+
+ out_unlock_freeiov:
+ unlock_kernel();
+ goto out_freeiov;
}
/* Argument list sizes for sys_socketcall */
#define AL(x) ((x) * sizeof(u32))
-static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
- AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
- AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
+static const unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
+ AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
+ AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
#undef AL
extern asmlinkage long sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
@@ -1435,7 +1909,8 @@
extern asmlinkage long sys_shutdown(int fd, int how);
extern asmlinkage long sys_listen(int fd, int backlog);
-asmlinkage long sys32_socketcall(int call, u32 *args)
+asmlinkage long
+sys32_socketcall (int call, u32 *args)
{
int ret;
u32 a[6];
@@ -1463,16 +1938,13 @@
ret = sys_listen(a0, a1);
break;
case SYS_ACCEPT:
- ret = sys_accept(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_accept(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_GETSOCKNAME:
- ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_getsockname(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_GETPEERNAME:
- ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
- (int *)A(a[2]));
+ ret = sys_getpeername(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
break;
case SYS_SOCKETPAIR:
ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
@@ -1500,12 +1972,10 @@
ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
break;
case SYS_SENDMSG:
- ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
- a[2]);
+ ret = sys32_sendmsg(a0, (struct msghdr32 *) A(a1), a[2]);
break;
case SYS_RECVMSG:
- ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
- a[2]);
+ ret = sys32_recvmsg(a0, (struct msghdr32 *) A(a1), a[2]);
break;
default:
ret = -EINVAL;
@@ -1522,15 +1992,28 @@
struct msgbuf32 { s32 mtype; char mtext[1]; };
-struct ipc_perm32
-{
- key_t key;
- __kernel_uid_t32 uid;
- __kernel_gid_t32 gid;
- __kernel_uid_t32 cuid;
- __kernel_gid_t32 cgid;
+struct ipc_perm32 {
+ key_t key;
+ __kernel_uid_t32 uid;
+ __kernel_gid_t32 gid;
+ __kernel_uid_t32 cuid;
+ __kernel_gid_t32 cgid;
+ __kernel_mode_t32 mode;
+ unsigned short seq;
+};
+
+struct ipc64_perm32 {
+ key_t key;
+ __kernel_uid32_t32 uid;
+ __kernel_gid32_t32 gid;
+ __kernel_uid32_t32 cuid;
+ __kernel_gid32_t32 cgid;
__kernel_mode_t32 mode;
- unsigned short seq;
+ unsigned short __pad1;
+ unsigned short seq;
+ unsigned short __pad2;
+ unsigned int unused1;
+ unsigned int unused2;
};
struct semid_ds32 {
@@ -1544,8 +2027,18 @@
unsigned short sem_nsems; /* no. of semaphores in array */
};
-struct msqid_ds32
-{
+struct semid64_ds32 {
+ struct ipc64_perm32 sem_perm;
+ __kernel_time_t32 sem_otime;
+ unsigned int __unused1;
+ __kernel_time_t32 sem_ctime;
+ unsigned int __unused2;
+ unsigned int sem_nsems;
+ unsigned int __unused3;
+ unsigned int __unused4;
+};
+
+struct msqid_ds32 {
struct ipc_perm32 msg_perm;
u32 msg_first;
u32 msg_last;
@@ -1561,110 +2054,206 @@
__kernel_ipc_pid_t32 msg_lrpid;
};
+struct msqid64_ds32 {
+ struct ipc64_perm32 msg_perm;
+ __kernel_time_t32 msg_stime;
+ unsigned int __unused1;
+ __kernel_time_t32 msg_rtime;
+ unsigned int __unused2;
+ __kernel_time_t32 msg_ctime;
+ unsigned int __unused3;
+ unsigned int msg_cbytes;
+ unsigned int msg_qnum;
+ unsigned int msg_qbytes;
+ __kernel_pid_t32 msg_lspid;
+ __kernel_pid_t32 msg_lrpid;
+ unsigned int __unused4;
+ unsigned int __unused5;
+};
+
struct shmid_ds32 {
- struct ipc_perm32 shm_perm;
- int shm_segsz;
- __kernel_time_t32 shm_atime;
- __kernel_time_t32 shm_dtime;
- __kernel_time_t32 shm_ctime;
- __kernel_ipc_pid_t32 shm_cpid;
- __kernel_ipc_pid_t32 shm_lpid;
- unsigned short shm_nattch;
+ struct ipc_perm32 shm_perm;
+ int shm_segsz;
+ __kernel_time_t32 shm_atime;
+ __kernel_time_t32 shm_dtime;
+ __kernel_time_t32 shm_ctime;
+ __kernel_ipc_pid_t32 shm_cpid;
+ __kernel_ipc_pid_t32 shm_lpid;
+ unsigned short shm_nattch;
+};
+
+struct shmid64_ds32 {
+ struct ipc64_perm32 shm_perm;
+ __kernel_size_t32 shm_segsz;
+ __kernel_time_t32 shm_atime;
+ unsigned int __unused1;
+ __kernel_time_t32 shm_dtime;
+ unsigned int __unused2;
+ __kernel_time_t32 shm_ctime;
+ unsigned int __unused3;
+ __kernel_pid_t32 shm_cpid;
+ __kernel_pid_t32 shm_lpid;
+ unsigned int shm_nattch;
+ unsigned int __unused4;
+ unsigned int __unused5;
+};
+
+struct shminfo64_32 {
+ unsigned int shmmax;
+ unsigned int shmmin;
+ unsigned int shmmni;
+ unsigned int shmseg;
+ unsigned int shmall;
+ unsigned int __unused1;
+ unsigned int __unused2;
+ unsigned int __unused3;
+ unsigned int __unused4;
};
+struct shm_info32 {
+ int used_ids;
+ u32 shm_tot, shm_rss, shm_swp;
+ u32 swap_attempts, swap_successes;
+};
+
+struct ipc_kludge {
+ struct msgbuf *msgp;
+ long msgtyp;
+};
+
+#define SEMOP 1
+#define SEMGET 2
+#define SEMCTL 3
+#define MSGSND 11
+#define MSGRCV 12
+#define MSGGET 13
+#define MSGCTL 14
+#define SHMAT 21
+#define SHMDT 22
+#define SHMGET 23
+#define SHMCTL 24
+
#define IPCOP_MASK(__x) (1UL << (__x))
static int
-do_sys32_semctl(int first, int second, int third, void *uptr)
+ipc_parse_version32 (int *cmd)
+{
+ if (*cmd & IPC_64) {
+ *cmd ^= IPC_64;
+ return IPC_64;
+ } else {
+ return IPC_OLD;
+ }
+}
+
+static int
+semctl32 (int first, int second, int third, void *uptr)
{
union semun fourth;
u32 pad;
int err = 0, err2;
struct semid64_ds s;
- struct semid_ds32 *usp;
mm_segment_t old_fs;
+ int version = ipc_parse_version32(&third);
if (!uptr)
return -EINVAL;
if (get_user(pad, (u32 *)uptr))
return -EFAULT;
- if(third == SETVAL)
+ if (third == SETVAL)
fourth.val = (int)pad;
else
fourth.__pad = (void *)A(pad);
switch (third) {
-
- case IPC_INFO:
- case IPC_RMID:
- case IPC_SET:
- case SEM_INFO:
- case GETVAL:
- case GETPID:
- case GETNCNT:
- case GETZCNT:
- case GETALL:
- case SETVAL:
- case SETALL:
- err = sys_semctl (first, second, third, fourth);
+ case IPC_INFO:
+ case IPC_RMID:
+ case IPC_SET:
+ case SEM_INFO:
+ case GETVAL:
+ case GETPID:
+ case GETNCNT:
+ case GETZCNT:
+ case GETALL:
+ case SETVAL:
+ case SETALL:
+ err = sys_semctl(first, second, third, fourth);
break;
- case IPC_STAT:
- case SEM_STAT:
- usp = (struct semid_ds32 *)A(pad);
+ case IPC_STAT:
+ case SEM_STAT:
fourth.__pad = &s;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_semctl (first, second, third, fourth);
- set_fs (old_fs);
- err2 = put_user(s.sem_perm.key, &usp->sem_perm.key);
- err2 |= __put_user(s.sem_perm.uid, &usp->sem_perm.uid);
- err2 |= __put_user(s.sem_perm.gid, &usp->sem_perm.gid);
- err2 |= __put_user(s.sem_perm.cuid,
- &usp->sem_perm.cuid);
- err2 |= __put_user (s.sem_perm.cgid,
- &usp->sem_perm.cgid);
- err2 |= __put_user (s.sem_perm.mode,
- &usp->sem_perm.mode);
- err2 |= __put_user (s.sem_perm.seq, &usp->sem_perm.seq);
- err2 |= __put_user (s.sem_otime, &usp->sem_otime);
- err2 |= __put_user (s.sem_ctime, &usp->sem_ctime);
- err2 |= __put_user (s.sem_nsems, &usp->sem_nsems);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_semctl(first, second, third, fourth);
+ set_fs(old_fs);
+
+ if (version == IPC_64) {
+ struct semid64_ds32 *usp64 = (struct semid64_ds32 *) A(pad);
+
+ if (!access_ok(VERIFY_WRITE, usp64, sizeof(*usp64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s.sem_perm.key, &usp64->sem_perm.key);
+ err2 |= __put_user(s.sem_perm.uid, &usp64->sem_perm.uid);
+ err2 |= __put_user(s.sem_perm.gid, &usp64->sem_perm.gid);
+ err2 |= __put_user(s.sem_perm.cuid, &usp64->sem_perm.cuid);
+ err2 |= __put_user(s.sem_perm.cgid, &usp64->sem_perm.cgid);
+ err2 |= __put_user(s.sem_perm.mode, &usp64->sem_perm.mode);
+ err2 |= __put_user(s.sem_perm.seq, &usp64->sem_perm.seq);
+ err2 |= __put_user(s.sem_otime, &usp64->sem_otime);
+ err2 |= __put_user(s.sem_ctime, &usp64->sem_ctime);
+ err2 |= __put_user(s.sem_nsems, &usp64->sem_nsems);
+ } else {
+ struct semid_ds32 *usp32 = (struct semid_ds32 *) A(pad);
+
+ if (!access_ok(VERIFY_WRITE, usp32, sizeof(*usp32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s.sem_perm.key, &usp32->sem_perm.key);
+ err2 |= __put_user(s.sem_perm.uid, &usp32->sem_perm.uid);
+ err2 |= __put_user(s.sem_perm.gid, &usp32->sem_perm.gid);
+ err2 |= __put_user(s.sem_perm.cuid, &usp32->sem_perm.cuid);
+ err2 |= __put_user(s.sem_perm.cgid, &usp32->sem_perm.cgid);
+ err2 |= __put_user(s.sem_perm.mode, &usp32->sem_perm.mode);
+ err2 |= __put_user(s.sem_perm.seq, &usp32->sem_perm.seq);
+ err2 |= __put_user(s.sem_otime, &usp32->sem_otime);
+ err2 |= __put_user(s.sem_ctime, &usp32->sem_ctime);
+ err2 |= __put_user(s.sem_nsems, &usp32->sem_nsems);
+ }
if (err2)
- err = -EFAULT;
+ err = -EFAULT;
break;
-
}
-
return err;
}
static int
do_sys32_msgsnd (int first, int second, int third, void *uptr)
{
- struct msgbuf *p = kmalloc (second + sizeof (struct msgbuf)
- + 4, GFP_USER);
+ struct msgbuf *p = kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER);
struct msgbuf32 *up = (struct msgbuf32 *)uptr;
mm_segment_t old_fs;
int err;
if (!p)
return -ENOMEM;
- err = get_user (p->mtype, &up->mtype);
- err |= __copy_from_user (p->mtext, &up->mtext, second);
+ err = get_user(p->mtype, &up->mtype);
+ err |= copy_from_user(p->mtext, &up->mtext, second);
if (err)
goto out;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgsnd (first, p, second, third);
- set_fs (old_fs);
-out:
- kfree (p);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgsnd(first, p, second, third);
+ set_fs(old_fs);
+ out:
+ kfree(p);
return err;
}
static int
-do_sys32_msgrcv (int first, int second, int msgtyp, int third,
- int version, void *uptr)
+do_sys32_msgrcv (int first, int second, int msgtyp, int third, int version, void *uptr)
{
struct msgbuf32 *up;
struct msgbuf *p;
@@ -1679,185 +2268,281 @@
if (!uptr)
goto out;
err = -EFAULT;
- if (copy_from_user (&ipck, uipck, sizeof (struct ipc_kludge)))
+ if (copy_from_user(&ipck, uipck, sizeof(struct ipc_kludge)))
goto out;
uptr = (void *)A(ipck.msgp);
msgtyp = ipck.msgtyp;
}
err = -ENOMEM;
- p = kmalloc (second + sizeof (struct msgbuf) + 4, GFP_USER);
+ p = kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER);
if (!p)
goto out;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgrcv (first, p, second + 4, msgtyp, third);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgrcv(first, p, second + 4, msgtyp, third);
+ set_fs(old_fs);
if (err < 0)
goto free_then_out;
up = (struct msgbuf32 *)uptr;
- if (put_user (p->mtype, &up->mtype) ||
- __copy_to_user (&up->mtext, p->mtext, err))
+ if (put_user(p->mtype, &up->mtype) || copy_to_user(&up->mtext, p->mtext, err))
err = -EFAULT;
free_then_out:
- kfree (p);
+ kfree(p);
out:
return err;
}
static int
-do_sys32_msgctl (int first, int second, void *uptr)
+msgctl32 (int first, int second, void *uptr)
{
int err = -EINVAL, err2;
struct msqid_ds m;
struct msqid64_ds m64;
- struct msqid_ds32 *up = (struct msqid_ds32 *)uptr;
+ struct msqid_ds32 *up32 = (struct msqid_ds32 *)uptr;
+ struct msqid64_ds32 *up64 = (struct msqid64_ds32 *)uptr;
mm_segment_t old_fs;
+ int version = ipc_parse_version32(&second);
switch (second) {
-
- case IPC_INFO:
- case IPC_RMID:
- case MSG_INFO:
- err = sys_msgctl (first, second, (struct msqid_ds *)uptr);
- break;
-
- case IPC_SET:
- err = get_user (m.msg_perm.uid, &up->msg_perm.uid);
- err |= __get_user (m.msg_perm.gid, &up->msg_perm.gid);
- err |= __get_user (m.msg_perm.mode, &up->msg_perm.mode);
- err |= __get_user (m.msg_qbytes, &up->msg_qbytes);
+ case IPC_INFO:
+ case IPC_RMID:
+ case MSG_INFO:
+ err = sys_msgctl(first, second, (struct msqid_ds *)uptr);
+ break;
+
+ case IPC_SET:
+ if (version == IPC_64) {
+ err = get_user(m.msg_perm.uid, &up64->msg_perm.uid);
+ err |= get_user(m.msg_perm.gid, &up64->msg_perm.gid);
+ err |= get_user(m.msg_perm.mode, &up64->msg_perm.mode);
+ err |= get_user(m.msg_qbytes, &up64->msg_qbytes);
+ } else {
+ err = get_user(m.msg_perm.uid, &up32->msg_perm.uid);
+ err |= get_user(m.msg_perm.gid, &up32->msg_perm.gid);
+ err |= get_user(m.msg_perm.mode, &up32->msg_perm.mode);
+ err |= get_user(m.msg_qbytes, &up32->msg_qbytes);
+ }
if (err)
break;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgctl (first, second, &m);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgctl(first, second, &m);
+ set_fs(old_fs);
break;
- case IPC_STAT:
- case MSG_STAT:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_msgctl (first, second, (void *) &m64);
- set_fs (old_fs);
- err2 = put_user (m64.msg_perm.key, &up->msg_perm.key);
- err2 |= __put_user(m64.msg_perm.uid, &up->msg_perm.uid);
- err2 |= __put_user(m64.msg_perm.gid, &up->msg_perm.gid);
- err2 |= __put_user(m64.msg_perm.cuid, &up->msg_perm.cuid);
- err2 |= __put_user(m64.msg_perm.cgid, &up->msg_perm.cgid);
- err2 |= __put_user(m64.msg_perm.mode, &up->msg_perm.mode);
- err2 |= __put_user(m64.msg_perm.seq, &up->msg_perm.seq);
- err2 |= __put_user(m64.msg_stime, &up->msg_stime);
- err2 |= __put_user(m64.msg_rtime, &up->msg_rtime);
- err2 |= __put_user(m64.msg_ctime, &up->msg_ctime);
- err2 |= __put_user(m64.msg_cbytes, &up->msg_cbytes);
- err2 |= __put_user(m64.msg_qnum, &up->msg_qnum);
- err2 |= __put_user(m64.msg_qbytes, &up->msg_qbytes);
- err2 |= __put_user(m64.msg_lspid, &up->msg_lspid);
- err2 |= __put_user(m64.msg_lrpid, &up->msg_lrpid);
- if (err2)
- err = -EFAULT;
- break;
+ case IPC_STAT:
+ case MSG_STAT:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_msgctl(first, second, (void *) &m64);
+ set_fs(old_fs);
+ if (version == IPC_64) {
+ if (!access_ok(VERIFY_WRITE, up64, sizeof(*up64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(m64.msg_perm.key, &up64->msg_perm.key);
+ err2 |= __put_user(m64.msg_perm.uid, &up64->msg_perm.uid);
+ err2 |= __put_user(m64.msg_perm.gid, &up64->msg_perm.gid);
+ err2 |= __put_user(m64.msg_perm.cuid, &up64->msg_perm.cuid);
+ err2 |= __put_user(m64.msg_perm.cgid, &up64->msg_perm.cgid);
+ err2 |= __put_user(m64.msg_perm.mode, &up64->msg_perm.mode);
+ err2 |= __put_user(m64.msg_perm.seq, &up64->msg_perm.seq);
+ err2 |= __put_user(m64.msg_stime, &up64->msg_stime);
+ err2 |= __put_user(m64.msg_rtime, &up64->msg_rtime);
+ err2 |= __put_user(m64.msg_ctime, &up64->msg_ctime);
+ err2 |= __put_user(m64.msg_cbytes, &up64->msg_cbytes);
+ err2 |= __put_user(m64.msg_qnum, &up64->msg_qnum);
+ err2 |= __put_user(m64.msg_qbytes, &up64->msg_qbytes);
+ err2 |= __put_user(m64.msg_lspid, &up64->msg_lspid);
+ err2 |= __put_user(m64.msg_lrpid, &up64->msg_lrpid);
+ if (err2)
+ err = -EFAULT;
+ } else {
+ if (!access_ok(VERIFY_WRITE, up32, sizeof(*up32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(m64.msg_perm.key, &up32->msg_perm.key);
+ err2 |= __put_user(m64.msg_perm.uid, &up32->msg_perm.uid);
+ err2 |= __put_user(m64.msg_perm.gid, &up32->msg_perm.gid);
+ err2 |= __put_user(m64.msg_perm.cuid, &up32->msg_perm.cuid);
+ err2 |= __put_user(m64.msg_perm.cgid, &up32->msg_perm.cgid);
+ err2 |= __put_user(m64.msg_perm.mode, &up32->msg_perm.mode);
+ err2 |= __put_user(m64.msg_perm.seq, &up32->msg_perm.seq);
+ err2 |= __put_user(m64.msg_stime, &up32->msg_stime);
+ err2 |= __put_user(m64.msg_rtime, &up32->msg_rtime);
+ err2 |= __put_user(m64.msg_ctime, &up32->msg_ctime);
+ err2 |= __put_user(m64.msg_cbytes, &up32->msg_cbytes);
+ err2 |= __put_user(m64.msg_qnum, &up32->msg_qnum);
+ err2 |= __put_user(m64.msg_qbytes, &up32->msg_qbytes);
+ err2 |= __put_user(m64.msg_lspid, &up32->msg_lspid);
+ err2 |= __put_user(m64.msg_lrpid, &up32->msg_lrpid);
+ if (err2)
+ err = -EFAULT;
+ }
+ break;
}
-
return err;
}
static int
-do_sys32_shmat (int first, int second, int third, int version, void *uptr)
+shmat32 (int first, int second, int third, int version, void *uptr)
{
unsigned long raddr;
u32 *uaddr = (u32 *)A((u32)third);
int err;
if (version == 1)
- return -EINVAL;
- err = sys_shmat (first, uptr, second, &raddr);
+ return -EINVAL; /* iBCS2 emulator entry point: unsupported */
+ err = sys_shmat(first, uptr, second, &raddr);
if (err)
return err;
return put_user(raddr, uaddr);
}
static int
-do_sys32_shmctl (int first, int second, void *uptr)
+shmctl32 (int first, int second, void *uptr)
{
int err = -EFAULT, err2;
struct shmid_ds s;
struct shmid64_ds s64;
- struct shmid_ds32 *up = (struct shmid_ds32 *)uptr;
+ struct shmid_ds32 *up32 = (struct shmid_ds32 *)uptr;
+ struct shmid64_ds32 *up64 = (struct shmid64_ds32 *)uptr;
mm_segment_t old_fs;
- struct shm_info32 {
- int used_ids;
- u32 shm_tot, shm_rss, shm_swp;
- u32 swap_attempts, swap_successes;
- } *uip = (struct shm_info32 *)uptr;
+ struct shm_info32 *uip = (struct shm_info32 *)uptr;
struct shm_info si;
+ int version = ipc_parse_version32(&second);
+ struct shminfo64 smi;
+ struct shminfo *usi32 = (struct shminfo *) uptr;
+ struct shminfo64_32 *usi64 = (struct shminfo64_32 *) uptr;
switch (second) {
+ case IPC_INFO:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (struct shmid_ds *)&smi);
+ set_fs(old_fs);
+
+ if (version == IPC_64) {
+ if (!access_ok(VERIFY_WRITE, usi64, sizeof(*usi64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(smi.shmmax, &usi64->shmmax);
+ err2 |= __put_user(smi.shmmin, &usi64->shmmin);
+ err2 |= __put_user(smi.shmmni, &usi64->shmmni);
+ err2 |= __put_user(smi.shmseg, &usi64->shmseg);
+ err2 |= __put_user(smi.shmall, &usi64->shmall);
+ } else {
+ if (!access_ok(VERIFY_WRITE, usi32, sizeof(*usi32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(smi.shmmax, &usi32->shmmax);
+ err2 |= __put_user(smi.shmmin, &usi32->shmmin);
+ err2 |= __put_user(smi.shmmni, &usi32->shmmni);
+ err2 |= __put_user(smi.shmseg, &usi32->shmseg);
+ err2 |= __put_user(smi.shmall, &usi32->shmall);
+ }
+ if (err2)
+ err = -EFAULT;
+ break;
- case IPC_INFO:
- case IPC_RMID:
- case SHM_LOCK:
- case SHM_UNLOCK:
- err = sys_shmctl (first, second, (struct shmid_ds *)uptr);
+ case IPC_RMID:
+ case SHM_LOCK:
+ case SHM_UNLOCK:
+ err = sys_shmctl(first, second, (struct shmid_ds *)uptr);
break;
- case IPC_SET:
- err = get_user (s.shm_perm.uid, &up->shm_perm.uid);
- err |= __get_user (s.shm_perm.gid, &up->shm_perm.gid);
- err |= __get_user (s.shm_perm.mode, &up->shm_perm.mode);
+
+ case IPC_SET:
+ if (version == IPC_64) {
+ err = get_user(s.shm_perm.uid, &up64->shm_perm.uid);
+ err |= get_user(s.shm_perm.gid, &up64->shm_perm.gid);
+ err |= get_user(s.shm_perm.mode, &up64->shm_perm.mode);
+ } else {
+ err = get_user(s.shm_perm.uid, &up32->shm_perm.uid);
+ err |= get_user(s.shm_perm.gid, &up32->shm_perm.gid);
+ err |= get_user(s.shm_perm.mode, &up32->shm_perm.mode);
+ }
if (err)
break;
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, &s);
- set_fs (old_fs);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, &s);
+ set_fs(old_fs);
break;
- case IPC_STAT:
- case SHM_STAT:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, (void *) &s64);
- set_fs (old_fs);
+ case IPC_STAT:
+ case SHM_STAT:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (void *) &s64);
+ set_fs(old_fs);
if (err < 0)
break;
- err2 = put_user (s64.shm_perm.key, &up->shm_perm.key);
- err2 |= __put_user (s64.shm_perm.uid, &up->shm_perm.uid);
- err2 |= __put_user (s64.shm_perm.gid, &up->shm_perm.gid);
- err2 |= __put_user (s64.shm_perm.cuid,
- &up->shm_perm.cuid);
- err2 |= __put_user (s64.shm_perm.cgid,
- &up->shm_perm.cgid);
- err2 |= __put_user (s64.shm_perm.mode,
- &up->shm_perm.mode);
- err2 |= __put_user (s64.shm_perm.seq, &up->shm_perm.seq);
- err2 |= __put_user (s64.shm_atime, &up->shm_atime);
- err2 |= __put_user (s64.shm_dtime, &up->shm_dtime);
- err2 |= __put_user (s64.shm_ctime, &up->shm_ctime);
- err2 |= __put_user (s64.shm_segsz, &up->shm_segsz);
- err2 |= __put_user (s64.shm_nattch, &up->shm_nattch);
- err2 |= __put_user (s64.shm_cpid, &up->shm_cpid);
- err2 |= __put_user (s64.shm_lpid, &up->shm_lpid);
+ if (version == IPC_64) {
+ if (!access_ok(VERIFY_WRITE, up64, sizeof(*up64))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s64.shm_perm.key, &up64->shm_perm.key);
+ err2 |= __put_user(s64.shm_perm.uid, &up64->shm_perm.uid);
+ err2 |= __put_user(s64.shm_perm.gid, &up64->shm_perm.gid);
+ err2 |= __put_user(s64.shm_perm.cuid, &up64->shm_perm.cuid);
+ err2 |= __put_user(s64.shm_perm.cgid, &up64->shm_perm.cgid);
+ err2 |= __put_user(s64.shm_perm.mode, &up64->shm_perm.mode);
+ err2 |= __put_user(s64.shm_perm.seq, &up64->shm_perm.seq);
+ err2 |= __put_user(s64.shm_atime, &up64->shm_atime);
+ err2 |= __put_user(s64.shm_dtime, &up64->shm_dtime);
+ err2 |= __put_user(s64.shm_ctime, &up64->shm_ctime);
+ err2 |= __put_user(s64.shm_segsz, &up64->shm_segsz);
+ err2 |= __put_user(s64.shm_nattch, &up64->shm_nattch);
+ err2 |= __put_user(s64.shm_cpid, &up64->shm_cpid);
+ err2 |= __put_user(s64.shm_lpid, &up64->shm_lpid);
+ } else {
+ if (!access_ok(VERIFY_WRITE, up32, sizeof(*up32))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(s64.shm_perm.key, &up32->shm_perm.key);
+ err2 |= __put_user(s64.shm_perm.uid, &up32->shm_perm.uid);
+ err2 |= __put_user(s64.shm_perm.gid, &up32->shm_perm.gid);
+ err2 |= __put_user(s64.shm_perm.cuid, &up32->shm_perm.cuid);
+ err2 |= __put_user(s64.shm_perm.cgid, &up32->shm_perm.cgid);
+ err2 |= __put_user(s64.shm_perm.mode, &up32->shm_perm.mode);
+ err2 |= __put_user(s64.shm_perm.seq, &up32->shm_perm.seq);
+ err2 |= __put_user(s64.shm_atime, &up32->shm_atime);
+ err2 |= __put_user(s64.shm_dtime, &up32->shm_dtime);
+ err2 |= __put_user(s64.shm_ctime, &up32->shm_ctime);
+ err2 |= __put_user(s64.shm_segsz, &up32->shm_segsz);
+ err2 |= __put_user(s64.shm_nattch, &up32->shm_nattch);
+ err2 |= __put_user(s64.shm_cpid, &up32->shm_cpid);
+ err2 |= __put_user(s64.shm_lpid, &up32->shm_lpid);
+ }
if (err2)
err = -EFAULT;
break;
- case SHM_INFO:
- old_fs = get_fs ();
- set_fs (KERNEL_DS);
- err = sys_shmctl (first, second, (void *)&si);
- set_fs (old_fs);
+ case SHM_INFO:
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_shmctl(first, second, (void *)&si);
+ set_fs(old_fs);
if (err < 0)
break;
- err2 = put_user (si.used_ids, &uip->used_ids);
- err2 |= __put_user (si.shm_tot, &uip->shm_tot);
- err2 |= __put_user (si.shm_rss, &uip->shm_rss);
- err2 |= __put_user (si.shm_swp, &uip->shm_swp);
- err2 |= __put_user (si.swap_attempts,
- &uip->swap_attempts);
- err2 |= __put_user (si.swap_successes,
- &uip->swap_successes);
+
+ if (!access_ok(VERIFY_WRITE, uip, sizeof(*uip))) {
+ err = -EFAULT;
+ break;
+ }
+ err2 = __put_user(si.used_ids, &uip->used_ids);
+ err2 |= __put_user(si.shm_tot, &uip->shm_tot);
+ err2 |= __put_user(si.shm_rss, &uip->shm_rss);
+ err2 |= __put_user(si.shm_swp, &uip->shm_swp);
+ err2 |= __put_user(si.swap_attempts, &uip->swap_attempts);
+ err2 |= __put_user(si.swap_successes, &uip->swap_successes);
if (err2)
err = -EFAULT;
break;
@@ -1869,59 +2554,42 @@
asmlinkage long
sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth)
{
- int version, err;
+ int version;
version = call >> 16; /* hack for backward compatibility */
call &= 0xffff;
switch (call) {
-
- case SEMOP:
+ case SEMOP:
/* struct sembuf is the same on 32 and 64bit :)) */
- err = sys_semop (first, (struct sembuf *)AA(ptr),
- second);
- break;
- case SEMGET:
- err = sys_semget (first, second, third);
- break;
- case SEMCTL:
- err = do_sys32_semctl (first, second, third,
- (void *)AA(ptr));
- break;
-
- case MSGSND:
- err = do_sys32_msgsnd (first, second, third,
- (void *)AA(ptr));
- break;
- case MSGRCV:
- err = do_sys32_msgrcv (first, second, fifth, third,
- version, (void *)AA(ptr));
- break;
- case MSGGET:
- err = sys_msgget ((key_t) first, second);
- break;
- case MSGCTL:
- err = do_sys32_msgctl (first, second, (void *)AA(ptr));
- break;
+ return sys_semop(first, (struct sembuf *)AA(ptr), second);
+ case SEMGET:
+ return sys_semget(first, second, third);
+ case SEMCTL:
+ return semctl32(first, second, third, (void *)AA(ptr));
+
+ case MSGSND:
+ return do_sys32_msgsnd(first, second, third, (void *)AA(ptr));
+ case MSGRCV:
+ return do_sys32_msgrcv(first, second, fifth, third, version, (void *)AA(ptr));
+ case MSGGET:
+ return sys_msgget((key_t) first, second);
+ case MSGCTL:
+ return msgctl32(first, second, (void *)AA(ptr));
+
+ case SHMAT:
+ return shmat32(first, second, third, version, (void *)AA(ptr));
+ break;
+ case SHMDT:
+ return sys_shmdt((char *)AA(ptr));
+ case SHMGET:
+ return sys_shmget(first, second, third);
+ case SHMCTL:
+ return shmctl32(first, second, (void *)AA(ptr));
- case SHMAT:
- err = do_sys32_shmat (first, second, third, version, (void *)AA(ptr));
- break;
- case SHMDT:
- err = sys_shmdt ((char *)AA(ptr));
- break;
- case SHMGET:
- err = sys_shmget (first, second, third);
- break;
- case SHMCTL:
- err = do_sys32_shmctl (first, second, (void *)AA(ptr));
- break;
- default:
- err = -EINVAL;
- break;
+ default:
+ return -EINVAL;
}
-
- return err;
}
/*
@@ -1929,7 +2597,8 @@
* sys_gettimeofday(). IA64 did this but i386 Linux did not
* so we have to implement this system call here.
*/
-asmlinkage long sys32_time(int * tloc)
+asmlinkage long
+sys32_time (int *tloc)
{
int i;
@@ -1937,7 +2606,7 @@
stuff it to user space. No side effects */
i = CURRENT_TIME;
if (tloc) {
- if (put_user(i,tloc))
+ if (put_user(i, tloc))
i = -EFAULT;
}
return i;
@@ -1967,7 +2636,10 @@
{
int err;
- err = put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec);
+ if (!access_ok(VERIFY_WRITE, ru, sizeof(*ru)))
+ return -EFAULT;
+
+ err = __put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec);
err |= __put_user (r->ru_utime.tv_usec, &ru->ru_utime.tv_usec);
err |= __put_user (r->ru_stime.tv_sec, &ru->ru_stime.tv_sec);
err |= __put_user (r->ru_stime.tv_usec, &ru->ru_stime.tv_usec);
@@ -1989,8 +2661,7 @@
}
asmlinkage long
-sys32_wait4(__kernel_pid_t32 pid, unsigned int *stat_addr, int options,
- struct rusage32 *ru)
+sys32_wait4 (int pid, unsigned int *stat_addr, int options, struct rusage32 *ru)
{
if (!ru)
return sys_wait4(pid, stat_addr, options, NULL);
@@ -2000,37 +2671,38 @@
unsigned int status;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_wait4(pid, stat_addr ? &status : NULL, options, &r);
- set_fs (old_fs);
- if (put_rusage (ru, &r)) return -EFAULT;
- if (stat_addr && put_user (status, stat_addr))
+ set_fs(old_fs);
+ if (put_rusage(ru, &r))
+ return -EFAULT;
+ if (stat_addr && put_user(status, stat_addr))
return -EFAULT;
return ret;
}
}
asmlinkage long
-sys32_waitpid(__kernel_pid_t32 pid, unsigned int *stat_addr, int options)
+sys32_waitpid (int pid, unsigned int *stat_addr, int options)
{
return sys32_wait4(pid, stat_addr, options, NULL);
}
-extern asmlinkage long
-sys_getrusage(int who, struct rusage *ru);
+extern asmlinkage long sys_getrusage (int who, struct rusage *ru);
asmlinkage long
-sys32_getrusage(int who, struct rusage32 *ru)
+sys32_getrusage (int who, struct rusage32 *ru)
{
struct rusage r;
int ret;
mm_segment_t old_fs = get_fs();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_getrusage(who, &r);
- set_fs (old_fs);
- if (put_rusage (ru, &r)) return -EFAULT;
+ set_fs(old_fs);
+ if (put_rusage (ru, &r))
+ return -EFAULT;
return ret;
}
@@ -2041,41 +2713,41 @@
__kernel_clock_t32 tms_cstime;
};
-extern asmlinkage long sys_times(struct tms * tbuf);
+extern asmlinkage long sys_times (struct tms * tbuf);
asmlinkage long
-sys32_times(struct tms32 *tbuf)
+sys32_times (struct tms32 *tbuf)
{
+ mm_segment_t old_fs = get_fs();
struct tms t;
long ret;
- mm_segment_t old_fs = get_fs ();
int err;
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
ret = sys_times(tbuf ? &t : NULL);
- set_fs (old_fs);
+ set_fs(old_fs);
if (tbuf) {
err = put_user (IA32_TICK(t.tms_utime), &tbuf->tms_utime);
- err |= __put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
- err |= __put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
- err |= __put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
+ err |= put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime);
+ err |= put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime);
+ err |= put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime);
if (err)
ret = -EFAULT;
}
return IA32_TICK(ret);
}
-unsigned int
+static unsigned int
ia32_peek (struct pt_regs *regs, struct task_struct *child, unsigned long addr, unsigned int *val)
{
size_t copied;
unsigned int ret;
copied = access_process_vm(child, addr, val, sizeof(*val), 0);
- return(copied != sizeof(ret) ? -EIO : 0);
+ return (copied != sizeof(ret)) ? -EIO : 0;
}
-unsigned int
+static unsigned int
ia32_poke (struct pt_regs *regs, struct task_struct *child, unsigned long addr, unsigned int val)
{
@@ -2105,135 +2777,87 @@
#define PT_UESP 15
#define PT_SS 16
-unsigned int
-getreg(struct task_struct *child, int regno)
+static unsigned int
+getreg (struct task_struct *child, int regno)
{
struct pt_regs *child_regs;
child_regs = ia64_task_regs(child);
switch (regno / sizeof(int)) {
-
- case PT_EBX:
- return(child_regs->r11);
- case PT_ECX:
- return(child_regs->r9);
- case PT_EDX:
- return(child_regs->r10);
- case PT_ESI:
- return(child_regs->r14);
- case PT_EDI:
- return(child_regs->r15);
- case PT_EBP:
- return(child_regs->r13);
- case PT_EAX:
- case PT_ORIG_EAX:
- return(child_regs->r8);
- case PT_EIP:
- return(child_regs->cr_iip);
- case PT_UESP:
- return(child_regs->r12);
- case PT_EFL:
- return(child->thread.eflag);
- case PT_DS:
- case PT_ES:
- case PT_FS:
- case PT_GS:
- case PT_SS:
- return((unsigned int)__USER_DS);
- case PT_CS:
- return((unsigned int)__USER_CS);
- default:
- printk(KERN_ERR "getregs:unknown register %d\n", regno);
+ case PT_EBX: return child_regs->r11;
+ case PT_ECX: return child_regs->r9;
+ case PT_EDX: return child_regs->r10;
+ case PT_ESI: return child_regs->r14;
+ case PT_EDI: return child_regs->r15;
+ case PT_EBP: return child_regs->r13;
+ case PT_EAX: return child_regs->r8;
+ case PT_ORIG_EAX: return child_regs->r1; /* see dispatch_to_ia32_handler() */
+ case PT_EIP: return child_regs->cr_iip;
+ case PT_UESP: return child_regs->r12;
+ case PT_EFL: return child->thread.eflag;
+ case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS:
+ return __USER_DS;
+ case PT_CS: return __USER_CS;
+ default:
+ printk(KERN_ERR "ia32.getreg(): unknown register %d\n", regno);
break;
-
}
- return(0);
+ return 0;
}
-void
-putreg(struct task_struct *child, int regno, unsigned int value)
+static void
+putreg (struct task_struct *child, int regno, unsigned int value)
{
struct pt_regs *child_regs;
child_regs = ia64_task_regs(child);
switch (regno / sizeof(int)) {
-
- case PT_EBX:
- child_regs->r11 = value;
- break;
- case PT_ECX:
- child_regs->r9 = value;
- break;
- case PT_EDX:
- child_regs->r10 = value;
- break;
- case PT_ESI:
- child_regs->r14 = value;
- break;
- case PT_EDI:
- child_regs->r15 = value;
- break;
- case PT_EBP:
- child_regs->r13 = value;
- break;
- case PT_EAX:
- case PT_ORIG_EAX:
- child_regs->r8 = value;
- break;
- case PT_EIP:
- child_regs->cr_iip = value;
- break;
- case PT_UESP:
- child_regs->r12 = value;
- break;
- case PT_EFL:
- child->thread.eflag = value;
- break;
- case PT_DS:
- case PT_ES:
- case PT_FS:
- case PT_GS:
- case PT_SS:
+ case PT_EBX: child_regs->r11 = value; break;
+ case PT_ECX: child_regs->r9 = value; break;
+ case PT_EDX: child_regs->r10 = value; break;
+ case PT_ESI: child_regs->r14 = value; break;
+ case PT_EDI: child_regs->r15 = value; break;
+ case PT_EBP: child_regs->r13 = value; break;
+ case PT_EAX: child_regs->r8 = value; break;
+ case PT_ORIG_EAX: child_regs->r1 = value; break;
+ case PT_EIP: child_regs->cr_iip = value; break;
+ case PT_UESP: child_regs->r12 = value; break;
+ case PT_EFL: child->thread.eflag = value; break;
+ case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS:
if (value != __USER_DS)
- printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
+ printk(KERN_ERR
+ "ia32.putreg: attempt to set invalid segment register %d = %x\n",
regno, value);
break;
- case PT_CS:
+ case PT_CS:
if (value != __USER_CS)
- printk(KERN_ERR "setregs:try to set invalid segment register %d = %x\n",
+ printk(KERN_ERR
"ia32.putreg: attempt to set invalid segment register %d = %x\n",
regno, value);
break;
- default:
- printk(KERN_ERR "getregs:unknown register %d\n", regno);
+ default:
+ printk(KERN_ERR "ia32.putreg: unknown register %d\n", regno);
break;
-
}
}
static inline void
-ia32f2ia64f(void *dst, void *src)
+ia32f2ia64f (void *dst, void *src)
{
-
- __asm__ ("ldfe f6=[%1] ;;\n\t"
- "stf.spill [%0]=f6"
- :
- : "r"(dst), "r"(src));
+ asm volatile ("ldfe f6=[%1];; stf.spill [%0]=f6" :: "r"(dst), "r"(src) : "memory");
return;
}
static inline void
-ia64f2ia32f(void *dst, void *src)
+ia64f2ia32f (void *dst, void *src)
{
-
- __asm__ ("ldf.fill f6=[%1] ;;\n\t"
- "stfe [%0]=f6"
- :
- : "r"(dst), "r"(src));
+ asm volatile ("ldf.fill f6=[%1];; stfe [%0]=f6" :: "r"(dst), "r"(src) : "memory");
return;
}
-void
-put_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp, int tos)
+static void
+put_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp,
+ int tos)
{
struct _fpreg_ia32 *f;
char buf[32];
@@ -2242,62 +2866,59 @@
if ((regno += tos) >= 8)
regno -= 8;
switch (regno) {
-
- case 0:
+ case 0:
ia64f2ia32f(f, &ptp->f8);
break;
- case 1:
+ case 1:
ia64f2ia32f(f, &ptp->f9);
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
+ case 2:
+ case 3:
+ case 4:
+ case 5:
+ case 6:
+ case 7:
ia64f2ia32f(f, &swp->f10 + (regno - 2));
break;
-
}
- __copy_to_user(reg, f, sizeof(*reg));
+ copy_to_user(reg, f, sizeof(*reg));
}
-void
-get_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp, int tos)
+static void
+get_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switch_stack *swp,
+ int tos)
{
if ((regno += tos) >= 8)
regno -= 8;
switch (regno) {
-
- case 0:
- __copy_from_user(&ptp->f8, reg, sizeof(*reg));
+ case 0:
+ copy_from_user(&ptp->f8, reg, sizeof(*reg));
break;
- case 1:
- __copy_from_user(&ptp->f9, reg, sizeof(*reg));
+ case 1:
+ copy_from_user(&ptp->f9, reg, sizeof(*reg));
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
- __copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg));
+ case 2:
+ case 3:
+ case 4:
+ case 5:
+ case 6:
+ case 7:
+ copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg));
break;
-
}
return;
}
-int
-save_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save)
+static int
+save_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save)
{
struct switch_stack *swp;
struct pt_regs *ptp;
int i, tos;
if (!access_ok(VERIFY_WRITE, save, sizeof(*save)))
- return(-EIO);
+ return -EIO;
__put_user(tsk->thread.fcr, &save->cw);
__put_user(tsk->thread.fsr, &save->sw);
__put_user(tsk->thread.fsr >> 32, &save->tag);
@@ -2313,11 +2934,11 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
put_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(0);
+ return 0;
}
-int
-restore_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save)
+static int
+restore_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save)
{
struct switch_stack *swp;
struct pt_regs *ptp;
@@ -2340,10 +2961,11 @@
tos = (tsk->thread.fsr >> 11) & 3;
for (i = 0; i < 8; i++)
get_fpreg(i, &save->_st[i], ptp, swp, tos);
- return(ret ? -EFAULT : 0);
+ return ret ? -EFAULT : 0;
}
-asmlinkage long sys_ptrace(long, pid_t, unsigned long, unsigned long, long, long, long, long, long);
+extern asmlinkage long sys_ptrace (long, pid_t, unsigned long, unsigned long, long, long, long,
+ long, long);
/*
* Note that the IA32 version of `ptrace' calls the IA64 routine for
@@ -2358,13 +2980,12 @@
{
struct pt_regs *regs = (struct pt_regs *) &stack;
struct task_struct *child;
+ unsigned int value, tmp;
long i, ret;
- unsigned int value;
lock_kernel();
if (request == PTRACE_TRACEME) {
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
goto out;
}
@@ -2379,8 +3000,7 @@
goto out;
if (request == PTRACE_ATTACH) {
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
goto out;
}
ret = -ESRCH;
@@ -2398,21 +3018,32 @@
case PTRACE_PEEKDATA: /* read word at location addr */
ret = ia32_peek(regs, child, addr, &value);
if (ret == 0)
- ret = put_user(value, (unsigned int *)A(data));
+ ret = put_user(value, (unsigned int *) A(data));
else
ret = -EIO;
goto out;
case PTRACE_POKETEXT:
case PTRACE_POKEDATA: /* write the word at location addr */
- ret = ia32_poke(regs, child, addr, (unsigned int)data);
+ ret = ia32_poke(regs, child, addr, data);
goto out;
case PTRACE_PEEKUSR: /* read word at addr in USER area */
- ret = 0;
+ ret = -EIO;
+ if ((addr & 3) || addr > 17*sizeof(int))
+ break;
+
+ tmp = getreg(child, addr);
+ if (!put_user(tmp, (unsigned int *) A(data)))
+ ret = 0;
break;
case PTRACE_POKEUSR: /* write word at addr in USER area */
+ ret = -EIO;
+ if ((addr & 3) || addr > 17*sizeof(int))
+ break;
+
+ putreg(child, addr, data);
ret = 0;
break;
@@ -2421,28 +3052,25 @@
ret = -EIO;
break;
}
- for ( i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
- __put_user(getreg(child, i), (unsigned int *) A(data));
+ for (i = 0; i < 17*sizeof(int); i += sizeof(int)) {
+ put_user(getreg(child, i), (unsigned int *) A(data));
data += sizeof(int);
}
ret = 0;
break;
case IA32_PTRACE_SETREGS:
- {
- unsigned int tmp;
if (!access_ok(VERIFY_READ, (int *) A(data), 17*sizeof(int))) {
ret = -EIO;
break;
}
- for ( i = 0; i < 17*sizeof(int); i += sizeof(int) ) {
- __get_user(tmp, (unsigned int *) A(data));
+ for (i = 0; i < 17*sizeof(int); i += sizeof(int)) {
+ get_user(tmp, (unsigned int *) A(data));
putreg(child, i, tmp);
data += sizeof(int);
}
ret = 0;
break;
- }
case IA32_PTRACE_GETFPREGS:
ret = save_ia32_fpstate(child, (struct _fpstate_ia32 *) A(data));
@@ -2457,10 +3085,8 @@
case PTRACE_KILL:
case PTRACE_SINGLESTEP: /* execute child for one instruction */
case PTRACE_DETACH: /* detach a process */
- unlock_kernel();
- ret = sys_ptrace(request, pid, addr, data,
- arg4, arg5, arg6, arg7, stack);
- return(ret);
+ ret = sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, stack);
+ break;
default:
ret = -EIO;
@@ -2477,7 +3103,10 @@
{
int err;
- err = get_user(kfl->l_type, &ufl->l_type);
+ if (!access_ok(VERIFY_READ, ufl, sizeof(*ufl)))
+ return -EFAULT;
+
+ err = __get_user(kfl->l_type, &ufl->l_type);
err |= __get_user(kfl->l_whence, &ufl->l_whence);
err |= __get_user(kfl->l_start, &ufl->l_start);
err |= __get_user(kfl->l_len, &ufl->l_len);
@@ -2490,6 +3119,9 @@
{
int err;
+ if (!access_ok(VERIFY_WRITE, ufl, sizeof(*ufl)))
+ return -EFAULT;
+
err = __put_user(kfl->l_type, &ufl->l_type);
err |= __put_user(kfl->l_whence, &ufl->l_whence);
err |= __put_user(kfl->l_start, &ufl->l_start);
@@ -2498,71 +3130,43 @@
return err;
}
-extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd,
- unsigned long arg);
+extern asmlinkage long sys_fcntl (unsigned int fd, unsigned int cmd, unsigned long arg);
asmlinkage long
-sys32_fcntl(unsigned int fd, unsigned int cmd, int arg)
+sys32_fcntl (unsigned int fd, unsigned int cmd, unsigned int arg)
{
- struct flock f;
mm_segment_t old_fs;
+ struct flock f;
long ret;
switch (cmd) {
- case F_GETLK:
- case F_SETLK:
- case F_SETLKW:
- if(get_flock32(&f, (struct flock32 *)((long)arg)))
+ case F_GETLK:
+ case F_SETLK:
+ case F_SETLKW:
+ if (get_flock32(&f, (struct flock32 *) A(arg)))
return -EFAULT;
old_fs = get_fs();
set_fs(KERNEL_DS);
- ret = sys_fcntl(fd, cmd, (unsigned long)&f);
+ ret = sys_fcntl(fd, cmd, (unsigned long) &f);
set_fs(old_fs);
- if(cmd == F_GETLK && put_flock32(&f, (struct flock32 *)((long)arg)))
+ if (cmd == F_GETLK && put_flock32(&f, (struct flock32 *) A(arg)))
return -EFAULT;
return ret;
- default:
+
+ default:
/*
* `sys_fcntl' lies about arg, for the F_SETOWN
* sub-function arg can have a negative value.
*/
- return sys_fcntl(fd, cmd, (unsigned long)((long)arg));
- }
-}
-
-asmlinkage long
-sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
-{
- struct k_sigaction new_ka, old_ka;
- int ret;
-
- if (act) {
- old_sigset32_t mask;
-
- ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
- ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
- ret |= __get_user(mask, &act->sa_mask);
- if (ret)
- return ret;
- siginitset(&new_ka.sa.sa_mask, mask);
- }
-
- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
-
- if (!ret && oact) {
- ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
- ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
- ret |= __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ return sys_fcntl(fd, cmd, arg);
}
-
- return ret;
}
asmlinkage long sys_ni_syscall(void);
asmlinkage long
-sys32_ni_syscall(int dummy0, int dummy1, int dummy2, int dummy3,
- int dummy4, int dummy5, int dummy6, int dummy7, int stack)
+sys32_ni_syscall (int dummy0, int dummy1, int dummy2, int dummy3, int dummy4, int dummy5,
+ int dummy6, int dummy7, int stack)
{
struct pt_regs *regs = (struct pt_regs *)&stack;
@@ -2577,7 +3181,7 @@
#define IOLEN ((65536 / 4) * 4096)
asmlinkage long
-sys_iopl (int level)
+sys32_iopl (int level)
{
extern unsigned long ia64_iobase;
int fd;
@@ -2612,9 +3216,8 @@
up_write(&current->mm->mmap_sem);
if (addr >= 0) {
- ia64_set_kr(IA64_KR_IO_BASE, addr);
old = (old & ~0x3000) | (level << 12);
- __asm__ __volatile__("mov ar.eflag=%0 ;;" :: "r"(old));
+ asm volatile ("mov ar.eflag=%0;;" :: "r"(old));
}
fput(file);
@@ -2623,7 +3226,7 @@
}
asmlinkage long
-sys_ioperm (unsigned int from, unsigned int num, int on)
+sys32_ioperm (unsigned int from, unsigned int num, int on)
{
/*
@@ -2636,7 +3239,7 @@
* XXX proper ioperm() support should be emulated by
* manipulating the page protections...
*/
- return sys_iopl(3);
+ return sys32_iopl(3);
}
typedef struct {
@@ -2646,10 +3249,8 @@
} ia32_stack_t;
asmlinkage long
-sys32_sigaltstack (const ia32_stack_t *uss32, ia32_stack_t *uoss32,
-long arg2, long arg3, long arg4,
-long arg5, long arg6, long arg7,
-long stack)
+sys32_sigaltstack (ia32_stack_t *uss32, ia32_stack_t *uoss32,
+ long arg2, long arg3, long arg4, long arg5, long arg6, long arg7, long stack)
{
struct pt_regs *pt = (struct pt_regs *) &stack;
stack_t uss, uoss;
@@ -2658,8 +3259,8 @@
mm_segment_t old_fs = get_fs();
if (uss32)
- if (copy_from_user(&buf32, (void *)A(uss32), sizeof(ia32_stack_t)))
- return(-EFAULT);
+ if (copy_from_user(&buf32, uss32, sizeof(ia32_stack_t)))
+ return -EFAULT;
uss.ss_sp = (void *) (long) buf32.ss_sp;
uss.ss_flags = buf32.ss_flags;
uss.ss_size = buf32.ss_size;
@@ -2672,34 +3273,34 @@
buf32.ss_sp = (long) uoss.ss_sp;
buf32.ss_flags = uoss.ss_flags;
buf32.ss_size = uoss.ss_size;
- if (copy_to_user((void*)A(uoss32), &buf32, sizeof(ia32_stack_t)))
- return(-EFAULT);
+ if (copy_to_user(uoss32, &buf32, sizeof(ia32_stack_t)))
+ return -EFAULT;
}
- return(ret);
+ return ret;
}
asmlinkage int
-sys_pause (void)
+sys32_pause (void)
{
current->state = TASK_INTERRUPTIBLE;
schedule();
return -ERESTARTNOHAND;
}
-asmlinkage long sys_msync(unsigned long start, size_t len, int flags);
+asmlinkage long sys_msync (unsigned long start, size_t len, int flags);
asmlinkage int
-sys32_msync(unsigned int start, unsigned int len, int flags)
+sys32_msync (unsigned int start, unsigned int len, int flags)
{
unsigned int addr;
if (OFFSET4K(start))
return -EINVAL;
- addr = start & PAGE_MASK;
- return(sys_msync(addr, len + (start - addr), flags));
+ addr = PAGE_START(start);
+ return sys_msync(addr, len + (start - addr), flags);
}
-struct sysctl_ia32 {
+struct sysctl32 {
unsigned int name;
int nlen;
unsigned int oldval;
@@ -2712,16 +3313,16 @@
extern asmlinkage long sys_sysctl(struct __sysctl_args *args);
asmlinkage long
-sys32_sysctl(struct sysctl_ia32 *args32)
+sys32_sysctl (struct sysctl32 *args)
{
- struct sysctl_ia32 a32;
+ struct sysctl32 a32;
mm_segment_t old_fs = get_fs ();
void *oldvalp, *newvalp;
size_t oldlen;
int *namep;
long ret;
- if (copy_from_user(&a32, args32, sizeof (a32)))
+ if (copy_from_user(&a32, args, sizeof(a32)))
return -EFAULT;
/*
@@ -2754,7 +3355,7 @@
}
asmlinkage long
-sys32_newuname(struct new_utsname * name)
+sys32_newuname (struct new_utsname *name)
{
extern asmlinkage long sys_newuname(struct new_utsname * name);
int ret = sys_newuname(name);
@@ -2765,10 +3366,10 @@
return ret;
}
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
+extern asmlinkage long sys_getresuid (uid_t *ruid, uid_t *euid, uid_t *suid);
asmlinkage long
-sys32_getresuid (u16 *ruid, u16 *euid, u16 *suid)
+sys32_getresuid16 (u16 *ruid, u16 *euid, u16 *suid)
{
uid_t a, b, c;
int ret;
@@ -2786,7 +3387,7 @@
extern asmlinkage long sys_getresgid (gid_t *rgid, gid_t *egid, gid_t *sgid);
asmlinkage long
-sys32_getresgid(u16 *rgid, u16 *egid, u16 *sgid)
+sys32_getresgid16 (u16 *rgid, u16 *egid, u16 *sgid)
{
gid_t a, b, c;
int ret;
@@ -2796,15 +3397,13 @@
ret = sys_getresgid(&a, &b, &c);
set_fs(old_fs);
- if (!ret) {
- ret = put_user(a, rgid);
- ret |= put_user(b, egid);
- ret |= put_user(c, sgid);
- }
- return ret;
+ if (ret)
+ return ret;
+
+ return put_user(a, rgid) | put_user(b, egid) | put_user(c, sgid);
}
-int
+asmlinkage long
sys32_lseek (unsigned int fd, int offset, unsigned int whence)
{
extern off_t sys_lseek (unsigned int fd, off_t offset, unsigned int origin);
@@ -2813,36 +3412,272 @@
return sys_lseek(fd, offset, whence);
}
-#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+extern asmlinkage long sys_getgroups (int gidsetsize, gid_t *grouplist);
-/* In order to reduce some races, while at the same time doing additional
- * checking and hopefully speeding things up, we copy filenames to the
- * kernel data space before using them..
- *
- * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
- */
-static inline int
-do_getname32(const char *filename, char *page)
+asmlinkage long
+sys32_getgroups16 (int gidsetsize, short *grouplist)
{
- int retval;
+ mm_segment_t old_fs = get_fs();
+ gid_t gl[NGROUPS];
+ int ret, i;
- /* 32bit pointer will be always far below TASK_SIZE :)) */
- retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE)
- return 0;
- return -ENAMETOOLONG;
- } else if (!retval)
- retval = -ENOENT;
- return retval;
+ set_fs(KERNEL_DS);
+ ret = sys_getgroups(gidsetsize, gl);
+ set_fs(old_fs);
+
+ if (gidsetsize && ret > 0 && ret <= NGROUPS)
+ for (i = 0; i < ret; i++, grouplist++)
+ if (put_user(gl[i], grouplist))
+ return -EFAULT;
+ return ret;
}
-char *
-getname32(const char *filename)
+extern asmlinkage long sys_setgroups (int gidsetsize, gid_t *grouplist);
+
+asmlinkage long
+sys32_setgroups16 (int gidsetsize, short *grouplist)
{
- char *tmp, *result;
+ mm_segment_t old_fs = get_fs();
+ gid_t gl[NGROUPS];
+ int ret, i;
- result = ERR_PTR(-ENOMEM);
+ if ((unsigned) gidsetsize > NGROUPS)
+ return -EINVAL;
+ for (i = 0; i < gidsetsize; i++, grouplist++)
+ if (get_user(gl[i], grouplist))
+ return -EFAULT;
+ set_fs(KERNEL_DS);
+ ret = sys_setgroups(gidsetsize, gl);
+ set_fs(old_fs);
+ return ret;
+}
+
+/*
+ * Unfortunately, the x86 compiler aligns variables of type "long long" to a 4 byte boundary
+ * only, which means that the x86 version of "struct flock64" doesn't match the ia64 version
+ * of struct flock.
+ */
+
+static inline long
+ia32_put_flock (struct flock *l, unsigned long addr)
+{
+ return (put_user(l->l_type, (short *) addr)
+ | put_user(l->l_whence, (short *) (addr + 2))
+ | put_user(l->l_start, (long *) (addr + 4))
+ | put_user(l->l_len, (long *) (addr + 12))
+ | put_user(l->l_pid, (int *) (addr + 20)));
+}
+
+static inline long
+ia32_get_flock (struct flock *l, unsigned long addr)
+{
+ unsigned int start_lo, start_hi, len_lo, len_hi;
+ int err = (get_user(l->l_type, (short *) addr)
+ | get_user(l->l_whence, (short *) (addr + 2))
+ | get_user(start_lo, (int *) (addr + 4))
+ | get_user(start_hi, (int *) (addr + 8))
+ | get_user(len_lo, (int *) (addr + 12))
+ | get_user(len_hi, (int *) (addr + 16))
+ | get_user(l->l_pid, (int *) (addr + 20)));
+ l->l_start = ((unsigned long) start_hi << 32) | start_lo;
+ l->l_len = ((unsigned long) len_hi << 32) | len_lo;
+ return err;
+}
+
+asmlinkage long
+sys32_fcntl64 (unsigned int fd, unsigned int cmd, unsigned int arg)
+{
+ mm_segment_t old_fs;
+ struct flock f;
+ long ret;
+
+ switch (cmd) {
+ case F_GETLK64:
+ case F_SETLK64:
+ case F_SETLKW64:
+ if (ia32_get_flock(&f, arg))
+ return -EFAULT;
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ ret = sys_fcntl(fd, cmd, (unsigned long) &f);
+ set_fs(old_fs);
+ if (cmd == F_GETLK && ia32_put_flock(&f, arg))
+ return -EFAULT;
+ break;
+
+ default:
+ ret = sys32_fcntl(fd, cmd, arg);
+ break;
+ }
+ return ret;
+}
+
+asmlinkage long
+sys32_truncate64 (unsigned int path, unsigned int len_lo, unsigned int len_hi)
+{
+ extern asmlinkage long sys_truncate (const char *path, unsigned long length);
+
+ return sys_truncate((const char *) A(path), ((unsigned long) len_hi << 32) | len_lo);
+}
+
+asmlinkage long
+sys32_ftruncate64 (int fd, unsigned int len_lo, unsigned int len_hi)
+{
+ extern asmlinkage long sys_ftruncate (int fd, unsigned long length);
+
+ return sys_ftruncate(fd, ((unsigned long) len_hi << 32) | len_lo);
+}
+
+static int
+putstat64 (struct stat64 *ubuf, struct stat *kbuf)
+{
+ int err;
+
+ if (clear_user(ubuf, sizeof(*ubuf)))
+ return 1;
+
+ err = __put_user(kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user(kbuf->st_ino, &ubuf->__st_ino);
+ err |= __put_user(kbuf->st_ino, &ubuf->st_ino_lo);
+ err |= __put_user(kbuf->st_ino >> 32, &ubuf->st_ino_hi);
+ err |= __put_user(kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user(kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user(kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user(kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user(kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user(kbuf->st_size, &ubuf->st_size_lo);
+ err |= __put_user((kbuf->st_size >> 32), &ubuf->st_size_hi);
+ err |= __put_user(kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user(kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user(kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user(kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user(kbuf->st_blocks, &ubuf->st_blocks);
+ return err;
+}
+
+asmlinkage long
+sys32_stat64 (char *filename, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newstat(filename, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_lstat64 (char *filename, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newlstat(filename, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_fstat64 (unsigned int fd, struct stat64 *statbuf)
+{
+ mm_segment_t old_fs = get_fs();
+ struct stat s;
+ long ret;
+
+ set_fs(KERNEL_DS);
+ ret = sys_newfstat(fd, &s);
+ set_fs(old_fs);
+ if (putstat64(statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+asmlinkage long
+sys32_sigpending (unsigned int *set)
+{
+ return do_sigpending(set, sizeof(*set));
+}
+
+struct sysinfo32 {
+ s32 uptime;
+ u32 loads[3];
+ u32 totalram;
+ u32 freeram;
+ u32 sharedram;
+ u32 bufferram;
+ u32 totalswap;
+ u32 freeswap;
+ unsigned short procs;
+ char _f[22];
+};
+
+asmlinkage long
+sys32_sysinfo (struct sysinfo32 *info)
+{
+ extern asmlinkage long sys_sysinfo (struct sysinfo *);
+ mm_segment_t old_fs = get_fs();
+ struct sysinfo s;
+ long ret, err;
+
+ set_fs(KERNEL_DS);
+ ret = sys_sysinfo(&s);
+ set_fs(old_fs);
+
+ if (!access_ok(VERIFY_WRITE, info, sizeof(*info)))
+ return -EFAULT;
+
+ err = __put_user(s.uptime, &info->uptime);
+ err |= __put_user(s.loads[0], &info->loads[0]);
+ err |= __put_user(s.loads[1], &info->loads[1]);
+ err |= __put_user(s.loads[2], &info->loads[2]);
+ err |= __put_user(s.totalram, &info->totalram);
+ err |= __put_user(s.freeram, &info->freeram);
+ err |= __put_user(s.sharedram, &info->sharedram);
+ err |= __put_user(s.bufferram, &info->bufferram);
+ err |= __put_user(s.totalswap, &info->totalswap);
+ err |= __put_user(s.freeswap, &info->freeswap);
+ err |= __put_user(s.procs, &info->procs);
+ if (err)
+ return -EFAULT;
+ return ret;
+}
+
+/* In order to reduce some races, while at the same time doing additional
+ * checking and hopefully speeding things up, we copy filenames to the
+ * kernel data space before using them..
+ *
+ * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
+ */
+static inline int
+do_getname32 (const char *filename, char *page)
+{
+ int retval;
+
+ /* 32bit pointer will be always far below TASK_SIZE :)) */
+ retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
+ if (retval > 0) {
+ if (retval < PAGE_SIZE)
+ return 0;
+ return -ENAMETOOLONG;
+ } else if (!retval)
+ retval = -ENOENT;
+ return retval;
+}
+
+static char *
+getname32 (const char *filename)
+{
+ char *tmp, *result;
+
+ result = ERR_PTR(-ENOMEM);
tmp = (char *)__get_free_page(GFP_KERNEL);
if (tmp) {
int retval = do_getname32(filename, tmp);
@@ -2856,178 +3691,132 @@
return result;
}
-/* 32-bit timeval and related flotsam. */
-
-extern asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int on);
-
-asmlinkage long
-sys32_ioperm(u32 from, u32 num, int on)
-{
- return sys_ioperm((unsigned long)from, (unsigned long)num, on);
-}
-
struct dqblk32 {
- __u32 dqb_bhardlimit;
- __u32 dqb_bsoftlimit;
- __u32 dqb_curblocks;
- __u32 dqb_ihardlimit;
- __u32 dqb_isoftlimit;
- __u32 dqb_curinodes;
- __kernel_time_t32 dqb_btime;
- __kernel_time_t32 dqb_itime;
+ __u32 dqb_bhardlimit;
+ __u32 dqb_bsoftlimit;
+ __u32 dqb_curblocks;
+ __u32 dqb_ihardlimit;
+ __u32 dqb_isoftlimit;
+ __u32 dqb_curinodes;
+ __kernel_time_t32 dqb_btime;
+ __kernel_time_t32 dqb_itime;
};
-extern asmlinkage long sys_quotactl(int cmd, const char *special, int id,
- caddr_t addr);
-
asmlinkage long
-sys32_quotactl(int cmd, const char *special, int id, unsigned long addr)
+sys32_quotactl (int cmd, unsigned int special, int id, struct dqblk32 *addr)
{
+ extern asmlinkage long sys_quotactl (int, const char *, int, caddr_t);
int cmds = cmd >> SUBCMDSHIFT;
- int err;
- struct dqblk d;
mm_segment_t old_fs;
+ struct dqblk d;
char *spec;
+ long err;
switch (cmds) {
- case Q_GETQUOTA:
+ case Q_GETQUOTA:
break;
- case Q_SETQUOTA:
- case Q_SETUSE:
- case Q_SETQLIM:
- if (copy_from_user (&d, (struct dqblk32 *)addr,
- sizeof (struct dqblk32)))
+ case Q_SETQUOTA:
+ case Q_SETUSE:
+ case Q_SETQLIM:
+ if (copy_from_user (&d, addr, sizeof(struct dqblk32)))
return -EFAULT;
d.dqb_itime = ((struct dqblk32 *)&d)->dqb_itime;
d.dqb_btime = ((struct dqblk32 *)&d)->dqb_btime;
break;
- default:
- return sys_quotactl(cmd, special,
- id, (caddr_t)addr);
+ default:
+ return sys_quotactl(cmd, (void *) A(special), id, (caddr_t) addr);
}
- spec = getname32 (special);
+ spec = getname32((void *) A(special));
err = PTR_ERR(spec);
- if (IS_ERR(spec)) return err;
+ if (IS_ERR(spec))
+ return err;
old_fs = get_fs ();
- set_fs (KERNEL_DS);
+ set_fs(KERNEL_DS);
err = sys_quotactl(cmd, (const char *)spec, id, (caddr_t)&d);
- set_fs (old_fs);
- putname (spec);
+ set_fs(old_fs);
+ putname(spec);
if (cmds == Q_GETQUOTA) {
__kernel_time_t b = d.dqb_btime, i = d.dqb_itime;
((struct dqblk32 *)&d)->dqb_itime = i;
((struct dqblk32 *)&d)->dqb_btime = b;
- if (copy_to_user ((struct dqblk32 *)addr, &d,
- sizeof (struct dqblk32)))
+ if (copy_to_user(addr, &d, sizeof(struct dqblk32)))
return -EFAULT;
}
return err;
}
-extern asmlinkage long sys_utime(char * filename, struct utimbuf * times);
-
-struct utimbuf32 {
- __kernel_time_t32 actime, modtime;
-};
-
asmlinkage long
-sys32_utime(char * filename, struct utimbuf32 *times)
+sys32_sched_rr_get_interval (pid_t pid, struct timespec32 *interval)
{
- struct utimbuf t;
- mm_segment_t old_fs;
- int ret;
- char *filenam;
+ extern asmlinkage long sys_sched_rr_get_interval (pid_t, struct timespec *);
+ mm_segment_t old_fs = get_fs();
+ struct timespec t;
+ long ret;
- if (!times)
- return sys_utime(filename, NULL);
- if (get_user (t.actime, &times->actime) ||
- __get_user (t.modtime, &times->modtime))
- return -EFAULT;
- filenam = getname32 (filename);
- ret = PTR_ERR(filenam);
- if (!IS_ERR(filenam)) {
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- ret = sys_utime(filenam, &t);
- set_fs (old_fs);
- putname (filenam);
- }
+ set_fs(KERNEL_DS);
+ ret = sys_sched_rr_get_interval(pid, &t);
+ set_fs(old_fs);
+ if (put_user (t.tv_sec, &interval->tv_sec) || put_user (t.tv_nsec, &interval->tv_nsec))
+ return -EFAULT;
return ret;
}
-/*
- * Ooo, nasty. We need here to frob 32-bit unsigned longs to
- * 64-bit unsigned longs.
- */
-
-static inline int
-get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset)
+asmlinkage long
+sys32_pread (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u32 pos_hi)
{
- if (ufdset) {
- unsigned long odd;
-
- if (verify_area(VERIFY_WRITE, ufdset, n*sizeof(u32)))
- return -EFAULT;
+ extern asmlinkage long sys_pread (unsigned int, char *, size_t, loff_t);
+ return sys_pread(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo);
+}
- odd = n & 1UL;
- n &= ~1UL;
- while (n) {
- unsigned long h, l;
- __get_user(l, ufdset);
- __get_user(h, ufdset+1);
- ufdset += 2;
- *fdset++ = h << 32 | l;
- n -= 2;
- }
- if (odd)
- __get_user(*fdset, ufdset);
- } else {
- /* Tricky, must clear full unsigned long in the
- * kernel fdset at the end, this makes sure that
- * actually happens.
- */
- memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32));
- }
- return 0;
+asmlinkage long
+sys32_pwrite (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u32 pos_hi)
+{
+ extern asmlinkage long sys_pwrite (unsigned int, const char *, size_t, loff_t);
+ return sys_pwrite(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo);
}
-static inline void
-set_fd_set32(unsigned long n, u32 *ufdset, unsigned long *fdset)
+asmlinkage long
+sys32_sendfile (int out_fd, int in_fd, int *offset, unsigned int count)
{
- unsigned long odd;
+ extern asmlinkage long sys_sendfile (int, int, off_t *, size_t);
+ mm_segment_t old_fs = get_fs();
+ long ret;
+ off_t of;
- if (!ufdset)
- return;
+ if (offset && get_user(of, offset))
+ return -EFAULT;
- odd = n & 1UL;
- n &= ~1UL;
- while (n) {
- unsigned long h, l;
- l = *fdset++;
- h = l >> 32;
- __put_user(l, ufdset);
- __put_user(h, ufdset+1);
- ufdset += 2;
- n -= 2;
- }
- if (odd)
- __put_user(*fdset, ufdset);
-}
+ set_fs(KERNEL_DS);
+ ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
+ set_fs(old_fs);
+
+ if (!ret && offset && put_user(of, offset))
+ return -EFAULT;
-extern asmlinkage long sys_sysfs(int option, unsigned long arg1,
- unsigned long arg2);
+ return ret;
+}
asmlinkage long
-sys32_sysfs(int option, u32 arg1, u32 arg2)
+sys32_personality (unsigned int personality)
{
- return sys_sysfs(option, arg1, arg2);
+ extern asmlinkage long sys_personality (unsigned long);
+ long ret;
+
+ if (current->personality == PER_LINUX32 && personality == PER_LINUX)
+ personality = PER_LINUX32;
+ ret = sys_personality(personality);
+ if (ret == PER_LINUX32)
+ ret = PER_LINUX;
+ return ret;
}
+#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+
struct ncp_mount_data32 {
int version;
unsigned int ncp_fd;
__kernel_uid_t32 mounted_uid;
- __kernel_pid_t32 wdog_pid;
+ int wdog_pid;
unsigned char mounted_vol[NCP_VOLNAME_LEN + 1];
unsigned int time_out;
unsigned int retry_count;
@@ -3061,1485 +3850,169 @@
__kernel_uid_t32 uid;
__kernel_gid_t32 gid;
__kernel_mode_t32 file_mode;
- __kernel_mode_t32 dir_mode;
-};
-
-static void *
-do_smb_super_data_conv(void *raw_data)
-{
- struct smb_mount_data *s = (struct smb_mount_data *)raw_data;
- struct smb_mount_data32 *s32 = (struct smb_mount_data32 *)raw_data;
-
- s->version = s32->version;
- s->mounted_uid = s32->mounted_uid;
- s->uid = s32->uid;
- s->gid = s32->gid;
- s->file_mode = s32->file_mode;
- s->dir_mode = s32->dir_mode;
- return raw_data;
-}
-
-static int
-copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel)
-{
- int i;
- unsigned long page;
- struct vm_area_struct *vma;
-
- *kernel = 0;
- if(!user)
- return 0;
- vma = find_vma(current->mm, (unsigned long)user);
- if(!vma || (unsigned long)user < vma->vm_start)
- return -EFAULT;
- if(!(vma->vm_flags & VM_READ))
- return -EFAULT;
- i = vma->vm_end - (unsigned long) user;
- if(PAGE_SIZE <= (unsigned long) i)
- i = PAGE_SIZE - 1;
- if(!(page = __get_free_page(GFP_KERNEL)))
- return -ENOMEM;
- if(copy_from_user((void *) page, user, i)) {
- free_page(page);
- return -EFAULT;
- }
- *kernel = page;
- return 0;
-}
-
-extern asmlinkage long sys_mount(char * dev_name, char * dir_name, char * type,
- unsigned long new_flags, void *data);
-
-#define SMBFS_NAME "smbfs"
-#define NCPFS_NAME "ncpfs"
-
-asmlinkage long
-sys32_mount(char *dev_name, char *dir_name, char *type,
- unsigned long new_flags, u32 data)
-{
- unsigned long type_page;
- int err, is_smb, is_ncp;
-
- if(!capable(CAP_SYS_ADMIN))
- return -EPERM;
- is_smb = is_ncp = 0;
- err = copy_mount_stuff_to_kernel((const void *)type, &type_page);
- if(err)
- return err;
- if(type_page) {
- is_smb = !strcmp((char *)type_page, SMBFS_NAME);
- is_ncp = !strcmp((char *)type_page, NCPFS_NAME);
- }
- if(!is_smb && !is_ncp) {
- if(type_page)
- free_page(type_page);
- return sys_mount(dev_name, dir_name, type, new_flags,
- (void *)AA(data));
- } else {
- unsigned long dev_page, dir_page, data_page;
-
- err = copy_mount_stuff_to_kernel((const void *)dev_name,
- &dev_page);
- if(err)
- goto out;
- err = copy_mount_stuff_to_kernel((const void *)dir_name,
- &dir_page);
- if(err)
- goto dev_out;
- err = copy_mount_stuff_to_kernel((const void *)AA(data),
- &data_page);
- if(err)
- goto dir_out;
- if(is_ncp)
- do_ncp_super_data_conv((void *)data_page);
- else if(is_smb)
- do_smb_super_data_conv((void *)data_page);
- else
- panic("The problem is here...");
- err = do_mount((char *)dev_page, (char *)dir_page,
- (char *)type_page, new_flags,
- (void *)data_page);
- if(data_page)
- free_page(data_page);
- dir_out:
- if(dir_page)
- free_page(dir_page);
- dev_out:
- if(dev_page)
- free_page(dev_page);
- out:
- if(type_page)
- free_page(type_page);
- return err;
- }
-}
-
-struct sysinfo32 {
- s32 uptime;
- u32 loads[3];
- u32 totalram;
- u32 freeram;
- u32 sharedram;
- u32 bufferram;
- u32 totalswap;
- u32 freeswap;
- unsigned short procs;
- char _f[22];
-};
-
-extern asmlinkage long sys_sysinfo(struct sysinfo *info);
-
-asmlinkage long
-sys32_sysinfo(struct sysinfo32 *info)
-{
- struct sysinfo s;
- int ret, err;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_sysinfo(&s);
- set_fs (old_fs);
- err = put_user (s.uptime, &info->uptime);
- err |= __put_user (s.loads[0], &info->loads[0]);
- err |= __put_user (s.loads[1], &info->loads[1]);
- err |= __put_user (s.loads[2], &info->loads[2]);
- err |= __put_user (s.totalram, &info->totalram);
- err |= __put_user (s.freeram, &info->freeram);
- err |= __put_user (s.sharedram, &info->sharedram);
- err |= __put_user (s.bufferram, &info->bufferram);
- err |= __put_user (s.totalswap, &info->totalswap);
- err |= __put_user (s.freeswap, &info->freeswap);
- err |= __put_user (s.procs, &info->procs);
- if (err)
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_sched_rr_get_interval(pid_t pid,
- struct timespec *interval);
-
-asmlinkage long
-sys32_sched_rr_get_interval(__kernel_pid_t32 pid, struct timespec32 *interval)
-{
- struct timespec t;
- int ret;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_sched_rr_get_interval(pid, &t);
- set_fs (old_fs);
- if (put_user (t.tv_sec, &interval->tv_sec) ||
- __put_user (t.tv_nsec, &interval->tv_nsec))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_sigprocmask(int how, old_sigset_t *set,
- old_sigset_t *oset);
-
-asmlinkage long
-sys32_sigprocmask(int how, old_sigset_t32 *set, old_sigset_t32 *oset)
-{
- old_sigset_t s;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (set && get_user (s, set)) return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL);
- set_fs (old_fs);
- if (ret) return ret;
- if (oset && put_user (s, oset)) return -EFAULT;
- return 0;
-}
-
-extern asmlinkage long sys_sigpending(old_sigset_t *set);
-
-asmlinkage long
-sys32_sigpending(old_sigset_t32 *set)
-{
- old_sigset_t s;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_sigpending(&s);
- set_fs (old_fs);
- if (put_user (s, set)) return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_rt_sigpending(sigset_t *set, size_t sigsetsize);
-
-asmlinkage long
-sys32_rt_sigpending(sigset_t32 *set, __kernel_size_t32 sigsetsize)
-{
- sigset_t s;
- sigset_t32 s32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_rt_sigpending(&s, sigsetsize);
- set_fs (old_fs);
- if (!ret) {
- switch (_NSIG_WORDS) {
- case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
- case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
- case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
- case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
- }
- if (copy_to_user (set, &s32, sizeof(sigset_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-siginfo_t32 *
-siginfo64to32(siginfo_t32 *d, siginfo_t *s)
-{
- memset(d, 0, sizeof(siginfo_t32));
- d->si_signo = s->si_signo;
- d->si_errno = s->si_errno;
- d->si_code = s->si_code;
- if (s->si_signo >= SIGRTMIN) {
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- /* XXX: Ouch, how to find this out??? */
- d->si_int = s->si_int;
- } else switch (s->si_signo) {
- /* XXX: What about POSIX1.b timers */
- case SIGCHLD:
- d->si_pid = s->si_pid;
- d->si_status = s->si_status;
- d->si_utime = s->si_utime;
- d->si_stime = s->si_stime;
- break;
- case SIGSEGV:
- case SIGBUS:
- case SIGFPE:
- case SIGILL:
- d->si_addr = (long)(s->si_addr);
- /* XXX: Do we need to translate this from ia64 to ia32 traps? */
- d->si_trapno = s->si_trapno;
- break;
- case SIGPOLL:
- d->si_band = s->si_band;
- d->si_fd = s->si_fd;
- break;
- default:
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- break;
- }
- return d;
-}
-
-siginfo_t *
-siginfo32to64(siginfo_t *d, siginfo_t32 *s)
-{
- d->si_signo = s->si_signo;
- d->si_errno = s->si_errno;
- d->si_code = s->si_code;
- if (s->si_signo >= SIGRTMIN) {
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- /* XXX: Ouch, how to find this out??? */
- d->si_int = s->si_int;
- } else switch (s->si_signo) {
- /* XXX: What about POSIX1.b timers */
- case SIGCHLD:
- d->si_pid = s->si_pid;
- d->si_status = s->si_status;
- d->si_utime = s->si_utime;
- d->si_stime = s->si_stime;
- break;
- case SIGSEGV:
- case SIGBUS:
- case SIGFPE:
- case SIGILL:
- d->si_addr = (void *)A(s->si_addr);
- /* XXX: Do we need to translate this from ia32 to ia64 traps? */
- d->si_trapno = s->si_trapno;
- break;
- case SIGPOLL:
- d->si_band = s->si_band;
- d->si_fd = s->si_fd;
- break;
- default:
- d->si_pid = s->si_pid;
- d->si_uid = s->si_uid;
- break;
- }
- return d;
-}
-
-extern asmlinkage long
-sys_rt_sigtimedwait(const sigset_t *uthese, siginfo_t *uinfo,
- const struct timespec *uts, size_t sigsetsize);
-
-asmlinkage long
-sys32_rt_sigtimedwait(sigset_t32 *uthese, siginfo_t32 *uinfo,
- struct timespec32 *uts, __kernel_size_t32 sigsetsize)
-{
- sigset_t s;
- sigset_t32 s32;
- struct timespec t;
- int ret;
- mm_segment_t old_fs = get_fs();
- siginfo_t info;
- siginfo_t32 info32;
-
- if (copy_from_user (&s32, uthese, sizeof(sigset_t32)))
- return -EFAULT;
- switch (_NSIG_WORDS) {
- case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
- case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
- case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
- case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
- }
- if (uts) {
- ret = get_user (t.tv_sec, &uts->tv_sec);
- ret |= __get_user (t.tv_nsec, &uts->tv_nsec);
- if (ret)
- return -EFAULT;
- }
- set_fs (KERNEL_DS);
- ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
- set_fs (old_fs);
- if (ret >= 0 && uinfo) {
- if (copy_to_user (uinfo, siginfo64to32(&info32, &info),
- sizeof(siginfo_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-extern asmlinkage long
-sys_rt_sigqueueinfo(int pid, int sig, siginfo_t *uinfo);
-
-asmlinkage long
-sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
-{
- siginfo_t info;
- siginfo_t32 info32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (copy_from_user (&info32, uinfo, sizeof(siginfo_t32)))
- return -EFAULT;
- /* XXX: Is this correct? */
- siginfo32to64(&info, &info32);
- set_fs (KERNEL_DS);
- ret = sys_rt_sigqueueinfo(pid, sig, &info);
- set_fs (old_fs);
- return ret;
-}
-
-extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
-
-asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
-{
- uid_t sruid, seuid;
-
- sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
- seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
- return sys_setreuid(sruid, seuid);
-}
-
-extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
-
-asmlinkage long
-sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
- __kernel_uid_t32 suid)
-{
- uid_t sruid, seuid, ssuid;
-
- sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
- seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
- ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
- return sys_setresuid(sruid, seuid, ssuid);
-}
-
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
-
-asmlinkage long
-sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
- __kernel_uid_t32 *suid)
-{
- uid_t a, b, c;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_getresuid(&a, &b, &c);
- set_fs (old_fs);
- if (put_user (a, ruid) || put_user (b, euid) || put_user (c, suid))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
-
-asmlinkage long
-sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
-{
- gid_t srgid, segid;
-
- srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
- segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
- return sys_setregid(srgid, segid);
-}
-
-extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
-
-asmlinkage long
-sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
- __kernel_gid_t32 sgid)
-{
- gid_t srgid, segid, ssgid;
-
- srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
- segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
- ssgid = (sgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)sgid);
- return sys_setresgid(srgid, segid, ssgid);
-}
-
-extern asmlinkage long sys_getgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_getgroups(gidsetsize, gl);
- set_fs (old_fs);
- if (gidsetsize && ret > 0 && ret <= NGROUPS)
- for (i = 0; i < ret; i++, grouplist++)
- if (__put_user (gl[i], grouplist))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- if ((unsigned) gidsetsize > NGROUPS)
- return -EINVAL;
- for (i = 0; i < gidsetsize; i++, grouplist++)
- if (__get_user (gl[i], grouplist))
- return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_setgroups(gidsetsize, gl);
- set_fs (old_fs);
- return ret;
-}
-
-
-/* XXX These as well... */
-extern __inline__ struct socket *
-socki_lookup(struct inode *inode)
-{
- return &inode->u.socket_i;
-}
-
-extern __inline__ struct socket *
-sockfd_lookup(int fd, int *err)
-{
- struct file *file;
- struct inode *inode;
-
- if (!(file = fget(fd)))
- {
- *err = -EBADF;
- return NULL;
- }
-
- inode = file->f_dentry->d_inode;
- if (!inode->i_sock || !socki_lookup(inode))
- {
- *err = -ENOTSOCK;
- fput(file);
- return NULL;
- }
-
- return socki_lookup(inode);
-}
-
-struct msghdr32 {
- u32 msg_name;
- int msg_namelen;
- u32 msg_iov;
- __kernel_size_t32 msg_iovlen;
- u32 msg_control;
- __kernel_size_t32 msg_controllen;
- unsigned msg_flags;
-};
-
-struct cmsghdr32 {
- __kernel_size_t32 cmsg_len;
- int cmsg_level;
- int cmsg_type;
-};
-
-/* Bleech... */
-#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) \
- __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
-#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) \
- cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
-
-#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
-
-#define CMSG32_DATA(cmsg) \
- ((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
-#define CMSG32_SPACE(len) \
- (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
-#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
-
-#define __CMSG32_FIRSTHDR(ctl,len) ((len) >= sizeof(struct cmsghdr32) ? \
- (struct cmsghdr32 *)(ctl) : \
- (struct cmsghdr32 *)NULL)
-#define CMSG32_FIRSTHDR(msg) \
- __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
-
-__inline__ struct cmsghdr32 *
-__cmsg32_nxthdr(void *__ctl, __kernel_size_t __size,
- struct cmsghdr32 *__cmsg, int __cmsg_len)
-{
- struct cmsghdr32 * __ptr;
-
- __ptr = (struct cmsghdr32 *)(((unsigned char *) __cmsg) +
- CMSG32_ALIGN(__cmsg_len));
- if ((unsigned long)((char*)(__ptr+1) - (char *) __ctl) > __size)
- return NULL;
-
- return __ptr;
-}
-
-__inline__ struct cmsghdr32 *
-cmsg32_nxthdr (struct msghdr *__msg, struct cmsghdr32 *__cmsg, int __cmsg_len)
-{
- return __cmsg32_nxthdr(__msg->msg_control, __msg->msg_controllen,
- __cmsg, __cmsg_len);
-}
-
-static inline int
-iov_from_user32_to_kern(struct iovec *kiov, struct iovec32 *uiov32, int niov)
-{
- int tot_len = 0;
-
- while(niov > 0) {
- u32 len, buf;
-
- if(get_user(len, &uiov32->iov_len) ||
- get_user(buf, &uiov32->iov_base)) {
- tot_len = -EFAULT;
- break;
- }
- tot_len += len;
- kiov->iov_base = (void *)A(buf);
- kiov->iov_len = (__kernel_size_t) len;
- uiov32++;
- kiov++;
- niov--;
- }
- return tot_len;
-}
-
-static inline int
-msghdr_from_user32_to_kern(struct msghdr *kmsg, struct msghdr32 *umsg)
-{
- u32 tmp1, tmp2, tmp3;
- int err;
-
- err = get_user(tmp1, &umsg->msg_name);
- err |= __get_user(tmp2, &umsg->msg_iov);
- err |= __get_user(tmp3, &umsg->msg_control);
- if (err)
- return -EFAULT;
-
- kmsg->msg_name = (void *)A(tmp1);
- kmsg->msg_iov = (struct iovec *)A(tmp2);
- kmsg->msg_control = (void *)A(tmp3);
-
- err = get_user(kmsg->msg_namelen, &umsg->msg_namelen);
- err |= get_user(kmsg->msg_iovlen, &umsg->msg_iovlen);
- err |= get_user(kmsg->msg_controllen, &umsg->msg_controllen);
- err |= get_user(kmsg->msg_flags, &umsg->msg_flags);
-
- return err;
-}
-
-/* I've named the args so it is easy to tell whose space the pointers are in. */
-static int
-verify_iovec32(struct msghdr *kern_msg, struct iovec *kern_iov,
- char *kern_address, int mode)
-{
- int tot_len;
-
- if(kern_msg->msg_namelen) {
- if(mode == VERIFY_READ) {
- int err = move_addr_to_kernel(kern_msg->msg_name,
- kern_msg->msg_namelen,
- kern_address);
- if(err < 0)
- return err;
- }
- kern_msg->msg_name = kern_address;
- } else
- kern_msg->msg_name = NULL;
-
- if(kern_msg->msg_iovlen > UIO_FASTIOV) {
- kern_iov = kmalloc(kern_msg->msg_iovlen * sizeof(struct iovec),
- GFP_KERNEL);
- if(!kern_iov)
- return -ENOMEM;
- }
-
- tot_len = iov_from_user32_to_kern(kern_iov,
- (struct iovec32 *)kern_msg->msg_iov,
- kern_msg->msg_iovlen);
- if(tot_len >= 0)
- kern_msg->msg_iov = kern_iov;
- else if(kern_msg->msg_iovlen > UIO_FASTIOV)
- kfree(kern_iov);
-
- return tot_len;
-}
-
-/* There is a lot of hair here because the alignment rules (and
- * thus placement) of cmsg headers and length are different for
- * 32-bit apps. -DaveM
- */
-static int
-cmsghdr_from_user32_to_kern(struct msghdr *kmsg, unsigned char *stackbuf,
- int stackbuf_size)
-{
- struct cmsghdr32 *ucmsg;
- struct cmsghdr *kcmsg, *kcmsg_base;
- __kernel_size_t32 ucmlen;
- __kernel_size_t kcmlen, tmp;
-
- kcmlen = 0;
- kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
- ucmsg = CMSG32_FIRSTHDR(kmsg);
- while(ucmsg != NULL) {
- if(get_user(ucmlen, &ucmsg->cmsg_len))
- return -EFAULT;
-
- /* Catch bogons. */
- if(CMSG32_ALIGN(ucmlen) <
- CMSG32_ALIGN(sizeof(struct cmsghdr32)))
- return -EINVAL;
- if((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control)
- + ucmlen) > kmsg->msg_controllen)
- return -EINVAL;
-
- tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
- CMSG_ALIGN(sizeof(struct cmsghdr)));
- kcmlen += tmp;
- ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
- }
- if(kcmlen == 0)
- return -EINVAL;
-
- /* The kcmlen holds the 64-bit version of the control length.
- * It may not be modified as we do not stick it into the kmsg
- * until we have successfully copied over all of the data
- * from the user.
- */
- if(kcmlen > stackbuf_size)
- kcmsg_base = kcmsg = kmalloc(kcmlen, GFP_KERNEL);
- if(kcmsg == NULL)
- return -ENOBUFS;
-
- /* Now copy them over neatly. */
- memset(kcmsg, 0, kcmlen);
- ucmsg = CMSG32_FIRSTHDR(kmsg);
- while(ucmsg != NULL) {
- __get_user(ucmlen, &ucmsg->cmsg_len);
- tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
- CMSG_ALIGN(sizeof(struct cmsghdr)));
- kcmsg->cmsg_len = tmp;
- __get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
- __get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
-
- /* Copy over the data. */
- if(copy_from_user(CMSG_DATA(kcmsg),
- CMSG32_DATA(ucmsg),
- (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg)))))
- goto out_free_efault;
-
- /* Advance. */
- kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
- ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
- }
-
- /* Ok, looks like we made it. Hook it up and return success. */
- kmsg->msg_control = kcmsg_base;
- kmsg->msg_controllen = kcmlen;
- return 0;
-
-out_free_efault:
- if(kcmsg_base != (struct cmsghdr *)stackbuf)
- kfree(kcmsg_base);
- return -EFAULT;
-}
-
-static void
-put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
-{
- struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
- struct cmsghdr32 cmhdr;
- int cmlen = CMSG32_LEN(len);
-
- if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
- kmsg->msg_flags |= MSG_CTRUNC;
- return;
- }
-
- if(kmsg->msg_controllen < cmlen) {
- kmsg->msg_flags |= MSG_CTRUNC;
- cmlen = kmsg->msg_controllen;
- }
- cmhdr.cmsg_level = level;
- cmhdr.cmsg_type = type;
- cmhdr.cmsg_len = cmlen;
-
- if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
- return;
- if(copy_to_user(CMSG32_DATA(cm), data,
- cmlen - sizeof(struct cmsghdr32)))
- return;
- cmlen = CMSG32_SPACE(len);
- kmsg->msg_control += cmlen;
- kmsg->msg_controllen -= cmlen;
-}
-
-static void scm_detach_fds32(struct msghdr *kmsg, struct scm_cookie *scm)
-{
- struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
- int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
- / sizeof(int);
- int fdnum = scm->fp->count;
- struct file **fp = scm->fp->fp;
- int *cmfptr;
- int err = 0, i;
-
- if (fdnum < fdmax)
- fdmax = fdnum;
-
- for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
- i < fdmax;
- i++, cmfptr++) {
- int new_fd;
- err = get_unused_fd();
- if (err < 0)
- break;
- new_fd = err;
- err = put_user(new_fd, cmfptr);
- if (err) {
- put_unused_fd(new_fd);
- break;
- }
- /* Bump the usage count and install the file. */
- fp[i]->f_count++;
- current->files->fd[new_fd] = fp[i];
- }
-
- if (i > 0) {
- int cmlen = CMSG32_LEN(i * sizeof(int));
- if (!err)
- err = put_user(SOL_SOCKET, &cm->cmsg_level);
- if (!err)
- err = put_user(SCM_RIGHTS, &cm->cmsg_type);
- if (!err)
- err = put_user(cmlen, &cm->cmsg_len);
- if (!err) {
- cmlen = CMSG32_SPACE(i * sizeof(int));
- kmsg->msg_control += cmlen;
- kmsg->msg_controllen -= cmlen;
- }
- }
- if (i < fdnum)
- kmsg->msg_flags |= MSG_CTRUNC;
-
- /*
- * All of the files that fit in the message have had their
- * usage counts incremented, so we just free the list.
- */
- __scm_destroy(scm);
-}
-
-/* In these cases we (currently) can just copy the data over verbatim
- * because all CMSGs created by the kernel have well defined types which
- * have the same layout in both the 32-bit and 64-bit API. One must add
- * some special cased conversions here if we start sending control messages
- * with incompatible types.
- *
- * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
- * we do our work. The remaining cases are:
- *
- * SOL_IP IP_PKTINFO struct in_pktinfo 32-bit clean
- * IP_TTL int 32-bit clean
- * IP_TOS __u8 32-bit clean
- * IP_RECVOPTS variable length 32-bit clean
- * IP_RETOPTS variable length 32-bit clean
- * (these last two are clean because the types are defined
- * by the IPv4 protocol)
- * IP_RECVERR struct sock_extended_err +
- * struct sockaddr_in 32-bit clean
- * SOL_IPV6 IPV6_RECVERR struct sock_extended_err +
- * struct sockaddr_in6 32-bit clean
- * IPV6_PKTINFO struct in6_pktinfo 32-bit clean
- * IPV6_HOPLIMIT int 32-bit clean
- * IPV6_FLOWINFO u32 32-bit clean
- * IPV6_HOPOPTS ipv6 hop exthdr 32-bit clean
- * IPV6_DSTOPTS ipv6 dst exthdr(s) 32-bit clean
- * IPV6_RTHDR ipv6 routing exthdr 32-bit clean
- * IPV6_AUTHHDR ipv6 auth exthdr 32-bit clean
- */
-static void
-cmsg32_recvmsg_fixup(struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
-{
- unsigned char *workbuf, *wp;
- unsigned long bufsz, space_avail;
- struct cmsghdr *ucmsg;
-
- bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
- space_avail = kmsg->msg_controllen + bufsz;
- wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
- if(workbuf == NULL)
- goto fail;
-
- /* To make this more sane we assume the kernel sends back properly
- * formatted control messages. Because of how the kernel will truncate
- * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
- */
- ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
- while(((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
- struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
- int clen64, clen32;
-
- /* UCMSG is the 64-bit format CMSG entry in user-space.
- * KCMSG32 is within the kernel space temporary buffer
- * we use to convert into a 32-bit style CMSG.
- */
- __get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
- __get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
- __get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
-
- clen64 = kcmsg32->cmsg_len;
- copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
- clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
- clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
- CMSG32_ALIGN(sizeof(struct cmsghdr32)));
- kcmsg32->cmsg_len = clen32;
-
- ucmsg = (struct cmsghdr *) (((char *)ucmsg) +
- CMSG_ALIGN(clen64));
- wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
- }
-
- /* Copy back fixed up data, and adjust pointers. */
- bufsz = (wp - workbuf);
- copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz);
-
- kmsg->msg_control = (struct cmsghdr *)
- (((char *)orig_cmsg_uptr) + bufsz);
- kmsg->msg_controllen = space_avail - bufsz;
-
- kfree(workbuf);
- return;
-
-fail:
- /* If we leave the 64-bit format CMSG chunks in there,
- * the application could get confused and crash. So to
- * ensure greater recovery, we report no CMSGs.
- */
- kmsg->msg_controllen += bufsz;
- kmsg->msg_control = (void *) orig_cmsg_uptr;
-}
-
-asmlinkage long
-sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
-{
- struct socket *sock;
- char address[MAX_SOCK_ADDR];
- struct iovec iov[UIO_FASTIOV];
- unsigned char ctl[sizeof(struct cmsghdr) + 20];
- unsigned char *ctl_buf = ctl;
- struct msghdr kern_msg;
- int err, total_len;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
- err = verify_iovec32(&kern_msg, iov, address, VERIFY_READ);
- if (err < 0)
- goto out;
- total_len = err;
-
- if(kern_msg.msg_controllen) {
- err = cmsghdr_from_user32_to_kern(&kern_msg, ctl, sizeof(ctl));
- if(err)
- goto out_freeiov;
- ctl_buf = kern_msg.msg_control;
- }
- kern_msg.msg_flags = user_flags;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- if (sock->file->f_flags & O_NONBLOCK)
- kern_msg.msg_flags |= MSG_DONTWAIT;
- err = sock_sendmsg(sock, &kern_msg, total_len);
- sockfd_put(sock);
- }
-
- /* N.B. Use kfree here, as kern_msg.msg_controllen might change? */
- if(ctl_buf != ctl)
- kfree(ctl_buf);
-out_freeiov:
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- return err;
-}
-
-asmlinkage long
-sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
-{
- struct iovec iovstack[UIO_FASTIOV];
- struct msghdr kern_msg;
- char addr[MAX_SOCK_ADDR];
- struct socket *sock;
- struct iovec *iov = iovstack;
- struct sockaddr *uaddr;
- int *uaddr_len;
- unsigned long cmsg_ptr;
- int err, total_len, len = 0;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
-
- uaddr = kern_msg.msg_name;
- uaddr_len = &user_msg->msg_namelen;
- err = verify_iovec32(&kern_msg, iov, addr, VERIFY_WRITE);
- if (err < 0)
- goto out;
- total_len = err;
-
- cmsg_ptr = (unsigned long) kern_msg.msg_control;
- kern_msg.msg_flags = 0;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- struct scm_cookie scm;
-
- if (sock->file->f_flags & O_NONBLOCK)
- user_flags |= MSG_DONTWAIT;
- memset(&scm, 0, sizeof(scm));
- lock_kernel();
- err = sock->ops->recvmsg(sock, &kern_msg, total_len,
- user_flags, &scm);
- if(err >= 0) {
- len = err;
- if(!kern_msg.msg_control) {
- if(sock->passcred || scm.fp)
- kern_msg.msg_flags |= MSG_CTRUNC;
- if(scm.fp)
- __scm_destroy(&scm);
- } else {
- /* If recvmsg processing itself placed some
- * control messages into user space, it is
- * using 64-bit CMSG processing, so we need
- * to fix it up before we tack on more stuff.
- */
- if((unsigned long) kern_msg.msg_control
- != cmsg_ptr)
- cmsg32_recvmsg_fixup(&kern_msg,
- cmsg_ptr);
-
- /* Wheee... */
- if(sock->passcred)
- put_cmsg32(&kern_msg,
- SOL_SOCKET, SCM_CREDENTIALS,
- sizeof(scm.creds),
- &scm.creds);
- if(scm.fp != NULL)
- scm_detach_fds32(&kern_msg, &scm);
- }
- }
- unlock_kernel();
- sockfd_put(sock);
- }
-
- if(uaddr != NULL && err >= 0)
- err = move_addr_to_user(addr, kern_msg.msg_namelen, uaddr,
- uaddr_len);
- if(cmsg_ptr != 0 && err >= 0) {
- unsigned long ucmsg_ptr = ((unsigned long)kern_msg.msg_control);
- __kernel_size_t32 uclen = (__kernel_size_t32) (ucmsg_ptr
- - cmsg_ptr);
- err |= __put_user(uclen, &user_msg->msg_controllen);
- }
- if(err >= 0)
- err = __put_user(kern_msg.msg_flags, &user_msg->msg_flags);
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- if(err < 0)
- return err;
- return len;
-}
-
-extern void check_pending(int signum);
-
-#ifdef CONFIG_MODULES
-
-extern asmlinkage unsigned long sys_create_module(const char *name_user,
- size_t size);
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, __kernel_size_t32 size)
-{
- return sys_create_module(name_user, (size_t)size);
-}
-
-extern asmlinkage long sys_init_module(const char *name_user,
- struct module *mod_user);
-
-/* Hey, when you're trying to init a module, take the time to prepare us a nice
- * 64-bit module structure, even from 32-bit modutils... Why pollute the kernel? :))
- */
-asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
-{
- return sys_init_module(name_user, mod_user);
-}
-
-extern asmlinkage long sys_delete_module(const char *name_user);
-
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return sys_delete_module(name_user);
-}
-
-struct module_info32 {
- u32 addr;
- u32 size;
- u32 flags;
- s32 usecount;
-};
-
-/* Query various bits about modules. */
-
-static inline long
-get_mod_name(const char *user_name, char **buf)
-{
- unsigned long page;
- long retval;
-
- if ((unsigned long)user_name >= TASK_SIZE
- && !segment_eq(get_fs (), KERNEL_DS))
- return -EFAULT;
-
- page = __get_free_page(GFP_KERNEL);
- if (!page)
- return -ENOMEM;
-
- retval = strncpy_from_user((char *)page, user_name, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE) {
- *buf = (char *)page;
- return retval;
- }
- retval = -ENAMETOOLONG;
- } else if (!retval)
- retval = -EINVAL;
-
- free_page(page);
- return retval;
-}
-
-static inline void
-put_mod_name(char *buf)
-{
- free_page((unsigned long)buf);
-}
-
-static __inline__ struct module *
-find_module(const char *name)
-{
- struct module *mod;
-
- for (mod = module_list; mod ; mod = mod->next) {
- if (mod->flags & MOD_DELETED)
- continue;
- if (!strcmp(mod->name, name))
- break;
- }
-
- return mod;
-}
-
-static int
-qm_modules(char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- struct module *mod;
- size_t nmod, space, len;
-
- nmod = space = 0;
-
- for (mod = module_list; mod->next != NULL; mod = mod->next, ++nmod) {
- len = strlen(mod->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, mod->name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nmod, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((mod = mod->next)->next != NULL)
- space += strlen(mod->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_deps(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t i, space, len;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (i = 0; i < mod->ndeps; ++i) {
- const char *dep_name = mod->deps[i].dep->name;
-
- len = strlen(dep_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, dep_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while (++i < mod->ndeps)
- space += strlen(mod->deps[i].dep->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_refs(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t nrefs, space, len;
- struct module_ref *ref;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (nrefs = 0, ref = mod->refs; ref ; ++nrefs, ref = ref->next_ref) {
- const char *ref_name = ref->ref->name;
-
- len = strlen(ref_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, ref_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nrefs, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((ref = ref->next_ref) != NULL)
- space += strlen(ref->ref->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static inline int
-qm_symbols(struct module *mod, char *buf, size_t bufsize,
- __kernel_size_t32 *ret)
-{
- size_t i, space, len;
- struct module_symbol *s;
- char *strings;
- unsigned *vals;
-
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = mod->nsyms * 2*sizeof(u32);
-
- i = len = 0;
- s = mod->syms;
-
- if (space > bufsize)
- goto calc_space_needed;
-
- if (!access_ok(VERIFY_WRITE, buf, space))
- return -EFAULT;
-
- bufsize -= space;
- vals = (unsigned *)buf;
- strings = buf+space;
-
- for (; i < mod->nsyms ; ++i, ++s, vals += 2) {
- len = strlen(s->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
-
- if (copy_to_user(strings, s->name, len)
- || __put_user(s->value, vals+0)
- || __put_user(space, vals+1))
- return -EFAULT;
-
- strings += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
+ __kernel_mode_t32 dir_mode;
+};
-calc_space_needed:
- for (; i < mod->nsyms; ++i, ++s)
- space += strlen(s->name)+1;
+static void *
+do_smb_super_data_conv(void *raw_data)
+{
+ struct smb_mount_data *s = (struct smb_mount_data *)raw_data;
+ struct smb_mount_data32 *s32 = (struct smb_mount_data32 *)raw_data;
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
+ s->version = s32->version;
+ s->mounted_uid = s32->mounted_uid;
+ s->uid = s32->uid;
+ s->gid = s32->gid;
+ s->file_mode = s32->file_mode;
+ s->dir_mode = s32->dir_mode;
+ return raw_data;
}
-static inline int
-qm_info(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
+static int
+copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel)
{
- int error = 0;
-
- if (mod->next == NULL)
- return -EINVAL;
-
- if (sizeof(struct module_info32) <= bufsize) {
- struct module_info32 info;
- info.addr = (unsigned long)mod;
- info.size = mod->size;
- info.flags = mod->flags;
- info.usecount = ((mod_member_present(mod, can_unload)
- && mod->can_unload)
- ? -1 : atomic_read(&mod->uc.usecount));
-
- if (copy_to_user(buf, &info, sizeof(struct module_info32)))
- return -EFAULT;
- } else
- error = -ENOSPC;
+ int i;
+ unsigned long page;
+ struct vm_area_struct *vma;
- if (put_user(sizeof(struct module_info32), ret))
+ *kernel = 0;
+ if(!user)
+ return 0;
+ vma = find_vma(current->mm, (unsigned long)user);
+ if(!vma || (unsigned long)user < vma->vm_start)
return -EFAULT;
-
- return error;
+ if(!(vma->vm_flags & VM_READ))
+ return -EFAULT;
+ i = vma->vm_end - (unsigned long) user;
+ if(PAGE_SIZE <= (unsigned long) i)
+ i = PAGE_SIZE - 1;
+ if(!(page = __get_free_page(GFP_KERNEL)))
+ return -ENOMEM;
+ if(copy_from_user((void *) page, user, i)) {
+ free_page(page);
+ return -EFAULT;
+ }
+ *kernel = page;
+ return 0;
}
+extern asmlinkage long sys_mount(char * dev_name, char * dir_name, char * type,
+ unsigned long new_flags, void *data);
+
+#define SMBFS_NAME "smbfs"
+#define NCPFS_NAME "ncpfs"
+
asmlinkage long
-sys32_query_module(char *name_user, int which, char *buf,
- __kernel_size_t32 bufsize, u32 ret)
+sys32_mount(char *dev_name, char *dir_name, char *type,
+ unsigned long new_flags, u32 data)
{
- struct module *mod;
- int err;
+ unsigned long type_page;
+ int err, is_smb, is_ncp;
- lock_kernel();
- if (name_user == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list; mod->next != NULL; mod = mod->next)
- ;
+ if(!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ is_smb = is_ncp = 0;
+ err = copy_mount_stuff_to_kernel((const void *)type, &type_page);
+ if(err)
+ return err;
+ if(type_page) {
+ is_smb = !strcmp((char *)type_page, SMBFS_NAME);
+ is_ncp = !strcmp((char *)type_page, NCPFS_NAME);
+ }
+ if(!is_smb && !is_ncp) {
+ if(type_page)
+ free_page(type_page);
+ return sys_mount(dev_name, dir_name, type, new_flags,
+ (void *)AA(data));
} else {
- long namelen;
- char *name;
+ unsigned long dev_page, dir_page, data_page;
- if ((namelen = get_mod_name(name_user, &name)) < 0) {
- err = namelen;
- goto out;
- }
- err = -ENOENT;
- if (namelen == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list;
- mod->next != NULL;
- mod = mod->next) ;
- } else if ((mod = find_module(name)) == NULL) {
- put_mod_name(name);
+ err = copy_mount_stuff_to_kernel((const void *)dev_name,
+ &dev_page);
+ if(err)
goto out;
- }
- put_mod_name(name);
- }
-
- switch (which)
- {
- case 0:
- err = 0;
- break;
- case QM_MODULES:
- err = qm_modules(buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_DEPS:
- err = qm_deps(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_REFS:
- err = qm_refs(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_SYMBOLS:
- err = qm_symbols(mod, buf, bufsize,
- (__kernel_size_t32 *)AA(ret));
- break;
- case QM_INFO:
- err = qm_info(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- default:
- err = -EINVAL;
- break;
+ err = copy_mount_stuff_to_kernel((const void *)dir_name,
+ &dir_page);
+ if(err)
+ goto dev_out;
+ err = copy_mount_stuff_to_kernel((const void *)AA(data),
+ &data_page);
+ if(err)
+ goto dir_out;
+ if(is_ncp)
+ do_ncp_super_data_conv((void *)data_page);
+ else if(is_smb)
+ do_smb_super_data_conv((void *)data_page);
+ else
+ panic("The problem is here...");
+ err = do_mount((char *)dev_page, (char *)dir_page,
+ (char *)type_page, new_flags,
+ (void *)data_page);
+ if(data_page)
+ free_page(data_page);
+ dir_out:
+ if(dir_page)
+ free_page(dir_page);
+ dev_out:
+ if(dev_page)
+ free_page(dev_page);
+ out:
+ if(type_page)
+ free_page(type_page);
+ return err;
}
-out:
- unlock_kernel();
- return err;
}
-struct kernel_sym32 {
- u32 value;
- char name[60];
-};
-
-extern asmlinkage long sys_get_kernel_syms(struct kernel_sym *table);
+extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
-asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym32 *table)
+asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
{
- int len, i;
- struct kernel_sym *tbl;
- mm_segment_t old_fs;
+ uid_t sruid, seuid;
- len = sys_get_kernel_syms(NULL);
- if (!table) return len;
- tbl = kmalloc (len * sizeof (struct kernel_sym), GFP_KERNEL);
- if (!tbl) return -ENOMEM;
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- sys_get_kernel_syms(tbl);
- set_fs (old_fs);
- for (i = 0; i < len; i++, table += sizeof (struct kernel_sym32)) {
- if (put_user (tbl[i].value, &table->value) ||
- copy_to_user (table->name, tbl[i].name, 60))
- break;
- }
- kfree (tbl);
- return i;
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ return sys_setreuid(sruid, seuid);
}
-#else /* CONFIG_MODULES */
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, size_t size)
-{
- return -ENOSYS;
-}
+extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
+sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
+ __kernel_uid_t32 suid)
{
- return -ENOSYS;
-}
+ uid_t sruid, seuid, ssuid;
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return -ENOSYS;
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
+ return sys_setresuid(sruid, seuid, ssuid);
}
+extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
+
asmlinkage long
-sys32_query_module(const char *name_user, int which, char *buf, size_t bufsize,
- size_t *ret)
+sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
{
- /* Let the program know about the new interface. Not that
- it'll do them much good. */
- if (which == 0)
- return 0;
+ gid_t srgid, segid;
- return -ENOSYS;
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ return sys_setregid(srgid, segid);
}
+extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
+
asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym *table)
+sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
+ __kernel_gid_t32 sgid)
{
- return -ENOSYS;
-}
+ gid_t srgid, segid, ssgid;
-#endif /* CONFIG_MODULES */
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ ssgid = (sgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)sgid);
+ return sys_setresgid(srgid, segid, ssgid);
+}
/* Stuff for NFS server syscalls... */
struct nfsctl_svc32 {
@@ -4820,154 +4293,6 @@
return err;
}
-asmlinkage long sys_utimes(char *, struct timeval *);
-
-asmlinkage long
-sys32_utimes(char *filename, struct timeval32 *tvs)
-{
- char *kfilename;
- struct timeval ktvs[2];
- mm_segment_t old_fs;
- int ret;
-
- kfilename = getname32(filename);
- ret = PTR_ERR(kfilename);
- if (!IS_ERR(kfilename)) {
- if (tvs) {
- if (get_tv32(&ktvs[0], tvs) ||
- get_tv32(&ktvs[1], 1+tvs))
- return -EFAULT;
- }
-
- old_fs = get_fs();
- set_fs(KERNEL_DS);
- ret = sys_utimes(kfilename, &ktvs[0]);
- set_fs(old_fs);
-
- putname(kfilename);
- }
- return ret;
-}
-
-/* These are here just in case some old ia32 binary calls it. */
-asmlinkage long
-sys32_pause(void)
-{
- current->state = TASK_INTERRUPTIBLE;
- schedule();
- return -ERESTARTNOHAND;
-}
-
-/* PCI config space poking. */
-extern asmlinkage long sys_pciconfig_read(unsigned long bus,
- unsigned long dfn,
- unsigned long off,
- unsigned long len,
- unsigned char *buf);
-
-extern asmlinkage long sys_pciconfig_write(unsigned long bus,
- unsigned long dfn,
- unsigned long off,
- unsigned long len,
- unsigned char *buf);
-
-asmlinkage long
-sys32_pciconfig_read(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
-{
- return sys_pciconfig_read((unsigned long) bus,
- (unsigned long) dfn,
- (unsigned long) off,
- (unsigned long) len,
- (unsigned char *)AA(ubuf));
-}
-
-asmlinkage long
-sys32_pciconfig_write(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
-{
- return sys_pciconfig_write((unsigned long) bus,
- (unsigned long) dfn,
- (unsigned long) off,
- (unsigned long) len,
- (unsigned char *)AA(ubuf));
-}
-
-extern asmlinkage long sys_prctl(int option, unsigned long arg2,
- unsigned long arg3, unsigned long arg4,
- unsigned long arg5);
-
-asmlinkage long
-sys32_prctl(int option, u32 arg2, u32 arg3, u32 arg4, u32 arg5)
-{
- return sys_prctl(option,
- (unsigned long) arg2,
- (unsigned long) arg3,
- (unsigned long) arg4,
- (unsigned long) arg5);
-}
-
-
-extern asmlinkage ssize_t sys_pread(unsigned int fd, char * buf,
- size_t count, loff_t pos);
-
-extern asmlinkage ssize_t sys_pwrite(unsigned int fd, const char * buf,
- size_t count, loff_t pos);
-
-typedef __kernel_ssize_t32 ssize_t32;
-
-asmlinkage ssize_t32
-sys32_pread(unsigned int fd, char *ubuf, __kernel_size_t32 count,
- u32 poshi, u32 poslo)
-{
- return sys_pread(fd, ubuf, count,
- ((loff_t)AA(poshi) << 32) | AA(poslo));
-}
-
-asmlinkage ssize_t32
-sys32_pwrite(unsigned int fd, char *ubuf, __kernel_size_t32 count,
- u32 poshi, u32 poslo)
-{
- return sys_pwrite(fd, ubuf, count,
- ((loff_t)AA(poshi) << 32) | AA(poslo));
-}
-
-
-extern asmlinkage long sys_personality(unsigned long);
-
-asmlinkage long
-sys32_personality(unsigned long personality)
-{
- int ret;
- if (current->personality == PER_LINUX32 && personality == PER_LINUX)
- personality = PER_LINUX32;
- ret = sys_personality(personality);
- if (ret == PER_LINUX32)
- ret = PER_LINUX;
- return ret;
-}
-
-extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offset,
- size_t count);
-
-asmlinkage long
-sys32_sendfile(int out_fd, int in_fd, __kernel_off_t32 *offset, s32 count)
-{
- mm_segment_t old_fs = get_fs();
- int ret;
- off_t of;
-
- if (offset && get_user(of, offset))
- return -EFAULT;
-
- set_fs(KERNEL_DS);
- ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
- set_fs(old_fs);
-
- if (!ret && offset && put_user(of, offset))
- return -EFAULT;
-
- return ret;
-}
-
/* Handle adjtimex compatability. */
struct timex32 {
@@ -5041,4 +4366,4 @@
return ret;
}
-#endif // NOTYET
+#endif /* NOTYET */
diff -urN linux-2.4.13/arch/ia64/kernel/Makefile linux-2.4.13-lia/arch/ia64/kernel/Makefile
--- linux-2.4.13/arch/ia64/kernel/Makefile Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/Makefile Wed Oct 10 17:55:55 2001
@@ -16,7 +16,7 @@
obj-y := acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_ia64.o irq_lsapic.o ivt.o \
machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \
signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
-obj-$(CONFIG_IA64_GENERIC) += machvec.o iosapic.o
+obj-$(CONFIG_IA64_GENERIC) += iosapic.o
obj-$(CONFIG_IA64_DIG) += iosapic.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_EFI_VARS) += efivars.o
diff -urN linux-2.4.13/arch/ia64/kernel/acpi.c linux-2.4.13-lia/arch/ia64/kernel/acpi.c
--- linux-2.4.13/arch/ia64/kernel/acpi.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/acpi.c Thu Oct 4 00:21:39 2001
@@ -9,7 +9,7 @@
* Copyright (C) 2000 Hewlett-Packard Co.
* Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 2000 Intel Corp.
- * Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
+ * Copyright (C) 2000,2001 J.I. Lee <jung-ik.lee@intel.com>
* ACPI based kernel configuration manager.
* ACPI 2.0 & IA64 ext 0.71
*/
@@ -34,6 +34,9 @@
#undef ACPI_DEBUG /* Guess what this does? */
+/* global array to record platform interrupt vectors for generic int routing */
+int platform_irq_list[ACPI_MAX_PLATFORM_IRQS];
+
/* These are ugly but will be reclaimed by the kernel */
int __initdata available_cpus;
int __initdata total_cpus;
@@ -42,7 +45,9 @@
void (*pm_power_off) (void);
asm (".weak iosapic_register_legacy_irq");
+asm (".weak iosapic_register_platform_irq");
asm (".weak iosapic_init");
+asm (".weak iosapic_version");
const char *
acpi_get_sysname (void)
@@ -55,6 +60,8 @@
return "hpsim";
# elif defined (CONFIG_IA64_SGI_SN1)
return "sn1";
+# elif defined (CONFIG_IA64_SGI_SN2)
+ return "sn2";
# elif defined (CONFIG_IA64_DIG)
return "dig";
# else
@@ -65,6 +72,25 @@
}
/*
+ * Interrupt routing API for device drivers.
+ * Provides the interrupt vector for a generic platform event
+ * (currently only CPEI implemented)
+ */
+int
+acpi_request_vector(u32 int_type)
+{
+ int vector = -1;
+
+ if (int_type < ACPI_MAX_PLATFORM_IRQS) {
+ /* correctable platform error interrupt */
+ vector = platform_irq_list[int_type];
+ } else
+ printk("acpi_request_vector(): invalid interrupt type\n");
+
+ return vector;
+}
+
+/*
* Configure legacy IRQ information.
*/
static void __init
@@ -139,15 +165,93 @@
}
/*
- * Info on platform interrupt sources: NMI. PMI, INIT, etc.
+ * Extract iosapic info from madt (again) to determine which iosapic
+ * this platform interrupt resides in
+ */
+static int __init
+acpi20_which_iosapic (int global_vector, acpi_madt_t *madt, u32 *irq_base, char **iosapic_address)
+{
+ acpi_entry_iosapic_t *iosapic;
+ char *p, *end;
+ int ver, max_pin;
+
+ p = (char *) (madt + 1);
+ end = p + (madt->header.length - sizeof(acpi_madt_t));
+
+ while (p < end) {
+ switch (*p) {
+ case ACPI20_ENTRY_IO_SAPIC:
+ /* collect IOSAPIC info for platform int use later */
+ iosapic = (acpi_entry_iosapic_t *)p;
+ *irq_base = iosapic->irq_base;
+ *iosapic_address = ioremap(iosapic->address, 0);
+ /* is this the iosapic we're looking for? */
+ ver = iosapic_version(*iosapic_address);
+ max_pin = (ver >> 16) & 0xff;
+ if ((global_vector - *irq_base) <= max_pin)
+ return 0; /* found it! */
+ break;
+ default:
+ break;
+ }
+ p += p[1];
+ }
+ return 1;
+}
+
+/*
+ * Info on platform interrupt sources: NMI, PMI, INIT, etc.
*/
static void __init
-acpi20_platform (char *p)
+acpi20_platform (char *p, acpi_madt_t *madt)
{
+ int vector;
+ u32 irq_base;
+ char *iosapic_address;
+ unsigned long polarity = 0, trigger = 0;
acpi20_entry_platform_src_t *plat = (acpi20_entry_platform_src_t *) p;
printk("PLATFORM: IOSAPIC %x -> Vector %x on CPU %.04u:%.04u\n",
plat->iosapic_vector, plat->global_vector, plat->eid, plat->id);
+
+ /* record platform interrupt vectors for generic int routing code */
+
+ if (!iosapic_register_platform_irq) {
+ printk("acpi20_platform(): no ACPI platform IRQ support\n");
+ return;
+ }
+
+ /* extract polarity and trigger info from flags */
+ switch (plat->flags) {
+ case 0x5: polarity = 1; trigger = 1; break;
+ case 0x7: polarity = 0; trigger = 1; break;
+ case 0xd: polarity = 1; trigger = 0; break;
+ case 0xf: polarity = 0; trigger = 0; break;
+ default:
+ printk("acpi20_platform(): unknown flags 0x%x\n", plat->flags);
+ break;
+ }
+
+ /* which iosapic does this IRQ belong to? */
+ if (acpi20_which_iosapic(plat->global_vector, madt, &irq_base, &iosapic_address)) {
+ printk("acpi20_platform(): I/O SAPIC not found!\n");
+ return;
+ }
+
+ /*
+ * get vector assignment for this IRQ, set attributes, and program the IOSAPIC
+ * routing table
+ */
+ vector = iosapic_register_platform_irq(plat->int_type,
+ plat->global_vector,
+ plat->iosapic_vector,
+ plat->eid,
+ plat->id,
+ polarity,
+ trigger,
+ irq_base,
+ iosapic_address);
+ platform_irq_list[plat->int_type] = vector;
}
/*
@@ -173,8 +277,10 @@
static void __init
acpi20_parse_madt (acpi_madt_t *madt)
{
- acpi_entry_iosapic_t *iosapic;
+ acpi_entry_iosapic_t *iosapic = NULL;
+ acpi20_entry_lsapic_t *lsapic = NULL;
char *p, *end;
+ int i;
/* Base address of IPI Message Block */
if (madt->lapic_address) {
@@ -186,23 +292,27 @@
p = (char *) (madt + 1);
end = p + (madt->header.length - sizeof(acpi_madt_t));
+ /* Initialize platform interrupt vector array */
+ for (i = 0; i < ACPI_MAX_PLATFORM_IRQS; i++)
+ platform_irq_list[i] = -1;
+
/*
- * Splitted entry parsing to ensure ordering.
+ * Split-up entry parsing to ensure ordering.
*/
-
while (p < end) {
switch (*p) {
- case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE:
+ case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE:
printk("ACPI 2.0 MADT: LOCAL APIC Override\n");
acpi20_lapic_addr_override(p);
break;
- case ACPI20_ENTRY_LOCAL_SAPIC:
+ case ACPI20_ENTRY_LOCAL_SAPIC:
printk("ACPI 2.0 MADT: LOCAL SAPIC\n");
+ lsapic = (acpi20_entry_lsapic_t *) p;
acpi20_lsapic(p);
break;
- case ACPI20_ENTRY_IO_SAPIC:
+ case ACPI20_ENTRY_IO_SAPIC:
iosapic = (acpi_entry_iosapic_t *) p;
if (iosapic_init)
/*
@@ -218,26 +328,25 @@
);
break;
- case ACPI20_ENTRY_PLATFORM_INT_SOURCE:
+ case ACPI20_ENTRY_PLATFORM_INT_SOURCE:
printk("ACPI 2.0 MADT: PLATFORM INT SOURCE\n");
- acpi20_platform(p);
+ acpi20_platform(p, madt);
break;
- case ACPI20_ENTRY_LOCAL_APIC:
+ case ACPI20_ENTRY_LOCAL_APIC:
printk("ACPI 2.0 MADT: LOCAL APIC entry\n"); break;
- case ACPI20_ENTRY_IO_APIC:
+ case ACPI20_ENTRY_IO_APIC:
printk("ACPI 2.0 MADT: IO APIC entry\n"); break;
- case ACPI20_ENTRY_NMI_SOURCE:
+ case ACPI20_ENTRY_NMI_SOURCE:
printk("ACPI 2.0 MADT: NMI SOURCE entry\n"); break;
- case ACPI20_ENTRY_LOCAL_APIC_NMI:
+ case ACPI20_ENTRY_LOCAL_APIC_NMI:
printk("ACPI 2.0 MADT: LOCAL APIC NMI entry\n"); break;
- case ACPI20_ENTRY_INT_SRC_OVERRIDE:
+ case ACPI20_ENTRY_INT_SRC_OVERRIDE:
break;
- default:
+ default:
printk("ACPI 2.0 MADT: unknown entry skip\n"); break;
break;
}
-
p += p[1];
}
@@ -245,16 +354,35 @@
end = p + (madt->header.length - sizeof(acpi_madt_t));
while (p < end) {
+ switch (*p) {
+ case ACPI20_ENTRY_LOCAL_APIC:
+ if (lsapic) break;
+ printk("ACPI 2.0 MADT: LOCAL APIC entry\n");
+ /* parse local apic if there's no local Sapic */
+ break;
+ case ACPI20_ENTRY_IO_APIC:
+ if (iosapic) break;
+ printk("ACPI 2.0 MADT: IO APIC entry\n");
+ /* parse ioapic if there's no ioSapic */
+ break;
+ default:
+ break;
+ }
+ p += p[1];
+ }
+ p = (char *) (madt + 1);
+ end = p + (madt->header.length - sizeof(acpi_madt_t));
+
+ while (p < end) {
switch (*p) {
- case ACPI20_ENTRY_INT_SRC_OVERRIDE:
+ case ACPI20_ENTRY_INT_SRC_OVERRIDE:
printk("ACPI 2.0 MADT: INT SOURCE Override\n");
acpi_legacy_irq(p);
break;
- default:
+ default:
break;
}
-
p += p[1];
}
diff -urN linux-2.4.13/arch/ia64/kernel/efi.c linux-2.4.13-lia/arch/ia64/kernel/efi.c
--- linux-2.4.13/arch/ia64/kernel/efi.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/efi.c Thu Oct 4 00:21:39 2001
@@ -482,5 +482,7 @@
static void __exit
efivars_exit(void)
{
+#ifdef CONFIG_PROC_FS
remove_proc_entry(efi_dir->name, NULL);
+#endif
}
diff -urN linux-2.4.13/arch/ia64/kernel/efi_stub.S linux-2.4.13-lia/arch/ia64/kernel/efi_stub.S
--- linux-2.4.13/arch/ia64/kernel/efi_stub.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/efi_stub.S Thu Oct 4 00:21:39 2001
@@ -1,8 +1,8 @@
/*
* EFI call stub.
*
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * David Mosberger <davidm@hpl.hp.com>
*
* This stub allows us to make EFI calls in physical mode with interrupts
* turned off. We need this because we can't call SetVirtualMap() until
@@ -68,17 +68,17 @@
;;
andcm r16=loc3,r16 // get psr with IT, DT, and RT bits cleared
mov out3=in4
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret0: mov out4=in5
mov out5=in6
mov out6=in7
- br.call.sptk.few rp=b6 // call the EFI function
+ br.call.sptk.many rp=b6 // call the EFI function
.ret1: mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2: mov ar.rsc=loc4 // restore RSE configuration
mov ar.pfs=loc1
mov rp=loc0
mov gp=loc2
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(efi_call_phys)
diff -urN linux-2.4.13/arch/ia64/kernel/efivars.c linux-2.4.13-lia/arch/ia64/kernel/efivars.c
--- linux-2.4.13/arch/ia64/kernel/efivars.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/efivars.c Wed Oct 10 17:40:37 2001
@@ -65,6 +65,7 @@
MODULE_AUTHOR("Matt Domsch <Matt_Domsch@Dell.com>");
MODULE_DESCRIPTION("/proc interface to EFI Variables");
+MODULE_LICENSE("GPL");
#define EFIVARS_VERSION "0.03 2001-Apr-20"
@@ -276,21 +277,20 @@
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
- spin_lock(&efivars_lock);
MOD_INC_USE_COUNT;
var_data = kmalloc(size, GFP_KERNEL);
if (!var_data) {
MOD_DEC_USE_COUNT;
- spin_unlock(&efivars_lock);
return -ENOMEM;
}
if (copy_from_user(var_data, buffer, size)) {
MOD_DEC_USE_COUNT;
- spin_unlock(&efivars_lock);
+ kfree(var_data);
return -EFAULT;
}
+ spin_lock(&efivars_lock);
/* Since the data ptr we've currently got is probably for
a different variable find the right variable.
diff -urN linux-2.4.13/arch/ia64/kernel/entry.S linux-2.4.13-lia/arch/ia64/kernel/entry.S
--- linux-2.4.13/arch/ia64/kernel/entry.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/entry.S Wed Oct 24 18:13:32 2001
@@ -4,7 +4,7 @@
* Kernel entry points.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Asit Mallick <Asit.K.Mallick@intel.com>
@@ -15,7 +15,7 @@
* kernel stack. This allows us to handle interrupts without changing
* to physical mode.
*
- * Jonathan Nickin <nicklin@missioncriticallinux.com>
+ * Jonathan Nicklin <nicklin@missioncriticallinux.com>
* Patrick O'Rourke <orourke@missioncriticallinux.com>
* 11/07/2000
*/
@@ -55,7 +55,7 @@
mov out1=in1 // argv
mov out2=in2 // envp
add out3=16,sp // regs
- br.call.sptk.few rp=sys_execve
+ br.call.sptk.many rp=sys_execve
.ret0: cmp4.ge p6,p7=r8,r0
mov ar.pfs=loc1 // restore ar.pfs
sxt4 r8=r8 // return 64-bit result
@@ -64,7 +64,7 @@
(p6) cmp.ne pKern,pUser=r0,r0 // a successful execve() lands us in user-mode...
mov rp=loc0
(p6) mov ar.pfs=r0 // clear ar.pfs on success
-(p7) br.ret.sptk.few rp
+(p7) br.ret.sptk.many rp
/*
* In theory, we'd have to zap this state only to prevent leaking of
@@ -85,7 +85,7 @@
ldf.fill f26=[sp]; ldf.fill f27=[sp]; mov f28=f0
ldf.fill f29=[sp]; ldf.fill f30=[sp]; mov f31=f0
mov ar.lc=0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_execve)
GLOBAL_ENTRY(sys_clone2)
@@ -99,7 +99,7 @@
mov out3=in2
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
- br.call.sptk.few rp=do_fork
+ br.call.sptk.many rp=do_fork
.ret1: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
@@ -118,7 +118,7 @@
mov out3=0
adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = &regs
mov out0=in0 // out0 = clone_flags
- br.call.sptk.few rp=do_fork
+ br.call.sptk.many rp=do_fork
.ret2: .restore sp
adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
mov ar.pfs=loc1
@@ -143,7 +143,7 @@
shr.u r26=r20,KERNEL_PG_SHIFT
mov r16=KERNEL_PG_NUM
;;
- cmp.ne p6,p7=r26,r16 // check >= 64M && < 128M
+ cmp.ne p6,p7=r26,r16 // check whether r26 != KERNEL_PG_NUM
adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
;;
/*
@@ -151,12 +151,13 @@
* again.
*/
(p6) cmp.eq p7,p6=r26,r27
-(p6) br.cond.dpnt.few .map
+(p6) br.cond.dpnt .map
;;
-.done: ld8 sp=[r21] // load kernel stack pointer of new task
+.done:
(p6) ssm psr.ic // if we had to map, re-enable the psr.ic bit FIRST!!!
;;
(p6) srlz.d
+ ld8 sp=[r21] // load kernel stack pointer of new task
mov IA64_KR(CURRENT)=r20 // update "current" application register
mov r8=r13 // return pointer to previously running task
mov r13=in0 // set "current" pointer
@@ -167,7 +168,7 @@
#ifdef CONFIG_SMP
sync.i // ensure "fc"s done by this CPU are visible on other CPUs
#endif
- br.ret.sptk.few rp // boogie on out in new context
+ br.ret.sptk.many rp // boogie on out in new context
.map:
rsm psr.i | psr.ic
@@ -184,7 +185,7 @@
mov IA64_KR(CURRENT_STACK)=r26 // remember last page we mapped...
;;
itr.d dtr[r25]=r23 // wire in new mapping...
- br.cond.sptk.many .done
+ br.cond.sptk .done
END(ia64_switch_to)
/*
@@ -212,24 +213,18 @@
.save @priunat,r17
mov r17=ar.unat // preserve caller's
.body
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r3=80,sp
;;
lfetch.fault.excl.nt1 [r3],128
-#endif
mov ar.rsc=0 // put RSE in mode: enforced lazy, little endian, pl 0
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
adds r2=16+128,sp
;;
lfetch.fault.excl.nt1 [r2],128
lfetch.fault.excl.nt1 [r3],128
-#endif
adds r14=SW(R4)+16,sp
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
;;
lfetch.fault.excl [r2]
lfetch.fault.excl [r3]
-#endif
adds r15=SW(R5)+16,sp
;;
mov r18=ar.fpsr // preserve fpsr
@@ -309,7 +304,7 @@
st8 [r2]=r20 // save ar.bspstore
st8 [r3]=r21 // save predicate registers
mov ar.rsc=3 // put RSE back into eager mode, pl 0
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
END(save_switch_stack)
/*
@@ -321,11 +316,9 @@
ENTRY(load_switch_stack)
.prologue
.altrp b7
- .body
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
+ .body
lfetch.fault.nt1 [sp]
-#endif
adds r2=SW(AR_BSPSTORE)+16,sp
adds r3=SW(AR_UNAT)+16,sp
mov ar.rsc=0 // put RSE into enforced lazy mode
@@ -426,7 +419,7 @@
;;
(p6) st4 [r2]=r8
(p6) mov r8=-1
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_syscall)
/*
@@ -441,11 +434,11 @@
.body
mov loc2=b6
;;
- br.call.sptk.few rp=syscall_trace
+ br.call.sptk.many rp=syscall_trace
.ret3: mov rp=loc0
mov ar.pfs=loc1
mov b6=loc2
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(invoke_syscall_trace)
/*
@@ -462,21 +455,21 @@
GLOBAL_ENTRY(ia64_trace_syscall)
PT_REGS_UNWIND_INFO(0)
- br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
-.ret6: br.call.sptk.few rp=b6 // do the syscall
+ br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch syscall args
+.ret6: br.call.sptk.many rp=b6 // do the syscall
strace_check_retval:
cmp.lt p6,p0=r8,r0 // syscall failed?
adds r2=PT(R8)+16,sp // r2 = &pt_regs.r8
adds r3=PT(R10)+16,sp // r3 = &pt_regs.r10
mov r10=0
-(p6) br.cond.sptk.few strace_error // syscall failed ->
+(p6) br.cond.sptk strace_error // syscall failed ->
;; // avoid RAW on r10
strace_save_retval:
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
ia64_strace_leave_kernel:
- br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
-.rety: br.cond.sptk.many ia64_leave_kernel
+ br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch return value
+.rety: br.cond.sptk ia64_leave_kernel
strace_error:
ld8 r3=[r2] // load pt_regs.r8
@@ -487,7 +480,7 @@
;;
(p6) mov r10=-1
(p6) mov r8=r9
- br.cond.sptk.few strace_save_retval
+ br.cond.sptk strace_save_retval
END(ia64_trace_syscall)
GLOBAL_ENTRY(ia64_ret_from_clone)
@@ -497,7 +490,7 @@
* Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
* address of the previously executing task.
*/
- br.call.sptk.few rp=invoke_schedule_tail
+ br.call.sptk.many rp=ia64_invoke_schedule_tail
.ret8:
adds r2=IA64_TASK_PTRACE_OFFSET,r13
;;
@@ -505,7 +498,7 @@
;;
mov r8=0
tbit.nz p6,p0=r2,PT_TRACESYS_BIT
-(p6) br strace_check_retval
+(p6) br.cond.spnt strace_check_retval
;; // added stop bits to prevent r8 dependency
END(ia64_ret_from_clone)
// fall through
@@ -519,7 +512,7 @@
(p6) st8.spill [r2]=r8 // store return value in slot for r8 and set unat bit
.mem.offset 8,0
(p6) st8.spill [r3]=r0 // clear error indication in slot for r10 and set unat bit
-(p7) br.cond.spnt.few handle_syscall_error // handle potential syscall failure
+(p7) br.cond.spnt handle_syscall_error // handle potential syscall failure
END(ia64_ret_from_syscall)
// fall through
GLOBAL_ENTRY(ia64_leave_kernel)
@@ -527,22 +520,22 @@
lfetch.fault [sp]
movl r14=.restart
;;
- MOVBR(.ret.sptk,rp,r14,.restart)
+ mov.ret.sptk rp=r14,.restart
.restart:
adds r17=IA64_TASK_NEED_RESCHED_OFFSET,r13
adds r18=IA64_TASK_SIGPENDING_OFFSET,r13
#ifdef CONFIG_PERFMON
- adds r19=IA64_TASK_PFM_NOTIFY_OFFSET,r13
+ adds r19=IA64_TASK_PFM_MUST_BLOCK_OFFSET,r13
#endif
;;
#ifdef CONFIG_PERFMON
- ld8 r19=[r19] // load current->task.pfm_notify
+(pUser) ld8 r19=[r19] // load current->thread.pfm_must_block
#endif
- ld8 r17=[r17] // load current->need_resched
- ld4 r18=[r18] // load current->sigpending
+(pUser) ld8 r17=[r17] // load current->need_resched
+(pUser) ld4 r18=[r18] // load current->sigpending
;;
#ifdef CONFIG_PERFMON
- cmp.ne p9,p0=r19,r0 // current->task.pfm_notify != 0?
+(pUser) cmp.ne.unc p9,p0=r19,r0 // current->thread.pfm_must_block != 0?
#endif
(pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0?
(pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0?
@@ -550,7 +543,7 @@
adds r2=PT(R8)+16,r12
adds r3=PT(R9)+16,r12
#ifdef CONFIG_PERFMON
-(p9) br.call.spnt.many b7=pfm_overflow_notify
+(p9) br.call.spnt.many b7=pfm_block_on_overflow
#endif
#if __GNUC__ < 3
(p7) br.call.spnt.many b7=invoke_schedule
@@ -650,13 +643,13 @@
movl r17=PERCPU_ADDR+IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET
;;
ld4 r17=[r17] // r17 = cpu_data->phys_stacked_size_p8
-(pKern) br.cond.dpnt.few skip_rbs_switch
+(pKern) br.cond.dpnt skip_rbs_switch
/*
* Restore user backing store.
*
* NOTE: alloc, loadrs, and cover can't be predicated.
*/
-(pNonSys) br.cond.dpnt.few dont_preserve_current_frame
+(pNonSys) br.cond.dpnt dont_preserve_current_frame
cover // add current frame into dirty partition
;;
mov r19=ar.bsp // get new backing store pointer
@@ -687,7 +680,7 @@
shladd in0=loc1,3,r17
mov in1=0
;;
- .align 32
+// .align 32 // gas-2.11.90 is unable to generate a stop bit after .align
rse_clear_invalid:
// cycle 0
{ .mii
@@ -706,7 +699,7 @@
}{ .mib
mov loc3=0
mov loc4=0
-(pRecurse) br.call.sptk.few b6=rse_clear_invalid
+(pRecurse) br.call.sptk.many b6=rse_clear_invalid
}{ .mfi // cycle 2
mov loc5=0
@@ -715,7 +708,7 @@
}{ .mib
mov loc6=0
mov loc7=0
-(pReturn) br.ret.sptk.few b6
+(pReturn) br.ret.sptk.many b6
}
# undef pRecurse
# undef pReturn
@@ -761,24 +754,24 @@
;;
.mem.offset 0,0; st8.spill [r2]=r9 // store errno in pt_regs.r8 and set unat bit
.mem.offset 8,0; st8.spill [r3]=r10 // store error indication in pt_regs.r10 and set unat bit
- br.cond.sptk.many ia64_leave_kernel
+ br.cond.sptk ia64_leave_kernel
END(handle_syscall_error)
/*
* Invoke schedule_tail(task) while preserving in0-in7, which may be needed
* in case a system call gets restarted.
*/
-ENTRY(invoke_schedule_tail)
+GLOBAL_ENTRY(ia64_invoke_schedule_tail)
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
alloc loc1=ar.pfs,8,2,1,0
mov loc0=rp
mov out0=r8 // Address of previous task
;;
- br.call.sptk.few rp=schedule_tail
+ br.call.sptk.many rp=schedule_tail
.ret11: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
-END(invoke_schedule_tail)
+END(ia64_invoke_schedule_tail)
#if __GNUC__ < 3
@@ -797,7 +790,7 @@
mov loc0=rp
;;
.body
- br.call.sptk.few rp=schedule
+ br.call.sptk.many rp=schedule
.ret14: mov ar.pfs=loc1
mov rp=loc0
br.ret.sptk.many rp
@@ -824,7 +817,7 @@
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
.body
- br.call.sptk.few rp=ia64_do_signal
+ br.call.sptk.many rp=ia64_do_signal
.ret15: .restore sp
adds sp=16,sp // pop scratch stack space
;;
@@ -849,7 +842,7 @@
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
.body
- br.call.sptk.few rp=ia64_rt_sigsuspend
+ br.call.sptk.many rp=ia64_rt_sigsuspend
.ret17: .restore sp
adds sp=16,sp // pop scratch stack space
;;
@@ -871,15 +864,15 @@
cmp.eq pNonSys,pSys=r0,r0 // sigreturn isn't a normal syscall...
;;
adds out0=16,sp // out0 = &sigscratch
- br.call.sptk.few rp=ia64_rt_sigreturn
+ br.call.sptk.many rp=ia64_rt_sigreturn
.ret19: .restore sp 0
adds sp=16,sp
;;
ld8 r9=[sp] // load new ar.unat
- MOVBR(.sptk,b7,r8,ia64_leave_kernel)
+ mov.sptk b7=r8,ia64_leave_kernel
;;
mov ar.unat=r9
- br b7
+ br.many b7
END(sys_rt_sigreturn)
GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
@@ -890,7 +883,7 @@
mov r16=r0
.prologue
DO_SAVE_SWITCH_STACK
- br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
+ br.call.sptk.many rp=ia64_handle_unaligned // stack frame setup in ivt
.ret21: .body
DO_LOAD_SWITCH_STACK
br.cond.sptk.many rp // goes to ia64_leave_kernel
@@ -920,14 +913,14 @@
adds out0=16,sp // &info
mov out1=r13 // current
adds out2=16+EXTRA_FRAME_SIZE,sp // &switch_stack
- br.call.sptk.few rp=unw_init_frame_info
+ br.call.sptk.many rp=unw_init_frame_info
1: adds out0=16,sp // &info
mov b6=loc2
mov loc2=gp // save gp across indirect function call
;;
ld8 gp=[in0]
mov out1=in1 // arg
- br.call.sptk.few rp=b6 // invoke the callback function
+ br.call.sptk.many rp=b6 // invoke the callback function
1: mov gp=loc2 // restore gp
// For now, we don't allow changing registers from within
@@ -1026,7 +1019,7 @@
data8 sys_setpriority
data8 sys_statfs
data8 sys_fstatfs
- data8 ia64_ni_syscall // 1105
+ data8 sys_gettid // 1105
data8 sys_semget
data8 sys_semop
data8 sys_semctl
@@ -1137,7 +1130,7 @@
data8 sys_clone2
data8 sys_getdents64
data8 sys_getunwind // 1215
- data8 ia64_ni_syscall
+ data8 sys_readahead
data8 ia64_ni_syscall
data8 ia64_ni_syscall
data8 ia64_ni_syscall
diff -urN linux-2.4.13/arch/ia64/kernel/entry.h linux-2.4.13-lia/arch/ia64/kernel/entry.h
--- linux-2.4.13/arch/ia64/kernel/entry.h Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/entry.h Thu Oct 4 00:21:39 2001
@@ -1,12 +1,5 @@
#include <linux/config.h>
-/* XXX fixme */
-#if defined(CONFIG_ITANIUM_B1_SPECIFIC)
-# define MOVBR(type,br,gr,lbl) mov br=gr
-#else
-# define MOVBR(type,br,gr,lbl) mov##type br=gr,lbl
-#endif
-
/*
* Preserved registers that are shared between code in ivt.S and entry.S. Be
* careful not to step on these!
@@ -62,7 +55,7 @@
;; \
.fframe IA64_SWITCH_STACK_SIZE; \
adds sp=-IA64_SWITCH_STACK_SIZE,sp; \
- MOVBR(.ret.sptk,b7,r28,1f); \
+ mov.ret.sptk b7=r28,1f; \
SWITCH_STACK_SAVES(0); \
br.cond.sptk.many save_switch_stack; \
1:
@@ -71,7 +64,7 @@
movl r28=1f; \
;; \
invala; \
- MOVBR(.ret.sptk,b7,r28,1f); \
+ mov.ret.sptk b7=r28,1f; \
br.cond.sptk.many load_switch_stack; \
1: .restore sp; \
adds sp=IA64_SWITCH_STACK_SIZE,sp
diff -urN linux-2.4.13/arch/ia64/kernel/fw-emu.c linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c
--- linux-2.4.13/arch/ia64/kernel/fw-emu.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c Wed Oct 24 18:13:46 2001
@@ -174,6 +174,43 @@
" ;;\n"
" mov ar.lc=r9\n"
" mov r8=r0\n"
+" ;;\n"
+"1: cmp.eq p6,p7=15,r28 /* PAL_PERF_MON_INFO */\n"
+"(p7) br.cond.sptk.few 1f\n"
+" mov r8=0 /* status = 0 */\n"
+" movl r9 =0x12082004 /* generic=4 width=32 retired=8 cycles=18 */\n"
+" mov r10=0 /* reserved */\n"
+" mov r11=0 /* reserved */\n"
+" mov r16=0xffff /* implemented PMC */\n"
+" mov r17=0xffff /* implemented PMD */\n"
+" add r18=8,r29 /* second index */\n"
+" ;;\n"
+" st8 [r29]=r16,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r17,16 /* store implemented PMD */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" mov r16=0xf0 /* cycles count capable PMC */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" mov r17=0x10 /* retired bundles capable PMC */\n"
+" ;;\n"
+" st8 [r29]=r16,16 /* store cycles capable */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r17,16 /* store retired bundle capable */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
+" st8 [r29]=r0,16 /* store implemented PMC */\n"
+" st8 [r18]=r0,16 /* clear remaining bits */\n"
+" ;;\n"
"1: br.cond.sptk.few rp\n"
"stacked:\n"
" br.ret.sptk.few rp\n"
@@ -414,11 +451,6 @@
#ifdef CONFIG_IA64_SDV
strcpy(sal_systab->oem_id, "Intel");
strcpy(sal_systab->product_id, "SDV");
-#endif
-
-#ifdef CONFIG_IA64_SGI_SN1_SIM
- strcpy(sal_systab->oem_id, "SGI");
- strcpy(sal_systab->product_id, "SN1");
#endif
/* fill in an entry point: */
diff -urN linux-2.4.13/arch/ia64/kernel/gate.S linux-2.4.13-lia/arch/ia64/kernel/gate.S
--- linux-2.4.13/arch/ia64/kernel/gate.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/gate.S Thu Oct 4 00:21:39 2001
@@ -3,7 +3,7 @@
* region. For now, it contains the signal trampoline code only.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -18,7 +18,6 @@
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
# define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET)
# define ARG2_OFF (16 + IA64_SIGFRAME_ARG2_OFFSET)
-# define RBS_BASE_OFF (16 + IA64_SIGFRAME_RBS_BASE_OFFSET)
# define SIGHANDLER_OFF (16 + IA64_SIGFRAME_HANDLER_OFFSET)
# define SIGCONTEXT_OFF (16 + IA64_SIGFRAME_SIGCONTEXT_OFFSET)
@@ -32,6 +31,8 @@
# define PR_OFF IA64_SIGCONTEXT_PR_OFFSET
# define RP_OFF IA64_SIGCONTEXT_B0_OFFSET
# define SP_OFF IA64_SIGCONTEXT_R12_OFFSET
+# define RBS_BASE_OFF IA64_SIGCONTEXT_RBS_BASE_OFFSET
+# define LOADRS_OFF IA64_SIGCONTEXT_LOADRS_OFFSET
# define base0 r2
# define base1 r3
/*
@@ -73,34 +74,37 @@
.vframesp SP_OFF+SIGCONTEXT_OFF
.body
- .prologue
+ .label_state 1
+
adds base0=SIGHANDLER_OFF,sp
- adds base1=RBS_BASE_OFF,sp
+ adds base1=RBS_BASE_OFF+SIGCONTEXT_OFF,sp
br.call.sptk.many rp=1f
1:
ld8 r17=[base0],(ARG0_OFF-SIGHANDLER_OFF) // get pointer to signal handler's plabel
- ld8 r15=[base1],(ARG1_OFF-RBS_BASE_OFF) // get address of new RBS base (or NULL)
+ ld8 r15=[base1] // get address of new RBS base (or NULL)
cover // push args in interrupted frame onto backing store
;;
+ cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
+ mov.m r9=ar.bsp // fetch ar.bsp
+ .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+(p8) br.cond.spnt setup_rbs // yup -> (clobbers r14, r15, and r16)
+back_from_setup_rbs:
+
.save ar.pfs, r8
alloc r8=ar.pfs,0,0,3,0 // get CFM0, EC0, and CPL0 into r8
ld8 out0=[base0],16 // load arg0 (signum)
+ adds base1=(ARG1_OFF-(RBS_BASE_OFF+SIGCONTEXT_OFF)),base1
;;
ld8 out1=[base1] // load arg1 (siginfop)
ld8 r10=[r17],8 // get signal handler entry point
;;
ld8 out2=[base0] // load arg2 (sigcontextp)
ld8 gp=[r17] // get signal handler's global pointer
- cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
- mov.m r17=ar.bsp // fetch ar.bsp
- .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
-(p8) br.cond.spnt.few setup_rbs // yup -> (clobbers r14 and r16)
-back_from_setup_rbs:
adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
;;
.spillsp ar.bsp, BSP_OFF+SIGCONTEXT_OFF
- st8 [base0]=r17,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
+ st8 [base0]=r9,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
dep r8=0,r8,38,26 // clear EC0, CPL0 and reserved bits
adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
;;
@@ -123,7 +127,7 @@
;;
stf.spill [base0]=f14,32
stf.spill [base1]=f15,32
- br.call.sptk.few rp=b6 // call the signal handler
+ br.call.sptk.many rp=b6 // call the signal handler
.ret0: adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
;;
ld8 r15=[base0],(CFM_OFF-BSP_OFF) // fetch sc_ar_bsp and advance to CFM_OFF
@@ -131,7 +135,7 @@
;;
ld8 r8=[base0] // restore (perhaps modified) CFM0, EC0, and CPL0
cmp.ne p8,p0=r14,r15 // do we need to restore the rbs?
-(p8) br.cond.spnt.few restore_rbs // yup -> (clobbers r14 and r16)
+(p8) br.cond.spnt restore_rbs // yup -> (clobbers r14 and r16)
;;
back_from_restore_rbs:
adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
@@ -154,30 +158,52 @@
mov r15=__NR_rt_sigreturn
break __BREAK_SYSCALL
+ .body
+ .copy_state 1
setup_rbs:
- flushrs // must be first in insn
mov ar.rsc=0 // put RSE into enforced lazy mode
- adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
- mov r14=ar.rnat // get rnat as updated by flushrs
- mov ar.bspstore=r15 // set new register backing store area
+ .save ar.rnat, r16
+ mov r16=ar.rnat // save RNaT before switching backing store area
+ adds r14=(RNAT_OFF+SIGCONTEXT_OFF),sp
+
+ mov ar.bspstore=r15 // switch over to new register backing store area
;;
.spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
- st8 [r16]=r14 // save sc_ar_rnat
+ st8 [r14]=r16 // save sc_ar_rnat
+ adds r14=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+
+ mov.m r16=ar.bsp // sc_loadrs <- (new bsp - new bspstore) << 16
+ ;;
+ invala
+ sub r15=r16,r15
+ ;;
+ shl r15=r15,16
+ ;;
+ st8 [r14]=r15 // save sc_loadrs
mov ar.rsc=0xf // set RSE into eager mode, pl 3
- invala // invalidate ALAT
- br.cond.sptk.many back_from_setup_rbs
+ br.cond.sptk back_from_setup_rbs
+ .prologue
+ .copy_state 1
+ .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+ .body
restore_rbs:
- flushrs
- mov ar.rsc=0 // put RSE into enforced lazy mode
+ alloc r2=ar.pfs,0,0,0,0 // alloc null frame
+ adds r16=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ ld8 r14=[r16]
adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
;;
+ mov ar.rsc=r14 // put RSE into enforced lazy mode
ld8 r14=[r16] // get new rnat
- mov ar.bspstore=r15 // set old register backing store area
;;
- mov ar.rnat=r14 // establish new rnat
+ loadrs // restore dirty partition
+ ;;
+ mov ar.bspstore=r15 // switch back to old register backing store area
+ ;;
+ mov ar.rnat=r14 // restore RNaT
mov ar.rsc=0xf // (will be restored later on from sc_ar_rsc)
// invala not necessary as that will happen when returning to user-mode
- br.cond.sptk.many back_from_restore_rbs
+ br.cond.sptk back_from_restore_rbs
END(ia64_sigtramp)
diff -urN linux-2.4.13/arch/ia64/kernel/head.S linux-2.4.13-lia/arch/ia64/kernel/head.S
--- linux-2.4.13/arch/ia64/kernel/head.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/head.S Thu Oct 4 00:21:39 2001
@@ -6,8 +6,8 @@
* entry point.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 2001 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Intel Corp.
@@ -86,7 +86,8 @@
/*
* Switch into virtual mode:
*/
- movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN)
+ movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN \
+ |IA64_PSR_DI)
;;
mov cr.ipsr=r16
movl r17=1f
@@ -183,31 +184,31 @@
alloc r2=ar.pfs,0,0,2,0
movl out0=alive_msg
;;
- br.call.sptk.few rp=early_printk
+ br.call.sptk.many rp=early_printk
1: // force new bundle
#endif /* CONFIG_IA64_EARLY_PRINTK */
#ifdef CONFIG_SMP
-(isAP) br.call.sptk.few rp=start_secondary
+(isAP) br.call.sptk.many rp=start_secondary
.ret0:
-(isAP) br.cond.sptk.few self
+(isAP) br.cond.sptk self
#endif
// This is executed by the bootstrap processor (bsp) only:
#ifdef CONFIG_IA64_FW_EMU
// initialize PAL & SAL emulator:
- br.call.sptk.few rp=sys_fw_init
+ br.call.sptk.many rp=sys_fw_init
.ret1:
#endif
- br.call.sptk.few rp=start_kernel
+ br.call.sptk.many rp=start_kernel
.ret2: addl r3=@ltoff(halt_msg),gp
;;
alloc r2=ar.pfs,8,0,2,0
;;
ld8 out0=[r3]
- br.call.sptk.few b0=console_print
-self: br.sptk.few self // endless loop
+ br.call.sptk.many b0=console_print
+self: br.sptk.many self // endless loop
END(_start)
GLOBAL_ENTRY(ia64_save_debug_regs)
@@ -218,7 +219,7 @@
add r19=IA64_NUM_DBG_REGS*8,in0
;;
1: mov r16=dbr[r18]
-#if defined(CONFIG_ITANIUM_C0_SPECIFIC)
+#ifdef CONFIG_ITANIUM
;;
srlz.d
#endif
@@ -227,17 +228,15 @@
;;
st8.nta [in0]=r16,8
st8.nta [r19]=r17,8
- br.cloop.sptk.few 1b
+ br.cloop.sptk.many 1b
;;
mov ar.lc=r20 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_save_debug_regs)
GLOBAL_ENTRY(ia64_load_debug_regs)
alloc r16=ar.pfs,1,0,0,0
-#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC))
lfetch.nta [in0]
-#endif
mov r20=ar.lc // preserve ar.lc
add r19=IA64_NUM_DBG_REGS*8,in0
mov ar.lc=IA64_NUM_DBG_REGS-1
@@ -248,15 +247,15 @@
add r18=1,r18
;;
mov dbr[r18]=r16
-#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) || defined(CONFIG_ITANIUM_C0_SPECIFIC)
+#ifdef CONFIG_ITANIUM
;;
- srlz.d
+ srlz.d // Errata 132 (NoFix status)
#endif
mov ibr[r18]=r17
- br.cloop.sptk.few 1b
+ br.cloop.sptk.many 1b
;;
mov ar.lc=r20 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_load_debug_regs)
GLOBAL_ENTRY(__ia64_save_fpu)
@@ -406,7 +405,7 @@
;;
stf.spill.nta [in0]=f126,32
stf.spill.nta [ r3]=f127,32
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_save_fpu)
GLOBAL_ENTRY(__ia64_load_fpu)
@@ -556,7 +555,7 @@
;;
ldf.fill.nta f126=[in0],32
ldf.fill.nta f127=[ r3],32
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_load_fpu)
GLOBAL_ENTRY(__ia64_init_fpu)
@@ -690,7 +689,7 @@
;;
ldf.fill f126=[sp]
mov f127=f0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__ia64_init_fpu)
/*
@@ -738,7 +737,7 @@
rfi // must be last insn in group
;;
1: mov rp=r14
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_switch_mode)
#ifdef CONFIG_IA64_BRL_EMU
@@ -752,7 +751,7 @@
alloc r16=ar.pfs,1,0,0,0; \
mov reg=r32; \
;; \
- br.ret.sptk rp; \
+ br.ret.sptk.many rp; \
END(ia64_set_##reg)
SET_REG(b1);
@@ -816,12 +815,11 @@
;;
cmp.ne p15,p0=tmp,r0
mov tmp=ar.itc
-(p15) br.cond.sptk.few .retry // lock is still busy
+(p15) br.cond.sptk .retry // lock is still busy
;;
// try acquiring lock (we know ar.ccv is still zero!):
mov tmp=1
;;
- IA64_SEMFIX_INSN
cmpxchg4.acq tmp=[r31],tmp,ar.ccv
;;
cmp.eq p15,p0=tmp,r0
diff -urN linux-2.4.13/arch/ia64/kernel/ia64_ksyms.c linux-2.4.13-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-2.4.13/arch/ia64/kernel/ia64_ksyms.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/ia64_ksyms.c Thu Oct 4 00:21:39 2001
@@ -145,4 +145,3 @@
#include <linux/proc_fs.h>
extern struct proc_dir_entry *efi_dir;
EXPORT_SYMBOL(efi_dir);
-
diff -urN linux-2.4.13/arch/ia64/kernel/iosapic.c linux-2.4.13-lia/arch/ia64/kernel/iosapic.c
--- linux-2.4.13/arch/ia64/kernel/iosapic.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/iosapic.c Thu Oct 4 00:21:39 2001
@@ -53,6 +53,7 @@
#include <asm/acpi-ext.h>
#include <asm/acpikcfg.h>
#include <asm/delay.h>
+#include <asm/hw_irq.h>
#include <asm/io.h>
#include <asm/iosapic.h>
#include <asm/machvec.h>
@@ -325,7 +326,7 @@
set_affinity: iosapic_set_affinity
};
-static unsigned int
+unsigned int
iosapic_version (char *addr)
{
/*
@@ -342,6 +343,113 @@
}
/*
+ * ACPI can describe IOSAPIC interrupts via static tables and namespace
+ * methods. This provides an interface to register those interrupts and
+ * program the IOSAPIC RTE.
+ */
+int
+iosapic_register_irq (u32 global_vector, unsigned long polarity, unsigned long
+ edge_triggered, u32 base_irq, char *iosapic_address)
+{
+ irq_desc_t *idesc;
+ struct hw_interrupt_type *irq_type;
+ int vector;
+
+ vector = iosapic_irq_to_vector(global_vector);
+ if (vector < 0)
+ vector = ia64_alloc_irq();
+
+ /* fill in information from this vector's IOSAPIC */
+ iosapic_irq[vector].addr = iosapic_address;
+ iosapic_irq[vector].base_irq = base_irq;
+ iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq;
+ iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+
+ if (edge_triggered) {
+ iosapic_irq[vector].trigger = IOSAPIC_EDGE;
+ irq_type = &irq_type_iosapic_edge;
+ } else {
+ iosapic_irq[vector].trigger = IOSAPIC_LEVEL;
+ irq_type = &irq_type_iosapic_level;
+ }
+
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type) {
+ if (idesc->handler != &no_irq_type)
+ printk("iosapic_register_irq(): changing vector 0x%02x from"
+ "%s to %s\n", vector, idesc->handler->typename, irq_type->typename);
+ idesc->handler = irq_type;
+ }
+
+ printk("IOSAPIC %x(%s,%s) -> Vector %x\n", global_vector,
+ (polarity ? "high" : "low"), (edge_triggered ? "edge" : "level"), vector);
+
+ /* program the IOSAPIC routing table */
+ set_rte(vector, (ia64_get_lid() >> 16) & 0xffff);
+ return vector;
+}
+
+/*
+ * ACPI calls this when it finds an entry for a platform interrupt.
+ * Note that the irq_base and IOSAPIC address must be set in iosapic_init().
+ */
+int
+iosapic_register_platform_irq (u32 int_type, u32 global_vector, u32 iosapic_vector,
+ u16 eid, u16 id, unsigned long polarity,
+ unsigned long edge_triggered, u32 base_irq, char *iosapic_address)
+{
+ struct hw_interrupt_type *irq_type;
+ irq_desc_t *idesc;
+ int vector;
+
+ switch (int_type) {
+ case ACPI20_ENTRY_PIS_CPEI:
+ vector = IA64_PCE_VECTOR;
+ iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY;
+ break;
+ case ACPI20_ENTRY_PIS_INIT:
+ vector = ia64_alloc_irq();
+ iosapic_irq[vector].dmode = IOSAPIC_INIT;
+ break;
+ default:
+ printk("iosapic_register_platform_irq(): invalid int type\n");
+ return -1;
+ }
+
+ /* fill in information from this vector's IOSAPIC */
+ iosapic_irq[vector].addr = iosapic_address;
+ iosapic_irq[vector].base_irq = base_irq;
+ iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq;
+ iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW;
+
+ if (edge_triggered) {
+ iosapic_irq[vector].trigger = IOSAPIC_EDGE;
+ irq_type = &irq_type_iosapic_edge;
+ } else {
+ iosapic_irq[vector].trigger = IOSAPIC_LEVEL;
+ irq_type = &irq_type_iosapic_level;
+ }
+
+ idesc = irq_desc(vector);
+ if (idesc->handler != irq_type) {
+ if (idesc->handler != &no_irq_type)
+ printk("iosapic_register_platform_irq(): changing vector 0x%02x from"
+ "%s to %s\n", vector, idesc->handler->typename, irq_type->typename);
+ idesc->handler = irq_type;
+ }
+
+ printk("PLATFORM int %x: IOSAPIC %x(%s,%s) -> Vector %x CPU %.02u:%.02u\n",
+ int_type, global_vector, (polarity ? "high" : "low"),
+ (edge_triggered ? "edge" : "level"), vector, eid, id);
+
+ /* program the IOSAPIC routing table */
+ set_rte(vector, ((id << 8) | eid) & 0xffff);
+ return vector;
+}
+
+
+/*
* ACPI calls this when it finds an entry for a legacy ISA interrupt. Note that the
* irq_base and IOSAPIC address must be set in iosapic_init().
*/
@@ -436,7 +544,7 @@
/* the interrupt route is for another controller... */
continue;
- if (irq < 16)
+ if (pcat_compat && (irq < 16))
vector = isa_irq_to_vector(irq);
else {
vector = iosapic_irq_to_vector(irq);
@@ -515,6 +623,23 @@
printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> 0x%02x\n",
dev->bus->number, PCI_SLOT(dev->devfn), pin, vector);
dev->irq = vector;
+
+#ifdef CONFIG_SMP
+ /*
+ * For platforms that do not support interrupt redirect
+ * via the XTP interface, we can round-robin the PCI
+ * device interrupts to the processors
+ */
+ if (!(smp_int_redirect & SMP_IRQ_REDIRECTION)) {
+ static int cpu_index = 0;
+
+ set_rte(vector, cpu_physical_id(cpu_index) & 0xffff);
+
+ cpu_index++;
+ if (cpu_index >= smp_num_cpus)
+ cpu_index = 0;
+ }
+#endif
}
}
/*
diff -urN linux-2.4.13/arch/ia64/kernel/irq.c linux-2.4.13-lia/arch/ia64/kernel/irq.c
--- linux-2.4.13/arch/ia64/kernel/irq.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/irq.c Thu Oct 4 00:21:39 2001
@@ -33,6 +33,7 @@
#include <linux/irq.h>
#include <linux/proc_fs.h>
+#include <asm/atomic.h>
#include <asm/io.h>
#include <asm/smp.h>
#include <asm/system.h>
@@ -121,7 +122,10 @@
end_none
};
-volatile unsigned long irq_err_count;
+atomic_t irq_err_count;
+#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG)
+atomic_t irq_mis_count;
+#endif
/*
* Generic, controller-independent functions:
@@ -164,14 +168,17 @@
p += sprintf(p, "%10u ",
nmi_count(cpu_logical_map(j)));
p += sprintf(p, "\n");
-#if defined(CONFIG_SMP) && defined(__i386__)
+#if defined(CONFIG_SMP) && defined(CONFIG_X86)
p += sprintf(p, "LOC: ");
for (j = 0; j < smp_num_cpus; j++)
p += sprintf(p, "%10u ",
apic_timer_irqs[cpu_logical_map(j)]);
p += sprintf(p, "\n");
#endif
- p += sprintf(p, "ERR: %10lu\n", irq_err_count);
+ p += sprintf(p, "ERR: %10u\n", atomic_read(&irq_err_count));
+#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG)
+ p += sprintf(p, "MIS: %10u\n", atomic_read(&irq_mis_count));
+#endif
return p - buf;
}
@@ -183,7 +190,7 @@
#ifdef CONFIG_SMP
unsigned int global_irq_holder = NO_PROC_ID;
-volatile unsigned long global_irq_lock; /* long for set_bit --RR */
+unsigned volatile long global_irq_lock; /* pedantic: long for set_bit --RR */
extern void show_stack(unsigned long* esp);
@@ -201,14 +208,14 @@
printk(" %d",bh_count(i));
printk(" ]\nStack dumps:");
-#if defined(__ia64__)
+#if defined(CONFIG_IA64)
/*
* We can't unwind the stack of another CPU without access to
* the registers of that CPU. And sending an IPI when we're
* in a potentially wedged state doesn't sound like a smart
* idea.
*/
-#elif defined(__i386__)
+#elif defined(CONFIG_X86)
for(i=0;i< smp_num_cpus;i++) {
unsigned long esp;
if(i==cpu)
@@ -261,7 +268,7 @@
/*
* We have to allow irqs to arrive between __sti and __cli
*/
-# ifdef __ia64__
+# ifdef CONFIG_IA64
# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop 0")
# else
# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop")
@@ -331,6 +338,9 @@
/* Uhhuh.. Somebody else got it. Wait.. */
do {
do {
+#ifdef CONFIG_X86
+ rep_nop();
+#endif
} while (test_bit(0,&global_irq_lock));
} while (test_and_set_bit(0,&global_irq_lock));
}
@@ -364,7 +374,7 @@
{
unsigned int flags;
-#ifdef __ia64__
+#ifdef CONFIG_IA64
__save_flags(flags);
if (flags & IA64_PSR_I) {
__cli();
@@ -403,7 +413,7 @@
int cpu = smp_processor_id();
__save_flags(flags);
-#ifdef __ia64__
+#ifdef CONFIG_IA64
local_enabled = (flags & IA64_PSR_I) != 0;
#else
local_enabled = (flags >> EFLAGS_IF_SHIFT) & 1;
@@ -476,13 +486,19 @@
return status;
}
-/*
- * Generic enable/disable code: this just calls
- * down into the PIC-specific version for the actual
- * hardware disable after having gotten the irq
- * controller lock.
+/**
+ * disable_irq_nosync - disable an irq without waiting
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables and Enables are
+ * nested.
+ * Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
+ *
+ * This function may be called from IRQ context.
*/
-void inline disable_irq_nosync(unsigned int irq)
+
+inline void disable_irq_nosync(unsigned int irq)
{
irq_desc_t *desc = irq_desc(irq);
unsigned long flags;
@@ -495,10 +511,19 @@
spin_unlock_irqrestore(&desc->lock, flags);
}
-/*
- * Synchronous version of the above, making sure the IRQ is
- * no longer running on any other IRQ..
+/**
+ * disable_irq - disable an irq and wait for completion
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Enables and Disables are
+ * nested.
+ * This function waits for any pending IRQ handlers for this interrupt
+ * to complete before returning. If you use this function while
+ * holding a resource the IRQ handler may need you will deadlock.
+ *
+ * This function may be called - with care - from IRQ context.
*/
+
void disable_irq(unsigned int irq)
{
disable_irq_nosync(irq);
@@ -512,6 +537,17 @@
#endif
}
+/**
+ * enable_irq - enable handling of an irq
+ * @irq: Interrupt to enable
+ *
+ * Undoes the effect of one call to disable_irq(). If this
+ * matches the last disable, processing of interrupts on this
+ * IRQ line is re-enabled.
+ *
+ * This function may be called from IRQ context.
+ */
+
void enable_irq(unsigned int irq)
{
irq_desc_t *desc = irq_desc(irq);
@@ -533,7 +569,8 @@
desc->depth--;
break;
case 0:
- printk("enable_irq() unbalanced from %p\n", (void *) __builtin_return_address(0));
+ printk("enable_irq(%u) unbalanced from %p\n",
+ irq, (void *) __builtin_return_address(0));
}
spin_unlock_irqrestore(&desc->lock, flags);
}
@@ -626,11 +663,41 @@
desc->handler->end(irq);
spin_unlock(&desc->lock);
}
- if (local_softirq_pending())
- do_softirq();
return 1;
}
+/**
+ * request_irq - allocate an interrupt line
+ * @irq: Interrupt line to allocate
+ * @handler: Function to be called when the IRQ occurs
+ * @irqflags: Interrupt type flags
+ * @devname: An ascii name for the claiming device
+ * @dev_id: A cookie passed back to the handler function
+ *
+ * This call allocates interrupt resources and enables the
+ * interrupt line and IRQ handling. From the point this
+ * call is made your handler function may be invoked. Since
+ * your handler function must clear any interrupt the board
+ * raises, you must take care both to initialise your hardware
+ * and to set up the interrupt handler in the right order.
+ *
+ * Dev_id must be globally unique. Normally the address of the
+ * device data structure is used as the cookie. Since the handler
+ * receives this value it makes sense to use it.
+ *
+ * If your interrupt is shared you must pass a non NULL dev_id
+ * as this is required when freeing the interrupt.
+ *
+ * Flags:
+ *
+ * SA_SHIRQ Interrupt is shared
+ *
+ * SA_INTERRUPT Disable local interrupts while processing
+ *
+ * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ *
+ */
+
int request_irq(unsigned int irq,
void (*handler)(int, void *, struct pt_regs *),
unsigned long irqflags,
@@ -676,6 +743,24 @@
return retval;
}
+/**
+ * free_irq - free an interrupt
+ * @irq: Interrupt line to free
+ * @dev_id: Device identity to free
+ *
+ * Remove an interrupt handler. The handler is removed and if the
+ * interrupt line is no longer in use by any driver it is disabled.
+ * On a shared IRQ the caller must ensure the interrupt is disabled
+ * on the card it drives before calling this function. The function
+ * does not return until any executing interrupts for this IRQ
+ * have completed.
+ *
+ * This function may be called from interrupt context.
+ *
+ * Bugs: Attempting to free an irq in a handler for the same irq hangs
+ * the machine.
+ */
+
void free_irq(unsigned int irq, void *dev_id)
{
irq_desc_t *desc;
@@ -726,6 +811,17 @@
* with "IRQ_WAITING" cleared and the interrupt
* disabled.
*/
+
+static DECLARE_MUTEX(probe_sem);
+
+/**
+ * probe_irq_on - begin an interrupt autodetect
+ *
+ * Commence probing for an interrupt. The interrupts are scanned
+ * and a mask of potential interrupt lines is returned.
+ *
+ */
+
unsigned long probe_irq_on(void)
{
unsigned int i;
@@ -733,6 +829,7 @@
unsigned long val;
unsigned long delay;
+ down(&probe_sem);
/*
* something may have generated an irq long ago and we want to
* flush such a longstanding irq before considering it as spurious.
@@ -799,10 +896,19 @@
return val;
}
-/*
- * Return a mask of triggered interrupts (this
- * can handle only legacy ISA interrupts).
+/**
+ * probe_irq_mask - scan a bitmap of interrupt lines
+ * @val: mask of interrupts to consider
+ *
+ * Scan the ISA bus interrupt lines and return a bitmap of
+ * active interrupts. The interrupt probe logic state is then
+ * returned to its previous value.
+ *
+ * Note: we need to scan all the irq's even though we will
+ * only return ISA irq numbers - just so that we reset them
+ * all to a known state.
*/
+
unsigned int probe_irq_mask(unsigned long val)
{
int i;
@@ -825,14 +931,29 @@
}
spin_unlock_irq(&desc->lock);
}
+ up(&probe_sem);
return mask & val;
}
-/*
- * Return the one interrupt that triggered (this can
- * handle any interrupt source)
+/**
+ * probe_irq_off - end an interrupt autodetect
+ * @val: mask of potential interrupts (unused)
+ *
+ * Scans the unused interrupt lines and returns the line which
+ * appears to have triggered the interrupt. If no interrupt was
+ * found then zero is returned. If more than one interrupt is
+ * found then minus the first candidate is returned to indicate
+ * there is doubt.
+ *
+ * The interrupt probe logic state is returned to its previous
+ * value.
+ *
+ * BUGS: When used in a module (which arguably shouldn't happen)
+ * nothing prevents two IRQ probe callers from overlapping. The
+ * results of this are non-optimal.
*/
+
int probe_irq_off(unsigned long val)
{
int i, irq_found, nr_irqs;
@@ -857,6 +978,7 @@
}
spin_unlock_irq(&desc->lock);
}
+ up(&probe_sem);
if (nr_irqs > 1)
irq_found = -irq_found;
@@ -911,7 +1033,7 @@
if (!shared) {
desc->depth = 0;
- desc->status &= ~IRQ_DISABLED;
+ desc->status &= ~(IRQ_DISABLED | IRQ_AUTODETECT | IRQ_WAITING);
desc->handler->startup(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
@@ -922,20 +1044,9 @@
static struct proc_dir_entry * root_irq_dir;
static struct proc_dir_entry * irq_dir [NR_IRQS];
-static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
-
-static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
#define HEX_DIGITS 8
-static int irq_affinity_read_proc (char *page, char **start, off_t off,
- int count, int *eof, void *data)
-{
- if (count < HEX_DIGITS+1)
- return -EINVAL;
- return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
-}
-
static unsigned int parse_hex_value (const char *buffer,
unsigned long count, unsigned long *ret)
{
@@ -973,6 +1084,20 @@
return 0;
}
+#if CONFIG_SMP
+
+static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
+
+static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
+
+static int irq_affinity_read_proc (char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+{
+ if (count < HEX_DIGITS+1)
+ return -EINVAL;
+ return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
+}
+
static int irq_affinity_write_proc (struct file *file, const char *buffer,
unsigned long count, void *data)
{
@@ -984,7 +1109,6 @@
err = parse_hex_value(buffer, count, &new_value);
-#if CONFIG_SMP
/*
* Do not allow disabling IRQs completely - it's a too easy
* way to make the system unusable accidentally :-) At least
@@ -992,7 +1116,6 @@
*/
if (!(new_value & cpu_online_map))
return -EINVAL;
-#endif
irq_affinity[irq] = new_value;
irq_desc(irq)->handler->set_affinity(irq, new_value);
@@ -1000,6 +1123,8 @@
return full_count;
}
+#endif /* CONFIG_SMP */
+
static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
@@ -1027,7 +1152,6 @@
static void register_irq_proc (unsigned int irq)
{
- struct proc_dir_entry *entry;
char name [MAX_NAMELEN];
if (!root_irq_dir || (irq_desc(irq)->handler == &no_irq_type))
@@ -1039,15 +1163,22 @@
/* create /proc/irq/1234 */
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
- /* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
-
- entry->nlink = 1;
- entry->data = (void *)(long)irq;
- entry->read_proc = irq_affinity_read_proc;
- entry->write_proc = irq_affinity_write_proc;
+#if CONFIG_SMP
+ {
+ struct proc_dir_entry *entry;
+ /* create /proc/irq/1234/smp_affinity */
+ entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
+
+ if (entry) {
+ entry->nlink = 1;
+ entry->data = (void *)(long)irq;
+ entry->read_proc = irq_affinity_read_proc;
+ entry->write_proc = irq_affinity_write_proc;
+ }
- smp_affinity_entry[irq] = entry;
+ smp_affinity_entry[irq] = entry;
+ }
+#endif
}
unsigned long prof_cpu_mask = -1;
@@ -1062,6 +1193,9 @@
/* create /proc/irq/prof_cpu_mask */
entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
+
+ if (!entry)
+ return;
entry->nlink = 1;
entry->data = (void *)&prof_cpu_mask;
diff -urN linux-2.4.13/arch/ia64/kernel/irq_ia64.c linux-2.4.13-lia/arch/ia64/kernel/irq_ia64.c
--- linux-2.4.13/arch/ia64/kernel/irq_ia64.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/irq_ia64.c Thu Oct 4 00:21:39 2001
@@ -1,9 +1,9 @@
/*
* linux/arch/ia64/kernel/irq.c
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 6/10/99: Updated to bring in sync with x86 version to facilitate
* support for SMP and different interrupt controllers.
@@ -131,6 +131,13 @@
ia64_eoi();
vector = ia64_get_ivr();
}
+ /*
+ * This must be done *after* the ia64_eoi(). For example, the keyboard softirq
+ * handler needs to be able to wait for further keyboard interrupts, which can't
+ * come through until ia64_eoi() has been done.
+ */
+ if (local_softirq_pending())
+ do_softirq();
}
#ifdef CONFIG_SMP
diff -urN linux-2.4.13/arch/ia64/kernel/ivt.S linux-2.4.13-lia/arch/ia64/kernel/ivt.S
--- linux-2.4.13/arch/ia64/kernel/ivt.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/ivt.S Wed Oct 10 17:58:45 2001
@@ -2,8 +2,8 @@
* arch/ia64/kernel/ivt.S
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1998-2001 David Mosberger <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger <davidm@hpl.hp.com>
*
* 00/08/23 Asit Mallick <asit.k.mallick@intel.com> TLB handling for SMP
* 00/12/20 David Mosberger-Tang <davidm@hpl.hp.com> DTLB/ITLB handler now uses virtual PT.
@@ -157,7 +157,7 @@
;;
(p10) itc.i r18 // insert the instruction TLB entry
(p11) itc.d r18 // insert the data TLB entry
-(p6) br.spnt.many page_fault // handle bad address/page not present (page fault)
+(p6) br.cond.spnt.many page_fault // handle bad address/page not present (page fault)
mov cr.ifa=r22
/*
@@ -213,7 +213,7 @@
;;
mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
;;
itc.i r18
;;
@@ -251,7 +251,7 @@
;;
mov b0=r29
tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
;;
itc.d r18
;;
@@ -286,7 +286,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk.many itlb_fault
+(p8) br.cond.dptk itlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
shr.u r18=r16,57 // move address bit 61 to bit 4
@@ -297,7 +297,7 @@
dep r19=r17,r19,0,12 // insert PTE control bits into r19
;;
or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6
-(p8) br.cond.spnt.many page_fault
+(p8) br.cond.spnt page_fault
;;
itc.i r19 // insert the TLB entry
mov pr=r31,-1
@@ -324,7 +324,7 @@
;;
(p8) mov cr.iha=r17
(p8) mov r29=b0 // save b0
-(p8) br.cond.dptk.many dtlb_fault
+(p8) br.cond.dptk dtlb_fault
#endif
extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl
tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on?
@@ -333,7 +333,7 @@
;;
andcm r18=0x10,r18 // bit 4=~address-bit(61)
cmp.ne p8,p0=r0,r23
-(p8) br.cond.spnt.many page_fault
+(p8) br.cond.spnt page_fault
dep r21=-1,r21,IA64_PSR_ED_BIT,1
dep r19=r17,r19,0,12 // insert PTE control bits into r19
@@ -429,7 +429,7 @@
;;
(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
-(p6) br.cond.spnt.many page_fault
+(p6) br.cond.spnt page_fault
mov b0=r30
br.sptk.many b0 // return to continuation point
END(nested_dtlb_miss)
@@ -534,15 +534,6 @@
;;
1: ld8 r18=[r17]
;;
-# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC)
- /*
- * Erratum 85 (Access bit fault could be reported before page not present fault)
- * If the PTE is indicates the page is not present, then just turn this into a
- * page fault.
- */
- tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.sptk page_fault // page wasn't present
-# endif
mov ar.ccv=r18 // set compare value for cmpxchg
or r25=_PAGE_A,r18 // set the accessed bit
;;
@@ -564,15 +555,6 @@
;;
1: ld8 r18=[r17]
;;
-# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC)
- /*
- * Erratum 85 (Access bit fault could be reported before page not present fault)
- * If the PTE is indicates the page is not present, then just turn this into a
- * page fault.
- */
- tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.sptk page_fault // page wasn't present
-# endif
or r18=_PAGE_A,r18 // set the accessed bit
mov b0=r29 // restore b0
;;
@@ -640,7 +622,7 @@
mov r31=pr // prepare to save predicates
;;
cmp.eq p0,p7=r16,r17 // is this a system call? (p7 <- false, if so)
-(p7) br.cond.spnt.many non_syscall
+(p7) br.cond.spnt non_syscall
SAVE_MIN // uses r31; defines r2:
@@ -656,7 +638,7 @@
adds r3=8,r2 // set up second base pointer for SAVE_REST
;;
SAVE_REST
- br.call.sptk rp=demine_args // clear NaT bits in (potential) syscall args
+ br.call.sptk.many rp=demine_args // clear NaT bits in (potential) syscall args
mov r3=255
adds r15=-1024,r15 // r15 contains the syscall number---subtract 1024
@@ -698,7 +680,7 @@
st8 [r16]=r18 // store new value for cr.isr
(p8) br.call.sptk.many b6=b6 // ignore this return addr
- br.cond.sptk.many ia64_trace_syscall
+ br.cond.sptk ia64_trace_syscall
// NOT REACHED
END(break_fault)
@@ -811,8 +793,8 @@
mov b6=r8
;;
cmp.ne p6,p0=0,r8
-(p6) br.call.dpnt b6=b6 // call returns to ia64_leave_kernel
- br.sptk ia64_leave_kernel
+(p6) br.call.dpnt.many b6=b6 // call returns to ia64_leave_kernel
+ br.sptk.many ia64_leave_kernel
END(dispatch_illegal_op_fault)
.align 1024
@@ -855,30 +837,30 @@
adds r15=IA64_PT_REGS_R1_OFFSET + 16,sp
;;
cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
- st8 [r15]=r8 // save orignal EAX in r1 (IA32 procs don't use the GP)
+ st8 [r15]=r8 // save original EAX in r1 (IA32 procs don't use the GP)
;;
alloc r15=ar.pfs,0,0,6,0 // must first in an insn group
;;
- ld4 r8=[r14],8 // r8 = EAX (syscall number)
- mov r15=222 // sys_vfork - last implemented system call
+ ld4 r8=[r14],8 // r8 = eax (syscall number)
+ mov r15=230 // number of entries in ia32 system call table
;;
- cmp.leu.unc p6,p7=r8,r15
- ld4 out1=[r14],8 // r9 = ecx
+ cmp.ltu.unc p6,p7=r8,r15
+ ld4 out1=[r14],8 // r9 = ecx
;;
- ld4 out2=[r14],8 // r10 = edx
+ ld4 out2=[r14],8 // r10 = edx
;;
- ld4 out0=[r14] // r11 = ebx
+ ld4 out0=[r14] // r11 = ebx
adds r14=(IA64_PT_REGS_R8_OFFSET-(8*3)) + 16,sp
;;
- ld4 out5=[r14],8 // r13 = ebp
+ ld4 out5=[r14],8 // r13 = ebp
;;
- ld4 out3=[r14],8 // r14 = esi
+ ld4 out3=[r14],8 // r14 = esi
adds r2=IA64_TASK_PTRACE_OFFSET,r13 // r2 = &current->ptrace
;;
- ld4 out4=[r14] // R15 = edi
+ ld4 out4=[r14] // r15 = edi
movl r16=ia32_syscall_table
;;
-(p6) shladd r16=r8,3,r16 // Force ni_syscall if not valid syscall number
+(p6) shladd r16=r8,3,r16 // force ni_syscall if not valid syscall number
ld8 r2=[r2] // r2 = current->ptrace
;;
ld8 r16=[r16]
@@ -889,12 +871,12 @@
;;
mov rp=r15
(p8) br.call.sptk.many b6=b6
- br.cond.sptk.many ia32_trace_syscall
+ br.cond.sptk ia32_trace_syscall
non_ia32_syscall:
alloc r15=ar.pfs,0,0,2,0
- mov out0=r14 // interrupt #
- add out1=16,sp // pointer to pt_regs
+ mov out0=r14 // interrupt #
+ add out1=16,sp // pointer to pt_regs
;; // avoid WAW on CFM
br.call.sptk.many rp=ia32_bad_interrupt
.ret1: movl r15=ia64_leave_kernel
@@ -1085,7 +1067,7 @@
mov r31=pr
;;
cmp4.eq p6,p0=0,r16
-(p6) br.sptk dispatch_illegal_op_fault
+(p6) br.sptk.many dispatch_illegal_op_fault
;;
mov r19=24 // fault number
br.sptk.many dispatch_to_fault_handler
diff -urN linux-2.4.13/arch/ia64/kernel/mca.c linux-2.4.13-lia/arch/ia64/kernel/mca.c
--- linux-2.4.13/arch/ia64/kernel/mca.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/mca.c Wed Oct 10 17:42:06 2001
@@ -3,12 +3,20 @@
* Purpose: Generic MCA handling layer
*
* Updated for latest kernel
+ * Copyright (C) 2001 Intel
+ * Copyright (C) Fred Lewis (frederick.v.lewis@intel.com)
+ *
* Copyright (C) 2000 Intel
* Copyright (C) Chuck Fleckenstein (cfleck@co.intel.com)
*
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) Vijay Chander(vijay@engr.sgi.com)
*
+ * 01/01/03 F. Lewis Added setup of CMCI and CPEI IRQs, logging of corrected
+ * platform errors, completed code for logging of
+ * corrected & uncorrected machine check errors, and
+ * updated for conformance with Nov. 2000 revision of the
+ * SAL 3.0 spec.
* 00/03/29 C. Fleckenstein Fixed PAL/SAL update issues, began MCA bug fixes, logging issues,
* added min save state dump, added INIT handler.
*/
@@ -16,6 +24,7 @@
#include <linux/types.h>
#include <linux/init.h>
#include <linux/sched.h>
+#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/smp_lock.h>
@@ -27,8 +36,10 @@
#include <asm/mca.h>
#include <asm/irq.h>
-#include <asm/machvec.h>
+#include <asm/hw_irq.h>
+#include <asm/acpi-ext.h>
+#undef MCA_PRT_XTRA_DATA
typedef struct ia64_fptr {
unsigned long fp;
@@ -38,22 +49,67 @@
ia64_mc_info_t ia64_mc_info;
ia64_mca_sal_to_os_state_t ia64_sal_to_os_handoff_state;
ia64_mca_os_to_sal_state_t ia64_os_to_sal_handoff_state;
-u64 ia64_mca_proc_state_dump[256];
+u64 ia64_mca_proc_state_dump[512];
u64 ia64_mca_stack[1024];
u64 ia64_mca_stackframe[32];
u64 ia64_mca_bspstore[1024];
u64 ia64_init_stack[INIT_TASK_SIZE] __attribute__((aligned(16)));
-static void ia64_mca_cmc_vector_setup(int enable,
- int_vector_t cmc_vector);
static void ia64_mca_wakeup_ipi_wait(void);
static void ia64_mca_wakeup(int cpu);
static void ia64_mca_wakeup_all(void);
-static void ia64_log_init(int,int);
-static void ia64_log_get(int,int, prfunc_t);
-static void ia64_log_clear(int,int,int, prfunc_t);
+static void ia64_log_init(int);
extern void ia64_monarch_init_handler (void);
extern void ia64_slave_init_handler (void);
+extern struct hw_interrupt_type irq_type_iosapic_level;
+
+static struct irqaction cmci_irqaction = {
+ handler: ia64_mca_cmc_int_handler,
+ flags: SA_INTERRUPT,
+ name: "cmc_hndlr"
+};
+
+static struct irqaction mca_rdzv_irqaction = {
+ handler: ia64_mca_rendez_int_handler,
+ flags: SA_INTERRUPT,
+ name: "mca_rdzv"
+};
+
+static struct irqaction mca_wkup_irqaction = {
+ handler: ia64_mca_wakeup_int_handler,
+ flags: SA_INTERRUPT,
+ name: "mca_wkup"
+};
+
+static struct irqaction mca_cpe_irqaction = {
+ handler: ia64_mca_cpe_int_handler,
+ flags: SA_INTERRUPT,
+ name: "cpe_hndlr"
+};
+
+/*
+ * ia64_mca_log_sal_error_record
+ *
+ * This function retrieves a specified error record type from SAL, sends it to
+ * the system log, and notifies SALs to clear the record from its non-volatile
+ * memory.
+ *
+ * Inputs : sal_info_type (Type of error record MCA/CMC/CPE/INIT)
+ * Outputs : None
+ */
+void
+ia64_mca_log_sal_error_record(int sal_info_type)
+{
+ /* Get the MCA error record */
+ if (!ia64_log_get(sal_info_type, (prfunc_t)printk))
+ return; // no record retrieved
+
+ /* Log the error record */
+ ia64_log_print(sal_info_type, (prfunc_t)printk);
+
+ /* Clear the CMC SAL logs now that they have been logged */
+ ia64_sal_clear_state_info(sal_info_type);
+}
/*
* hack for now, add platform dependent handlers
@@ -67,10 +123,14 @@
}
void
-cmci_handler_platform (int cmc_irq, void *arg, struct pt_regs *ptregs)
+ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs)
{
+ IA64_MCA_DEBUG("ia64_mca_cpe_int_handler: received interrupt. vector = %#x\n", cpe_irq);
+ /* Get the CMC error record and log it */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE);
}
+
/*
* This routine will be used to deal with platform specific handling
* of the init, i.e. drop into the kernel debugger on server machine,
@@ -81,17 +141,72 @@
init_handler_platform (struct pt_regs *regs)
{
/* if a kernel debugger is available call it here else just dump the registers */
+
show_regs(regs); /* dump the state info */
+ while (1); /* hang city if no debugger */
}
+/*
+ * ia64_mca_init_platform
+ *
+ * External entry for platform specific MCA initialization.
+ *
+ * Inputs
+ * None
+ *
+ * Outputs
+ * None
+ */
void
-log_print_platform ( void *cur_buff_ptr, prfunc_t prfunc)
+ia64_mca_init_platform (void)
{
+
}
+/*
+ * ia64_mca_check_errors
+ *
+ * External entry to check for error records which may have been posted by SAL
+ * for a prior failure which resulted in a machine shutdown before an the
+ * error could be logged. This function must be called after the filesystem
+ * is initialized.
+ *
+ * Inputs : None
+ *
+ * Outputs : None
+ */
void
-ia64_mca_init_platform (void)
+ia64_mca_check_errors (void)
{
+ /*
+ * If there is an MCA error record pending, get it and log it.
+ */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA);
+}
+
+/*
+ * ia64_mca_register_cpev
+ *
+ * Register the corrected platform error vector with SAL.
+ *
+ * Inputs
+ * cpev Corrected Platform Error Vector number
+ *
+ * Outputs
+ * None
+ */
+static void
+ia64_mca_register_cpev (int cpev)
+{
+ /* Register the CPE interrupt vector with SAL */
+ if (ia64_sal_mc_set_params(SAL_MC_PARAM_CPE_INT, SAL_MC_PARAM_MECHANISM_INT, cpev, 0, 0)) {
+ printk("ia64_mca_platform_init: failed to register Corrected "
+ "Platform Error interrupt vector with SAL.\n");
+ return;
+ }
+
+ IA64_MCA_DEBUG("ia64_mca_platform_init: corrected platform error "
+ "vector %#x setup and enabled\n", cpev);
}
#endif /* PLATFORM_MCA_HANDLERS */
@@ -140,30 +255,36 @@
&& !ia64_pmss_dump_bank0))
printk("\n");
}
- /* hang city for now, until we include debugger or copy to ptregs to show: */
- while (1);
}
/*
* ia64_mca_cmc_vector_setup
- * Setup the correctable machine check vector register in the processor
+ *
+ * Setup the corrected machine check vector register in the processor and
+ * unmask interrupt. This function is invoked on a per-processor basis.
+ *
* Inputs
- * Enable (1 - enable cmc interrupt , 0 - disable)
- * CMC handler entry point (if enabled)
+ * None
*
* Outputs
* None
*/
-static void
-ia64_mca_cmc_vector_setup(int enable,
- int_vector_t cmc_vector)
+void
+ia64_mca_cmc_vector_setup (void)
{
cmcv_reg_t cmcv;
cmcv.cmcv_regval = 0;
- cmcv.cmcv_mask = enable;
- cmcv.cmcv_vector = cmc_vector;
+ cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */
+ cmcv.cmcv_vector = IA64_CMC_VECTOR;
ia64_set_cmcv(cmcv.cmcv_regval);
+
+ IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d corrected "
+ "machine check vector %#x setup and enabled.\n",
+ smp_processor_id(), IA64_CMC_VECTOR);
+
+ IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d CMCV = %#016lx\n",
+ smp_processor_id(), ia64_get_cmcv());
}
@@ -174,26 +295,58 @@
void
mca_test(void)
{
- slpi_buf.slpi_valid.slpi_psi = 1;
- slpi_buf.slpi_valid.slpi_cache_check = 1;
- slpi_buf.slpi_valid.slpi_tlb_check = 1;
- slpi_buf.slpi_valid.slpi_bus_check = 1;
- slpi_buf.slpi_valid.slpi_minstate = 1;
- slpi_buf.slpi_valid.slpi_bank1_gr = 1;
- slpi_buf.slpi_valid.slpi_br = 1;
- slpi_buf.slpi_valid.slpi_cr = 1;
- slpi_buf.slpi_valid.slpi_ar = 1;
- slpi_buf.slpi_valid.slpi_rr = 1;
- slpi_buf.slpi_valid.slpi_fr = 1;
+ slpi_buf.valid.psi_static_struct = 1;
+ slpi_buf.valid.num_cache_check = 1;
+ slpi_buf.valid.num_tlb_check = 1;
+ slpi_buf.valid.num_bus_check = 1;
+ slpi_buf.valid.processor_static_info.minstate = 1;
+ slpi_buf.valid.processor_static_info.br = 1;
+ slpi_buf.valid.processor_static_info.cr = 1;
+ slpi_buf.valid.processor_static_info.ar = 1;
+ slpi_buf.valid.processor_static_info.rr = 1;
+ slpi_buf.valid.processor_static_info.fr = 1;
ia64_os_mca_dispatch();
}
#endif /* #if defined(MCA_TEST) */
+
+/*
+ * verify_guid
+ *
+ * Compares a test guid to a target guid and returns result.
+ *
+ * Inputs
+ * test_guid * (ptr to guid to be verified)
+ * target_guid * (ptr to standard guid to be verified against)
+ *
+ * Outputs
+ * 0 (test verifies against target)
+ * non-zero (test guid does not verify)
+ */
+static int
+verify_guid (efi_guid_t *test, efi_guid_t *target)
+{
+ int rc;
+
+ if ((rc = memcmp((void *)test, (void *)target, sizeof(efi_guid_t)))) {
+ IA64_MCA_DEBUG("ia64_mca_print: invalid guid = "
+ "{ %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, "
+ "%#02x, %#02x, %#02x, %#02x, } } \n ",
+ test->data1, test->data2, test->data3, test->data4[0],
+ test->data4[1], test->data4[2], test->data4[3],
+ test->data4[4], test->data4[5], test->data4[6],
+ test->data4[7]);
+ }
+
+ return rc;
+}
+
/*
* ia64_mca_init
- * Do all the mca specific initialization on a per-processor basis.
+ *
+ * Do all the system level mca specific initialization.
*
* 1. Register spinloop and wakeup request interrupt vectors
*
@@ -201,77 +354,80 @@
*
* 3. Register OS_INIT handler entry point
*
- * 4. Initialize CMCV register to enable/disable CMC interrupt on the
- * processor and hook a handler in the platform-specific ia64_mca_init.
+ * 4. Initialize MCA/CMC/INIT related log buffers maintained by the OS.
*
- * 5. Initialize MCA/CMC/INIT related log buffers maintained by the OS.
+ * Note that this initialization is done very early before some kernel
+ * services are available.
*
- * Inputs
- * None
- * Outputs
- * None
+ * Inputs : None
+ *
+ * Outputs : None
*/
void __init
ia64_mca_init(void)
{
ia64_fptr_t *mon_init_ptr = (ia64_fptr_t *)ia64_monarch_init_handler;
ia64_fptr_t *slave_init_ptr = (ia64_fptr_t *)ia64_slave_init_handler;
+ ia64_fptr_t *mca_hldlr_ptr = (ia64_fptr_t *)ia64_os_mca_dispatch;
int i;
+ s64 rc;
- IA64_MCA_DEBUG("ia64_mca_init : begin\n");
+ IA64_MCA_DEBUG("ia64_mca_init: begin\n");
/* Clear the Rendez checkin flag for all cpus */
- for(i = 0 ; i < IA64_MAXCPUS; i++)
+ for(i = 0 ; i < NR_CPUS; i++)
ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
- /* NOTE : The actual irqs for the rendez, wakeup and
- * cmc interrupts are requested in the platform-specific
- * mca initialization code.
- */
/*
* Register the rendezvous spinloop and wakeup mechanism with SAL
*/
/* Register the rendezvous interrupt vector with SAL */
- if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
- SAL_MC_PARAM_MECHANISM_INT,
- IA64_MCA_RENDEZ_VECTOR,
- IA64_MCA_RENDEZ_TIMEOUT,
- 0))
+ if ((rc = ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
+ SAL_MC_PARAM_MECHANISM_INT,
+ IA64_MCA_RENDEZ_VECTOR,
+ IA64_MCA_RENDEZ_TIMEOUT,
+ 0)))
+ {
+ printk("ia64_mca_init: Failed to register rendezvous interrupt "
+ "with SAL. rc = %ld\n", rc);
return;
+ }
/* Register the wakeup interrupt vector with SAL */
- if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
- SAL_MC_PARAM_MECHANISM_INT,
- IA64_MCA_WAKEUP_VECTOR,
- 0,
- 0))
+ if ((rc = ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
+ SAL_MC_PARAM_MECHANISM_INT,
+ IA64_MCA_WAKEUP_VECTOR,
+ 0, 0)))
+ {
+ printk("ia64_mca_init: Failed to register wakeup interrupt with SAL. rc = %ld\n",
+ rc);
return;
+ }
- IA64_MCA_DEBUG("ia64_mca_init : registered mca rendezvous spinloop and wakeup mech.\n");
- /*
- * Setup the correctable machine check vector
- */
- ia64_mca_cmc_vector_setup(IA64_CMC_INT_ENABLE, IA64_CMC_VECTOR);
-
- IA64_MCA_DEBUG("ia64_mca_init : correctable mca vector setup done\n");
+ IA64_MCA_DEBUG("ia64_mca_init: registered mca rendezvous spinloop and wakeup mech.\n");
- ia64_mc_info.imi_mca_handler = __pa(ia64_os_mca_dispatch);
+ ia64_mc_info.imi_mca_handler = __pa(mca_hldlr_ptr->fp);
/*
* XXX - disable SAL checksum by setting size to 0; should be
* __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
*/
ia64_mc_info.imi_mca_handler_size = 0;
- /* Register the os mca handler with SAL */
- if (ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
- ia64_mc_info.imi_mca_handler,
- __pa(ia64_get_gp()),
- ia64_mc_info.imi_mca_handler_size,
- 0,0,0))
+ /* Register the os mca handler with SAL */
+ if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
+ ia64_mc_info.imi_mca_handler,
+ mca_hldlr_ptr->gp,
+ ia64_mc_info.imi_mca_handler_size,
+ 0, 0, 0)))
+ {
+ printk("ia64_mca_init: Failed to register os mca handler with SAL. rc = %ld\n",
+ rc);
return;
+ }
- IA64_MCA_DEBUG("ia64_mca_init : registered os mca handler with SAL\n");
+ IA64_MCA_DEBUG("ia64_mca_init: registered os mca handler with SAL at 0x%lx, gp = 0x%lx\n",
+ ia64_mc_info.imi_mca_handler, mca_hldlr_ptr->gp);
/*
* XXX - disable SAL checksum by setting size to 0, should be
@@ -282,53 +438,87 @@
ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp);
ia64_mc_info.imi_slave_init_handler_size = 0;
- IA64_MCA_DEBUG("ia64_mca_init : os init handler at %lx\n",ia64_mc_info.imi_monarch_init_handler);
+ IA64_MCA_DEBUG("ia64_mca_init: os init handler at %lx\n",
+ ia64_mc_info.imi_monarch_init_handler);
/* Register the os init handler with SAL */
- if (ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
- ia64_mc_info.imi_monarch_init_handler,
- __pa(ia64_get_gp()),
- ia64_mc_info.imi_monarch_init_handler_size,
- ia64_mc_info.imi_slave_init_handler,
- __pa(ia64_get_gp()),
- ia64_mc_info.imi_slave_init_handler_size))
+ if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
+ ia64_mc_info.imi_monarch_init_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_monarch_init_handler_size,
+ ia64_mc_info.imi_slave_init_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_slave_init_handler_size)))
+ {
+ printk("ia64_mca_init: Failed to register m/s init handlers with SAL. rc = %ld\n",
+ rc);
+ return;
+ }
+ IA64_MCA_DEBUG("ia64_mca_init: registered os init handler with SAL\n");
- return;
+ /*
+ * Configure the CMCI vector and handler. Interrupts for CMC are
+ * per-processor, so AP CMC interrupts are setup in smp_callin() (smp.c).
+ */
+ register_percpu_irq(IA64_CMC_VECTOR, &cmci_irqaction);
+ ia64_mca_cmc_vector_setup(); /* Setup vector on BSP & enable */
- IA64_MCA_DEBUG("ia64_mca_init : registered os init handler with SAL\n");
+ /* Setup the MCA rendezvous interrupt vector */
+ register_percpu_irq(IA64_MCA_RENDEZ_VECTOR, &mca_rdzv_irqaction);
+
+ /* Setup the MCA wakeup interrupt vector */
+ register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction);
+
+ /* Setup the CPE interrupt vector */
+ {
+ irq_desc_t *desc;
+ unsigned int irq;
+ int cpev = acpi_request_vector(ACPI20_ENTRY_PIS_CPEI);
+
+ if (cpev >= 0) {
+ for (irq = 0; irq < NR_IRQS; ++irq)
+ if (irq_to_vector(irq) == cpev) {
+ desc = irq_desc(irq);
+ desc->status |= IRQ_PER_CPU;
+ desc->handler = &irq_type_iosapic_level;
+ setup_irq(irq, &mca_cpe_irqaction);
+ }
+ ia64_mca_register_cpev(cpev);
+ } else
+ printk("ia64_mca_init: Failed to get routed CPEI vector from ACPI.\n");
+ }
/* Initialize the areas set aside by the OS to buffer the
* platform/processor error states for MCA/INIT/CMC
* handling.
*/
- ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM);
- ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM);
- ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR);
- ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM);
-
- ia64_mca_init_platform();
-
- IA64_MCA_DEBUG("ia64_mca_init : platform-specific mca handling setup done\n");
+ ia64_log_init(SAL_INFO_TYPE_MCA);
+ ia64_log_init(SAL_INFO_TYPE_INIT);
+ ia64_log_init(SAL_INFO_TYPE_CMC);
+ ia64_log_init(SAL_INFO_TYPE_CPE);
#if defined(MCA_TEST)
mca_test();
#endif /* #if defined(MCA_TEST) */
printk("Mca related initialization done\n");
+
+#if 0 // Too early in initialization -- error log is lost
+ /* Do post-failure MCA error logging */
+ ia64_mca_check_errors();
+#endif // Too early in initialization -- error log is lost
}
/*
* ia64_mca_wakeup_ipi_wait
+ *
* Wait for the inter-cpu interrupt to be sent by the
* monarch processor once it is done with handling the
* MCA.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_wakeup_ipi_wait(void)
@@ -339,16 +529,16 @@
do {
switch(irr_num) {
- case 0:
+ case 0:
irr = ia64_get_irr0();
break;
- case 1:
+ case 1:
irr = ia64_get_irr1();
break;
- case 2:
+ case 2:
irr = ia64_get_irr2();
break;
- case 3:
+ case 3:
irr = ia64_get_irr3();
break;
}
@@ -357,26 +547,28 @@
/*
* ia64_mca_wakeup
+ *
* Send an inter-cpu interrupt to wake-up a particular cpu
* and mark that cpu to be out of rendez.
- * Inputs
- * cpuid
- * Outputs
- * None
+ *
+ * Inputs : cpuid
+ * Outputs : None
*/
void
ia64_mca_wakeup(int cpu)
{
platform_send_ipi(cpu, IA64_MCA_WAKEUP_VECTOR, IA64_IPI_DM_INT, 0);
ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
+
}
+
/*
* ia64_mca_wakeup_all
+ *
* Wakeup all the cpus which have rendez'ed previously.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_wakeup_all(void)
@@ -389,15 +581,16 @@
ia64_mca_wakeup(cpu);
}
+
/*
* ia64_mca_rendez_interrupt_handler
+ *
* This is handler used to put slave processors into spinloop
* while the monarch processor does the mca handling and later
* wake each slave up once the monarch is done.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_rendez_int_handler(int rendez_irq, void *arg, struct pt_regs *ptregs)
@@ -423,23 +616,22 @@
/* Enable all interrupts */
restore_flags(flags);
-
-
}
/*
* ia64_mca_wakeup_int_handler
+ *
* The interrupt handler for processing the inter-cpu interrupt to the
* slave cpu which was spinning in the rendez loop.
* Since this spinning is done by turning off the interrupts and
* polling on the wakeup-interrupt bit in the IRR, there is
* nothing useful to be done in the handler.
- * Inputs
- * wakeup_irq (Wakeup-interrupt bit)
+ *
+ * Inputs : wakeup_irq (Wakeup-interrupt bit)
* arg (Interrupt handler specific argument)
* ptregs (Exception frame at the time of the interrupt)
- * Outputs
+ * Outputs : None
*
*/
void
@@ -450,16 +642,16 @@
/*
* ia64_return_to_sal_check
+ *
* This is function called before going back from the OS_MCA handler
* to the OS_MCA dispatch code which finally takes the control back
* to the SAL.
* The main purpose of this routine is to setup the OS_MCA to SAL
* return state which can be used by the OS_MCA dispatch code
* just before going back to SAL.
- * Inputs
- * None
- * Outputs
- * None
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
@@ -474,11 +666,13 @@
ia64_os_to_sal_handoff_state.imots_sal_check_ra = ia64_sal_to_os_handoff_state.imsto_sal_check_ra;
- /* For now ignore the MCA */
- ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_CORRECTED;
+ /* Cold Boot for uncorrectable MCA */
+ ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_COLD_BOOT;
}
+
/*
* ia64_mca_ucmc_handler
+ *
* This is uncorrectable machine check handler called from OS_MCA
* dispatch code which is in turn called from SAL_CHECK().
* This is the place where the core of OS MCA handling is done.
@@ -487,93 +681,92 @@
* monarch processor. Once the monarch is done with MCA handling
* further MCA logging is enabled by clearing logs.
* Monarch also has the duty of sending wakeup-IPIs to pull the
- * slave processors out of rendez. spinloop.
- * Inputs
- * None
- * Outputs
- * None
+ * slave processors out of rendezvous spinloop.
+ *
+ * Inputs : None
+ * Outputs : None
*/
void
ia64_mca_ucmc_handler(void)
{
+#if 0 /* stubbed out @FVL */
+ /*
+ * Attempting to log a DBE error Causes "reserved register/field panic"
+ * in printk.
+ */
- /* Get the MCA processor log */
- ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- /* Get the MCA platform log */
- ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
-
- ia64_log_print(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ /* Get the MCA error record and log it */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA);
+#endif /* stubbed out @FVL */
/*
- * Do some error handling - Platform-specific mca handler is called at this point
+ * Do Platform-specific mca error handling if required.
*/
-
mca_handler_platform() ;
- /* Clear the SAL MCA logs */
- ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, 1, printk);
- ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, 1, printk);
-
- /* Wakeup all the processors which are spinning in the rendezvous
- * loop.
+ /*
+ * Wakeup all the processors which are spinning in the rendezvous
+ * loop.
*/
ia64_mca_wakeup_all();
+
+ /* Return to SAL */
ia64_return_to_sal_check();
}
/*
* ia64_mca_cmc_int_handler
- * This is correctable machine check interrupt handler.
+ *
+ * This is corrected machine check interrupt handler.
* Right now the logs are extracted and displayed in a well-defined
* format.
+ *
* Inputs
- * None
+ * interrupt number
+ * client data arg ptr
+ * saved registers ptr
+ *
* Outputs
* None
*/
void
ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs)
{
- /* Get the CMC processor log */
- ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- /* Get the CMC platform log */
- ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
-
-
- ia64_log_print(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- cmci_handler_platform(cmc_irq, arg, ptregs);
+ IA64_MCA_DEBUG("ia64_mca_cmc_int_handler: received interrupt vector = %#x on CPU %d\n",
+ cmc_irq, smp_processor_id());
- /* Clear the CMC SAL logs now that they have been saved in the OS buffer */
- ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC);
+ /* Get the CMC error record and log it */
+ ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CMC);
}
/*
* IA64_MCA log support
*/
#define IA64_MAX_LOGS 2 /* Double-buffering for nested MCAs */
-#define IA64_MAX_LOG_TYPES 3 /* MCA, CMC, INIT */
-#define IA64_MAX_LOG_SUBTYPES 2 /* Processor, Platform */
+#define IA64_MAX_LOG_TYPES 4 /* MCA, INIT, CMC, CPE */
-typedef struct ia64_state_log_s {
+typedef struct ia64_state_log_s
+{
spinlock_t isl_lock;
int isl_index;
- ia64_psilog_t isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */
+ ia64_err_rec_t isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */
} ia64_state_log_t;
-static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES][IA64_MAX_LOG_SUBTYPES];
+static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES];
-#define IA64_LOG_LOCK_INIT(it, sit) spin_lock_init(&ia64_state_log[it][sit].isl_lock)
-#define IA64_LOG_LOCK(it, sit) spin_lock_irqsave(&ia64_state_log[it][sit].isl_lock, s)
-#define IA64_LOG_UNLOCK(it, sit) spin_unlock_irqrestore(&ia64_state_log[it][sit].isl_lock,\
- s)
-#define IA64_LOG_NEXT_INDEX(it, sit) ia64_state_log[it][sit].isl_index
-#define IA64_LOG_CURR_INDEX(it, sit) 1 - ia64_state_log[it][sit].isl_index
-#define IA64_LOG_INDEX_INC(it, sit) \
- ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index
-#define IA64_LOG_INDEX_DEC(it, sit) \
- ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index
-#define IA64_LOG_NEXT_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_NEXT_INDEX(it,sit)]))
-#define IA64_LOG_CURR_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_CURR_INDEX(it,sit)]))
+/* Note: Some of these macros assume IA64_MAX_LOGS is always 2. Should be */
+/* fixed. @FVL */
+#define IA64_LOG_LOCK_INIT(it) spin_lock_init(&ia64_state_log[it].isl_lock)
+#define IA64_LOG_LOCK(it) spin_lock_irqsave(&ia64_state_log[it].isl_lock, s)
+#define IA64_LOG_UNLOCK(it) spin_unlock_irqrestore(&ia64_state_log[it].isl_lock,s)
+#define IA64_LOG_NEXT_INDEX(it) ia64_state_log[it].isl_index
+#define IA64_LOG_CURR_INDEX(it) 1 - ia64_state_log[it].isl_index
+#define IA64_LOG_INDEX_INC(it) \
+ ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index
+#define IA64_LOG_INDEX_DEC(it) \
+ ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index
+#define IA64_LOG_NEXT_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)]))
+#define IA64_LOG_CURR_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)]))
/*
* C portion of the OS INIT handler
@@ -584,123 +777,217 @@
*
* Returns:
* 0 if SAL must warm boot the System
- * 1 if SAL must retrun to interrupted context using PAL_MC_RESUME
+ * 1 if SAL must return to interrupted context using PAL_MC_RESUME
*
*/
-
void
ia64_init_handler (struct pt_regs *regs)
{
sal_log_processor_info_t *proc_ptr;
- ia64_psilog_t *plog_ptr;
+ ia64_err_rec_t *plog_ptr;
printk("Entered OS INIT handler\n");
/* Get the INIT processor log */
- ia64_log_get(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
- /* Get the INIT platform log */
- ia64_log_get(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
+ if (!ia64_log_get(SAL_INFO_TYPE_INIT, (prfunc_t)printk))
+ return; // no record retrieved
#ifdef IA64_DUMP_ALL_PROC_INFO
- ia64_log_print(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ ia64_log_print(SAL_INFO_TYPE_INIT, (prfunc_t)printk);
#endif
/*
* get pointer to min state save area
*
*/
- plog_ptr=(ia64_psilog_t *)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT,
- SAL_SUB_INFO_TYPE_PROCESSOR);
- proc_ptr = &plog_ptr->devlog.proclog;
+ plog_ptr=(ia64_err_rec_t *)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT);
+ proc_ptr = &plog_ptr->proc_err;
- ia64_process_min_state_save(&proc_ptr->slpi_min_state_area,regs);
-
- init_handler_platform(regs); /* call platform specific routines */
+ ia64_process_min_state_save(&proc_ptr->processor_static_info.min_state_area,
+ regs);
/* Clear the INIT SAL logs now that they have been saved in the OS buffer */
ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT);
+
+ init_handler_platform(regs); /* call platform specific routines */
+}
+
+/*
+ * ia64_log_prt_guid
+ *
+ * Print a formatted GUID.
+ *
+ * Inputs : p_guid (ptr to the GUID)
+ * prfunc (print function)
+ * Outputs : None
+ *
+ */
+void
+ia64_log_prt_guid (efi_guid_t *p_guid, prfunc_t prfunc)
+{
+ printk("GUID = { %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, "
+ "%#02x, %#02x, %#02x, %#02x, } } \n ", p_guid->data1,
+ p_guid->data2, p_guid->data3, p_guid->data4[0], p_guid->data4[1],
+ p_guid->data4[2], p_guid->data4[3], p_guid->data4[4],
+ p_guid->data4[5], p_guid->data4[6], p_guid->data4[7]);
+}
+
+static void
+ia64_log_hexdump(unsigned char *p, unsigned long n_ch, prfunc_t prfunc)
+{
+ int i, j;
+
+ if (!p)
+ return;
+
+ for (i = 0; i < n_ch;) {
+ prfunc("%p ", (void *)p);
+ for (j = 0; (j < 16) && (i < n_ch); i++, j++, p++) {
+ prfunc("%02x ", *p);
+ }
+ prfunc("\n");
+ }
+}
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+
+static void
+ia64_log_prt_record_header (sal_log_record_header_t *rh, prfunc_t prfunc)
+{
+ prfunc("SAL RECORD HEADER: Record buffer = %p, header size = %ld\n",
+ (void *)rh, sizeof(sal_log_record_header_t));
+ ia64_log_hexdump((unsigned char *)rh, sizeof(sal_log_record_header_t),
+ (prfunc_t)prfunc);
+ prfunc("Total record length = %d\n", rh->len);
+ ia64_log_prt_guid(&rh->platform_guid, prfunc);
+ prfunc("End of SAL RECORD HEADER\n");
+}
+
+static void
+ia64_log_prt_section_header (sal_log_section_hdr_t *sh, prfunc_t prfunc)
+{
+ prfunc("SAL SECTION HEADER: Record buffer = %p, header size = %ld\n",
+ (void *)sh, sizeof(sal_log_section_hdr_t));
+ ia64_log_hexdump((unsigned char *)sh, sizeof(sal_log_section_hdr_t),
+ (prfunc_t)prfunc);
+ prfunc("Length of section & header = %d\n", sh->len);
+ ia64_log_prt_guid(&sh->guid, prfunc);
+ prfunc("End of SAL SECTION HEADER\n");
}
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
/*
* ia64_log_init
* Reset the OS ia64 log buffer
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE})
* Outputs : None
*/
void
-ia64_log_init(int sal_info_type, int sal_sub_info_type)
+ia64_log_init(int sal_info_type)
{
- IA64_LOG_LOCK_INIT(sal_info_type, sal_sub_info_type);
- IA64_LOG_NEXT_INDEX(sal_info_type, sal_sub_info_type) = 0;
- memset(IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type), 0,
- sizeof(ia64_psilog_t) * IA64_MAX_LOGS);
+ IA64_LOG_LOCK_INIT(sal_info_type);
+ IA64_LOG_NEXT_INDEX(sal_info_type) = 0;
+ memset(IA64_LOG_NEXT_BUFFER(sal_info_type), 0,
+ sizeof(ia64_err_rec_t) * IA64_MAX_LOGS);
}
/*
* ia64_log_get
+ *
* Get the current MCA log from SAL and copy it into the OS log buffer.
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
- * Outputs : None
+ *
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE})
+ * prfunc (fn ptr of log output function)
+ * Outputs : size (total record length)
*
*/
-void
-ia64_log_get(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc)
+u64
+ia64_log_get(int sal_info_type, prfunc_t prfunc)
{
- sal_log_header_t *log_buffer;
- int s,total_len=0;
-
- IA64_LOG_LOCK(sal_info_type, sal_sub_info_type);
+ sal_log_record_header_t *log_buffer;
+ u64 total_len = 0;
+ int s;
+ IA64_LOG_LOCK(sal_info_type);
/* Get the process state information */
- log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type);
-
- if (!(total_len=ia64_sal_get_state_info(sal_info_type,(u64 *)log_buffer)))
- prfunc("ia64_mca_log_get : Getting processor log failed\n");
-
- IA64_MCA_DEBUG("ia64_log_get: retrieved %d bytes of error information\n",total_len);
+ log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type);
- IA64_LOG_INDEX_INC(sal_info_type, sal_sub_info_type);
-
- IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type);
+ total_len = ia64_sal_get_state_info(sal_info_type, (u64 *)log_buffer);
+ if (total_len) {
+ IA64_LOG_INDEX_INC(sal_info_type);
+ IA64_LOG_UNLOCK(sal_info_type);
+ IA64_MCA_DEBUG("ia64_log_get: SAL error record type %d retrieved. "
+ "Record length = %ld\n", sal_info_type, total_len);
+ return total_len;
+ } else {
+ IA64_LOG_UNLOCK(sal_info_type);
+ prfunc("ia64_log_get: Failed to retrieve SAL error record type %d\n",
+ sal_info_type);
+ return 0;
+ }
}
/*
- * ia64_log_clear
- * Clear the current MCA log from SAL and dpending on the clear_os_buffer flags
- * clear the OS log buffer also
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
- * clear_os_buffer
+ * ia64_log_prt_oem_data
+ *
+ * Print OEM specific data if included.
+ *
+ * Inputs : header_len (length passed in section header)
+ * sect_len (default length of section type)
+ * p_data (ptr to data)
* prfunc (print function)
* Outputs : None
*
*/
void
-ia64_log_clear(int sal_info_type, int sal_sub_info_type, int clear_os_buffer, prfunc_t prfunc)
+ia64_log_prt_oem_data (int header_len, int sect_len, u8 *p_data, prfunc_t prfunc)
{
- if (ia64_sal_clear_state_info(sal_info_type))
- prfunc("ia64_mca_log_get : Clearing processor log failed\n");
-
- if (clear_os_buffer) {
- sal_log_header_t *log_buffer;
- int s;
-
- IA64_LOG_LOCK(sal_info_type, sal_sub_info_type);
+ int oem_data_len, i;
- /* Get the process state information */
- log_buffer = IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type);
-
- memset(log_buffer, 0, sizeof(ia64_psilog_t));
-
- IA64_LOG_INDEX_DEC(sal_info_type, sal_sub_info_type);
-
- IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type);
+ if ((oem_data_len = header_len - sect_len) > 0) {
+ prfunc(" OEM Specific Data:");
+ for (i = 0; i < oem_data_len; i++, p_data++)
+ prfunc(" %02x", *p_data);
}
+ prfunc("\n");
+}
+/*
+ * ia64_log_rec_header_print
+ *
+ * Log info from the SAL error record header.
+ *
+ * Inputs : lh * (ptr to SAL log error record header)
+ * prfunc (fn ptr of log output function to use)
+ * Outputs : None
+ */
+void
+ia64_log_rec_header_print (sal_log_record_header_t *lh, prfunc_t prfunc)
+{
+ char str_buf[32];
+
+ sprintf(str_buf, "%2d.%02d",
+ (lh->revision.major >> 4) * 10 + (lh->revision.major & 0xf),
+ (lh->revision.minor >> 4) * 10 + (lh->revision.minor & 0xf));
+ prfunc("+Err Record ID: %d SAL Rev: %s\n", lh->id, str_buf);
+ sprintf(str_buf, "%02d/%02d/%04d/ %02d:%02d:%02d",
+ (lh->timestamp.slh_month >> 4) * 10 +
+ (lh->timestamp.slh_month & 0xf),
+ (lh->timestamp.slh_day >> 4) * 10 +
+ (lh->timestamp.slh_day & 0xf),
+ (lh->timestamp.slh_century >> 4) * 1000 +
+ (lh->timestamp.slh_century & 0xf) * 100 +
+ (lh->timestamp.slh_year >> 4) * 10 +
+ (lh->timestamp.slh_year & 0xf),
+ (lh->timestamp.slh_hour >> 4) * 10 +
+ (lh->timestamp.slh_hour & 0xf),
+ (lh->timestamp.slh_minute >> 4) * 10 +
+ (lh->timestamp.slh_minute & 0xf),
+ (lh->timestamp.slh_second >> 4) * 10 +
+ (lh->timestamp.slh_second & 0xf));
+ prfunc("+Time: %s Severity %d\n", str_buf, lh->severity);
}
/*
@@ -729,6 +1016,33 @@
prfunc("+ %s[%d] 0x%lx\n", reg_prefix, i, regs[i]);
}
+/*
+ * ia64_log_processor_fp_regs_print
+ * Print the contents of the saved floating point register(s) in the format
+ * <reg_prefix>[<index>] <value>
+ *
+ * Inputs: ia64_fpreg (Register save buffer)
+ * reg_num (# of registers)
+ * reg_class (application/banked/control/bank1_general)
+ * reg_prefix (ar/br/cr/b1_gr)
+ * Outputs: None
+ *
+ */
+void
+ia64_log_processor_fp_regs_print (struct ia64_fpreg *regs,
+ int reg_num,
+ char *reg_class,
+ char *reg_prefix,
+ prfunc_t prfunc)
+{
+ int i;
+
+ prfunc("+%s Registers\n", reg_class);
+ for (i = 0; i < reg_num; i++)
+ prfunc("+ %s[%d] 0x%lx%016lx\n", reg_prefix, i, regs[i].u.bits[1],
+ regs[i].u.bits[0]);
+}
+
static char *pal_mesi_state[] = {
"Invalid",
"Shared",
@@ -754,69 +1068,91 @@
/*
* ia64_log_cache_check_info_print
* Display the machine check information related to cache error(s).
- * Inputs : i (Multiple errors are logged, i - index of logged error)
- * info (Machine check info logged by the PAL and later
+ * Inputs: i (Multiple errors are logged, i - index of logged error)
+ * cc_info * (Ptr to cache check info logged by the PAL and later
* captured by the SAL)
- * target_addr (Address which caused the cache error)
- * Outputs : None
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
*/
void
-ia64_log_cache_check_info_print(int i,
- pal_cache_check_info_t info,
- u64 target_addr,
- prfunc_t prfunc)
+ia64_log_cache_check_info_print (int i,
+ sal_log_mod_error_info_t *cache_check_info,
+ prfunc_t prfunc)
{
+ pal_cache_check_info_t *info;
+ u64 target_addr;
+
+ if (!cache_check_info->valid.check_info) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid cache_check_info[%d]\n",i);
+ return; /* If check info data not valid, skip it */
+ }
+
+ info = (pal_cache_check_info_t *)&cache_check_info->check_info;
+ target_addr = cache_check_info->target_identifier;
+
prfunc("+ Cache check info[%d]\n+", i);
- prfunc(" Level: L%d",info.level);
- if (info.mv)
- prfunc(" ,Mesi: %s",pal_mesi_state[info.mesi]);
- prfunc(" ,Index: %d,", info.index);
- if (info.ic)
- prfunc(" ,Cache: Instruction");
- if (info.dc)
- prfunc(" ,Cache: Data");
- if (info.tl)
- prfunc(" ,Line: Tag");
- if (info.dl)
- prfunc(" ,Line: Data");
- prfunc(" ,Operation: %s,", pal_cache_op[info.op]);
- if (info.wv)
- prfunc(" ,Way: %d,", info.way);
- if (info.tv)
- prfunc(" ,Target Addr: 0x%lx", target_addr);
- if (info.mc)
- prfunc(" ,MC: Corrected");
+ prfunc(" Level: L%d,",info->level);
+ if (info->mv)
+ prfunc(" Mesi: %s,",pal_mesi_state[info->mesi]);
+ prfunc(" Index: %d,", info->index);
+ if (info->ic)
+ prfunc(" Cache: Instruction,");
+ if (info->dc)
+ prfunc(" Cache: Data,");
+ if (info->tl)
+ prfunc(" Line: Tag,");
+ if (info->dl)
+ prfunc(" Line: Data,");
+ prfunc(" Operation: %s,", pal_cache_op[info->op]);
+ if (info->wv)
+ prfunc(" Way: %d,", info->way);
+ if (cache_check_info->valid.target_identifier)
+ /* Hope target address is saved in target_identifier */
+ if (info->tv)
+ prfunc(" Target Addr: 0x%lx,", target_addr);
+ if (info->mc)
+ prfunc(" MC: Corrected");
prfunc("\n");
}
/*
* ia64_log_tlb_check_info_print
* Display the machine check information related to tlb error(s).
- * Inputs : i (Multiple errors are logged, i - index of logged error)
- * info (Machine check info logged by the PAL and later
+ * Inputs: i (Multiple errors are logged, i - index of logged error)
+ * tlb_info * (Ptr to machine check info logged by the PAL and later
* captured by the SAL)
- * Outputs : None
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
*/
-
void
-ia64_log_tlb_check_info_print(int i,
- pal_tlb_check_info_t info,
- prfunc_t prfunc)
+ia64_log_tlb_check_info_print (int i,
+ sal_log_mod_error_info_t *tlb_check_info,
+ prfunc_t prfunc)
+
{
+ pal_tlb_check_info_t *info;
+
+ if (!tlb_check_info->valid.check_info) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid tlb_check_info[%d]\n", i);
+ return; /* If check info data not valid, skip it */
+ }
+
+ info = (pal_tlb_check_info_t *)&tlb_check_info->check_info;
+
prfunc("+ TLB Check Info [%d]\n+", i);
- if (info.itc)
+ if (info->itc)
prfunc(" Failure: Instruction Translation Cache");
- if (info.dtc)
+ if (info->dtc)
prfunc(" Failure: Data Translation Cache");
- if (info.itr) {
+ if (info->itr) {
prfunc(" Failure: Instruction Translation Register");
- prfunc(" ,Slot: %d", info.tr_slot);
+ prfunc(" ,Slot: %d", info->tr_slot);
}
- if (info.dtr) {
+ if (info->dtr) {
prfunc(" Failure: Data Translation Register");
- prfunc(" ,Slot: %d", info.tr_slot);
+ prfunc(" ,Slot: %d", info->tr_slot);
}
- if (info.mc)
+ if (info->mc)
prfunc(" ,MC: Corrected");
prfunc("\n");
}
@@ -824,159 +1160,719 @@
/*
* ia64_log_bus_check_info_print
* Display the machine check information related to bus error(s).
- * Inputs : i (Multiple errors are logged, i - index of logged error)
- * info (Machine check info logged by the PAL and later
+ * Inputs: i (Multiple errors are logged, i - index of logged error)
+ * bus_info * (Ptr to machine check info logged by the PAL and later
* captured by the SAL)
- * req_addr (Address of the requestor of the transaction)
- * resp_addr (Address of the responder of the transaction)
- * target_addr (Address where the data was to be delivered to or
- * obtained from)
- * Outputs : None
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
*/
void
-ia64_log_bus_check_info_print(int i,
- pal_bus_check_info_t info,
- u64 req_addr,
- u64 resp_addr,
- u64 targ_addr,
- prfunc_t prfunc)
-{
+ia64_log_bus_check_info_print (int i,
+ sal_log_mod_error_info_t *bus_check_info,
+ prfunc_t prfunc)
+{
+ pal_bus_check_info_t *info;
+ u64 req_addr; /* Address of the requestor of the transaction */
+ u64 resp_addr; /* Address of the responder of the transaction */
+ u64 targ_addr; /* Address where the data was to be delivered to */
+ /* or obtained from */
+
+ if (!bus_check_info->valid.check_info) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: invalid bus_check_info[%d]\n", i);
+ return; /* If check info data not valid, skip it */
+ }
+
+ info = (pal_bus_check_info_t *)&bus_check_info->check_info;
+ req_addr = bus_check_info->requestor_identifier;
+ resp_addr = bus_check_info->responder_identifier;
+ targ_addr = bus_check_info->target_identifier;
+
prfunc("+ BUS Check Info [%d]\n+", i);
- prfunc(" Status Info: %d", info.bsi);
- prfunc(" ,Severity: %d", info.sev);
- prfunc(" ,Transaction Type: %d", info.type);
- prfunc(" ,Transaction Size: %d", info.size);
- if (info.cc)
+ prfunc(" Status Info: %d", info->bsi);
+ prfunc(" ,Severity: %d", info->sev);
+ prfunc(" ,Transaction Type: %d", info->type);
+ prfunc(" ,Transaction Size: %d", info->size);
+ if (info->cc)
prfunc(" ,Cache-cache-transfer");
- if (info.ib)
+ if (info->ib)
prfunc(" ,Error: Internal");
- if (info.eb)
+ if (info->eb)
prfunc(" ,Error: External");
- if (info.mc)
+ if (info->mc)
prfunc(" ,MC: Corrected");
- if (info.tv)
+ if (info->tv)
prfunc(" ,Target Address: 0x%lx", targ_addr);
- if (info.rq)
+ if (info->rq)
prfunc(" ,Requestor Address: 0x%lx", req_addr);
- if (info.tv)
+ if (info->tv)
prfunc(" ,Responder Address: 0x%lx", resp_addr);
prfunc("\n");
}
/*
+ * ia64_log_mem_dev_err_info_print
+ *
+ * Format and log the platform memory device error record section data.
+ *
+ * Inputs: mem_dev_err_info * (Ptr to memory device error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_mem_dev_err_info_print (sal_log_mem_dev_err_info_t *mdei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Mem Error Detail: ");
+
+ if (mdei->valid.error_status)
+ prfunc(" Error Status: %#lx,", mdei->error_status);
+ if (mdei->valid.physical_addr)
+ prfunc(" Physical Address: %#lx,", mdei->physical_addr);
+ if (mdei->valid.addr_mask)
+ prfunc(" Address Mask: %#lx,", mdei->addr_mask);
+ if (mdei->valid.node)
+ prfunc(" Node: %d,", mdei->node);
+ if (mdei->valid.card)
+ prfunc(" Card: %d,", mdei->card);
+ if (mdei->valid.module)
+ prfunc(" Module: %d,", mdei->module);
+ if (mdei->valid.bank)
+ prfunc(" Bank: %d,", mdei->bank);
+ if (mdei->valid.device)
+ prfunc(" Device: %d,", mdei->device);
+ if (mdei->valid.row)
+ prfunc(" Row: %d,", mdei->row);
+ if (mdei->valid.column)
+ prfunc(" Column: %d,", mdei->column);
+ if (mdei->valid.bit_position)
+ prfunc(" Bit Position: %d,", mdei->bit_position);
+ if (mdei->valid.target_id)
+ prfunc(" ,Target Address: %#lx,", mdei->target_id);
+ if (mdei->valid.requestor_id)
+ prfunc(" ,Requestor Address: %#lx,", mdei->requestor_id);
+ if (mdei->valid.responder_id)
+ prfunc(" ,Responder Address: %#lx,", mdei->responder_id);
+ if (mdei->valid.bus_spec_data)
+ prfunc(" Bus Specific Data: %#lx,", mdei->bus_spec_data);
+ prfunc("\n");
+
+ if (mdei->valid.oem_id) {
+ u8 *p_data = &(mdei->oem_id[0]);
+ int i;
+
+ prfunc(" OEM Memory Controller ID:");
+ for (i = 0; i < 16; i++, p_data++)
+ prfunc(" %02x", *p_data);
+ prfunc("\n");
+ }
+
+ if (mdei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)mdei->header.len,
+ (int)sizeof(sal_log_mem_dev_err_info_t) - 1,
+ &(mdei->oem_data[0]), prfunc);
+ }
+}
+
+/*
+ * ia64_log_sel_dev_err_info_print
+ *
+ * Format and log the platform SEL device error record section data.
+ *
+ * Inputs: sel_dev_err_info * (Ptr to the SEL device error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_sel_dev_err_info_print (sal_log_sel_dev_err_info_t *sdei,
+ prfunc_t prfunc)
+{
+ int i;
+
+ prfunc("+ SEL Device Error Detail: ");
+
+ if (sdei->valid.record_id)
+ prfunc(" Record ID: %#x", sdei->record_id);
+ if (sdei->valid.record_type)
+ prfunc(" Record Type: %#x", sdei->record_type);
+ prfunc(" Time Stamp: ");
+ for (i = 0; i < 4; i++)
+ prfunc("%1d", sdei->timestamp[i]);
+ if (sdei->valid.generator_id)
+ prfunc(" Generator ID: %#x", sdei->generator_id);
+ if (sdei->valid.evm_rev)
+ prfunc(" Message Format Version: %#x", sdei->evm_rev);
+ if (sdei->valid.sensor_type)
+ prfunc(" Sensor Type: %#x", sdei->sensor_type);
+ if (sdei->valid.sensor_num)
+ prfunc(" Sensor Number: %#x", sdei->sensor_num);
+ if (sdei->valid.event_dir)
+ prfunc(" Event Direction Type: %#x", sdei->event_dir);
+ if (sdei->valid.event_data1)
+ prfunc(" Data1: %#x", sdei->event_data1);
+ if (sdei->valid.event_data2)
+ prfunc(" Data2: %#x", sdei->event_data2);
+ if (sdei->valid.event_data3)
+ prfunc(" Data3: %#x", sdei->event_data3);
+ prfunc("\n");
+
+}
+
+/*
+ * ia64_log_pci_bus_err_info_print
+ *
+ * Format and log the platform PCI bus error record section data.
+ *
+ * Inputs: pci_bus_err_info * (Ptr to the PCI bus error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_pci_bus_err_info_print (sal_log_pci_bus_err_info_t *pbei,
+ prfunc_t prfunc)
+{
+ prfunc("+ PCI Bus Error Detail: ");
+
+ if (pbei->valid.err_status)
+ prfunc(" Error Status: %#lx", pbei->err_status);
+ if (pbei->valid.err_type)
+ prfunc(" Error Type: %#x", pbei->err_type);
+ if (pbei->valid.bus_id)
+ prfunc(" Bus ID: %#x", pbei->bus_id);
+ if (pbei->valid.bus_address)
+ prfunc(" Bus Address: %#lx", pbei->bus_address);
+ if (pbei->valid.bus_data)
+ prfunc(" Bus Data: %#lx", pbei->bus_data);
+ if (pbei->valid.bus_cmd)
+ prfunc(" Bus Command: %#lx", pbei->bus_cmd);
+ if (pbei->valid.requestor_id)
+ prfunc(" Requestor ID: %#lx", pbei->requestor_id);
+ if (pbei->valid.responder_id)
+ prfunc(" Responder ID: %#lx", pbei->responder_id);
+ if (pbei->valid.target_id)
+ prfunc(" Target ID: %#lx", pbei->target_id);
+ if (pbei->valid.oem_data)
+ prfunc("\n");
+
+ if (pbei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)pbei->header.len,
+ (int)sizeof(sal_log_pci_bus_err_info_t) - 1,
+ &(pbei->oem_data[0]), prfunc);
+ }
+}
+
+/*
+ * ia64_log_smbios_dev_err_info_print
+ *
+ * Format and log the platform SMBIOS device error record section data.
+ *
+ * Inputs: smbios_dev_err_info * (Ptr to the SMBIOS device error record
+ * section returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_smbios_dev_err_info_print (sal_log_smbios_dev_err_info_t *sdei,
+ prfunc_t prfunc)
+{
+ u8 i;
+
+ prfunc("+ SMBIOS Device Error Detail: ");
+
+ if (sdei->valid.event_type)
+ prfunc(" Event Type: %#x", sdei->event_type);
+ if (sdei->valid.time_stamp) {
+ prfunc(" Time Stamp: ");
+ for (i = 0; i < 6; i++)
+ prfunc("%d", sdei->time_stamp[i]);
+ }
+ if ((sdei->valid.data) && (sdei->valid.length)) {
+ prfunc(" Data: ");
+ for (i = 0; i < sdei->length; i++)
+ prfunc(" %02x", sdei->data[i]);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_pci_comp_err_info_print
+ *
+ * Format and log the platform PCI component error record section data.
+ *
+ * Inputs: pci_comp_err_info * (Ptr to the PCI component error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_pci_comp_err_info_print(sal_log_pci_comp_err_info_t *pcei,
+ prfunc_t prfunc)
+{
+ u32 n_mem_regs, n_io_regs;
+ u64 i, n_pci_data;
+ u64 *p_reg_data;
+ u8 *p_oem_data;
+
+ prfunc("+ PCI Component Error Detail: ");
+
+ if (pcei->valid.err_status)
+ prfunc(" Error Status: %#lx\n", pcei->err_status);
+ if (pcei->valid.comp_info)
+ prfunc(" Component Info: Vendor Id = %#x, Device Id = %#x,"
+ " Class Code = %#x, Seg/Bus/Dev/Func = %d/%d/%d/%d\n",
+ pcei->comp_info.vendor_id, pcei->comp_info.device_id,
+ pcei->comp_info.class_code, pcei->comp_info.seg_num,
+ pcei->comp_info.bus_num, pcei->comp_info.dev_num,
+ pcei->comp_info.func_num);
+
+ n_mem_regs = (pcei->valid.num_mem_regs) ? pcei->num_mem_regs : 0;
+ n_io_regs = (pcei->valid.num_io_regs) ? pcei->num_io_regs : 0;
+ p_reg_data = &(pcei->reg_data_pairs[0]);
+ p_oem_data = (u8 *)p_reg_data +
+ (n_mem_regs + n_io_regs) * 2 * sizeof(u64);
+ n_pci_data = p_oem_data - (u8 *)pcei;
+
+ if (n_pci_data > pcei->header.len) {
+ prfunc(" Invalid PCI Component Error Record format: length = %ld, "
+ " Size PCI Data = %d, Num Mem-Map/IO-Map Regs = %ld/%ld\n",
+ pcei->header.len, n_pci_data, n_mem_regs, n_io_regs);
+ return;
+ }
+
+ if (n_mem_regs) {
+ prfunc(" Memory Mapped Registers\n Address \tValue\n");
+ for (i = 0; i < pcei->num_mem_regs; i++) {
+ prfunc(" %#lx %#lx\n", p_reg_data[0], p_reg_data[1]);
+ p_reg_data += 2;
+ }
+ }
+ if (n_io_regs) {
+ prfunc(" I/O Mapped Registers\n Address \tValue\n");
+ for (i = 0; i < pcei->num_io_regs; i++) {
+ prfunc(" %#lx %#lx\n", p_reg_data[0], p_reg_data[1]);
+ p_reg_data += 2;
+ }
+ }
+ if (pcei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)pcei->header.len, n_pci_data,
+ p_oem_data, prfunc);
+ prfunc("\n");
+ }
+}
+
+/*
+ * ia64_log_plat_specific_err_info_print
+ *
+ * Format and log the platform specific error record section data.
+ *
+ * Inputs: sel_dev_err_info * (Ptr to the platform specific error record
+ * section returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_plat_specific_err_info_print (sal_log_plat_specific_err_info_t *psei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Platform Specific Error Detail: ");
+
+ if (psei->valid.err_status)
+ prfunc(" Error Status: %#lx", psei->err_status);
+ if (psei->valid.guid) {
+ prfunc(" GUID: ");
+ ia64_log_prt_guid(&psei->guid, prfunc);
+ }
+ if (psei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)psei->header.len,
+ (int)sizeof(sal_log_plat_specific_err_info_t) - 1,
+ &(psei->oem_data[0]), prfunc);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_host_ctlr_err_info_print
+ *
+ * Format and log the platform host controller error record section data.
+ *
+ * Inputs: host_ctlr_err_info * (Ptr to the host controller error record
+ * section returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_host_ctlr_err_info_print (sal_log_host_ctlr_err_info_t *hcei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Host Controller Error Detail: ");
+
+ if (hcei->valid.err_status)
+ prfunc(" Error Status: %#lx", hcei->err_status);
+ if (hcei->valid.requestor_id)
+ prfunc(" Requestor ID: %#lx", hcei->requestor_id);
+ if (hcei->valid.responder_id)
+ prfunc(" Responder ID: %#lx", hcei->responder_id);
+ if (hcei->valid.target_id)
+ prfunc(" Target ID: %#lx", hcei->target_id);
+ if (hcei->valid.bus_spec_data)
+ prfunc(" Bus Specific Data: %#lx", hcei->bus_spec_data);
+ if (hcei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)hcei->header.len,
+ (int)sizeof(sal_log_host_ctlr_err_info_t) - 1,
+ &(hcei->oem_data[0]), prfunc);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_plat_bus_err_info_print
+ *
+ * Format and log the platform bus error record section data.
+ *
+ * Inputs: plat_bus_err_info * (Ptr to the platform bus error record section
+ * returned by SAL)
+ * prfunc (fn ptr of print function to be used for output)
+ * Outputs: None
+ */
+void
+ia64_log_plat_bus_err_info_print (sal_log_plat_bus_err_info_t *pbei,
+ prfunc_t prfunc)
+{
+ prfunc("+ Platform Bus Error Detail: ");
+
+ if (pbei->valid.err_status)
+ prfunc(" Error Status: %#lx", pbei->err_status);
+ if (pbei->valid.requestor_id)
+ prfunc(" Requestor ID: %#lx", pbei->requestor_id);
+ if (pbei->valid.responder_id)
+ prfunc(" Responder ID: %#lx", pbei->responder_id);
+ if (pbei->valid.target_id)
+ prfunc(" Target ID: %#lx", pbei->target_id);
+ if (pbei->valid.bus_spec_data)
+ prfunc(" Bus Specific Data: %#lx", pbei->bus_spec_data);
+ if (pbei->valid.oem_data) {
+ ia64_log_prt_oem_data((int)pbei->header.len,
+ (int)sizeof(sal_log_plat_bus_err_info_t) - 1,
+ &(pbei->oem_data[0]), prfunc);
+ }
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_proc_dev_err_info_print
+ *
+ * Display the processor device error record.
+ *
+ * Inputs: sal_log_processor_info_t * (Ptr to processor device error record
+ * section body).
+ * prfunc (fn ptr of print function to be used
+ * for output).
+ * Outputs: None
+ */
+void
+ia64_log_proc_dev_err_info_print (sal_log_processor_info_t *slpi,
+ prfunc_t prfunc)
+{
+ size_t d_len = slpi->header.len - sizeof(sal_log_section_hdr_t);
+ sal_processor_static_info_t *spsi;
+ int i;
+ sal_log_mod_error_info_t *p_data;
+
+ prfunc("+Processor Device Error Info Section\n");
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ {
+ char *p_data = (char *)&slpi->valid;
+
+ prfunc("SAL_PROC_DEV_ERR SECTION DATA: Data buffer = %p, "
+ "Data size = %ld\n", (void *)p_data, d_len);
+ ia64_log_hexdump(p_data, d_len, prfunc);
+ prfunc("End of SAL_PROC_DEV_ERR SECTION DATA\n");
+ }
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if (slpi->valid.proc_error_map)
+ prfunc(" Processor Error Map: %#lx\n", slpi->proc_error_map);
+
+ if (slpi->valid.proc_state_param)
+ prfunc(" Processor State Param: %#lx\n", slpi->proc_state_parameter);
+
+ if (slpi->valid.proc_cr_lid)
+ prfunc(" Processor LID: %#lx\n", slpi->proc_cr_lid);
+
+ /*
+ * Note: March 2001 SAL spec states that if the number of elements in any
+ * of the MOD_ERROR_INFO_STRUCT arrays is zero, the entire array is
+ * absent. Also, current implementations only allocate space for number of
+ * elements used. So we walk the data pointer from here on.
+ */
+ p_data = &slpi->cache_check_info[0];
+
+ /* Print the cache check information if any*/
+ for (i = 0 ; i < slpi->valid.num_cache_check; i++, p_data++)
+ ia64_log_cache_check_info_print(i, p_data, prfunc);
+
+ /* Print the tlb check information if any*/
+ for (i = 0 ; i < slpi->valid.num_tlb_check; i++, p_data++)
+ ia64_log_tlb_check_info_print(i, p_data, prfunc);
+
+ /* Print the bus check information if any*/
+ for (i = 0 ; i < slpi->valid.num_bus_check; i++, p_data++)
+ ia64_log_bus_check_info_print(i, p_data, prfunc);
+
+ /* Print the reg file check information if any*/
+ for (i = 0 ; i < slpi->valid.num_reg_file_check; i++, p_data++)
+ ia64_log_hexdump((u8 *)p_data, sizeof(sal_log_mod_error_info_t),
+ prfunc); /* Just hex dump for now */
+
+ /* Print the ms check information if any*/
+ for (i = 0 ; i < slpi->valid.num_ms_check; i++, p_data++)
+ ia64_log_hexdump((u8 *)p_data, sizeof(sal_log_mod_error_info_t),
+ prfunc); /* Just hex dump for now */
+
+ /* Print CPUID registers if any*/
+ if (slpi->valid.cpuid_info) {
+ u64 *p = (u64 *)p_data;
+
+ prfunc(" CPUID Regs: %#lx %#lx %#lx %#lx\n", p[0], p[1], p[2], p[3]);
+ p_data++;
+ }
+
+ /* Print processor static info if any */
+ if (slpi->valid.psi_static_struct) {
+ spsi = (sal_processor_static_info_t *)p_data;
+
+ /* Print branch register contents if valid */
+ if (spsi->valid.br)
+ ia64_log_processor_regs_print(spsi->br, 8, "Branch", "br",
+ prfunc);
+
+ /* Print control register contents if valid */
+ if (spsi->valid.cr)
+ ia64_log_processor_regs_print(spsi->cr, 128, "Control", "cr",
+ prfunc);
+
+ /* Print application register contents if valid */
+ if (spsi->valid.ar)
+ ia64_log_processor_regs_print(spsi->ar, 128, "Application",
+ "ar", prfunc);
+
+ /* Print region register contents if valid */
+ if (spsi->valid.rr)
+ ia64_log_processor_regs_print(spsi->rr, 8, "Region", "rr",
+ prfunc);
+
+ /* Print floating-point register contents if valid */
+ if (spsi->valid.fr)
+ ia64_log_processor_fp_regs_print(spsi->fr, 128, "Floating-point", "fr",
+ prfunc);
+ }
+}
+
+/*
* ia64_log_processor_info_print
+ *
* Display the processor-specific information logged by PAL as a part
* of MCA or INIT or CMC.
- * Inputs : lh (Pointer of the sal log header which specifies the format
- * of SAL state info as specified by the SAL spec).
+ *
+ * Inputs : lh (Pointer of the sal log header which specifies the
+ * format of SAL state info as specified by the SAL spec).
+ * prfunc (fn ptr of print function to be used for output).
* Outputs : None
*/
void
-ia64_log_processor_info_print(sal_log_header_t *lh, prfunc_t prfunc)
+ia64_log_processor_info_print(sal_log_record_header_t *lh, prfunc_t prfunc)
{
- sal_log_processor_info_t *slpi;
- int i;
+ sal_log_section_hdr_t *slsh;
+ int n_sects;
+ int ercd_pos;
if (!lh)
return;
- if (lh->slh_log_type != SAL_SUB_INFO_TYPE_PROCESSOR)
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_record_header(lh, prfunc);
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if ((ercd_pos = sizeof(sal_log_record_header_t)) >= lh->len) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
+ "truncated SAL CMC error record. len = %d\n",
+ lh->len);
return;
+ }
- slpi = (sal_log_processor_info_t *)((char *)lh+sizeof(sal_log_header_t)); /* point to proc info */
+ /* Print record header info */
+ ia64_log_rec_header_print(lh, prfunc);
- if (!slpi) {
- prfunc("No Processor Error Log found\n");
- return;
+ for (n_sects = 0; (ercd_pos < lh->len); n_sects++, ercd_pos += slsh->len) {
+ /* point to next section header */
+ slsh = (sal_log_section_hdr_t *)((char *)lh + ercd_pos);
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_section_header(slsh, prfunc);
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if (verify_guid((void *)&slsh->guid, (void *)&(SAL_PROC_DEV_ERR_SECT_GUID))) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n");
+ continue;
+ }
+
+ /*
+ * Now process processor device error record section
+ */
+ ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh,
+ printk);
}
- /* Print branch register contents if valid */
- if (slpi->slpi_valid.slpi_br)
- ia64_log_processor_regs_print(slpi->slpi_br, 8, "Branch", "br", prfunc);
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
+ "found %d sections in SAL CMC error record. len = %d\n",
+ n_sects, lh->len);
+ if (!n_sects) {
+ prfunc("No Processor Device Error Info Section found\n");
+ return;
+ }
+}
- /* Print control register contents if valid */
- if (slpi->slpi_valid.slpi_cr)
- ia64_log_processor_regs_print(slpi->slpi_cr, 128, "Control", "cr", prfunc);
+/*
+ * ia64_log_platform_info_print
+ *
+ * Format and Log the SAL Platform Error Record.
+ *
+ * Inputs : lh (Pointer to the sal error record header with format
+ * specified by the SAL spec).
+ * prfunc (fn ptr of log output function to use)
+ * Outputs : None
+ */
+void
+ia64_log_platform_info_print (sal_log_record_header_t *lh, prfunc_t prfunc)
+{
+ sal_log_section_hdr_t *slsh;
+ int n_sects;
+ int ercd_pos;
- /* Print application register contents if valid */
- if (slpi->slpi_valid.slpi_ar)
- ia64_log_processor_regs_print(slpi->slpi_br, 128, "Application", "ar", prfunc);
+ if (!lh)
+ return;
- /* Print region register contents if valid */
- if (slpi->slpi_valid.slpi_rr)
- ia64_log_processor_regs_print(slpi->slpi_rr, 8, "Region", "rr", prfunc);
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_record_header(lh, prfunc);
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
+
+ if ((ercd_pos = sizeof(sal_log_record_header_t)) >= lh->len) {
+ IA64_MCA_DEBUG("ia64_mca_log_print: "
+ "truncated SAL error record. len = %d\n",
+ lh->len);
+ return;
+ }
- /* Print floating-point register contents if valid */
- if (slpi->slpi_valid.slpi_fr)
- ia64_log_processor_regs_print(slpi->slpi_fr, 128, "Floating-point", "fr",
- prfunc);
+ /* Print record header info */
+ ia64_log_rec_header_print(lh, prfunc);
- /* Print the cache check information if any*/
- for (i = 0 ; i < MAX_CACHE_ERRORS; i++)
- ia64_log_cache_check_info_print(i,
- slpi->slpi_cache_check_info[i].slpi_cache_check,
- slpi->slpi_cache_check_info[i].slpi_target_address,
- prfunc);
- /* Print the tlb check information if any*/
- for (i = 0 ; i < MAX_TLB_ERRORS; i++)
- ia64_log_tlb_check_info_print(i,slpi->slpi_tlb_check_info[i], prfunc);
+ for (n_sects = 0; (ercd_pos < lh->len); n_sects++, ercd_pos += slsh->len) {
+ /* point to next section header */
+ slsh = (sal_log_section_hdr_t *)((char *)lh + ercd_pos);
+
+#ifdef MCA_PRT_XTRA_DATA // for test only @FVL
+ ia64_log_prt_section_header(slsh, prfunc);
+
+ if (efi_guidcmp(slsh->guid, SAL_PROC_DEV_ERR_SECT_GUID) != 0) {
+ size_t d_len = slsh->len - sizeof(sal_log_section_hdr_t);
+ char *p_data = (char *)&((sal_log_mem_dev_err_info_t *)slsh)->valid;
+
+ prfunc("Start of Platform Err Data Section: Data buffer = %p, "
+ "Data size = %ld\n", (void *)p_data, d_len);
+ ia64_log_hexdump(p_data, d_len, prfunc);
+ prfunc("End of Platform Err Data Section\n");
+ }
+#endif // MCA_PRT_XTRA_DATA for test only @FVL
- /* Print the bus check information if any*/
- for (i = 0 ; i < MAX_BUS_ERRORS; i++)
- ia64_log_bus_check_info_print(i,
- slpi->slpi_bus_check_info[i].slpi_bus_check,
- slpi->slpi_bus_check_info[i].slpi_requestor_addr,
- slpi->slpi_bus_check_info[i].slpi_responder_addr,
- slpi->slpi_bus_check_info[i].slpi_target_addr,
- prfunc);
+ /*
+ * Now process CPE error record section
+ */
+ if (efi_guidcmp(slsh->guid, SAL_PROC_DEV_ERR_SECT_GUID) == 0) {
+ ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_MEM_DEV_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Memory Device Error Info Section\n");
+ ia64_log_mem_dev_err_info_print((sal_log_mem_dev_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SEL_DEV_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform SEL Device Error Info Section\n");
+ ia64_log_sel_dev_err_info_print((sal_log_sel_dev_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_BUS_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform PCI Bus Error Info Section\n");
+ ia64_log_pci_bus_err_info_print((sal_log_pci_bus_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SMBIOS_DEV_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform SMBIOS Device Error Info Section\n");
+ ia64_log_smbios_dev_err_info_print((sal_log_smbios_dev_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_COMP_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform PCI Component Error Info Section\n");
+ ia64_log_pci_comp_err_info_print((sal_log_pci_comp_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SPECIFIC_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Specific Error Info Section\n");
+ ia64_log_plat_specific_err_info_print((sal_log_plat_specific_err_info_t *)
+ slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_HOST_CTLR_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Host Controller Error Info Section\n");
+ ia64_log_host_ctlr_err_info_print((sal_log_host_ctlr_err_info_t *)slsh,
+ prfunc);
+ } else if (efi_guidcmp(slsh->guid, SAL_PLAT_BUS_ERR_SECT_GUID) == 0) {
+ prfunc("+Platform Bus Error Info Section\n");
+ ia64_log_plat_bus_err_info_print((sal_log_plat_bus_err_info_t *)slsh,
+ prfunc);
+ } else {
+ IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n");
+ continue;
+ }
+ }
+ IA64_MCA_DEBUG("ia64_mca_log_print: found %d sections in SAL error record. len = %d\n",
+ n_sects, lh->len);
+ if (!n_sects) {
+ prfunc("No Platform Error Info Sections found\n");
+ return;
+ }
}
/*
* ia64_log_print
- * Display the contents of the OS error log information
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
- * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ *
+ * Displays the contents of the OS error log information
+ *
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE})
+ * prfunc (fn ptr of log output function to use)
* Outputs : None
*/
void
-ia64_log_print(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc)
+ia64_log_print(int sal_info_type, prfunc_t prfunc)
{
- char *info_type, *sub_info_type;
-
switch(sal_info_type) {
- case SAL_INFO_TYPE_MCA:
- info_type = "MCA";
+ case SAL_INFO_TYPE_MCA:
+ prfunc("+BEGIN HARDWARE ERROR STATE AT MCA\n");
+ ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc);
+ prfunc("+END HARDWARE ERROR STATE AT MCA\n");
break;
- case SAL_INFO_TYPE_INIT:
- info_type = "INIT";
+ case SAL_INFO_TYPE_INIT:
+ prfunc("+MCA INIT ERROR LOG (UNIMPLEMENTED)\n");
break;
- case SAL_INFO_TYPE_CMC:
- info_type = "CMC";
+ case SAL_INFO_TYPE_CMC:
+ prfunc("+BEGIN HARDWARE ERROR STATE AT CMC\n");
+ ia64_log_processor_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc);
+ prfunc("+END HARDWARE ERROR STATE AT CMC\n");
break;
- default:
- info_type = "UNKNOWN";
+ case SAL_INFO_TYPE_CPE:
+ prfunc("+BEGIN HARDWARE ERROR STATE AT CPE\n");
+ ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc);
+ prfunc("+END HARDWARE ERROR STATE AT CPE\n");
break;
- }
-
- switch(sal_sub_info_type) {
- case SAL_SUB_INFO_TYPE_PROCESSOR:
- sub_info_type = "PROCESSOR";
- break;
- case SAL_SUB_INFO_TYPE_PLATFORM:
- sub_info_type = "PLATFORM";
- break;
- default:
- sub_info_type = "UNKNOWN";
+ default:
+ prfunc("+MCA UNKNOWN ERROR LOG (UNIMPLEMENTED)\n");
break;
}
-
- prfunc("+BEGIN HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type);
- if (sal_sub_info_type == SAL_SUB_INFO_TYPE_PROCESSOR)
- ia64_log_processor_info_print(
- IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type),
- prfunc);
- else
- log_print_platform(IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type),prfunc);
- prfunc("+END HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type);
}
diff -urN linux-2.4.13/arch/ia64/kernel/mca_asm.S linux-2.4.13-lia/arch/ia64/kernel/mca_asm.S
--- linux-2.4.13/arch/ia64/kernel/mca_asm.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/mca_asm.S Thu Oct 4 00:21:39 2001
@@ -9,6 +9,7 @@
//
#include <linux/config.h>
+#include <asm/asmmacro.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
@@ -23,7 +24,7 @@
#include "minstate.h"
/*
- * SAL_TO_OS_MCA_HANDOFF_STATE
+ * SAL_TO_OS_MCA_HANDOFF_STATE (SAL 3.0 spec)
* 1. GR1 = OS GP
* 2. GR8 = PAL_PROC physical address
* 3. GR9 = SAL_PROC physical address
@@ -33,6 +34,7 @@
*/
#define SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(_tmp) \
movl _tmp=ia64_sal_to_os_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
st8 [_tmp]=r1,0x08;; \
st8 [_tmp]=r8,0x08;; \
st8 [_tmp]=r9,0x08;; \
@@ -41,47 +43,29 @@
st8 [_tmp]=r12,0x08;;
/*
- * OS_MCA_TO_SAL_HANDOFF_STATE
- * 1. GR8 = OS_MCA status
- * 2. GR9 = SAL GP (physical)
- * 3. GR22 = New min state save area pointer
+ * OS_MCA_TO_SAL_HANDOFF_STATE (SAL 3.0 spec)
+ * 1. GR8 = OS_MCA return status
+ * 2. GR9 = SAL GP (physical)
+ * 3. GR10 = 0/1 returning same/new context
+ * 4. GR22 = New min state save area pointer
+ * returns ptr to SAL rtn save loc in _tmp
*/
-#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \
- movl _tmp=ia64_os_to_sal_handoff_state;; \
- DATA_VA_TO_PA(_tmp);; \
- ld8 r8=[_tmp],0x08;; \
- ld8 r9=[_tmp],0x08;; \
- ld8 r22=[_tmp],0x08;;
-
-/*
- * BRANCH
- * Jump to the instruction referenced by
- * "to_label".
- * Branch is taken only if the predicate
- * register "p" is true.
- * "ip" is the address of the instruction
- * located at "from_label".
- * "temp" is a scratch register like r2
- * "adjust" needed for HP compiler.
- * A screwup somewhere with constant arithmetic.
- */
-#define BRANCH(to_label, temp, p, adjust) \
-100: (p) mov temp=ip; \
- ;; \
- (p) adds temp=to_label-100b,temp;\
- ;; \
- (p) adds temp=adjust,temp; \
- ;; \
- (p) mov b1=temp ; \
- (p) br b1
+#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \
+ movl _tmp=ia64_os_to_sal_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
+ ld8 r8=[_tmp],0x08;; \
+ ld8 r9=[_tmp],0x08;; \
+ ld8 r10=[_tmp],0x08;; \
+ ld8 r22=[_tmp],0x08;; \
+ movl _tmp=ia64_sal_to_os_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
+ add _tmp=0x28,_tmp;; // point to SAL rtn save location
.global ia64_os_mca_dispatch
.global ia64_os_mca_dispatch_end
.global ia64_sal_to_os_handoff_state
.global ia64_os_to_sal_handoff_state
- .global ia64_os_mca_ucmc_handler
.global ia64_mca_proc_state_dump
- .global ia64_mca_proc_state_restore
.global ia64_mca_stack
.global ia64_mca_stackframe
.global ia64_mca_bspstore
@@ -100,7 +84,7 @@
#endif /* #if defined(MCA_TEST) */
// Save the SAL to OS MCA handoff state as defined
- // by SAL SPEC 2.5
+ // by SAL SPEC 3.0
// NOTE : The order in which the state gets saved
// is dependent on the way the C-structure
// for ia64_mca_sal_to_os_state_t has been
@@ -110,15 +94,20 @@
// LOG PROCESSOR STATE INFO FROM HERE ON..
;;
begin_os_mca_dump:
- BRANCH(ia64_os_mca_proc_state_dump, r2, p0, 0x0)
- ;;
+ br ia64_os_mca_proc_state_dump;;
+
ia64_os_mca_done_dump:
// Setup new stack frame for OS_MCA handling
- movl r2=ia64_mca_bspstore // local bspstore area location in r2
- movl r3=ia64_mca_stackframe // save stack frame to memory in r3
+ movl r2=ia64_mca_bspstore;; // local bspstore area location in r2
+ DATA_VA_TO_PA(r2);;
+ movl r3=ia64_mca_stackframe;; // save stack frame to memory in r3
+ DATA_VA_TO_PA(r3);;
rse_switch_context(r6,r3,r2);; // RSC management in this new context
movl r12=ia64_mca_stack;;
+ mov r2=8*1024;; // stack size must be same as c array
+ add r12=r2,r12;; // stack base @ bottom of array
+ DATA_VA_TO_PA(r12);;
// Enter virtual mode from physical mode
VIRTUAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_begin, r4)
@@ -127,7 +116,7 @@
// call our handler
movl r2=ia64_mca_ucmc_handler;;
mov b6=r2;;
- br.call.sptk.few b0=b6
+ br.call.sptk.many b0=b6;;
.ret0:
// Revert back to physical mode before going back to SAL
PHYSICAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_end, r4)
@@ -135,9 +124,9 @@
#if defined(MCA_TEST)
// Pretend that we are in interrupt context
- mov r2=psr
- dep r2=0, r2, PSR_IC, 2;
- mov psr.l = r2
+ mov r2=psr;;
+ dep r2=0, r2, PSR_IC, 2;;
+ mov psr.l = r2;;
#endif /* #if defined(MCA_TEST) */
// restore the original stack frame here
@@ -152,15 +141,14 @@
mov r8=gp
;;
begin_os_mca_restore:
- BRANCH(ia64_os_mca_proc_state_restore, r2, p0, 0x0)
- ;;
+ br ia64_os_mca_proc_state_restore;;
ia64_os_mca_done_restore:
;;
// branch back to SALE_CHECK
OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(r2)
ld8 r3=[r2];;
- mov b0=r3 // SAL_CHECK return address
+ mov b0=r3;; // SAL_CHECK return address
br b0
;;
ia64_os_mca_dispatch_end:
@@ -178,8 +166,10 @@
//--
ia64_os_mca_proc_state_dump:
-// Get and save GR0-31 from Proc. Min. State Save Area to SAL PSI
+// Save bank 1 GRs 16-31 which will be used by c-language code when we switch
+// to virtual addressing mode.
movl r2=ia64_mca_proc_state_dump;; // Os state dump area
+ DATA_VA_TO_PA(r2) // convert to physical address
// save ar.NaT
mov r5=ar.unat // ar.unat
@@ -250,16 +240,16 @@
// if PSR.ic=0, reading interruption registers causes an illegal operation fault
mov r3=psr;;
tbit.nz.unc p6,p0=r3,PSR_IC;; // PSI Valid Log bit pos. test
-(p6) st8 [r2]=r0,9*8+160 // increment by 168 byte inc.
+(p6) st8 [r2]=r0,9*8+160 // increment by 232 byte inc.
begin_skip_intr_regs:
- BRANCH(SkipIntrRegs, r9, p6, 0x0)
- ;;
+(p6) br SkipIntrRegs;;
+
add r4=8,r2 // duplicate r2 in r4
add r6=2*8,r2 // duplicate r2 in r6
mov r3=cr16 // cr.ipsr
mov r5=cr17 // cr.isr
- mov r7=r0;; // cr.ida => cr18
+ mov r7=r0;; // cr.ida => cr18 (reserved)
st8 [r2]=r3,3*8
st8 [r4]=r5,3*8
st8 [r6]=r7,3*8;;
@@ -394,8 +384,7 @@
br.cloop.sptk.few cStRR
;;
end_os_mca_dump:
- BRANCH(ia64_os_mca_done_dump, r2, p0, -0x10)
- ;;
+ br ia64_os_mca_done_dump;;
//EndStub//////////////////////////////////////////////////////////////////////
@@ -484,11 +473,10 @@
// if PSR.ic=1, reading interruption registers causes an illegal operation fault
mov r3=psr;;
tbit.nz.unc p6,p0=r3,PSR_IC;; // PSI Valid Log bit pos. test
-(p6) st8 [r2]=r0,9*8+160 // increment by 160 byte inc.
+(p6) st8 [r2]=r0,9*8+160 // increment by 232 byte inc.
begin_rskip_intr_regs:
- BRANCH(rSkipIntrRegs, r9, p6, 0x0)
- ;;
+(p6) br rSkipIntrRegs;;
add r4=8,r2 // duplicate r2 in r4
add r6=2*8,r2;; // duplicate r2 in r4
@@ -498,7 +486,7 @@
ld8 r7=[r6],3*8;;
mov cr16=r3 // cr.ipsr
mov cr17=r5 // cr.isr is read only
-// mov cr18=r7;; // cr.ida
+// mov cr18=r7;; // cr.ida (reserved - don't restore)
ld8 r3=[r2],3*8
ld8 r5=[r4],3*8
@@ -629,8 +617,8 @@
mov ar.lc=r5
;;
end_os_mca_restore:
- BRANCH(ia64_os_mca_done_restore, r2, p0, -0x20)
- ;;
+ br ia64_os_mca_done_restore;;
+
//EndStub//////////////////////////////////////////////////////////////////////
// ok, the issue here is that we need to save state information so
@@ -660,12 +648,7 @@
// 6. GR12 = Return address to location within SAL_INIT procedure
- .text
- .align 16
-.global ia64_monarch_init_handler
-.proc ia64_monarch_init_handler
-ia64_monarch_init_handler:
-
+GLOBAL_ENTRY(ia64_monarch_init_handler)
#if defined(CONFIG_SMP) && defined(SAL_MPINIT_WORKAROUND)
//
// work around SAL bug that sends all processors to monarch entry
@@ -741,13 +724,12 @@
adds out0=16,sp // out0 = pointer to pt_regs
;;
- br.call.sptk.few rp=ia64_init_handler
+ br.call.sptk.many rp=ia64_init_handler
.ret1:
return_from_init:
br.sptk return_from_init
-
- .endp
+END(ia64_monarch_init_handler)
//
// SAL to OS entry point for INIT on the slave processor
@@ -755,14 +737,6 @@
// as a part of ia64_mca_init.
//
- .text
- .align 16
-.global ia64_slave_init_handler
-.proc ia64_slave_init_handler
-ia64_slave_init_handler:
-
-
-slave_init_spin_me:
- br.sptk slave_init_spin_me
- ;;
- .endp
+GLOBAL_ENTRY(ia64_slave_init_handler)
+1: br.sptk 1b
+END(ia64_slave_init_handler)
diff -urN linux-2.4.13/arch/ia64/kernel/pal.S linux-2.4.13-lia/arch/ia64/kernel/pal.S
--- linux-2.4.13/arch/ia64/kernel/pal.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/pal.S Thu Oct 4 00:21:39 2001
@@ -4,8 +4,9 @@
*
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ * David Mosberger <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 05/22/2000 eranian Added support for stacked register calls
* 05/24/2000 eranian Added support for physical mode static calls
@@ -31,7 +32,7 @@
movl r2=pal_entry_point
;;
st8 [r2]=in0
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(ia64_pal_handler_init)
/*
@@ -41,7 +42,7 @@
*/
GLOBAL_ENTRY(ia64_pal_default_handler)
mov r8=-1
- br.cond.sptk.few rp
+ br.cond.sptk.many rp
END(ia64_pal_default_handler)
/*
@@ -79,13 +80,13 @@
;;
(p6) srlz.i
mov rp = r8
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
1: mov psr.l = loc3
mov ar.pfs = loc1
mov rp = loc0
;;
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_static)
/*
@@ -120,7 +121,7 @@
mov rp = loc0
;;
srlz.d // serialize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_stacked)
/*
@@ -173,13 +174,13 @@
or loc3=loc3,r17 // add in psr the bits to set
;;
andcm r16=loc3,r16 // removes bits to clear from psr
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret1: mov rp = r8 // install return address (physical)
- br.cond.sptk.few b7
+ br.cond.sptk.many b7
1:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2:
mov psr.l = loc3 // restore init PSR
@@ -188,7 +189,7 @@
;;
mov ar.rsc=loc4 // restore RSE configuration
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_phys_static)
/*
@@ -227,13 +228,13 @@
mov b7 = loc2 // install target to branch reg
;;
andcm r16=loc3,r16 // removes bits to clear from psr
- br.call.sptk.few rp=ia64_switch_mode
+ br.call.sptk.many rp=ia64_switch_mode
.ret6:
br.call.sptk.many rp=b7 // now make the call
.ret7:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
- br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+ br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret8: mov psr.l = loc3 // restore init PSR
mov ar.pfs = loc1
@@ -241,6 +242,6 @@
;;
mov ar.rsc=loc4 // restore RSE configuration
srlz.d // seralize restoration of psr.l
- br.ret.sptk.few b0
+ br.ret.sptk.many b0
END(ia64_pal_call_phys_stacked)
diff -urN linux-2.4.13/arch/ia64/kernel/palinfo.c linux-2.4.13-lia/arch/ia64/kernel/palinfo.c
--- linux-2.4.13/arch/ia64/kernel/palinfo.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/palinfo.c Wed Oct 24 18:14:08 2001
@@ -6,12 +6,13 @@
* Intel IA-64 Architecture Software Developer's Manual v1.0.
*
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 05/26/2000 S.Eranian initial release
* 08/21/2000 S.Eranian updated to July 2000 PAL specs
* 02/05/2001 S.Eranian fixed module support
+ * 10/23/2001 S.Eranian updated pal_perf_mon_info bug fixes
*/
#include <linux/config.h>
#include <linux/types.h>
@@ -32,8 +33,9 @@
MODULE_AUTHOR("Stephane Eranian <eranian@hpl.hp.com>");
MODULE_DESCRIPTION("/proc interface to IA-64 PAL");
+MODULE_LICENSE("GPL");
-#define PALINFO_VERSION "0.4"
+#define PALINFO_VERSION "0.5"
#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
@@ -606,15 +608,6 @@
if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) return 0;
-#ifdef IA64_PAL_PERF_MON_INFO_BUG
- /*
- * This bug has been fixed in PAL 2.2.9 and higher
- */
- pm_buffer[5]=0x3;
- pm_info.pal_perf_mon_info_s.cycles = 0x12;
- pm_info.pal_perf_mon_info_s.retired = 0x08;
-#endif
-
p += sprintf(p, "PMC/PMD pairs : %d\n" \
"Counter width : %d bits\n" \
"Cycle event number : %d\n" \
@@ -636,6 +629,14 @@
p = bitregister_process(p, pm_buffer+8, 256);
p += sprintf(p, "\nRetired bundles count capable : ");
+
+#ifdef CONFIG_ITANIUM
+ /*
+ * PAL_PERF_MON_INFO reports that only PMC4 can be used to count CPU_CYCLES
+ * which is wrong, both PMC4 and PMD5 support it.
+ */
+ if (pm_buffer[12] == 0x10) pm_buffer[12]=0x30;
+#endif
p = bitregister_process(p, pm_buffer+12, 256);
diff -urN linux-2.4.13/arch/ia64/kernel/pci.c linux-2.4.13-lia/arch/ia64/kernel/pci.c
--- linux-2.4.13/arch/ia64/kernel/pci.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/pci.c Thu Oct 4 00:21:39 2001
@@ -38,6 +38,10 @@
#define DBG(x...)
#endif
+#ifdef CONFIG_IA64_MCA
+extern void ia64_mca_check_errors( void );
+#endif
+
/*
* This interrupt-safe spinlock protects all accesses to PCI
* configuration space.
@@ -122,6 +126,10 @@
# define PCI_BUSES_TO_SCAN 255
int i;
+#ifdef CONFIG_IA64_MCA
+ ia64_mca_check_errors(); /* For post-failure MCA error logging */
+#endif
+
platform_pci_fixup(0); /* phase 0 initialization (before PCI bus has been scanned) */
printk("PCI: Probing PCI hardware\n");
@@ -194,4 +202,40 @@
pcibios_setup (char *str)
{
return NULL;
+}
+
+int
+pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma,
+ enum pci_mmap_state mmap_state, int write_combine)
+{
+ /*
+ * I/O space cannot be accessed via normal processor loads and stores on this
+ * platform.
+ */
+ if (mmap_state == pci_mmap_io)
+ /*
+ * XXX we could relax this for I/O spaces for which ACPI indicates that
+ * the space is 1-to-1 mapped. But at the moment, we don't support
+ * multiple PCI address spaces and the legacy I/O space is not 1-to-1
+ * mapped, so this is moot.
+ */
+ return -EINVAL;
+
+ /*
+ * Leave vm_pgoff as-is, the PCI space address is the physical address on this
+ * platform.
+ */
+ vma->vm_flags |= (VM_SHM | VM_LOCKED | VM_IO);
+
+ if (write_combine)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ else
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+ if (remap_page_range(vma->vm_start, vma->vm_pgoff << PAGE_SHIFT,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot))
+ return -EAGAIN;
+
+ return 0;
}
diff -urN linux-2.4.13/arch/ia64/kernel/perfmon.c linux-2.4.13-lia/arch/ia64/kernel/perfmon.c
--- linux-2.4.13/arch/ia64/kernel/perfmon.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/perfmon.c Thu Oct 4 00:21:39 2001
@@ -38,7 +38,7 @@
#ifdef CONFIG_PERFMON
-#define PFM_VERSION "0.2"
+#define PFM_VERSION "0.3"
#define PFM_SMPL_HDR_VERSION 1
#define PMU_FIRST_COUNTER 4 /* first generic counter */
@@ -52,6 +52,7 @@
#define PFM_DISABLE 0xa6 /* freeze only */
#define PFM_RESTART 0xcf
#define PFM_CREATE_CONTEXT 0xa7
+#define PFM_DESTROY_CONTEXT 0xa8
/*
* Those 2 are just meant for debugging. I considered using sysctl() for
* that but it is a little bit too pervasive. This solution is at least
@@ -60,6 +61,8 @@
#define PFM_DEBUG_ON 0xe0
#define PFM_DEBUG_OFF 0xe1
+#define PFM_DEBUG_BASE PFM_DEBUG_ON
+
/*
* perfmon API flags
@@ -68,7 +71,8 @@
#define PFM_FL_INHERIT_ONCE 0x01 /* clone pfm_context only once across fork() */
#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */
#define PFM_FL_SMPL_OVFL_NOBLOCK 0x04 /* do not block on sampling buffer overflow */
-#define PFM_FL_SYSTEMWIDE 0x08 /* create a systemwide context */
+#define PFM_FL_SYSTEM_WIDE 0x08 /* create a system wide context */
+#define PFM_FL_EXCL_INTR 0x10 /* exclude interrupt from system wide monitoring */
/*
* PMC API flags
@@ -87,7 +91,7 @@
#endif
#define PMC_IS_IMPL(i) (i < pmu_conf.num_pmcs && pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1))))
-#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
+#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
#define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
#define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
@@ -197,7 +201,8 @@
unsigned int noblock:1; /* block/don't block on overflow with notification */
unsigned int system:1; /* do system wide monitoring */
unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */
- unsigned int reserved:27;
+ unsigned int exclintr:1;/* exclude interrupts from system wide monitoring */
+ unsigned int reserved:26;
} pfm_context_flags_t;
typedef struct pfm_context {
@@ -207,26 +212,33 @@
unsigned long ctx_iear_counter; /* which PMD holds I-EAR */
unsigned long ctx_btb_counter; /* which PMD holds BTB */
- pid_t ctx_notify_pid; /* who to notify on overflow */
- int ctx_notify_sig; /* XXX: SIGPROF or other */
- pfm_context_flags_t ctx_flags; /* block/noblock */
- pid_t ctx_creator; /* pid of creator (debug) */
- unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
- unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+ spinlock_t ctx_notify_lock;
+ pfm_context_flags_t ctx_flags; /* block/noblock */
+ int ctx_notify_sig; /* XXX: SIGPROF or other */
+ struct task_struct *ctx_notify_task; /* who to notify on overflow */
+ struct task_struct *ctx_creator; /* pid of creator (debug) */
+
+ unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */
+ unsigned long ctx_smpl_regs; /* which registers to record on overflow */
+
+ struct semaphore ctx_restart_sem; /* use for blocking notification mode */
- struct semaphore ctx_restart_sem; /* use for blocking notification mode */
+ unsigned long ctx_used_pmds[4]; /* bitmask of used PMD (speedup ctxsw) */
+ unsigned long ctx_used_pmcs[4]; /* bitmask of used PMC (speedup ctxsw) */
pfm_counter_t ctx_pmds[IA64_NUM_PMD_COUNTERS]; /* XXX: size should be dynamic */
+
} pfm_context_t;
+#define CTX_USED_PMD(ctx,n) (ctx)->ctx_used_pmds[(n)>>6] |= 1<< ((n) % 64)
+#define CTX_USED_PMC(ctx,n) (ctx)->ctx_used_pmcs[(n)>>6] |= 1<< ((n) % 64)
+
#define ctx_fl_inherit ctx_flags.inherit
#define ctx_fl_noblock ctx_flags.noblock
#define ctx_fl_system ctx_flags.system
#define ctx_fl_frozen ctx_flags.frozen
+#define ctx_fl_exclintr ctx_flags.exclintr
-#define CTX_IS_DEAR(c,n) ((c)->ctx_dear_counter == (n))
-#define CTX_IS_IEAR(c,n) ((c)->ctx_iear_counter == (n))
-#define CTX_IS_BTB(c,n) ((c)->ctx_btb_counter == (n))
#define CTX_OVFL_NOBLOCK(c) ((c)->ctx_fl_noblock == 1)
#define CTX_INHERIT_MODE(c) ((c)->ctx_fl_inherit)
#define CTX_HAS_SMPL(c) ((c)->ctx_smpl_buf != NULL)
@@ -234,17 +246,15 @@
static pmu_config_t pmu_conf;
/* for debug only */
-static unsigned long pfm_debug=0; /* 0= nodebug, >0= debug output on */
+static int pfm_debug=0; /* 0= nodebug, >0= debug output on */
+
#define DBprintk(a) \
do { \
- if (pfm_debug >0) { printk(__FUNCTION__" "); printk a; } \
+ if (pfm_debug >0) { printk(__FUNCTION__" %d: ", __LINE__); printk a; } \
} while (0);
-static void perfmon_softint(unsigned long ignored);
static void ia64_reset_pmu(void);
-DECLARE_TASKLET(pfm_tasklet, perfmon_softint, 0);
-
/*
* structure used to pass information between the interrupt handler
* and the tasklet.
@@ -256,26 +266,42 @@
unsigned long bitvect; /* which counters have overflowed */
} notification_info_t;
-#define notification_is_invalid(i) (i->to_pid < 2)
-/* will need to be cache line padded */
-static notification_info_t notify_info[NR_CPUS];
+typedef struct {
+ unsigned long pfs_proc_sessions;
+ unsigned long pfs_sys_session; /* can only be 0/1 */
+ unsigned long pfs_dfl_dcr; /* XXX: hack */
+ unsigned int pfs_pp;
+} pfm_session_t;
-/*
- * We force cache line alignment to avoid false sharing
- * given that we have one entry per CPU.
- */
-static struct {
+struct {
struct task_struct *owner;
} ____cacheline_aligned pmu_owners[NR_CPUS];
-/* helper macros */
+
+
+/*
+ * helper macros
+ */
#define SET_PMU_OWNER(t) do { pmu_owners[smp_processor_id()].owner = (t); } while(0);
#define PMU_OWNER() pmu_owners[smp_processor_id()].owner
+#ifdef CONFIG_SMP
+#define PFM_CAN_DO_LAZY() (smp_num_cpus==1 && pfs_info.pfs_sys_session==0)
+#else
+#define PFM_CAN_DO_LAZY() (pfs_info.pfs_sys_session==0)
+#endif
+
+static void pfm_lazy_save_regs (struct task_struct *ta);
+
/* for debug only */
static struct proc_dir_entry *perfmon_dir;
/*
+ * XXX: hack to indicate that a system wide monitoring session is active
+ */
+static pfm_session_t pfs_info;
+
+/*
* finds the number of PM(C|D) registers given
* the bitvector returned by PAL
*/
@@ -339,8 +365,7 @@
static inline unsigned long
kvirt_to_pa(unsigned long adr)
{
- __u64 pa;
- __asm__ __volatile__ ("tpa %0 = %1" : "=r"(pa) : "r"(adr) : "memory");
+ __u64 pa = ia64_tpa(adr);
DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa));
return pa;
}
@@ -568,25 +593,44 @@
static int
pfx_is_sane(pfreq_context_t *pfx)
{
+ int ctx_flags;
+
/* valid signal */
- if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return 0;
+ //if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return -EINVAL;
+ if (pfx->notify_sig !=0 && pfx->notify_sig != SIGPROF) return -EINVAL;
/* cannot send to process 1, 0 means do not notify */
- if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return 0;
+ if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return -EINVAL;
+
+ ctx_flags = pfx->flags;
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+#ifdef CONFIG_SMP
+ if (smp_num_cpus > 1) {
+ printk("perfmon: system wide monitoring on SMP not yet supported\n");
+ return -EINVAL;
+ }
+#endif
+ if ((ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) == 0) {
+ printk("perfmon: system wide monitoring cannot use blocking notification mode\n");
+ return -EINVAL;
+ }
+ }
/* probably more to add here */
- return 1;
+ return 0;
}
static int
-pfm_context_create(struct task_struct *task, int flags, perfmon_req_t *req)
+pfm_context_create(int flags, perfmon_req_t *req)
{
pfm_context_t *ctx;
+ struct task_struct *task = NULL;
perfmon_req_t tmp;
void *uaddr = NULL;
- int ret = -EFAULT;
+ int ret;
int ctx_flags;
+ pid_t pid;
/* to go away */
if (flags) {
@@ -595,48 +639,156 @@
if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
+ ret = pfx_is_sane(&tmp.pfr_ctx);
+ if (ret < 0) return ret;
+
ctx_flags = tmp.pfr_ctx.flags;
- /* not yet supported */
- if (ctx_flags & PFM_FL_SYSTEMWIDE) return -EINVAL;
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+ /*
+ * XXX: This is not AT ALL SMP safe
+ */
+ if (pfs_info.pfs_proc_sessions > 0) return -EBUSY;
+ if (pfs_info.pfs_sys_session > 0) return -EBUSY;
+
+ pfs_info.pfs_sys_session = 1;
- if (!pfx_is_sane(&tmp.pfr_ctx)) return -EINVAL;
+ } else if (pfs_info.pfs_sys_session >0) {
+ /* no per-process monitoring while there is a system wide session */
+ return -EBUSY;
+ } else
+ pfs_info.pfs_proc_sessions++;
ctx = pfm_context_alloc();
- if (!ctx) return -ENOMEM;
+ if (!ctx) goto error;
+
+ /* record the creator (debug only) */
+ ctx->ctx_creator = current;
+
+ pid = tmp.pfr_ctx.notify_pid;
+
+ spin_lock_init(&ctx->ctx_notify_lock);
+
+ if (pid == current->pid) {
+ ctx->ctx_notify_task = task = current;
+ current->thread.pfm_context = ctx;
+
+ atomic_set(&current->thread.pfm_notifiers_check, 1);
+
+ } else if (pid!=0) {
+ read_lock(&tasklist_lock);
+
+ task = find_task_by_pid(pid);
+ if (task) {
+ /*
+ * record who to notify
+ */
+ ctx->ctx_notify_task = task;
+
+ /*
+ * make visible
+ * must be done inside critical section
+ *
+ * if the initialization does not go through it is still
+ * okay because child will do the scan for nothing which
+ * won't hurt.
+ */
+ current->thread.pfm_context = ctx;
+
+ /*
+ * will cause task to check on exit for monitored
+ * processes that would notify it. see release_thread()
+ * Note: the scan MUST be done in release thread, once the
+ * task has been detached from the tasklist otherwise you are
+ * exposed to race conditions.
+ */
+ atomic_add(1, &task->thread.pfm_notifiers_check);
+ }
+ read_unlock(&tasklist_lock);
+ }
- /* record who the creator is (for debug) */
- ctx->ctx_creator = task->pid;
+ /*
+ * notification process does not exist
+ */
+ if (pid != 0 && task == NULL) {
+ ret = -EINVAL;
+ goto buffer_error;
+ }
- ctx->ctx_notify_pid = tmp.pfr_ctx.notify_pid;
ctx->ctx_notify_sig = SIGPROF; /* siginfo imposes a fixed signal */
if (tmp.pfr_ctx.smpl_entries) {
DBprintk((" sampling entries=%ld\n",tmp.pfr_ctx.smpl_entries));
- if ((ret=pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs, tmp.pfr_ctx.smpl_entries, &uaddr)) ) goto buffer_error;
+
+ ret = pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs,
+ tmp.pfr_ctx.smpl_entries, &uaddr);
+ if (ret<0) goto buffer_error;
+
tmp.pfr_ctx.smpl_vaddr = uaddr;
}
/* initialization of context's flags */
- ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
- ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
- ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEMWIDE) ? 1: 0;
- ctx->ctx_fl_frozen = 0;
+ ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
+ ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
+ ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0;
+ ctx->ctx_fl_exclintr = (ctx_flags & PFM_FL_EXCL_INTR) ? 1: 0;
+ ctx->ctx_fl_frozen = 0;
+
+ /*
+ * Keep track of the pmds we want to sample
+ * XXX: maybe we don't need to save/restore the DEAR/IEAR pmds
+ * but we do need the BTB for sure. This is because of a hardware
+ * buffer of 1 only for non-BTB pmds.
+ */
+ ctx->ctx_used_pmds[0] = tmp.pfr_ctx.smpl_regs;
+ ctx->ctx_used_pmcs[0] = 1; /* always save/restore PMC[0] */
sema_init(&ctx->ctx_restart_sem, 0); /* init this semaphore to locked */
- if (copy_to_user(req, &tmp, sizeof(tmp))) goto buffer_error;
- DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",(void *)ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
- DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+ if (copy_to_user(req, &tmp, sizeof(tmp))) {
+ ret = -EFAULT;
+ goto buffer_error;
+ }
+
+ DBprintk((" context=%p, pid=%d notify_sig %d notify_task=%p\n",(void *)ctx, current->pid, ctx->ctx_notify_sig, ctx->ctx_notify_task));
+ DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, current->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+
+ /*
+ * when no notification is required, we can make this visible at the last moment
+ */
+ if (pid == 0) current->thread.pfm_context = ctx;
+
+ /*
+ * by default, we always include interrupts for system wide
+ * DCR.pp is set by default to zero by kernel in cpu_init()
+ */
+ if (ctx->ctx_fl_system) {
+ if (ctx->ctx_fl_exclintr == 0) {
+ unsigned long dcr = ia64_get_dcr();
+
+ ia64_set_dcr(dcr|IA64_DCR_PP);
+ /*
+ * keep track of the kernel default value
+ */
+ pfs_info.pfs_dfl_dcr = dcr;
- /* link with task */
- task->thread.pfm_context = ctx;
+ DBprintk((" dcr.pp is set\n"));
+ }
+ }
return 0;
buffer_error:
- vfree(ctx);
-
+ pfm_context_free(ctx);
+error:
+ /*
+ * undo session reservation
+ */
+ if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+ pfs_info.pfs_sys_session = 0;
+ } else {
+ pfs_info.pfs_proc_sessions--;
+ }
return ret;
}
@@ -656,8 +808,20 @@
/* upper part is ignored on rval */
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
+
+ /*
+ * we must reset the BTB index (clear pmd16.full) to make
+ * sure we do not report the same branches twice.
+ * The non-blocking case is handled in update_counters()
+ */
+ if (cnum == ctx->ctx_btb_counter) {
+ DBprintk(("resetting PMD16\n"));
+ ia64_set_pmd(16, 0);
+ }
}
}
+ /* just in case ! */
+ ctx->ctx_ovfl_regs = 0;
}
static int
@@ -695,20 +859,23 @@
} else if (PMC_IS_BTB(&tmp.pfr_reg.reg_value)) {
ctx->ctx_btb_counter = cnum;
}
-
+#if 0
if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
+#endif
}
-
+ /* keep track of what we use */
+ CTX_USED_PMC(ctx, cnum);
ia64_set_pmc(cnum, tmp.pfr_reg.reg_value);
- DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags));
+
+ DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x used_pmcs=0%lx\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags, ctx->ctx_used_pmcs[0]));
}
/*
* we have to set this here event hough we haven't necessarily started monitoring
* because we may be context switched out
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;
return 0;
}
@@ -741,25 +908,32 @@
ctx->ctx_pmds[k].val = tmp.pfr_reg.reg_value & ~pmu_conf.perf_ovfl_val;
ctx->ctx_pmds[k].smpl_rval = tmp.pfr_reg.reg_smpl_reset;
ctx->ctx_pmds[k].ovfl_rval = tmp.pfr_reg.reg_ovfl_reset;
+
+ if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
+ ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
}
+ /* keep track of what we use */
+ CTX_USED_PMD(ctx, cnum);
/* writes to unimplemented part is ignored, so this is safe */
ia64_set_pmd(cnum, tmp.pfr_reg.reg_value);
/* to go away */
ia64_srlz_d();
- DBprintk((" setting PMD[%ld]: pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx\n",
+ DBprintk((" setting PMD[%ld]: ovfl_notify=%d pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx used_pmds=0%lx\n",
cnum,
+ PMD_OVFL_NOTIFY(ctx, cnum - PMU_FIRST_COUNTER),
ctx->ctx_pmds[k].val,
ctx->ctx_pmds[k].ovfl_rval,
ctx->ctx_pmds[k].smpl_rval,
- ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+ ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val,
+ ctx->ctx_used_pmds[0]));
}
/*
* we have to set this here event hough we haven't necessarily started monitoring
* because we may be context switched out
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;
return 0;
}
@@ -783,6 +957,8 @@
/* XXX: ctx locking may be required here */
for (i = 0; i < count; i++, req++) {
+ unsigned long reg_val = ~0, ctx_val = ~0;
+
if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;
if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL;
@@ -791,23 +967,25 @@
if (ta == current){
val = ia64_get_pmd(tmp.pfr_reg.reg_num);
} else {
- val = th->pmd[tmp.pfr_reg.reg_num];
+ val = reg_val = th->pmd[tmp.pfr_reg.reg_num];
}
val &= pmu_conf.perf_ovfl_val;
/*
* lower part of .val may not be zero, so this must be an addition because of the
* residual count (see update_counters).
*/
- val += ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
+ val += ctx_val = ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
} else {
/* for now */
if (ta != current) return -EINVAL;
+ ia64_srlz_d();
val = ia64_get_pmd(tmp.pfr_reg.reg_num);
}
tmp.pfr_reg.reg_value = val;
- DBprintk((" reading PMD[%ld]=0x%lx\n", tmp.pfr_reg.reg_num, val));
+ DBprintk((" reading PMD[%ld]=0x%lx reg=0x%lx ctx_val=0x%lx pmc=0x%lx\n",
+ tmp.pfr_reg.reg_num, val, reg_val, ctx_val, ia64_get_pmc(tmp.pfr_reg.reg_num)));
if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
}
@@ -822,7 +1000,7 @@
void *sem = &ctx->ctx_restart_sem;
if (task == current) {
- DBprintk((" restartig self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
+ DBprintk((" restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
pfm_reset_regs(ctx);
@@ -871,6 +1049,23 @@
return 0;
}
+/*
+ * system-wide mode: propagate activation/deactivation throughout the tasklist
+ *
+ * XXX: does not work for SMP, of course
+ */
+static void
+pfm_process_tasklist(int cmd)
+{
+ struct task_struct *p;
+ struct pt_regs *regs;
+
+ for_each_task(p) {
+ regs = (struct pt_regs *)((unsigned long)p + IA64_STK_OFFSET);
+ regs--;
+ ia64_psr(regs)->pp = cmd;
+ }
+}
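[Editor's aside, not part of the patch: pfm_process_tasklist() above relies on the ia64 convention that each task's pt_regs frame sits at the top of its kernel stack, IA64_STK_OFFSET bytes above the task_struct, which is why it offsets the task pointer and then decrements. A minimal user-space sketch of that pointer arithmetic, with hypothetical stand-in types (`fake_regs`, `STK_OFFSET` are invented for illustration):]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins: each "task" owns a fixed-size stack area with
 * its register frame stored at the very top, mirroring the
 * IA64_STK_OFFSET arithmetic used by pfm_process_tasklist(). */
#define STK_OFFSET 8192

struct fake_regs { int pp; };

/* point one past the end of the stack area, then back up one frame */
static struct fake_regs *regs_of(void *stack_base)
{
    struct fake_regs *regs =
        (struct fake_regs *)((char *)stack_base + STK_OFFSET);
    return regs - 1;
}
```

[The real code then flips psr.pp in that frame for every task, which takes effect when each task next returns to user mode.]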
static int
do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs)
@@ -881,19 +1076,26 @@
memset(&tmp, 0, sizeof(tmp));
+ if (ctx == NULL && cmd != PFM_CREATE_CONTEXT && cmd < PFM_DEBUG_BASE) {
+ DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
+ return -EINVAL;
+ }
+
switch (cmd) {
case PFM_CREATE_CONTEXT:
/* a context has already been defined */
if (ctx) return -EBUSY;
- /* may be a temporary limitation */
+ /*
+ * cannot directly create a context in another process
+ */
if (task != current) return -EINVAL;
if (req == NULL || count != 1) return -EINVAL;
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- return pfm_context_create(task, flags, req);
+ return pfm_context_create(flags, req);
case PFM_WRITE_PMCS:
/* we don't quite support this right now */
@@ -901,10 +1103,6 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_write_pmcs(task, req, count);
case PFM_WRITE_PMDS:
@@ -913,45 +1111,41 @@
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_WRITE_PMDS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_write_pmds(task, req, count);
case PFM_START:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- if (!ctx) {
- DBprintk((" PFM_START: no context for task %d\n", task->pid));
- return -EINVAL;
- }
+ if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());
SET_PMU_OWNER(current);
/* will start monitoring right after rfi */
ia64_psr(regs)->up = 1;
+ ia64_psr(regs)->pp = 1;
+
+ if (ctx->ctx_fl_system) {
+ pfm_process_tasklist(1);
+ pfs_info.pfs_pp = 1;
+ }
/*
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
ia64_set_pmc(0, 0);
ia64_srlz_d();
- break;
+ break;
case PFM_ENABLE:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- if (!ctx) {
- DBprintk((" PFM_ENABLE: no context for task %d\n", task->pid));
- return -EINVAL;
- }
+ if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());
/* reset all registers to stable quiet state */
ia64_reset_pmu();
@@ -969,7 +1163,7 @@
* mark the state as valid.
* this will trigger save/restore at context switch
*/
- th->flags |= IA64_THREAD_PM_VALID;
+ if (ctx->ctx_fl_system == 0) th->flags |= IA64_THREAD_PM_VALID;
/* simply unfreeze */
ia64_set_pmc(0, 0);
@@ -983,54 +1177,41 @@
/* simply freeze */
ia64_set_pmc(0, 1);
ia64_srlz_d();
+ /*
+ * XXX: cannot really toggle IA64_THREAD_PM_VALID
+ * but context is still considered valid, so any
+ * read request would return something valid. Same
+ * thing when this task terminates (pfm_flush_regs()).
+ */
break;
case PFM_READ_PMDS:
if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
- if (!ctx) {
- DBprintk((" PFM_READ_PMDS: no context for task %d\n", task->pid));
- return -EINVAL;
- }
return pfm_read_pmds(task, req, count);
case PFM_STOP:
/* we don't quite support this right now */
if (task != current) return -EINVAL;
- ia64_set_pmc(0, 1);
- ia64_srlz_d();
-
+ /* simply stop monitors, not PMU */
ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
- th->flags &= ~IA64_THREAD_PM_VALID;
-
- SET_PMU_OWNER(NULL);
-
- /* we probably will need some more cleanup here */
- break;
-
- case PFM_DEBUG_ON:
- printk(" debugging on\n");
- pfm_debug = 1;
- break;
+ if (ctx->ctx_fl_system) {
+ pfm_process_tasklist(0);
+ pfs_info.pfs_pp = 0;
+ }
- case PFM_DEBUG_OFF:
- printk(" debugging off\n");
- pfm_debug = 0;
break;
case PFM_RESTART: /* temporary, will most likely end up as a PFM_ENABLE */
- if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
printk(" PFM_RESTART not monitoring\n");
return -EINVAL;
}
- if (!ctx) {
- printk(" PFM_RESTART no ctx for %d\n", task->pid);
- return -EINVAL;
- }
if (CTX_OVFL_NOBLOCK(ctx) == 0 && ctx->ctx_fl_frozen == 0) {
printk("task %d without pmu_frozen set\n", task->pid);
return -EINVAL;
@@ -1038,6 +1219,37 @@
return pfm_do_restart(task); /* we only look at first entry */
+ case PFM_DESTROY_CONTEXT:
+ /* we don't quite support this right now */
+ if (task != current) return -EINVAL;
+
+ /* first stop monitors */
+ ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->pp = 0;
+
+ /* then freeze PMU */
+ ia64_set_pmc(0, 1);
+ ia64_srlz_d();
+
+ /* don't save/restore on context switch */
+ if (ctx->ctx_fl_system == 0) task->thread.flags &= ~IA64_THREAD_PM_VALID;
+
+ SET_PMU_OWNER(NULL);
+
+ /* now free context and related state */
+ pfm_context_exit(task);
+ break;
+
+ case PFM_DEBUG_ON:
+ printk("perfmon debugging on\n");
+ pfm_debug = 1;
+ break;
+
+ case PFM_DEBUG_OFF:
+ printk("perfmon debugging off\n");
+ pfm_debug = 0;
+ break;
+
default:
DBprintk((" Unknown command 0x%x\n", cmd));
return -EINVAL;
@@ -1074,11 +1286,8 @@
/* XXX: pid interface is going away in favor of pfm context */
if (pid != current->pid) {
read_lock(&tasklist_lock);
- {
- child = find_task_by_pid(pid);
- if (child)
- get_task_struct(child);
- }
+
+ child = find_task_by_pid(pid);
if (!child) goto abort_call;
@@ -1101,93 +1310,44 @@
return ret;
}
-
-/*
- * This function is invoked on the exit path of the kernel. Therefore it must make sure
- * it does not modify the caller's input registers (in0-in7) in case of entry by system call
- * which can be restarted. That's why it's declared as a system call and all 8 possible args
- * are declared even though not used.
- */
#if __GNUC__ >= 3
void asmlinkage
-pfm_overflow_notify(void)
+pfm_block_on_overflow(void)
#else
void asmlinkage
-pfm_overflow_notify(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
+pfm_block_on_overflow(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
#endif
{
- struct task_struct *task;
struct thread_struct *th = ¤t->thread;
pfm_context_t *ctx = current->thread.pfm_context;
- struct siginfo si;
int ret;
/*
- * do some sanity checks first
- */
- if (!ctx) {
- printk("perfmon: process %d has no PFM context\n", current->pid);
- return;
- }
- if (ctx->ctx_notify_pid < 2) {
- printk("perfmon: process %d invalid notify_pid=%d\n", current->pid, ctx->ctx_notify_pid);
- return;
- }
-
- DBprintk((" current=%d ctx=%p bv=0x%lx\n", current->pid, (void *)ctx, ctx->ctx_ovfl_regs));
- /*
* No matter what notify_pid is,
* we clear overflow, won't notify again
*/
- th->pfm_pend_notify = 0;
+ th->pfm_must_block = 0;
/*
- * When measuring in kernel mode and non-blocking fashion, it is possible to
- * get an overflow while executing this code. Therefore the state of pend_notify
- * and ovfl_regs can be altered. The important point is not to lose any notification.
- * It is fine to get called for nothing. To make sure we do collect as much state as
- * possible, update_counters() always uses |= to add bit to the ovfl_regs field.
- *
- * In certain cases, it is possible to come here, with ovfl_regs = 0;
- *
- * XXX: pend_notify and ovfl_regs could be merged maybe !
+ * do some sanity checks first
*/
- if (ctx->ctx_ovfl_regs == 0) {
- printk("perfmon: spurious overflow notification from pid %d\n", current->pid);
+ if (!ctx) {
+ printk("perfmon: process %d has no PFM context\n", current->pid);
return;
}
- read_lock(&tasklist_lock);
-
- task = find_task_by_pid(ctx->ctx_notify_pid);
-
- if (task) {
- si.si_signo = ctx->ctx_notify_sig;
- si.si_errno = 0;
- si.si_code = PROF_OVFL; /* goes to user */
- si.si_addr = NULL;
- si.si_pid = current->pid; /* who is sending */
- si.si_pfm_ovfl = ctx->ctx_ovfl_regs;
-
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
-
- /* must be done with tasklist_lock locked */
- ret = send_sig_info(ctx->ctx_notify_sig, &si, task);
- if (ret != 0) {
- DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_pid, ret));
- task = NULL; /* will cause return */
- }
- } else {
- printk("perfmon: notify_pid %d not found\n", ctx->ctx_notify_pid);
+ if (ctx->ctx_notify_task == 0) {
+ printk("perfmon: process %d has no task to notify\n", current->pid);
+ return;
}
- read_unlock(&tasklist_lock);
+ DBprintk((" current=%d task=%d\n", current->pid, ctx->ctx_notify_task->pid));
- /* now that we have released the lock handle error condition */
- if (!task || CTX_OVFL_NOBLOCK(ctx)) {
- /* we clear all pending overflow bits in noblock mode */
- ctx->ctx_ovfl_regs = 0;
+ /* should not happen */
+ if (CTX_OVFL_NOBLOCK(ctx)) {
+ printk("perfmon: process %d non-blocking ctx should not be here\n", current->pid);
return;
}
+
DBprintk((" CPU%d %d before sleep\n", smp_processor_id(), current->pid));
/*
@@ -1211,9 +1371,6 @@
pfm_reset_regs(ctx);
- /* now we can clear this mask */
- ctx->ctx_ovfl_regs = 0;
-
/*
* Unlock sampling buffer and reset index atomically
* XXX: not really needed when blocking
@@ -1232,84 +1389,14 @@
}
}
-static void
-perfmon_softint(unsigned long ignored)
-{
- notification_info_t *info;
- int my_cpu = smp_processor_id();
- struct task_struct *task;
- struct siginfo si;
-
- info = notify_info+my_cpu;
-
- DBprintk((" CPU%d current=%d to_pid=%d from_pid=%d bv=0x%lx\n", \
- smp_processor_id(), current->pid, info->to_pid, info->from_pid, info->bitvect));
-
- /* assumption check */
- if (info->from_pid == info->to_pid) {
- DBprintk((" Tasklet assumption error: from=%d to=%d\n", info->from_pid, info->to_pid));
- return;
- }
-
- if (notification_is_invalid(info)) {
- DBprintk((" invalid notification information\n"));
- return;
- }
-
- /* sanity check */
- if (info->to_pid == 1) {
- DBprintk((" cannot notify init\n"));
- return;
- }
- /*
- * XXX: needs way more checks here to make sure we send to a task we have control over
- */
- read_lock(&tasklist_lock);
-
- task = find_task_by_pid(info->to_pid);
-
- DBprintk((" after find %p\n", (void *)task));
-
- if (task) {
- int ret;
-
- si.si_signo = SIGPROF;
- si.si_errno = 0;
- si.si_code = PROF_OVFL; /* goes to user */
- si.si_addr = NULL;
- si.si_pid = info->from_pid; /* who is sending */
- si.si_pfm_ovfl = info->bitvect;
-
- DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task));
-
- /* must be done with tasklist_lock locked */
- ret = send_sig_info(SIGPROF, &si, task);
- if (ret != 0)
- DBprintk((" send_sig_info(process %d, SIGPROF)=%d\n", info->to_pid, ret));
-
- /* invalidate notification */
- info->to_pid = info->from_pid = 0;
- info->bitvect = 0;
- }
-
- read_unlock(&tasklist_lock);
-
- DBprintk((" after unlock %p\n", (void *)task));
-
- if (!task) {
- printk("perfmon: CPU%d cannot find process %d\n", smp_processor_id(), info->to_pid);
- }
-}
-
/*
* main overflow processing routine.
* it can be called from the interrupt path or explicitly during the context switch code
* Return:
- * 0 : do not unfreeze the PMU
- * 1 : PMU can be unfrozen
+ * new value of pmc[0]. if 0x0 then unfreeze, else keep frozen
*/
-static unsigned long
-update_counters (struct task_struct *ta, u64 pmc0, struct pt_regs *regs)
+unsigned long
+update_counters (struct task_struct *task, u64 pmc0, struct pt_regs *regs)
{
unsigned long mask, i, cnum;
struct thread_struct *th;
@@ -1317,7 +1404,9 @@
unsigned long bv = 0;
int my_cpu = smp_processor_id();
int ret = 1, buffer_is_full = 0;
- int ovfl_is_smpl, can_notify, need_reset_pmd16=0;
+ int ovfl_has_long_recovery, can_notify, need_reset_pmd16=0;
+ struct siginfo si;
+
/*
* It is never safe to access the task for which the overflow interrupt is destined
* using the current variable as the interrupt may occur in the middle of a context switch
@@ -1331,23 +1420,23 @@
* valid one, i.e. the one that caused the interrupt.
*/
- if (ta == NULL) {
+ if (task == NULL) {
DBprintk((" owners[%d]=NULL\n", my_cpu));
return 0x1;
}
- th = &ta->thread;
+ th = &task->thread;
ctx = th->pfm_context;
/*
* XXX: debug test
* Don't think this could happen given upfront tests
*/
- if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
- printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", ta->pid);
+ if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
+ printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", task->pid);
return 0x1;
}
if (!ctx) {
- printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", ta->pid);
+ printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", task->pid);
return 0;
}
@@ -1355,16 +1444,21 @@
* sanity test. Should never happen
*/
if ((pmc0 & 0x1) == 0) {
- printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", ta->pid, pmc0);
+ printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", task->pid, pmc0);
return 0x0;
}
mask = pmc0 >> PMU_FIRST_COUNTER;
- DBprintk(("pmc0=0x%lx pid=%d\n", pmc0, ta->pid));
-
- DBprintk(("ctx is in %s mode\n", CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK"));
+ DBprintk(("pmc0=0x%lx pid=%d owner=%d iip=0x%lx, ctx is in %s mode used_pmds=0x%lx used_pmcs=0x%lx\n",
+ pmc0, task->pid, PMU_OWNER()->pid, regs->cr_iip,
+ CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK",
+ ctx->ctx_used_pmds[0],
+ ctx->ctx_used_pmcs[0]));
+ /*
+ * XXX: need to record sample only when an EAR/BTB has overflowed
+ */
if (CTX_HAS_SMPL(ctx)) {
pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
unsigned long *e, m, idx=0;
@@ -1372,11 +1466,15 @@
int j;
idx = ia64_fetch_and_add(1, &psb->psb_index);
- DBprintk((" trying to record index=%ld entries=%ld\n", idx, psb->psb_entries));
+ DBprintk((" recording index=%ld entries=%ld\n", idx, psb->psb_entries));
/*
* XXX: there is a small chance that we could run out on index before resetting
* but index is unsigned long, so it will take some time.....
+ * We use > instead of == because fetch_and_add() is off by one (see below)
+ *
+ * This case can happen in non-blocking mode or with multiple processes.
+ * For non-blocking, we need to reload and continue.
*/
if (idx > psb->psb_entries) {
buffer_is_full = 1;
@@ -1388,7 +1486,7 @@
h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size));
- h->pid = ta->pid;
+ h->pid = task->pid;
h->cpu = my_cpu;
h->rate = 0;
h->ip = regs ? regs->cr_iip : 0x0; /* where did the fault happened */
@@ -1398,6 +1496,7 @@
h->stamp = perfmon_get_stamp();
e = (unsigned long *)(h+1);
+
/*
* selectively store PMDs in increasing index number
*/
@@ -1406,35 +1505,66 @@
if (PMD_IS_COUNTER(j))
*e = ctx->ctx_pmds[j-PMU_FIRST_COUNTER].val
+ (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val);
- else
+ else {
*e = ia64_get_pmd(j); /* slow */
+ }
DBprintk((" e=%p pmd%d =0x%lx\n", (void *)e, j, *e));
e++;
}
}
- /* make the new entry visible to user, needs to be atomic */
+ /*
+ * make the new entry visible to user, needs to be atomic
+ */
ia64_fetch_and_add(1, &psb->psb_hdr->hdr_count);
DBprintk((" index=%ld entries=%ld hdr_count=%ld\n", idx, psb->psb_entries, psb->psb_hdr->hdr_count));
-
- /* sampling buffer full ? */
+ /*
+ * sampling buffer full ?
+ */
if (idx == (psb->psb_entries-1)) {
- bv = mask;
+ /*
+ * will cause notification, cannot be 0
+ */
+ bv = mask << PMU_FIRST_COUNTER;
+
buffer_is_full = 1;
DBprintk((" sampling buffer full must notify bv=0x%lx\n", bv));
- if (!CTX_OVFL_NOBLOCK(ctx)) goto buffer_full;
+ /*
+ * we do not reload here, when context is blocking
+ */
+ if (!CTX_OVFL_NOBLOCK(ctx)) goto no_reload;
+
/*
* here, we have a full buffer but we are in non-blocking mode
- * so we need to reloads overflowed PMDs with sampling reset values
- * and restart
+ * so we need to reload overflowed PMDs with sampling reset values
+ * and restart right away.
*/
}
+ /* FALL THROUGH */
}
reload_pmds:
- ovfl_is_smpl = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
- can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_pid;
+
+ /*
+ * in the case of a non-blocking context, we reload
+ * with the ovfl_rval when no user notification is taking place (short recovery)
+ * otherwise when the buffer is full which requires user interaction) then we use
+ * smpl_rval which is the long_recovery path (disturbance introduce by user execution).
+ *
+ * XXX: implies that when buffer is full then there is always notification.
+ */
+ ovfl_has_long_recovery = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
+
+ /*
+ * XXX: CTX_HAS_SMPL() should really be something like CTX_HAS_SMPL() and is activated, i.e.,
+ * one of the PMC is configured for EAR/BTB.
+ *
+ * When sampling, we can only notify when the sampling buffer is full.
+ */
+ can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_task;
+
+ DBprintk((" ovfl_has_long_recovery=%d can_notify=%d\n", ovfl_has_long_recovery, can_notify));
for (i = 0, cnum = PMU_FIRST_COUNTER; mask ; cnum++, i++, mask >>= 1) {
@@ -1456,7 +1586,7 @@
DBprintk((" pmod[%ld].val=0x%lx pmd=0x%lx\n", i, ctx->ctx_pmds[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));
if (can_notify && PMD_OVFL_NOTIFY(ctx, i)) {
- DBprintk((" CPU%d should notify process %d with signal %d\n", my_cpu, ctx->ctx_notify_pid, ctx->ctx_notify_sig));
+ DBprintk((" CPU%d should notify task %p with signal %d\n", my_cpu, ctx->ctx_notify_task, ctx->ctx_notify_sig));
bv |= 1 << i;
} else {
DBprintk((" CPU%d PMD[%ld] overflow, no notification\n", my_cpu, cnum));
@@ -1467,93 +1597,150 @@
*/
/* writes to upper part are ignored, so this is safe */
- if (ovfl_is_smpl) {
- DBprintk((" CPU%d PMD[%ld] reloaded with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ if (ovfl_has_long_recovery) {
+ DBprintk((" CPU%d PMD[%ld] reload with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
} else {
- DBprintk((" CPU%d PMD[%ld] reloaded with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval));
+ DBprintk((" CPU%d PMD[%ld] reload with ovfl_val=%lx\n", my_cpu, cnum, ctx->ctx_pmds[i].ovfl_rval));
ia64_set_pmd(cnum, ctx->ctx_pmds[i].ovfl_rval);
}
}
if (cnum == ctx->ctx_btb_counter) need_reset_pmd16 = 1;
}
/*
- * In case of BTB, overflow
- * we need to reset the BTB index.
+ * In case of BTB overflow we need to reset the BTB index.
*/
if (need_reset_pmd16) {
DBprintk(("reset PMD16\n"));
ia64_set_pmd(16, 0);
}
-buffer_full:
- /* see pfm_overflow_notify() on details for why we use |= here */
- ctx->ctx_ovfl_regs |= bv;
- /* nobody to notify, return and unfreeze */
+no_reload:
+
+ /*
+ * some counters overflowed, but they did not require
+ * user notification, so after having reloaded them above
+ * we simply restart
+ */
if (!bv) return 0x0;
+ ctx->ctx_ovfl_regs = bv; /* keep track of what to reset when unblocking */
+ /*
+ * Now we know that:
+ * - we have some counters which overflowed (contains in bv)
+ * - someone has asked to be notified on overflow.
+ */
+
+
+ /*
+ * If the notification task is still present, then notify_task is non
+ * null. It is cleared by that task if it ever exits before we do.
+ */
- if (ctx->ctx_notify_pid == ta->pid) {
- struct siginfo si;
+ if (ctx->ctx_notify_task) {
si.si_errno = 0;
si.si_addr = NULL;
- si.si_pid = ta->pid; /* who is sending */
-
+ si.si_pid = task->pid; /* who is sending */
si.si_signo = ctx->ctx_notify_sig; /* is SIGPROF */
si.si_code = PROF_OVFL; /* goes to user */
si.si_pfm_ovfl = bv;
+
/*
- * in this case, we don't stop the task, we let it go on. It will
- * necessarily go to the signal handler (if any) when it goes back to
- * user mode.
+ * when the target of the signal is not ourself, we have to be more
+ * careful. The notify_task may be cleared by the target task itself
+ * in release_thread(). We must ensure mutual exclusion here such that
+ * the signal is delivered (even to a dying task) safely.
*/
- DBprintk((" sending %d notification to self %d\n", si.si_signo, ta->pid));
-
- /* this call is safe in an interrupt handler */
- ret = send_sig_info(ctx->ctx_notify_sig, &si, ta);
- if (ret != 0)
- printk(" send_sig_info(process %d, SIGPROF)=%d\n", ta->pid, ret);
- /*
- * no matter if we block or not, we keep PMU frozen and do not unfreeze on ctxsw
- */
- ctx->ctx_fl_frozen = 1;
+ if (ctx->ctx_notify_task != current) {
+ /*
+ * grab the notification lock for this task
+ */
+ spin_lock(&ctx->ctx_notify_lock);
- } else {
-#if 0
/*
- * The tasklet is guaranteed to be scheduled for this CPU only
+ * now notify_task cannot be modified until we're done
* if NULL, then it got modified while we were in the handler
*/
- notify_info[my_cpu].to_pid = ctx->notify_pid;
- notify_info[my_cpu].from_pid = ta->pid; /* for debug only */
- notify_info[my_cpu].bitvect = bv;
- /* tasklet is inserted and active */
- tasklet_schedule(&pfm_tasklet);
-#endif
+ if (ctx->ctx_notify_task == NULL) {
+ spin_unlock(&ctx->ctx_notify_lock);
+ goto lost_notify;
+ }
/*
- * stored the vector of overflowed registers for use in notification
- * mark that a notification/blocking is pending (arm the trap)
+ * required by send_sig_info() to make sure the target
+ * task does not disappear on us.
*/
- th->pfm_pend_notify = 1;
+ read_lock(&tasklist_lock);
+ }
+ /*
+ * in this case, we don't stop the task, we let it go on. It will
+ * necessarily go to the signal handler (if any) when it goes back to
+ * user mode.
+ */
+ DBprintk((" %d sending %d notification to %d\n", task->pid, si.si_signo, ctx->ctx_notify_task->pid));
+
+
+ /*
+ * this call is safe in an interrupt handler, and so is read_lock() on tasklist_lock
+ */
+ ret = send_sig_info(ctx->ctx_notify_sig, &si, ctx->ctx_notify_task);
+ if (ret != 0) printk(" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_task->pid, ret);
+ /*
+ * now undo the protections in order
+ */
+ if (ctx->ctx_notify_task != current) {
+ read_unlock(&tasklist_lock);
+ spin_unlock(&ctx->ctx_notify_lock);
+ }
/*
- * if we do block, then keep PMU frozen until restart
+ * if we block set the pfm_must_block bit
+ * when in block mode, we can effectively block only when the notified
+ * task is not self, otherwise we would deadlock.
+ * in this configuration, the notification is sent, the task will not
+ * block on the way back to user mode, but the PMU will be kept frozen
+ * until PFM_RESTART.
+ * Note that here there is still a race condition with notify_task
+ * possibly being nullified behind our back, but this is fine because
+ * it can only be changed to NULL, which, by construction, can only be
+ * done when notify_task != current. So if it was already different
+ * before, changing it to NULL will still maintain this invariant.
+ * Of course, when it is equal to current it cannot change at this point.
*/
- if (!CTX_OVFL_NOBLOCK(ctx)) ctx->ctx_fl_frozen = 1;
+ if (!CTX_OVFL_NOBLOCK(ctx) && ctx->ctx_notify_task != current) {
+ th->pfm_must_block = 1; /* will cause blocking */
+ }
+ } else {
+lost_notify:
+ DBprintk((" notification task has disappeared !\n"));
+ /*
+ * for a non-blocking context, we make sure we do not fall into the pfm_overflow_notify()
+ * trap. Also in the case of a blocking context with lost notify process, then we do not
+ * want to block either (even though it is interruptible). In this case, the PMU will be kept
+ * frozen and the process will run to completion without monitoring enabled.
+ *
+ * Of course, we cannot lose the notify process when self-monitoring.
+ */
+ th->pfm_must_block = 0;
- DBprintk((" process %d notify ovfl_regs=0x%lx\n", ta->pid, bv));
}
/*
- * keep PMU frozen (and overflowed bits cleared) when we have to stop,
- * otherwise return a resume 'value' for PMC[0]
- *
- * XXX: maybe that's enough to get rid of ctx_fl_frozen ?
+ * if we block, we keep the PMU frozen. If non-blocking we restart.
+ * in the case of non-blocking where the notify process is lost, we also
+ * restart.
*/
- DBprintk((" will return pmc0=0x%x\n",ctx->ctx_fl_frozen ? 0x1 : 0x0));
+ if (!CTX_OVFL_NOBLOCK(ctx))
+ ctx->ctx_fl_frozen = 1;
+ else
+ ctx->ctx_fl_frozen = 0;
+
+ DBprintk((" reload pmc0=0x%x must_block=%ld\n",
+ ctx->ctx_fl_frozen ? 0x1 : 0x0, th->pfm_must_block));
+
return ctx->ctx_fl_frozen ? 0x1 : 0x0;
}
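[Editor's aside, not part of the patch: the new return convention of update_counters() ("new value of pmc[0]; if 0x0 then unfreeze, else keep frozen") follows from the pmc[0] layout: bit 0 is the freeze bit and the bits from PMU_FIRST_COUNTER up flag which counters overflowed, hence `mask = pmc0 >> PMU_FIRST_COUNTER` above. A tiny standalone illustration, assuming PMU_FIRST_COUNTER = 4 as on Itanium; the helper names are invented:]

```c
#include <assert.h>

/* Illustrative only: bit 0 of pmc[0] is the freeze bit; bits 4 and up
 * (PMU_FIRST_COUNTER == 4 on Itanium) flag the overflowed counters. */
#define PMU_FIRST_COUNTER 4

/* one bit per generic counter, as computed at the top of update_counters() */
static unsigned long ovfl_mask(unsigned long pmc0)
{
    return pmc0 >> PMU_FIRST_COUNTER;
}

static int pmu_is_frozen(unsigned long pmc0)
{
    return (pmc0 & 0x1) != 0;
}
```

[With that layout, returning 0x1 keeps the PMU frozen until PFM_RESTART, while returning 0x0 lets the caller unfreeze by writing 0 back to pmc[0].]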
@@ -1595,10 +1782,17 @@
u64 pmc0 = ia64_get_pmc(0);
int i;
- p += sprintf(p, "PMC[0]=%lx\nPerfmon debug: %s\n", pmc0, pfm_debug ? "On" : "Off");
+ p += sprintf(p, "CPU%d.pmc[0]=%lx\nPerfmon debug: %s\n", smp_processor_id(), pmc0, pfm_debug ? "On" : "Off");
+ p += sprintf(p, "proc_sessions=%lu sys_sessions=%lu\n",
+ pfs_info.pfs_proc_sessions,
+ pfs_info.pfs_sys_session);
+
for(i=0; i < NR_CPUS; i++) {
- if (cpu_is_online(i))
- p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i].owner ? pmu_owners[i].owner->pid: 0);
+ if (cpu_is_online(i)) {
+ p += sprintf(p, "CPU%d.pmu_owner: %-6d\n",
+ i,
+ pmu_owners[i].owner ? pmu_owners[i].owner->pid: -1);
+ }
}
return p - page;
}
@@ -1648,8 +1842,8 @@
}
pmu_conf.perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
- pmu_conf.num_pmds = find_num_pm_regs(pmu_conf.impl_regs);
- pmu_conf.num_pmcs = find_num_pm_regs(&pmu_conf.impl_regs[4]);
+ pmu_conf.num_pmcs = find_num_pm_regs(pmu_conf.impl_regs);
+ pmu_conf.num_pmds = find_num_pm_regs(&pmu_conf.impl_regs[4]);
printk("perfmon: %d bits counters (max value 0x%lx)\n", pm_info.pal_perf_mon_info_s.width, pmu_conf.perf_ovfl_val);
printk("perfmon: %ld PMC/PMD pairs, %ld PMCs, %ld PMDs\n", pmu_conf.max_counters, pmu_conf.num_pmcs, pmu_conf.num_pmds);
@@ -1681,21 +1875,19 @@
ia64_srlz_d();
}
-/*
- * XXX: for system wide this function MUST never be called
- */
void
pfm_save_regs (struct task_struct *ta)
{
struct task_struct *owner;
+ pfm_context_t *ctx;
struct thread_struct *t;
u64 pmc0, psr;
+ unsigned long mask;
int i;
- if (ta == NULL) {
- panic(__FUNCTION__" task is NULL\n");
- }
- t = &ta->thread;
+ t = &ta->thread;
+ ctx = ta->thread.pfm_context;
+
/*
* We must make sure that we don't lose any potential overflow
* interrupt while saving PMU context. In this code, external
@@ -1715,7 +1907,7 @@
* in kernel.
* By now, we could still have an overflow interrupt in-flight.
*/
- __asm__ __volatile__ ("rum psr.up;;"::: "memory");
+ __asm__ __volatile__ ("rsm psr.up|psr.pp;;"::: "memory");
/*
* Mark the PMU as not owned
@@ -1744,7 +1936,6 @@
* next process does not start with monitoring on if not requested
*/
ia64_set_pmc(0, 1);
- ia64_srlz_d();
/*
* Check for overflow bits and proceed manually if needed
@@ -1755,94 +1946,111 @@
* next time the task exits from the kernel.
*/
if (pmc0 & ~0x1) {
- if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", (void *)owner, (void *)ta);
- printk(__FUNCTION__" Warning: pmc[0]=0x%lx explicit call\n", pmc0);
-
- pmc0 = update_counters(owner, pmc0, NULL);
+ update_counters(owner, pmc0, NULL);
/* we will save the updated version of pmc0 */
}
-
/*
* restore PSR for context switch to save
*/
__asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory");
+ /*
+ * we do not save registers if we can do lazy
+ */
+ if (PFM_CAN_DO_LAZY()) {
+ SET_PMU_OWNER(owner);
+ return;
+ }
/*
* XXX needs further optimization.
* Also must take holes into account
*/
- for (i=0; i< pmu_conf.num_pmds; i++) {
- t->pmd[i] = ia64_get_pmd(i);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
}
/* skip PMC[0], we handle it separately */
- for (i=1; i< pmu_conf.num_pmcs; i++) {
- t->pmc[i] = ia64_get_pmc(i);
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
}
-
/*
* Throughout this code we could have gotten an overflow interrupt. It is transformed
* into a spurious interrupt as soon as we give up pmu ownership.
*/
}
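[Editor's aside, not part of the patch: the used_pmds/used_pmcs bookkeeping introduced above lets the context-switch path save and restore only the registers a session actually programmed, instead of looping over num_pmds/num_pmcs. The walk-the-set-bits idiom, extracted into a standalone sketch with register reads faked by an array (`fake_pmd`, `save_used` are invented names):]

```c
#include <assert.h>

/* Fake register file standing in for ia64_get_pmd(i) in the patch. */
static unsigned long fake_pmd[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };

/* Save only the registers whose bit is set in mask, shifting the mask
 * right each iteration exactly as pfm_save_regs() does. Returns the
 * number of registers saved. */
static int save_used(unsigned long mask, unsigned long *save)
{
    int i, n = 0;

    for (i = 0; mask; i++, mask >>= 1) {
        if (mask & 0x1) {
            save[i] = fake_pmd[i];
            n++;
        }
    }
    return n;
}
```

[The loop terminates as soon as the remaining mask is zero, so a sparse "used" set costs only as many iterations as its highest set bit.]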
-void
-pfm_load_regs (struct task_struct *ta)
+static void
+pfm_lazy_save_regs (struct task_struct *ta)
{
- struct thread_struct *t = &ta->thread;
- pfm_context_t *ctx = ta->thread.pfm_context;
+ pfm_context_t *ctx;
+ struct thread_struct *t;
+ unsigned long mask;
int i;
+ DBprintk((" on [%d] by [%d]\n", ta->pid, current->pid));
+
+ t = &ta->thread;
+ ctx = ta->thread.pfm_context;
/*
* XXX needs further optimization.
* Also must take holes into account
*/
- for (i=0; i< pmu_conf.num_pmds; i++) {
- ia64_set_pmd(i, t->pmd[i]);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
}
-
- /* skip PMC[0] to avoid side effects */
- for (i=1; i< pmu_conf.num_pmcs; i++) {
- ia64_set_pmc(i, t->pmc[i]);
+
+ /* skip PMC[0], we handle it separately */
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
}
+ SET_PMU_OWNER(NULL);
+}
+
+void
+pfm_load_regs (struct task_struct *ta)
+{
+ struct thread_struct *t = &ta->thread;
+ pfm_context_t *ctx = ta->thread.pfm_context;
+ struct task_struct *owner;
+ unsigned long mask;
+ int i;
+
+ owner = PMU_OWNER();
+ if (owner == ta) goto skip_restore;
+ if (owner) pfm_lazy_save_regs(owner);
- /*
- * we first restore ownership of the PMU to the 'soon to be current'
- * context. This way, if, as soon as we unfreeze the PMU at the end
- * of this function, we get an interrupt, we attribute it to the correct
- * task
- */
SET_PMU_OWNER(ta);
-#if 0
- /*
- * check if we had pending overflow before context switching out
- * If so, we invoke the handler manually, i.e. simulate interrupt.
- *
- * XXX: given that we do not use the tasklet anymore to stop, we can
- * move this back to the pfm_save_regs() routine.
- */
- if (t->pmc[0] & ~0x1) {
- /* freeze set in pfm_save_regs() */
- DBprintk((" pmc[0]=0x%lx manual interrupt\n",t->pmc[0]));
- update_counters(ta, t->pmc[0], NULL);
+ mask = ctx->ctx_used_pmds[0];
+ for (i=0; mask; i++, mask>>=1) {
+ if (mask & 0x1) ia64_set_pmd(i, t->pmd[i]);
}
-#endif
+ /* skip PMC[0] to avoid side effects */
+ mask = ctx->ctx_used_pmcs[0]>>1;
+ for (i=1; mask; i++, mask>>=1) {
+ if (mask & 0x1) ia64_set_pmc(i, t->pmc[i]);
+ }
+skip_restore:
/*
* unfreeze only when possible
*/
if (ctx->ctx_fl_frozen == 0) {
ia64_set_pmc(0, 0);
ia64_srlz_d();
+ /* place where we potentially (kernel level) start monitoring again */
}
}
/*
* This function is called when a thread exits (from exit_thread()).
- * This is a simplified pfm_save_regs() that simply flushes hthe current
+ * This is a simplified pfm_save_regs() that simply flushes the current
* register state into the save area taking into account any pending
* overflow. This time no notification is sent because the task is dying
* anyway. The inline processing of overflows avoids losing some counts.
@@ -1933,12 +2141,20 @@
/* collect latest results */
ctx->ctx_pmds[i].val += ia64_get_pmd(j) & pmu_conf.perf_ovfl_val;
+ /*
+ * now everything is in ctx_pmds[] and we need
+ * to clear the saved context from save_regs() such that
+ * pfm_read_pmds() gets the correct value
+ */
+ ta->thread.pmd[j] = 0;
+
/* take care of overflow inline */
if (mask & 0x1) {
ctx->ctx_pmds[i].val += 1 + pmu_conf.perf_ovfl_val;
DBprintk((" PMD[%d] overflowed pmd=0x%lx pmds.val=0x%lx\n",
j, ia64_get_pmd(j), ctx->ctx_pmds[i].val));
}
+ mask >>=1;
}
}
@@ -1977,7 +2193,7 @@
/* clears all PMD registers */
for(i=0;i< pmu_conf.num_pmds; i++) {
- if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
+ if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0);
}
ia64_srlz_d();
}
@@ -1986,7 +2202,7 @@
* task is the newly created task
*/
int
-pfm_inherit(struct task_struct *task)
+pfm_inherit(struct task_struct *task, struct pt_regs *regs)
{
pfm_context_t *ctx = current->thread.pfm_context;
pfm_context_t *nctx;
@@ -1994,12 +2210,22 @@
int i, cnum;
/*
+ * bypass completely for system wide
+ */
+ if (pfs_info.pfs_sys_session) {
+ DBprintk((" enabling psr.pp for %d\n", task->pid));
+ ia64_psr(regs)->pp = pfs_info.pfs_pp;
+ return 0;
+ }
+
+ /*
* takes care of easiest case first
*/
 if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_NONE) {
DBprintk((" removing PFM context for %d\n", task->pid));
task->thread.pfm_context = NULL;
- task->thread.pfm_pend_notify = 0;
+ task->thread.pfm_must_block = 0;
+ atomic_set(&task->thread.pfm_notifiers_check, 0);
/* copy_thread() clears IA64_THREAD_PM_VALID */
return 0;
}
@@ -2009,9 +2235,11 @@
/* copy content */
*nctx = *ctx;
- if (ctx->ctx_fl_inherit == PFM_FL_INHERIT_ONCE) {
+ if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_ONCE) {
nctx->ctx_fl_inherit = PFM_FL_INHERIT_NONE;
+ atomic_set(&task->thread.pfm_notifiers_check, 0);
DBprintk((" downgrading to INHERIT_NONE for %d\n", task->pid));
+ pfs_info.pfs_proc_sessions++;
}
/* initialize counters in new context */
@@ -2033,7 +2261,7 @@
sema_init(&nctx->ctx_restart_sem, 0); /* reset this semaphore to locked */
/* clear pending notification */
- th->pfm_pend_notify = 0;
+ th->pfm_must_block = 0;
/* link with new task */
th->pfm_context = nctx;
@@ -2052,7 +2280,10 @@
return 0;
}
-/* called from exit_thread() */
+/*
+ * called from release_thread(), at this point this task is not in the
+ * tasklist anymore
+ */
void
pfm_context_exit(struct task_struct *task)
{
@@ -2068,16 +2299,126 @@
pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
/* if only user left, then remove */
- DBprintk((" pid %d: task %d sampling psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
+ DBprintk((" [%d] [%d] psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter));
if (atomic_dec_and_test(&psb->psb_refcnt) ) {
rvfree(psb->psb_hdr, psb->psb_size);
vfree(psb);
- DBprintk((" pid %d: cleaning task %d sampling buffer\n", current->pid, task->pid ));
+ DBprintk((" [%d] cleaning [%d] sampling buffer\n", current->pid, task->pid ));
+ }
+ }
+ DBprintk((" [%d] cleaning [%d] pfm_context @%p\n", current->pid, task->pid, (void *)ctx));
+
+ /*
+ * To avoid making the notified task scan the entire process list
+ * when it exits (which it would do because its pfm_notifiers_check
+ * is set), we decrease that count by 1 to tell it that one less task
+ * is going to send it a notification. Each new notifier increments
+ * this field by 1 in pfm_context_create(). Of course, there is a
+ * race condition between decreasing the value and the notified task
+ * exiting. The danger comes from the fact that we have a direct
+ * pointer to its task structure, thereby bypassing the tasklist. We
+ * must make sure that if notify_task != NULL, the target task is
+ * still somewhat present. It may already be detached from the
+ * tasklist, but that's okay. It is also okay if we 'miss the deadline'
+ * and the task scans the list for nothing; that affects performance
+ * but not correctness. Correctness is ensured by the notify_lock,
+ * which prevents notify_task from changing on us: once we hold the
+ * lock, if we see notify_task != NULL, it will stay that way until we
+ * release the lock. If it is already NULL, we came too late.
+ */
+ spin_lock(&ctx->ctx_notify_lock);
+
+ if (ctx->ctx_notify_task) {
+ DBprintk((" [%d] [%d] atomic_sub on [%d] notifiers=%u\n", current->pid, task->pid,
+ ctx->ctx_notify_task->pid,
+ atomic_read(&ctx->ctx_notify_task->thread.pfm_notifiers_check)));
+
+ atomic_sub(1, &ctx->ctx_notify_task->thread.pfm_notifiers_check);
+ }
+
+ spin_unlock(&ctx->ctx_notify_lock);
+
+ if (ctx->ctx_fl_system) {
+ /*
+ * if included interrupts (true by default), then reset
+ * to get default value
+ */
+ if (ctx->ctx_fl_exclintr == 0) {
+ /*
+ * reload kernel default DCR value
+ */
+ ia64_set_dcr(pfs_info.pfs_dfl_dcr);
+ DBprintk((" restored dcr to 0x%lx\n", pfs_info.pfs_dfl_dcr));
}
+ /*
+ * free system wide session slot
+ */
+ pfs_info.pfs_sys_session = 0;
+ } else {
+ pfs_info.pfs_proc_sessions--;
}
- DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, task->pid, (void *)ctx));
+
pfm_context_free(ctx);
+ /*
+ * clean pfm state in thread structure,
+ */
+ task->thread.pfm_context = NULL;
+ task->thread.pfm_must_block = 0;
+ /* pfm_notifiers is cleaned in pfm_cleanup_notifiers() */
+
+}
+
+void
+pfm_cleanup_notifiers(struct task_struct *task)
+{
+ struct task_struct *p;
+ pfm_context_t *ctx;
+
+ DBprintk((" [%d] called\n", task->pid));
+
+ read_lock(&tasklist_lock);
+
+ for_each_task(p) {
+ /*
+ * It is safe to do the 2-step test here, because thread.ctx
+ * is cleaned up only in release_thread(), and by that point
+ * the task has been detached from the tasklist. Detaching takes
+ * the write_lock() on tasklist_lock, so it cannot run concurrently
+ * with this loop. We therefore have the guarantee that if we find
+ * p and it has a perfmon ctx, the ctx will stay valid for the
+ * entire execution of this loop.
+ */
+ ctx = p->thread.pfm_context;
+
+ DBprintk((" [%d] scanning task [%d] ctx=%p\n", task->pid, p->pid, ctx));
+
+ if (ctx && ctx->ctx_notify_task == task) {
+ DBprintk((" trying for notifier %d in %d\n", task->pid, p->pid));
+ /*
+ * the spinlock is required to take care of a race condition
+ * with the send_sig_info() call. We must make sure that
+ * either the send_sig_info() completes using a valid task,
+ * or the notify_task is cleared before the send_sig_info()
+ * can pick up a stale value. Note that by the time this
+ * function executes, 'task' is already detached from the
+ * tasklist; the problem is that the notifiers still hold a direct
+ * pointer to it. Sending a signal to a task at this stage is okay
+ * (it simply has no effect), but it is certainly better than sending
+ * one to a completely destroyed task, or worse, to a new task reusing
+ * the same task_struct address.
+ */
+ spin_lock(&ctx->ctx_notify_lock);
+
+ ctx->ctx_notify_task = NULL;
+
+ spin_unlock(&ctx->ctx_notify_lock);
+
+ DBprintk((" done for notifier %d in %d\n", task->pid, p->pid));
+ }
+ }
+ read_unlock(&tasklist_lock);
+
}
#else /* !CONFIG_PERFMON */
diff -urN linux-2.4.13/arch/ia64/kernel/process.c linux-2.4.13-lia/arch/ia64/kernel/process.c
--- linux-2.4.13/arch/ia64/kernel/process.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/process.c Wed Oct 24 18:14:43 2001
@@ -63,7 +63,8 @@
{
unsigned long ip = regs->cr_iip + ia64_psr(regs)->ri;
- printk("\npsr : %016lx ifs : %016lx ip : [<%016lx>] %s\n",
+ printk("\nPid: %d, comm: %20s\n", current->pid, current->comm);
+ printk("psr : %016lx ifs : %016lx ip : [<%016lx>] %s\n",
regs->cr_ipsr, regs->cr_ifs, ip, print_tainted());
printk("unat: %016lx pfs : %016lx rsc : %016lx\n",
regs->ar_unat, regs->ar_pfs, regs->ar_rsc);
@@ -201,7 +202,7 @@
{
unsigned long rbs, child_rbs, rbs_size, stack_offset, stack_top, stack_used;
struct switch_stack *child_stack, *stack;
- extern char ia64_ret_from_clone;
+ extern char ia64_ret_from_clone, ia32_ret_from_clone;
struct pt_regs *child_ptregs;
int retval = 0;
@@ -250,7 +251,10 @@
child_ptregs->r12 = (unsigned long) (child_ptregs + 1); /* kernel sp */
child_ptregs->r13 = (unsigned long) p; /* set `current' pointer */
}
- child_stack->b0 = (unsigned long) &ia64_ret_from_clone;
+ if (IS_IA32_PROCESS(regs))
+ child_stack->b0 = (unsigned long) &ia32_ret_from_clone;
+ else
+ child_stack->b0 = (unsigned long) &ia64_ret_from_clone;
child_stack->ar_bspstore = child_rbs + rbs_size;
/* copy parts of thread_struct: */
@@ -285,9 +289,8 @@
ia32_save_state(p);
#endif
#ifdef CONFIG_PERFMON
- p->thread.pfm_pend_notify = 0;
if (p->thread.pfm_context)
- retval = pfm_inherit(p);
+ retval = pfm_inherit(p, child_ptregs);
#endif
return retval;
}
@@ -441,11 +444,24 @@
}
#ifdef CONFIG_PERFMON
+/*
+ * By the time we get here, the task is detached from the tasklist. This is important
+ * because it means that no other task can ever find it as a notified task, therefore
+ * there is no race condition between this code and, say, a pfm_context_create().
+ * Conversely, pfm_cleanup_notifiers() cannot try to access a task's pfm context if
+ * that other task is in the middle of its own pfm_context_exit(), because it would
+ * already be out of the task list. Note that this case is very unlikely between a
+ * direct child and its parent (if the parent is the notified process) because of the
+ * way the exit is notified via SIGCHLD.
+ */
void
release_thread (struct task_struct *task)
{
if (task->thread.pfm_context)
pfm_context_exit(task);
+
+ if (atomic_read(&task->thread.pfm_notifiers_check) > 0)
+ pfm_cleanup_notifiers(task);
}
#endif
@@ -516,6 +532,29 @@
}
void
+cpu_halt (void)
+{
+ pal_power_mgmt_info_u_t power_info[8];
+ unsigned long min_power;
+ int i, min_power_state;
+
+ if (ia64_pal_halt_info(power_info) != 0)
+ return;
+
+ min_power_state = 0;
+ min_power = power_info[0].pal_power_mgmt_info_s.power_consumption;
+ for (i = 1; i < 8; ++i)
+ if (power_info[i].pal_power_mgmt_info_s.im
+ && power_info[i].pal_power_mgmt_info_s.power_consumption < min_power) {
+ min_power = power_info[i].pal_power_mgmt_info_s.power_consumption;
+ min_power_state = i;
+ }
+
+ while (1)
+ ia64_pal_halt(min_power_state);
+}
+
+void
machine_restart (char *restart_cmd)
{
(*efi.reset_system)(EFI_RESET_WARM, 0, 0, 0);
@@ -524,6 +563,7 @@
void
machine_halt (void)
{
+ cpu_halt();
}
void
@@ -531,4 +571,5 @@
{
if (pm_power_off)
pm_power_off();
+ machine_halt();
}
diff -urN linux-2.4.13/arch/ia64/kernel/ptrace.c linux-2.4.13-lia/arch/ia64/kernel/ptrace.c
--- linux-2.4.13/arch/ia64/kernel/ptrace.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/ptrace.c Wed Oct 10 17:43:07 2001
@@ -2,7 +2,7 @@
* Kernel support for the ptrace() and syscall tracing interfaces.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from the x86 and Alpha versions. Most of the code in here
* could actually be factored into a common set of routines.
@@ -794,11 +794,14 @@
*
* Make sure the single step bit is not set.
*/
-void ptrace_disable(struct task_struct *child)
+void
+ptrace_disable (struct task_struct *child)
{
+ struct ia64_psr *child_psr = ia64_psr(ia64_task_regs(child));
+
/* make sure the single step/take-branch tra bits are not set: */
- ia64_psr(pt)->ss = 0;
- ia64_psr(pt)->tb = 0;
+ child_psr->ss = 0;
+ child_psr->tb = 0;
/* Turn off flag indicating that the KRBS is sync'd with child's VM: */
child->thread.flags &= ~IA64_THREAD_KRBS_SYNCED;
@@ -809,7 +812,7 @@
long arg4, long arg5, long arg6, long arg7, long stack)
{
struct pt_regs *pt, *regs = (struct pt_regs *) &stack;
- unsigned long flags, urbs_end;
+ unsigned long urbs_end;
struct task_struct *child;
struct switch_stack *sw;
long ret;
@@ -855,6 +858,19 @@
if (child->p_pptr != current)
goto out_tsk;
+ if (request != PTRACE_KILL) {
+ if (child->state != TASK_STOPPED)
+ goto out_tsk;
+
+#ifdef CONFIG_SMP
+ while (child->has_cpu) {
+ if (child->state != TASK_STOPPED)
+ goto out_tsk;
+ barrier();
+ }
+#endif
+ }
+
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
@@ -925,7 +941,7 @@
child->ptrace &= ~PT_TRACESYS;
child->exit_code = data;
- /* make sure the single step/take-branch tra bits are not set: */
+ /* make sure the single step/taken-branch trap bits are not set: */
ia64_psr(pt)->ss = 0;
ia64_psr(pt)->tb = 0;
diff -urN linux-2.4.13/arch/ia64/kernel/sal.c linux-2.4.13-lia/arch/ia64/kernel/sal.c
--- linux-2.4.13/arch/ia64/kernel/sal.c Thu Jan 4 12:50:17 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/sal.c Thu Oct 4 00:21:39 2001
@@ -1,8 +1,8 @@
/*
* System Abstraction Layer (SAL) interface routines.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
*/
@@ -18,8 +18,6 @@
#include <asm/sal.h>
#include <asm/pal.h>
-#define SAL_DEBUG
-
spinlock_t sal_lock = SPIN_LOCK_UNLOCKED;
static struct {
@@ -122,10 +120,8 @@
switch (*p) {
case SAL_DESC_ENTRY_POINT:
ep = (struct ia64_sal_desc_entry_point *) p;
-#ifdef SAL_DEBUG
- printk("sal[%d] - entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
- i, ep->pal_proc, ep->sal_proc);
-#endif
+ printk("SAL: entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
+ ep->pal_proc, ep->sal_proc);
ia64_pal_handler_init(__va(ep->pal_proc));
ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp));
break;
@@ -138,17 +134,12 @@
#ifdef CONFIG_SMP
{
struct ia64_sal_desc_ap_wakeup *ap = (void *) p;
-# ifdef SAL_DEBUG
- printk("sal[%d] - wakeup type %x, 0x%lx\n",
- i, ap->mechanism, ap->vector);
-# endif
+
switch (ap->mechanism) {
case IA64_SAL_AP_EXTERNAL_INT:
ap_wakeup_vector = ap->vector;
-# ifdef SAL_DEBUG
printk("SAL: AP wakeup using external interrupt "
"vector 0x%lx\n", ap_wakeup_vector);
-# endif
break;
default:
@@ -163,21 +154,13 @@
struct ia64_sal_desc_platform_feature *pf = (void *) p;
printk("SAL: Platform features ");
-#ifdef CONFIG_IA64_HAVE_IRQREDIR
- /*
- * Early versions of SAL say we don't have
- * IRQ redirection, even though we do...
- */
- pf->feature_mask |= (1 << 1);
-#endif
-
if (pf->feature_mask & (1 << 0))
printk("BusLock ");
if (pf->feature_mask & (1 << 1)) {
printk("IRQ_Redirection ");
#ifdef CONFIG_SMP
- if (no_int_routing)
+ if (no_int_routing)
smp_int_redirect &= ~SMP_IRQ_REDIRECTION;
else
smp_int_redirect |= SMP_IRQ_REDIRECTION;
diff -urN linux-2.4.13/arch/ia64/kernel/setup.c linux-2.4.13-lia/arch/ia64/kernel/setup.c
--- linux-2.4.13/arch/ia64/kernel/setup.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/setup.c Thu Oct 4 00:21:39 2001
@@ -534,10 +534,13 @@
/*
* Initialize default control register to defer all speculative faults. The
* kernel MUST NOT depend on a particular setting of these bits (in other words,
- * the kernel must have recovery code for all speculative accesses).
+ * the kernel must have recovery code for all speculative accesses). Turn on
+ * dcr.lc as per recommendation by the architecture team. Most IA-32 apps
+ * shouldn't be affected by this (moral: keep your ia32 locks aligned and you'll
+ * be fine).
*/
ia64_set_dcr( IA64_DCR_DM | IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_DR
- | IA64_DCR_DA | IA64_DCR_DD);
+ | IA64_DCR_DA | IA64_DCR_DD | IA64_DCR_LC);
#ifndef CONFIG_SMP
ia64_set_fpu_owner(0);
#endif
diff -urN linux-2.4.13/arch/ia64/kernel/sigframe.h linux-2.4.13-lia/arch/ia64/kernel/sigframe.h
--- linux-2.4.13/arch/ia64/kernel/sigframe.h Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/sigframe.h Thu Oct 4 00:21:52 2001
@@ -1,3 +1,9 @@
+struct sigscratch {
+ unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
+ unsigned long pad;
+ struct pt_regs pt;
+};
+
struct sigframe {
/*
* Place signal handler args where user-level unwinder can find them easily.
@@ -7,10 +13,11 @@
unsigned long arg0; /* signum */
unsigned long arg1; /* siginfo pointer */
unsigned long arg2; /* sigcontext pointer */
+ /*
+ * End of architected state.
+ */
- unsigned long rbs_base; /* base of new register backing store (or NULL) */
void *handler; /* pointer to the plabel of the signal handler */
-
struct siginfo info;
struct sigcontext sc;
};
diff -urN linux-2.4.13/arch/ia64/kernel/signal.c linux-2.4.13-lia/arch/ia64/kernel/signal.c
--- linux-2.4.13/arch/ia64/kernel/signal.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/signal.c Thu Oct 4 00:21:52 2001
@@ -2,7 +2,7 @@
* Architecture-specific signal handling support.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
- * Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from i386 and Alpha versions.
*/
@@ -39,12 +39,6 @@
# define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0])
#endif
-struct sigscratch {
- unsigned long scratch_unat; /* ar.unat for the general registers saved in pt */
- unsigned long pad;
- struct pt_regs pt;
-};
-
extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* forward decl */
long
@@ -55,6 +49,10 @@
/* XXX: Don't preclude handling different sized sigset_t's. */
if (sigsetsize != sizeof(sigset_t))
return -EINVAL;
+
+ if (!access_ok(VERIFY_READ, uset, sigsetsize))
+ return -EFAULT;
+
if (GET_SIGSET(&set, uset))
return -EFAULT;
@@ -73,15 +71,9 @@
* pre-set the correct error code here to ensure that the right values
* get saved in sigcontext by ia64_do_signal.
*/
-#ifdef CONFIG_IA32_SUPPORT
- if (IS_IA32_PROCESS(&scr->pt)) {
- scr->pt.r8 = -EINTR;
- } else
-#endif
- {
- scr->pt.r8 = EINTR;
- scr->pt.r10 = -1;
- }
+ scr->pt.r8 = EINTR;
+ scr->pt.r10 = -1;
+
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
@@ -139,10 +131,9 @@
struct ia64_psr *psr = ia64_psr(&scr->pt);
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
- if (!psr->dfh) {
- psr->mfh = 0;
+ psr->mfh = 0; /* drop signal handler's fph contents... */
+ if (!psr->dfh)
__ia64_load_fpu(current->thread.fph);
- }
}
return err;
}
@@ -380,7 +371,8 @@
err = __put_user(sig, &frame->arg0);
err |= __put_user(&frame->info, &frame->arg1);
err |= __put_user(&frame->sc, &frame->arg2);
- err |= __put_user(new_rbs, &frame->rbs_base);
+ err |= __put_user(new_rbs, &frame->sc.sc_rbs_base);
+ err |= __put_user(0, &frame->sc.sc_loadrs); /* initialize to zero */
err |= __put_user(ka->sa.sa_handler, &frame->handler);
err |= copy_siginfo_to_user(&frame->info, info);
@@ -460,6 +452,7 @@
long
ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
{
+ struct signal_struct *sig;
struct k_sigaction *ka;
siginfo_t info;
long restart = in_syscall;
@@ -571,8 +564,8 @@
case SIGSTOP:
current->state = TASK_STOPPED;
current->exit_code = signr;
- if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags
- & SA_NOCLDSTOP))
+ sig = current->p_pptr->sig;
+ if (sig && !(sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP))
notify_parent(current, SIGCHLD);
schedule();
continue;
diff -urN linux-2.4.13/arch/ia64/kernel/smp.c linux-2.4.13-lia/arch/ia64/kernel/smp.c
--- linux-2.4.13/arch/ia64/kernel/smp.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/smp.c Wed Oct 10 18:50:56 2001
@@ -48,6 +48,7 @@
#include <asm/sal.h>
#include <asm/system.h>
#include <asm/unistd.h>
+#include <asm/mca.h>
/* The 'big kernel lock' */
spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
@@ -70,20 +71,18 @@
#define IPI_CALL_FUNC 0
#define IPI_CPU_STOP 1
-#ifndef CONFIG_ITANIUM_PTCG
-# define IPI_FLUSH_TLB 2
-#endif /*!CONFIG_ITANIUM_PTCG */
static void
stop_this_cpu (void)
{
+ extern void cpu_halt (void);
/*
* Remove this CPU:
*/
clear_bit(smp_processor_id(), &cpu_online_map);
max_xtp();
__cli();
- for (;;);
+ cpu_halt();
}
void
@@ -136,49 +135,6 @@
stop_this_cpu();
break;
-#ifndef CONFIG_ITANIUM_PTCG
- case IPI_FLUSH_TLB:
- {
- extern unsigned long flush_start, flush_end, flush_nbits, flush_rid;
- extern atomic_t flush_cpu_count;
- unsigned long saved_rid = ia64_get_rr(flush_start);
- unsigned long end = flush_end;
- unsigned long start = flush_start;
- unsigned long nbits = flush_nbits;
-
- /*
- * Current CPU may be running with different RID so we need to
- * reload the RID of flushed address. Purging the translation
- * also needs ALAT invalidation; we do not need "invala" here
- * since it is done in ia64_leave_kernel.
- */
- ia64_srlz_d();
- if (saved_rid != flush_rid) {
- ia64_set_rr(flush_start, flush_rid);
- ia64_srlz_d();
- }
-
- do {
- /*
- * Purge local TLB entries.
- */
- __asm__ __volatile__ ("ptc.l %0,%1" ::
- "r"(start), "r"(nbits<<2) : "memory");
- start += (1UL << nbits);
- } while (start < end);
-
- ia64_insn_group_barrier();
- ia64_srlz_i(); /* srlz.i implies srlz.d */
-
- if (saved_rid != flush_rid) {
- ia64_set_rr(flush_start, saved_rid);
- ia64_srlz_d();
- }
- atomic_dec(&flush_cpu_count);
- break;
- }
-#endif /* !CONFIG_ITANIUM_PTCG */
-
default:
printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which);
break;
@@ -228,30 +184,6 @@
platform_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0);
}
-#ifndef CONFIG_ITANIUM_PTCG
-
-void
-smp_send_flush_tlb (void)
-{
- send_IPI_allbutself(IPI_FLUSH_TLB);
-}
-
-void
-smp_resend_flush_tlb (void)
-{
- int i;
-
- /*
- * Really need a null IPI but since this rarely should happen & since this code
- * will go away, lets not add one.
- */
- for (i = 0; i < smp_num_cpus; ++i)
- if (i != smp_processor_id())
- smp_send_reschedule(i);
-}
-
-#endif /* !CONFIG_ITANIUM_PTCG */
-
void
smp_flush_tlb_all (void)
{
@@ -277,10 +209,6 @@
{
struct call_data_struct data;
int cpus = 1;
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- unsigned long timeout;
-#endif
 if (cpuid == smp_processor_id()) {
printk(__FUNCTION__" trying to call self\n");
@@ -295,26 +223,15 @@
atomic_set(&data.finished, 0);
spin_lock_bh(&call_lock);
- call_data = &data;
-
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- resend:
- send_IPI_single(cpuid, IPI_CALL_FUNC);
- /* Wait for response */
- timeout = jiffies + HZ;
- while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
- barrier();
- if (atomic_read(&data.started) != cpus)
- goto resend;
-#else
+ call_data = &data;
+ mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */
send_IPI_single(cpuid, IPI_CALL_FUNC);
/* Wait for response */
while (atomic_read(&data.started) != cpus)
barrier();
-#endif
+
if (wait)
while (atomic_read(&data.finished) != cpus)
barrier();
@@ -348,10 +265,6 @@
{
struct call_data_struct data;
int cpus = smp_num_cpus-1;
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- unsigned long timeout;
-#endif
if (!cpus)
return 0;
@@ -364,27 +277,14 @@
atomic_set(&data.finished, 0);
spin_lock_bh(&call_lock);
- call_data = &data;
-
-#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \
- || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_SPECIFIC))
- resend:
- /* Send a message to all other CPUs and wait for them to respond */
- send_IPI_allbutself(IPI_CALL_FUNC);
- /* Wait for response */
- timeout = jiffies + HZ;
- while ((atomic_read(&data.started) != cpus) && time_before(jiffies, timeout))
- barrier();
- if (atomic_read(&data.started) != cpus)
- goto resend;
-#else
+ call_data = &data;
+ mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */
send_IPI_allbutself(IPI_CALL_FUNC);
/* Wait for response */
while (atomic_read(&data.started) != cpus)
barrier();
-#endif
if (wait)
while (atomic_read(&data.finished) != cpus)
diff -urN linux-2.4.13/arch/ia64/kernel/smpboot.c linux-2.4.13-lia/arch/ia64/kernel/smpboot.c
--- linux-2.4.13/arch/ia64/kernel/smpboot.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/smpboot.c Thu Oct 4 00:21:39 2001
@@ -33,6 +33,7 @@
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/machvec.h>
+#include <asm/mca.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
@@ -42,6 +43,8 @@
#include <asm/system.h>
#include <asm/unistd.h>
+#define SMP_DEBUG 0
+
#if SMP_DEBUG
#define Dprintk(x...) printk(x)
#else
@@ -310,7 +313,7 @@
}
-void __init
+static void __init
smp_callin (void)
{
int cpuid, phys_id;
@@ -324,8 +327,7 @@
phys_id = hard_smp_processor_id();
if (test_and_set_bit(cpuid, &cpu_online_map)) {
- printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n",
- phys_id, cpuid);
+ printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n", phys_id, cpuid);
BUG();
}
@@ -341,6 +343,12 @@
* Get our bogomips.
*/
ia64_init_itm();
+
+#ifdef CONFIG_IA64_MCA
+ ia64_mca_cmc_vector_setup(); /* Setup vector on AP & enable */
+ ia64_mca_check_errors(); /* For post-failure MCA error logging */
+#endif
+
#ifdef CONFIG_PERFMON
perfmon_init_percpu();
#endif
@@ -364,14 +372,15 @@
{
extern int cpu_idle (void);
+ Dprintk("start_secondary: starting CPU 0x%x\n", hard_smp_processor_id());
efi_map_pal_code();
cpu_init();
smp_callin();
- Dprintk("CPU %d is set to go. \n", smp_processor_id());
+ Dprintk("CPU %d is set to go.\n", smp_processor_id());
while (!atomic_read(&smp_commenced))
;
- Dprintk("CPU %d is starting idle. \n", smp_processor_id());
+ Dprintk("CPU %d is starting idle.\n", smp_processor_id());
return cpu_idle();
}
@@ -415,7 +424,7 @@
unhash_process(idle);
init_tasks[cpu] = idle;
- Dprintk("Sending Wakeup Vector to AP 0x%x/0x%x.\n", cpu, sapicid);
+ Dprintk("Sending wakeup vector %u to AP 0x%x/0x%x.\n", ap_wakeup_vector, cpu, sapicid);
platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
@@ -424,7 +433,6 @@
*/
Dprintk("Waiting on callin_map ...");
for (timeout = 0; timeout < 100000; timeout++) {
- Dprintk(".");
if (test_bit(cpu, &cpu_callin_map))
break; /* It has booted */
udelay(100);
diff -urN linux-2.4.13/arch/ia64/kernel/sys_ia64.c linux-2.4.13-lia/arch/ia64/kernel/sys_ia64.c
--- linux-2.4.13/arch/ia64/kernel/sys_ia64.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/sys_ia64.c Thu Oct 4 00:21:39 2001
@@ -19,24 +19,29 @@
#include <asm/shmparam.h>
#include <asm/uaccess.h>
-#define COLOR_ALIGN(addr) (((addr) + SHMLBA - 1) & ~(SHMLBA - 1))
-
unsigned long
arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
{
- struct vm_area_struct * vmm;
long map_shared = (flags & MAP_SHARED);
+ unsigned long align_mask = PAGE_SIZE - 1;
+ struct vm_area_struct * vmm;
if (len > RGN_MAP_LIMIT)
return -ENOMEM;
if (!addr)
addr = TASK_UNMAPPED_BASE;
- if (map_shared)
- addr = COLOR_ALIGN(addr);
- else
- addr = PAGE_ALIGN(addr);
+ if (map_shared && (TASK_SIZE > 0xfffffffful))
+ /*
+ * For 64-bit tasks, align shared segments to 1MB to avoid potential
+ * performance penalty due to virtual aliasing (see ASDM). For 32-bit
+ * tasks, we prefer to avoid exhausting the address space too quickly by
+ * limiting alignment to a single page.
+ */
+ align_mask = SHMLBA - 1;
+
+ addr = (addr + align_mask) & ~align_mask;
for (vmm = find_vma(current->mm, addr); ; vmm = vmm->vm_next) {
/* At this point: (!vmm || addr < vmm->vm_end). */
@@ -46,9 +51,7 @@
return -ENOMEM;
if (!vmm || addr + len <= vmm->vm_start)
return addr;
- addr = vmm->vm_end;
- if (map_shared)
- addr = COLOR_ALIGN(addr);
+ addr = (vmm->vm_end + align_mask) & ~align_mask;
}
}
@@ -184,8 +187,10 @@
if (!file)
return -EBADF;
- if (!file->f_op || !file->f_op->mmap)
- return -ENODEV;
+ if (!file->f_op || !file->f_op->mmap) {
+ addr = -ENODEV;
+ goto out;
+ }
}
/*
@@ -194,22 +199,26 @@
*/
len = PAGE_ALIGN(len);
 if (len == 0)
- return addr;
+ goto out;
/* don't permit mappings into unmapped space or the virtual page table of a region: */
roff = rgn_offset(addr);
- if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT)
- return -EINVAL;
+ if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT) {
+ addr = -EINVAL;
+ goto out;
+ }
/* don't permit mappings that would cross a region boundary: */
- if (rgn_index(addr) != rgn_index(addr + len))
- return -EINVAL;
+ if (rgn_index(addr) != rgn_index(addr + len)) {
+ addr = -EINVAL;
+ goto out;
+ }
 down_write(&current->mm->mmap_sem);
 addr = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
 up_write(&current->mm->mmap_sem);
- if (file)
+out: if (file)
fput(file);
return addr;
}
diff -urN linux-2.4.13/arch/ia64/kernel/time.c linux-2.4.13-lia/arch/ia64/kernel/time.c
--- linux-2.4.13/arch/ia64/kernel/time.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/time.c Thu Oct 4 00:21:39 2001
@@ -145,6 +145,9 @@
tv->tv_usec = usec;
}
+/* XXX there should be a cleaner way for declaring an alias... */
+asm (".global get_fast_time; get_fast_time = do_gettimeofday");
+
static void
timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
diff -urN linux-2.4.13/arch/ia64/kernel/traps.c linux-2.4.13-lia/arch/ia64/kernel/traps.c
--- linux-2.4.13/arch/ia64/kernel/traps.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/traps.c Wed Oct 24 18:15:16 2001
@@ -1,20 +1,19 @@
/*
* Architecture-specific trap handling.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
*/
/*
- * The fpu_fault() handler needs to be able to access and update all
- * floating point registers. Those saved in pt_regs can be accessed
- * through that structure, but those not saved, will be accessed
- * directly. To make this work, we need to ensure that the compiler
- * does not end up using a preserved floating point register on its
- * own. The following achieves this by declaring preserved registers
- * that are not marked as "fixed" as global register variables.
+ * fp_emulate() needs to be able to access and update all floating point registers. Those
+ * saved in pt_regs can be accessed through that structure, but those not saved, will be
+ * accessed directly. To make this work, we need to ensure that the compiler does not end
+ * up using a preserved floating point register on its own. The following achieves this
+ * by declaring preserved registers that are not marked as "fixed" as global register
+ * variables.
*/
register double f2 asm ("f2"); register double f3 asm ("f3");
register double f4 asm ("f4"); register double f5 asm ("f5");
@@ -33,13 +32,17 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
+#include <linux/vt_kern.h> /* For unblank_screen() */
+#include <asm/hardirq.h>
#include <asm/ia32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/fpswa.h>
+extern spinlock_t timerlist_lock;
+
static fpswa_interface_t *fpswa_interface;
void __init
@@ -51,30 +54,74 @@
fpswa_interface = __va(ia64_boot_param->fpswa);
}
+/*
+ * Unlock any spinlocks which will prevent us from getting the message out (timerlist_lock
+ * is acquired through the console unblank code)
+ */
void
-die_if_kernel (char *str, struct pt_regs *regs, long err)
+bust_spinlocks (int yes)
{
- if (user_mode(regs)) {
-#if 0
- /* XXX for debugging only */
- printk ("!!die_if_kernel: %s(%d): %s %ld\n",
- current->comm, current->pid, str, err);
- show_regs(regs);
+ spin_lock_init(&timerlist_lock);
+ if (yes) {
+ oops_in_progress = 1;
+#ifdef CONFIG_SMP
+ global_irq_lock = 0; /* Many serial drivers do __global_cli() */
#endif
- return;
+ } else {
+ int loglevel_save = console_loglevel;
+#ifdef CONFIG_VT
+ unblank_screen();
+#endif
+ oops_in_progress = 0;
+ /*
+ * OK, the message is on the console. Now we call printk() without
+ * oops_in_progress set so that printk will give klogd a poke. Hold onto
+ * your hats...
+ */
+ console_loglevel = 15; /* NMI oopser may have shut the console up */
+ printk(" ");
+ console_loglevel = loglevel_save;
}
+}
- printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
-
- show_regs(regs);
+void
+die (const char *str, struct pt_regs *regs, long err)
+{
+ static struct {
+ spinlock_t lock;
+ int lock_owner;
+ int lock_owner_depth;
+ } die = {
+ lock: SPIN_LOCK_UNLOCKED,
+ lock_owner: -1,
+ lock_owner_depth: 0
+ };
- if (current->thread.flags & IA64_KERNEL_DEATH) {
- printk("die_if_kernel recursion detected.\n");
- sti();
- while (1);
+ if (die.lock_owner != smp_processor_id()) {
+ console_verbose();
+ spin_lock_irq(&die.lock);
+ die.lock_owner = smp_processor_id();
+ die.lock_owner_depth = 0;
+ bust_spinlocks(1);
}
- current->thread.flags |= IA64_KERNEL_DEATH;
- do_exit(SIGSEGV);
+
+ if (++die.lock_owner_depth < 3) {
+ printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
+ show_regs(regs);
+ } else
+ printk(KERN_ERR "Recursive die() failure, output suppressed\n");
+
+ bust_spinlocks(0);
+ die.lock_owner = -1;
+ spin_unlock_irq(&die.lock);
+ do_exit(SIGSEGV);
+}
+
+void
+die_if_kernel (char *str, struct pt_regs *regs, long err)
+{
+ if (!user_mode(regs))
+ die(str, regs, err);
}
void
@@ -169,14 +216,12 @@
}
/*
- * disabled_fph_fault() is called when a user-level process attempts
- * to access one of the registers f32..f127 when it doesn't own the
- * fp-high register partition. When this happens, we save the current
- * fph partition in the task_struct of the fpu-owner (if necessary)
- * and then load the fp-high partition of the current task (if
- * necessary). Note that the kernel has access to fph by the time we
- * get here, as the IVT's "Diabled FP-Register" handler takes care of
- * clearing psr.dfh.
+ * disabled_fph_fault() is called when a user-level process attempts to access f32..f127
+ * and it doesn't own the fp-high register partition. When this happens, we save the
+ * current fph partition in the task_struct of the fpu-owner (if necessary) and then load
+ * the fp-high partition of the current task (if necessary). Note that the kernel has
+ * access to fph by the time we get here, as the IVT's "Disabled FP-Register" handler takes
+ * care of clearing psr.dfh.
*/
static inline void
disabled_fph_fault (struct pt_regs *regs)
@@ -277,7 +322,7 @@
if (jiffies - last_time > 5*HZ)
fpu_swa_count = 0;
- if (++fpu_swa_count < 5) {
+ if ((++fpu_swa_count < 5) && !(current->thread.flags & IA64_THREAD_FPEMU_NOPRINT)) {
last_time = jiffies;
printk(KERN_WARNING "%s(%d): floating-point assist fault at ip %016lx\n",
current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri);
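The throttling logic above allows a short burst of assist-fault warnings and then goes quiet until five seconds have passed. As a sanity check, here is a stand-alone C model of just the counting part (the new IA64_THREAD_FPEMU_NOPRINT test is omitted, and jiffies/HZ are passed in as plain values; `should_print` is an illustrative name, not a kernel function):

```c
/* Model of the fpu_swa_count throttle: at most four warnings per
 * five-second window; the count resets once the window has elapsed.
 */
static long fpu_swa_count, last_time;

int should_print(long jiffies, long hz)
{
	if (jiffies - last_time > 5 * hz)
		fpu_swa_count = 0;		/* window elapsed: start over */
	if (++fpu_swa_count < 5) {
		last_time = jiffies;
		return 1;			/* under the limit: print */
	}
	return 0;				/* throttled */
}
```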
@@ -478,12 +523,12 @@
case 32: /* fp fault */
case 33: /* fp trap */
result = handle_fpu_swa((vector == 32) ? 1 : 0, regs, isr);
- if (result < 0) {
+ if ((result < 0) || (current->thread.flags & IA64_THREAD_FPEMU_SIGFPE)) {
siginfo.si_signo = SIGFPE;
siginfo.si_errno = 0;
siginfo.si_code = FPE_FLTINV;
siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
- force_sig(SIGFPE, current);
+ force_sig_info(SIGFPE, &siginfo, current);
}
return;
@@ -510,6 +555,10 @@
break;
case 46:
+#ifdef CONFIG_IA32_SUPPORT
+ if (ia32_intercept(regs, isr) == 0)
+ return;
+#endif
printk("Unexpected IA-32 intercept trap (Trap 46)\n");
printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n",
regs->cr_iip, ifa, isr, iim);
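The new die() above serializes oops output with a lock whose owner CPU and nesting depth are tracked, so a fault taken while printing an oops neither deadlocks nor recurses without bound. A simplified, uniprocessor C model of the entry logic (the spinlock, interrupts, and smp_processor_id() are stripped out; `die_enter` is a made-up name for illustration):

```c
/* Model of die()'s recursion guard: the first entry on a CPU claims the
 * lock and resets the depth; re-entry on the same CPU only bumps the
 * depth, and output is suppressed from the third nested level on.
 */
static struct {
	int lock_owner;		/* CPU holding the lock, -1 if free */
	int lock_owner_depth;	/* nesting level on that CPU */
} die_lock = { -1, 0 };

int die_enter(int cpu)
{
	if (die_lock.lock_owner != cpu) {
		/* first entry: the real code takes die.lock here */
		die_lock.lock_owner = cpu;
		die_lock.lock_owner_depth = 0;
	}
	return ++die_lock.lock_owner_depth < 3;	/* 1 = OK to print */
}
```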
diff -urN linux-2.4.13/arch/ia64/kernel/unaligned.c linux-2.4.13-lia/arch/ia64/kernel/unaligned.c
--- linux-2.4.13/arch/ia64/kernel/unaligned.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/unaligned.c Wed Oct 24 18:15:29 2001
@@ -5,6 +5,8 @@
* Copyright (C) 1999-2000 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 2001 David Mosberger-Tang <davidm@hpl.hp.com>
*
+ * 2001/10/11 Fix unaligned access to rotating registers in s/w pipelined loops.
+ * 2001/08/13 Correct size of extended floats (float_fsz) from 16 to 10 bytes.
* 2001/01/17 Add support emulation of unaligned kernel accesses.
*/
#include <linux/kernel.h>
@@ -282,9 +284,19 @@
unsigned long rnats, nat_mask;
unsigned long on_kbs;
long sof = (regs->cr_ifs) & 0x7f;
+ long sor = 8 * ((regs->cr_ifs >> 14) & 0xf);
+ long rrb_gr = (regs->cr_ifs >> 18) & 0x7f;
+ long ridx;
+
+ if ((r1 - 32) > sor)
+ ridx = -sof + (r1 - 32);
+ else if ((r1 - 32) < (sor - rrb_gr))
+ ridx = -sof + (r1 - 32) + rrb_gr;
+ else
+ ridx = -sof + (r1 - 32) - (sor - rrb_gr);
- DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
- r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld ridx=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f, ridx);
if ((r1 - 32) >= sof) {
/* this should never happen, as the "rsvd register fault" has higher priority */
@@ -293,7 +305,7 @@
}
on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
- addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx);
if (addr >= kbs) {
/* the register is on the kernel backing store: easy... */
rnat_addr = ia64_rse_rnat_addr(addr);
@@ -318,12 +330,12 @@
return;
}
- bspstore = (unsigned long *) regs->ar_bspstore;
+ bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
bsp = ia64_rse_skip_regs(ubs_end, -sof);
- addr = ia64_rse_skip_regs(bsp, r1 - 32);
+ addr = ia64_rse_skip_regs(bsp, ridx + sof);
- DPRINT("ubs_end=%p bsp=%p addr=%px\n", (void *) ubs_end, (void *) bsp, (void *) addr);
+ DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
ia64_poke(current, sw, (unsigned long) ubs_end, (unsigned long) addr, val);
@@ -353,9 +365,19 @@
unsigned long rnats, nat_mask;
unsigned long on_kbs;
long sof = (regs->cr_ifs) & 0x7f;
+ long sor = 8 * ((regs->cr_ifs >> 14) & 0xf);
+ long rrb_gr = (regs->cr_ifs >> 18) & 0x7f;
+ long ridx;
+
+ if ((r1 - 32) > sor)
+ ridx = -sof + (r1 - 32);
+ else if ((r1 - 32) < (sor - rrb_gr))
+ ridx = -sof + (r1 - 32) + rrb_gr;
+ else
+ ridx = -sof + (r1 - 32) - (sor - rrb_gr);
- DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld\n",
- r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f);
+ DPRINT("r%lu, sw.bspstore=%lx pt.bspstore=%lx sof=%ld sol=%ld ridx=%ld\n",
+ r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) & 0x7f, ridx);
if ((r1 - 32) >= sof) {
/* this should never happen, as the "rsvd register fault" has higher priority */
@@ -364,7 +386,7 @@
}
on_kbs = ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore);
- addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1 - 32));
+ addr = ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx);
if (addr >= kbs) {
/* the register is on the kernel backing store: easy... */
*val = *addr;
@@ -390,7 +412,7 @@
bspstore = (unsigned long *)regs->ar_bspstore;
ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
bsp = ia64_rse_skip_regs(ubs_end, -sof);
- addr = ia64_rse_skip_regs(bsp, r1 - 32);
+ addr = ia64_rse_skip_regs(bsp, ridx + sof);
DPRINT("ubs_end=%p bsp=%p addr=%p\n", (void *) ubs_end, (void *) bsp, (void *) addr);
@@ -908,7 +930,7 @@
* floating point operations sizes in bytes
*/
static const unsigned char float_fsz[4]={
- 16, /* extended precision (e) */
+ 10, /* extended precision (e) */
8, /* integer (8) */
4, /* single precision (s) */
8 /* double precision (d) */
@@ -978,11 +1000,11 @@
unsigned long len = float_fsz[ld.x6_sz];
/*
- * fr0 & fr1 don't need to be checked because Illegal Instruction
- * faults have higher priority than unaligned faults.
+ * fr0 & fr1 don't need to be checked because Illegal Instruction faults have
+ * higher priority than unaligned faults.
*
- * r0 cannot be found as the base as it would never generate an
- * unaligned reference.
+ * r0 cannot be found as the base as it would never generate an unaligned
+ * reference.
*/
/*
@@ -996,8 +1018,10 @@
* invalidate the ALAT entry and execute updates, if any.
*/
if (ld.x6_op != 0x2) {
- /* this assumes little-endian byte-order: */
-
+ /*
+ * This assumes little-endian byte-order. Note that there is no "ldfpe"
+ * instruction:
+ */
if (copy_from_user(&fpr_init[0], (void *) ifa, len)
|| copy_from_user(&fpr_init[1], (void *) (ifa + len), len))
return -1;
@@ -1337,7 +1361,7 @@
/*
* IMPORTANT:
- * Notice that the swictch statement DOES not cover all possible instructions
+ * Notice that the switch statement DOES not cover all possible instructions
* that DO generate unaligned references. This is made on purpose because for some
* instructions it DOES NOT make sense to try and emulate the access. Sometimes it
* is WRONG to try and emulate. Here is a list of instruction we don't emulate i.e.,
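The sor/rrb_gr/ridx code added in the two hunks above is the substance of the rotating-register fix: inside a software-pipelined loop, the name rN refers to a physically different stacked register once the rotating register base has advanced, and the emulation must undo that renaming before reading or poking the backing store. A C sketch of the mapping (assumption: this modular form matches the compare-and-add version for indices inside the rotating region; the kernel code additionally biases by -sof to index from the top of the frame):

```c
/* Map a 0-based stacked-register index n (i.e. r1 - 32) to its physical
 * slot, given the size of the rotating region (sor, in registers) and
 * the general-register rotating base (rrb_gr) from cr.ifs.
 */
long rotate_reg(long sor, long rrb_gr, long n)
{
	if (n >= sor)			/* beyond the rotating region */
		return n;		/* no renaming applies */
	return (n + rrb_gr) % sor;	/* rotated within the region */
}
```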
diff -urN linux-2.4.13/arch/ia64/kernel/unwind.c linux-2.4.13-lia/arch/ia64/kernel/unwind.c
--- linux-2.4.13/arch/ia64/kernel/unwind.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/unwind.c Thu Oct 4 00:21:39 2001
@@ -504,7 +504,7 @@
return 0;
}
-inline int
+int
unw_access_pr (struct unw_frame_info *info, unsigned long *val, int write)
{
unsigned long *addr;
diff -urN linux-2.4.13/arch/ia64/lib/clear_page.S linux-2.4.13-lia/arch/ia64/lib/clear_page.S
--- linux-2.4.13/arch/ia64/lib/clear_page.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/clear_page.S Thu Oct 4 00:21:39 2001
@@ -47,5 +47,5 @@
br.cloop.dptk.few 1b
;;
mov ar.lc = r2 // restore lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(clear_page)
diff -urN linux-2.4.13/arch/ia64/lib/clear_user.S linux-2.4.13-lia/arch/ia64/lib/clear_user.S
--- linux-2.4.13/arch/ia64/lib/clear_user.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/clear_user.S Thu Oct 4 00:21:39 2001
@@ -8,7 +8,7 @@
* r8: number of bytes that didn't get cleared due to a fault
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*/
#include <asm/asmmacro.h>
@@ -62,11 +62,11 @@
;; // avoid WAW on CFM
adds tmp=-1,len // br.ctop is repeat/until
mov ret0=len // return value is length at this point
-(p6) br.ret.spnt.few rp
+(p6) br.ret.spnt.many rp
;;
cmp.lt p6,p0=16,len // if len > 16 then long memset
mov ar.lc=tmp // initialize lc for small count
-(p6) br.cond.dptk.few long_do_clear
+(p6) br.cond.dptk .long_do_clear
;; // WAR on ar.lc
//
// worst case 16 iterations, avg 8 iterations
@@ -79,7 +79,7 @@
1:
EX( .Lexit1, st1 [buf]=r0,1 )
adds len=-1,len // countdown length using len
- br.cloop.dptk.few 1b
+ br.cloop.dptk 1b
;; // avoid RAW on ar.lc
//
// .Lexit4: comes from byte by byte loop
@@ -87,7 +87,7 @@
.Lexit1:
mov ret0=len // faster than using ar.lc
mov ar.lc=saved_lc
- br.ret.sptk.few rp // end of short clear_user
+ br.ret.sptk.many rp // end of short clear_user
//
@@ -98,7 +98,7 @@
// instead of ret0 is due to the fact that the exception code
// changes the values of r8.
//
-long_do_clear:
+.long_do_clear:
tbit.nz p6,p0=buf,0 // odd alignment (for long_do_clear)
;;
EX( .Lexit3, (p6) st1 [buf]=r0,1 ) // 1-byte aligned
@@ -119,7 +119,7 @@
;;
cmp.eq p6,p0=r0,cnt
adds tmp=-1,cnt
-(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p6) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds buf2=8,buf // setup second base pointer
mov ar.lc=tmp
@@ -148,7 +148,7 @@
;; // needed to get len correct when error
st8 [buf2]=r0,16
adds len=-16,len
- br.cloop.dptk.few 2b
+ br.cloop.dptk 2b
;;
mov ar.lc=saved_lc
//
@@ -178,7 +178,7 @@
;;
EX( .Lexit2, (p7) st1 [buf]=r0 ) // only 1 byte left
mov ret0=r0 // success
- br.ret.dptk.few rp // end of most likely path
+ br.ret.sptk.many rp // end of most likely path
//
// Outlined error handling code
@@ -205,5 +205,5 @@
.Lexit3:
mov ret0=len
mov ar.lc=saved_lc
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(__do_clear_user)
diff -urN linux-2.4.13/arch/ia64/lib/copy_page.S linux-2.4.13-lia/arch/ia64/lib/copy_page.S
--- linux-2.4.13/arch/ia64/lib/copy_page.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/copy_page.S Thu Oct 4 00:21:39 2001
@@ -90,5 +90,5 @@
mov pr=saved_pr,0xffffffffffff0000 // restore predicates
mov ar.pfs=saved_pfs
mov ar.lc=saved_lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(copy_page)
diff -urN linux-2.4.13/arch/ia64/lib/copy_user.S linux-2.4.13-lia/arch/ia64/lib/copy_user.S
--- linux-2.4.13/arch/ia64/lib/copy_user.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/copy_user.S Thu Oct 4 00:21:39 2001
@@ -19,8 +19,8 @@
* ret0 0 in case of success. The number of bytes NOT copied in
* case of error.
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* Fixme:
* - handle the case where we have more than 16 bytes and the alignment
@@ -85,7 +85,7 @@
cmp.eq p8,p0=r0,len // check for zero length
.save ar.lc, saved_lc
mov saved_lc=ar.lc // preserve ar.lc (slow)
-(p8) br.ret.spnt.few rp // empty mempcy()
+(p8) br.ret.spnt.many rp // empty mempcy()
;;
add enddst=dst,len // first byte after end of source
add endsrc=src,len // first byte after end of destination
@@ -103,26 +103,26 @@
cmp.lt p10,p7=COPY_BREAK,len // if len > COPY_BREAK then long copy
xor tmp=src,dst // same alignment test prepare
-(p10) br.cond.dptk.few long_copy_user
+(p10) br.cond.dptk .long_copy_user
;; // RAW pr.rot/p16 ?
//
// Now we do the byte by byte loop with software pipeline
//
// p7 is necessarily false by now
1:
- EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 1b
;;
mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs // restore ar.ec
- br.ret.sptk.few rp // end of short memcpy
+ br.ret.sptk.many rp // end of short memcpy
//
// Not 8-byte aligned
//
-diff_align_copy_user:
+.diff_align_copy_user:
// At this point we know we have more than 16 bytes to copy
// and also that src and dest do _not_ have the same alignment.
and src2=0x7,src1 // src offset
@@ -153,7 +153,7 @@
// We know src1 is not 8-byte aligned in this case.
//
cmp.eq p14,p15=r0,dst2
-(p15) br.cond.spnt.few 1f
+(p15) br.cond.spnt 1f
;;
sub t1=8,src2
mov t2=src2
@@ -163,7 +163,7 @@
;;
sub lshift=64,rshift
;;
- br.cond.spnt.few word_copy_user
+ br.cond.spnt .word_copy_user
;;
1:
cmp.leu p14,p15=src2,dst2
@@ -192,15 +192,15 @@
mov ar.lc=cnt
;;
2:
- EX(failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe2,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 2b
;;
clrrrb
;;
-word_copy_user:
+.word_copy_user:
cmp.gtu p9,p0=16,len1
-(p9) br.cond.spnt.few 4f // if (16 > len1) skip 8-byte copy
+(p9) br.cond.spnt 4f // if (16 > len1) skip 8-byte copy
;;
shr.u cnt=len1,3 // number of 64-bit words
;;
@@ -232,24 +232,24 @@
#define EPI_1 p[PIPE_DEPTH-2]
#define SWITCH(pred, shift) cmp.eq pred,p0=shift,rshift
#define CASE(pred, shift) \
- (pred) br.cond.spnt.few copy_user_bit##shift
+ (pred) br.cond.spnt .copy_user_bit##shift
#define BODY(rshift) \
-copy_user_bit##rshift: \
+.copy_user_bit##rshift: \
1: \
- EX(failure_out,(EPI) st8 [dst1]=tmp,8); \
+ EX(.failure_out,(EPI) st8 [dst1]=tmp,8); \
(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
EX(3f,(p16) ld8 val1[0]=[src1],8); \
- br.ctop.dptk.few 1b; \
+ br.ctop.dptk 1b; \
;; \
- br.cond.sptk.few .diff_align_do_tail; \
+ br.cond.sptk.many .diff_align_do_tail; \
2: \
(EPI) st8 [dst1]=tmp,8; \
(EPI_1) shrp tmp=val1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \
3: \
(p16) mov val1[0]=r0; \
- br.ctop.dptk.few 2b; \
+ br.ctop.dptk 2b; \
;; \
- br.cond.sptk.few failure_in2
+ br.cond.sptk.many .failure_in2
//
// Since the instruction 'shrp' requires a fixed 128-bit value
@@ -301,25 +301,25 @@
mov ar.lc=len1
;;
5:
- EX(failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
- EX(failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
+ EX(.failure_in_pipe1,(p16) ld1 val1[0]=[src1],1)
+ EX(.failure_out,(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1)
br.ctop.dptk.few 5b
;;
mov ar.lc=saved_lc
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// Beginning of long mempcy (i.e. > 16 bytes)
//
-long_copy_user:
+.long_copy_user:
tbit.nz p6,p7=src1,0 // odd alignement
and tmp=7,tmp
;;
cmp.eq p10,p8=r0,tmp
mov len1=len // copy because of rotation
-(p8) br.cond.dpnt.few diff_align_copy_user
+(p8) br.cond.dpnt .diff_align_copy_user
;;
// At this point we know we have more than 16 bytes to copy
// and also that both src and dest have the same alignment
@@ -327,11 +327,11 @@
// forward slowly until we reach 16byte alignment: no need to
// worry about reaching the end of buffer.
//
- EX(failure_in1,(p6) ld1 val1[0]=[src1],1) // 1-byte aligned
+ EX(.failure_in1,(p6) ld1 val1[0]=[src1],1) // 1-byte aligned
(p6) adds len1=-1,len1;;
tbit.nz p7,p0=src1,1
;;
- EX(failure_in1,(p7) ld2 val1[1]=[src1],2) // 2-byte aligned
+ EX(.failure_in1,(p7) ld2 val1[1]=[src1],2) // 2-byte aligned
(p7) adds len1=-2,len1;;
tbit.nz p8,p0=src1,2
;;
@@ -339,28 +339,28 @@
// Stop bit not required after ld4 because if we fail on ld4
// we have never executed the ld1, therefore st1 is not executed.
//
- EX(failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
+ EX(.failure_in1,(p8) ld4 val2[0]=[src1],4) // 4-byte aligned
;;
- EX(failure_out,(p6) st1 [dst1]=val1[0],1)
+ EX(.failure_out,(p6) st1 [dst1]=val1[0],1)
tbit.nz p9,p0=src1,3
;;
//
// Stop bit not required after ld8 because if we fail on ld8
// we have never executed the ld2, therefore st2 is not executed.
//
- EX(failure_in1,(p9) ld8 val2[1]=[src1],8) // 8-byte aligned
- EX(failure_out,(p7) st2 [dst1]=val1[1],2)
+ EX(.failure_in1,(p9) ld8 val2[1]=[src1],8) // 8-byte aligned
+ EX(.failure_out,(p7) st2 [dst1]=val1[1],2)
(p8) adds len1=-4,len1
;;
- EX(failure_out, (p8) st4 [dst1]=val2[0],4)
+ EX(.failure_out, (p8) st4 [dst1]=val2[0],4)
(p9) adds len1=-8,len1;;
shr.u cnt=len1,4 // number of 128-bit (2x64bit) words
;;
- EX(failure_out, (p9) st8 [dst1]=val2[1],8)
+ EX(.failure_out, (p9) st8 [dst1]=val2[1],8)
tbit.nz p6,p0=len1,3
cmp.eq p7,p0=r0,cnt
adds tmp=-1,cnt // br.ctop is repeat/until
-(p7) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p7) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds src2=8,src1
adds dst2=8,dst1
@@ -370,12 +370,12 @@
// 16bytes/iteration
//
2:
- EX(failure_in3,(p16) ld8 val1[0]=[src1],16)
+ EX(.failure_in3,(p16) ld8 val1[0]=[src1],16)
(p16) ld8 val2[0]=[src2],16
- EX(failure_out, (EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16)
+ EX(.failure_out, (EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16)
(EPI) st8 [dst2]=val2[PIPE_DEPTH-1],16
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;; // RAW on src1 when fall through from loop
//
// Tail correction based on len only
@@ -384,29 +384,28 @@
// is 16 byte aligned AND we have less than 16 bytes to copy.
//
.dotail:
- EX(failure_in1,(p6) ld8 val1[0]=[src1],8) // at least 8 bytes
+ EX(.failure_in1,(p6) ld8 val1[0]=[src1],8) // at least 8 bytes
tbit.nz p7,p0=len1,2
;;
- EX(failure_in1,(p7) ld4 val1[1]=[src1],4) // at least 4 bytes
+ EX(.failure_in1,(p7) ld4 val1[1]=[src1],4) // at least 4 bytes
tbit.nz p8,p0=len1,1
;;
- EX(failure_in1,(p8) ld2 val2[0]=[src1],2) // at least 2 bytes
+ EX(.failure_in1,(p8) ld2 val2[0]=[src1],2) // at least 2 bytes
tbit.nz p9,p0=len1,0
;;
- EX(failure_out, (p6) st8 [dst1]=val1[0],8)
+ EX(.failure_out, (p6) st8 [dst1]=val1[0],8)
;;
- EX(failure_in1,(p9) ld1 val2[1]=[src1]) // only 1 byte left
+ EX(.failure_in1,(p9) ld1 val2[1]=[src1]) // only 1 byte left
mov ar.lc=saved_lc
;;
- EX(failure_out,(p7) st4 [dst1]=val1[1],4)
+ EX(.failure_out,(p7) st4 [dst1]=val1[1],4)
mov pr=saved_pr,0xffffffffffff0000
;;
- EX(failure_out, (p8) st2 [dst1]=val2[0],2)
+ EX(.failure_out, (p8) st2 [dst1]=val2[0],2)
mov ar.pfs=saved_pfs
;;
- EX(failure_out, (p9) st1 [dst1]=val2[1])
- br.ret.dptk.few rp
-
+ EX(.failure_out, (p9) st1 [dst1]=val2[1])
+ br.ret.sptk.many rp
//
@@ -433,32 +432,32 @@
// pipeline going. We can't really do this inline because
// p16 is always reset to 1 when lc > 0.
//
-failure_in_pipe1:
+.failure_in_pipe1:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
1:
(p16) mov val1[0]=r0
(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
- br.ctop.dptk.few 1b
+ br.ctop.dptk 1b
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// This is the case where the byte by byte copy fails on the load
// when we copy the head. We need to finish the pipeline and copy
// zeros for the rest of the destination. Since this happens
// at the top we still need to fill the body and tail.
-failure_in_pipe2:
+.failure_in_pipe2:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
2:
(p16) mov val1[0]=r0
(EPI) st1 [dst1]=val1[PIPE_DEPTH-1],1
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;;
sub len=enddst,dst1,1 // precompute len
- br.cond.dptk.few failure_in1bis
+ br.cond.dptk.many .failure_in1bis
;;
//
@@ -533,9 +532,7 @@
// This means that we are in a situation similar the a fault in the
// head part. That's nice!
//
-failure_in1:
-// sub ret0=enddst,dst1 // number of bytes to zero, i.e. not copied
-// sub len=enddst,dst1,1
+.failure_in1:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
sub len=endsrc,src1,1
//
@@ -546,18 +543,17 @@
// calling side.
//
;;
-failure_in1bis: // from (failure_in3)
+.failure_in1bis: // from (.failure_in3)
mov ar.lc=len // Continue with a stupid byte store.
;;
5:
st1 [dst1]=r0,1
- br.cloop.dptk.few 5b
+ br.cloop.dptk 5b
;;
-skip_loop:
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// Here we simply restart the loop but instead
@@ -569,7 +565,7 @@
// we MUST use src1/endsrc here and not dst1/enddst because
// of the pipeline effect.
//
-failure_in3:
+.failure_in3:
sub ret0=endsrc,src1 // number of bytes to zero, i.e. not copied
;;
2:
@@ -577,36 +573,36 @@
(p16) mov val2[0]=r0
(EPI) st8 [dst1]=val1[PIPE_DEPTH-1],16
(EPI) st8 [dst2]=val2[PIPE_DEPTH-1],16
- br.ctop.dptk.few 2b
+ br.ctop.dptk 2b
;;
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
-(p6) br.cond.dptk.few failure_in1bis
+(p6) br.cond.dptk .failure_in1bis
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
-failure_in2:
+.failure_in2:
sub ret0=endsrc,src1
cmp.ne p6,p0=dst1,enddst // Do we need to finish the tail ?
sub len=enddst,dst1,1 // precompute len
-(p6) br.cond.dptk.few failure_in1bis
+(p6) br.cond.dptk .failure_in1bis
;;
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
//
// handling of failures on stores: that's the easy part
//
-failure_out:
+.failure_out:
sub ret0=enddst,dst1
mov pr=saved_pr,0xffffffffffff0000
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(__copy_user)
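For readers following the relabeled failure paths: the contract they implement is that __copy_user returns the number of bytes NOT copied (0 on success) and zero-fills whatever part of the destination the faulting source could not supply. A portable C model of that contract (`copy_user_model` and the `fault_at` parameter are illustrative; the real routine discovers the fault through the EX() exception tables, not an argument):

```c
#include <stddef.h>
#include <string.h>

/* Model: copy len bytes, but the source faults at offset fault_at.
 * The readable prefix is copied, the rest of dst is zeroed, and the
 * count of bytes not copied is returned (0 when no fault occurs).
 */
size_t copy_user_model(char *dst, const char *src, size_t len,
		       size_t fault_at)
{
	size_t n = len < fault_at ? len : fault_at;	/* readable bytes */

	memcpy(dst, src, n);
	memset(dst + n, 0, len - n);	/* zero the uncopied tail */
	return len - n;
}
```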
diff -urN linux-2.4.13/arch/ia64/lib/do_csum.S linux-2.4.13-lia/arch/ia64/lib/do_csum.S
--- linux-2.4.13/arch/ia64/lib/do_csum.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/lib/do_csum.S Thu Oct 4 00:21:39 2001
@@ -16,7 +16,6 @@
* back-to-back 8-byte words per loop. Clean up the initialization
* for the loop. Support the cases where load latency = 1 or 2.
* Set CONFIG_IA64_LOAD_LATENCY to 1 or 2 (default).
- *
*/
#include <asm/asmmacro.h>
@@ -130,7 +129,7 @@
;; // avoid WAW on CFM
mov tmp3=0x7 // a temporary mask/value
add tmp1=buf,len // last byte's address
-(p6) br.ret.spnt.few rp // return if true (hope we can avoid that)
+(p6) br.ret.spnt.many rp // return if true (hope we can avoid that)
and firstoff=7,buf // how many bytes off for first1 element
tbit.nz p15,p0=buf,0 // is buf an odd address ?
@@ -181,9 +180,9 @@
cmp.ltu p6,p0=result1[0],word1[0] // check the carry
;;
(p6) adds result1[0]=1,result1[0]
-(p8) br.cond.dptk.few do_csum_exit // if (within an 8-byte word)
+(p8) br.cond.dptk .do_csum_exit // if (within an 8-byte word)
;;
-(p11) br.cond.dptk.few do_csum16 // if (count is even)
+(p11) br.cond.dptk .do_csum16 // if (count is even)
;;
// Here count is odd.
ld8 word1[1]=[first1],8 // load an 8-byte word
@@ -196,14 +195,14 @@
;;
(p6) adds result1[0]=1,result1[0]
;;
-(p9) br.cond.sptk.few do_csum_exit // if (count = 1) exit
+(p9) br.cond.sptk .do_csum_exit // if (count = 1) exit
// Fall through to calculate the checksum, feeding result1[0] as
// the initial value in result1[0].
;;
//
// Calculate the checksum loading two 8-byte words per loop.
//
-do_csum16:
+.do_csum16:
mov saved_lc=ar.lc
shr.u count=count,1 // we do 16 bytes per loop
;;
@@ -225,7 +224,7 @@
;;
add first2=8,first1
;;
-(p9) br.cond.sptk.few do_csum_exit
+(p9) br.cond.sptk .do_csum_exit
;;
nop.m 0
nop.i 0
@@ -241,7 +240,7 @@
2:
(p16) ld8 word1[0]=[first1],16
(p16) ld8 word2[0]=[first2],16
- br.ctop.sptk.few 1b
+ br.ctop.sptk 1b
;;
// Since len is a 32-bit value, carry cannot be larger than
// a 64-bit value.
@@ -263,7 +262,7 @@
;;
(p6) adds result1[0]=1,result1[0]
;;
-do_csum_exit:
+.do_csum_exit:
movl tmp3=0xffffffff
;;
// XXX Fixme
@@ -299,7 +298,7 @@
;;
mov ar.lc=saved_lc
(p15) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
// I (Jun Nakajima) wrote an equivalent code (see below), but it was
// not much better than the original. So keep the original there so that
@@ -331,6 +330,6 @@
//(p15) mux1 ret0=ret0,@rev // reverse word
// ;;
//(p15) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
-// br.ret.sptk.few rp
+// br.ret.sptk.many rp
END(do_csum)
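The 0xffffffff masking in the exit path above is the usual fold of a wide partial sum down to the final 16-bit one's-complement checksum. A C model of that fold (illustrative; the assembly's exact instruction sequence differs, and the byte-swap for odd-aligned buffers is not shown):

```c
#include <stdint.h>

/* Fold a 64-bit one's-complement partial sum to 16 bits, repeatedly
 * adding the carry-out back into the low bits.
 */
uint16_t csum_fold64(uint64_t sum)
{
	sum = (sum & 0xffffffffULL) + (sum >> 32);
	sum = (sum & 0xffffffffULL) + (sum >> 32);	/* absorb carry */
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t) sum;
}
```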
diff -urN linux-2.4.13/arch/ia64/lib/idiv32.S linux-2.4.13-lia/arch/ia64/lib/idiv32.S
--- linux-2.4.13/arch/ia64/lib/idiv32.S Mon Oct 9 17:54:56 2000
+++ linux-2.4.13-lia/arch/ia64/lib/idiv32.S Thu Oct 4 00:21:39 2001
@@ -79,5 +79,5 @@
;;
#endif
getf.sig r8 = f6 // transfer result to result register
- br.ret.sptk rp
+ br.ret.sptk.many rp
END(NAME)
diff -urN linux-2.4.13/arch/ia64/lib/idiv64.S linux-2.4.13-lia/arch/ia64/lib/idiv64.S
--- linux-2.4.13/arch/ia64/lib/idiv64.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/idiv64.S Thu Oct 4 00:21:39 2001
@@ -89,5 +89,5 @@
#endif
getf.sig r8 = f17 // transfer result to result register
ldf.fill f17 = [sp]
- br.ret.sptk rp
+ br.ret.sptk.many rp
END(NAME)
diff -urN linux-2.4.13/arch/ia64/lib/memcpy.S linux-2.4.13-lia/arch/ia64/lib/memcpy.S
--- linux-2.4.13/arch/ia64/lib/memcpy.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/memcpy.S Thu Oct 4 00:21:39 2001
@@ -9,20 +9,14 @@
* Output:
* no return value
*
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ * Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/config.h>
#include <asm/asmmacro.h>
-#if defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)
-# define BRP(args...) nop.b 0
-#else
-# define BRP(args...) brp.loop.imp args
-#endif
-
GLOBAL_ENTRY(bcopy)
.regstk 3,0,0,0
mov r8=in0
@@ -103,8 +97,8 @@
cmp.ne p6,p0=t0,r0
mov src=in1 // copy because of rotation
-(p7) br.cond.spnt.few memcpy_short
-(p6) br.cond.spnt.few memcpy_long
+(p7) br.cond.spnt.few .memcpy_short
+(p6) br.cond.spnt.few .memcpy_long
;;
nop.m 0
;;
@@ -119,7 +113,7 @@
1: { .mib
(p[0]) ld8 val[0]=[src],8
nop.i 0
- BRP(1b, 2f)
+ brp.loop.imp 1b, 2f
}
2: { .mfb
(p[N-1])st8 [dst]=val[N-1],8
@@ -139,14 +133,14 @@
* issues, we want to avoid read-modify-write of entire words.
*/
.align 32
-memcpy_short:
+.memcpy_short:
adds cnt=-1,in2 // br.ctop is repeat/until
mov ar.ec=MEM_LAT
- BRP(1f, 2f)
+ brp.loop.imp 1f, 2f
;;
mov ar.lc=cnt
;;
- nop.m 0
+ nop.m 0
;;
nop.m 0
nop.i 0
@@ -163,7 +157,7 @@
1: { .mib
(p[0]) ld1 val[0]=[src],1
nop.i 0
- BRP(1b, 2f)
+ brp.loop.imp 1b, 2f
} ;;
2: { .mfb
(p[MEM_LAT-1])st1 [dst]=val[MEM_LAT-1],1
@@ -202,7 +196,7 @@
#define LOG_LOOP_SIZE 6
-memcpy_long:
+.memcpy_long:
alloc t3=ar.pfs,3,Nrot,0,Nrot // resize register frame
and t0=-8,src // t0 = src & ~7
and t2=7,src // t2 = src & 7
@@ -247,7 +241,7 @@
mov t4=ip
} ;;
and src2=-8,src // align source pointer
- adds t4=memcpy_loops-1b,t4
+ adds t4=.memcpy_loops-1b,t4
mov ar.ec=N
and t0=7,src // t0 = src & 7
@@ -266,7 +260,7 @@
mov pr=cnt,0x38 // set (p5,p4,p3) to # of bytes last-word bytes to copy
mov ar.lc=t2
;;
- nop.m 0
+ nop.m 0
;;
nop.m 0
nop.i 0
@@ -278,7 +272,7 @@
br.sptk.few b6
;;
-memcpy_tail:
+.memcpy_tail:
// At this point, (p5,p4,p3) are set to the number of bytes left to copy (which is
// less than 8) and t0 contains the last few bytes of the src buffer:
(p5) st4 [dst]=t0,4
@@ -300,7 +294,7 @@
1: { .mib \
(p[0]) ld8 val[0]=[src2],8; \
(p[MEM_LAT+3]) shrp w[0]=val[MEM_LAT+3],val[MEM_LAT+4-index],shift; \
- BRP(1b, 2f) \
+ brp.loop.imp 1b, 2f \
}; \
2: { .mfb \
(p[MEM_LAT+4]) st8 [dst]=w[1],8; \
@@ -311,8 +305,8 @@
ld8 val[N-1]=[src_end]; /* load last word (may be same as val[N]) */ \
;; \
shrp t0=val[N-1],val[N-index],shift; \
- br memcpy_tail
-memcpy_loops:
+ br .memcpy_tail
+.memcpy_loops:
COPY(0, 1) /* no point special casing this---it doesn't go any faster without shrp */
COPY(8, 0)
COPY(16, 0)
diff -urN linux-2.4.13/arch/ia64/lib/memset.S linux-2.4.13-lia/arch/ia64/lib/memset.S
--- linux-2.4.13/arch/ia64/lib/memset.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/memset.S Thu Oct 4 00:21:40 2001
@@ -43,11 +43,11 @@
adds tmp=-1,len // br.ctop is repeat/until
tbit.nz p6,p0=buf,0 // odd alignment
-(p8) br.ret.spnt.few rp
+(p8) br.ret.spnt.many rp
cmp.lt p7,p0=16,len // if len > 16 then long memset
mux1 val=val,@brcst // prepare value
-(p7) br.cond.dptk.few long_memset
+(p7) br.cond.dptk .long_memset
;;
mov ar.lc=tmp // initialize lc for small count
;; // avoid RAW and WAW on ar.lc
@@ -57,11 +57,11 @@
;; // avoid RAW on ar.lc
mov ar.lc=saved_lc
mov ar.pfs=saved_pfs
- br.ret.sptk.few rp // end of short memset
+ br.ret.sptk.many rp // end of short memset
// at this point we know we have more than 16 bytes to copy
// so we focus on alignment
-long_memset:
+.long_memset:
(p6) st1 [buf]=val,1 // 1-byte aligned
(p6) adds len=-1,len;; // sync because buf is modified
tbit.nz p6,p0=buf,1
@@ -80,7 +80,7 @@
;;
cmp.eq p6,p0=r0,cnt
adds tmp=-1,cnt
-(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+(p6) br.cond.dpnt .dotail // we have less than 16 bytes left
;;
adds buf2=8,buf // setup second base pointer
mov ar.lc=tmp
@@ -104,5 +104,5 @@
mov ar.lc=saved_lc
;;
(p6) st1 [buf]=val // only 1 byte left
- br.ret.dptk.few rp
+ br.ret.sptk.many rp
END(memset)
diff -urN linux-2.4.13/arch/ia64/lib/strlen.S linux-2.4.13-lia/arch/ia64/lib/strlen.S
--- linux-2.4.13/arch/ia64/lib/strlen.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strlen.S Thu Oct 4 00:21:40 2001
@@ -11,7 +11,7 @@
* does not count the \0
*
* Copyright (C) 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 09/24/99 S.Eranian add speculation recovery code
*/
@@ -116,7 +116,7 @@
ld8.s w[0]=[src],8 // speculatively load next to next
cmp.eq.and p6,p0=8,val1 // p6 = p6 and val1=8
cmp.eq.and p6,p0=8,val2 // p6 = p6 and mask=8
-(p6) br.wtop.dptk.few 1b // loop until p6 = 0
+(p6) br.wtop.dptk 1b // loop until p6 = 0
;;
//
// We must return try the recovery code iff
@@ -127,14 +127,14 @@
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
tnat.nz p6,p7=val1 // test NaT on val1
-(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT
;;
//
// if we come here p7 is true, i.e., initialized for // cmp
//
cmp.eq.and p7,p0=8,val1// val1=8?
tnat.nz.and p7,p0=val2 // test NaT if val2
-(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT
;;
(p8) mov val1=val2 // the other test got us out of the loop
(p8) adds src=-16,src // correct position when 3 ahead
@@ -146,7 +146,7 @@
;;
sub ret0=ret0,tmp // adjust
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of normal execution
+ br.ret.sptk.many rp // end of normal execution
//
// Outlined recovery code when speculation failed
@@ -165,7 +165,7 @@
// - today we restart from the beginning of the string instead
// of trying to continue where we left off.
//
-recover:
+.recover:
ld8 val=[base],8 // will fail if unrecoverable fault
;;
or val=val,mask // remask first bytes
@@ -180,7 +180,7 @@
czx1.r val1=val // search 0 byte from right
;;
cmp.eq p6,p0=8,val1 // val1=8 ?
-(p6) br.wtop.dptk.few 2b // loop until p6 = 0
+(p6) br.wtop.dptk 2b // loop until p6 = 0
;; // (avoid WAW on p63)
sub ret0=base,orig // distance from base
sub tmp=8,val1
@@ -188,5 +188,5 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of successful recovery code
+ br.ret.sptk.many rp // end of successful recovery code
END(strlen)
diff -urN linux-2.4.13/arch/ia64/lib/strlen_user.S linux-2.4.13-lia/arch/ia64/lib/strlen_user.S
--- linux-2.4.13/arch/ia64/lib/strlen_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strlen_user.S Thu Oct 4 00:21:40 2001
@@ -8,8 +8,8 @@
* ret0 0 in case of fault, strlen(buffer)+1 otherwise
*
* Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co
- * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang <davidm@hpl.hp.com>
- * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
+ * Stephane Eranian <eranian@hpl.hp.com>
*
* 01/19/99 S.Eranian heavily enhanced version (see details below)
* 09/24/99 S.Eranian added speculation recovery code
@@ -108,7 +108,7 @@
mov ar.ec=r0 // clear epilogue counter (saved in ar.pfs)
;;
add base=-16,src // keep track of aligned base
- chk.s v[1], recover // if already NaT, then directly skip to recover
+ chk.s v[1], .recover // if already NaT, then directly skip to recover
or v[1]=v[1],mask // now we have a safe initial byte pattern
;;
1:
@@ -130,14 +130,14 @@
//
cmp.eq p8,p9=8,val1 // p6 = val1 had zero (disambiguate)
tnat.nz p6,p7=val1 // test NaT on val1
-(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT
;;
//
// if we come here p7 is true, i.e., initialized for // cmp
//
cmp.eq.and p7,p0=8,val1// val1=8?
tnat.nz.and p7,p0=val2 // test NaT if val2
-(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT
;;
(p8) mov val1=val2 // val2 contains the value
(p8) adds src=-16,src // correct position when 3 ahead
@@ -149,7 +149,7 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of normal execution
+ br.ret.sptk.many rp // end of normal execution
//
// Outlined recovery code when speculation failed
@@ -162,7 +162,7 @@
// - today we restart from the beginning of the string instead
// of trying to continue where we left off.
//
-recover:
+.recover:
EX(.Lexit1, ld8 val=[base],8) // load the initial bytes
;;
or val=val,mask // remask first bytes
@@ -185,7 +185,7 @@
;;
sub ret0=ret0,tmp // length=now - back -1
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp // end of successful recovery code
+ br.ret.sptk.many rp // end of successful recovery code
//
// We failed even on the normal load (called from exception handler)
@@ -194,5 +194,5 @@
mov ret0=0
mov pr=saved_pr,0xffffffffffff0000
mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strlen_user)
diff -urN linux-2.4.13/arch/ia64/lib/strncpy_from_user.S linux-2.4.13-lia/arch/ia64/lib/strncpy_from_user.S
--- linux-2.4.13/arch/ia64/lib/strncpy_from_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strncpy_from_user.S Thu Oct 4 00:21:40 2001
@@ -40,5 +40,5 @@
(p6) mov r8=in2 // buffer filled up---return buffer length
(p7) sub r8=in1,r9,1 // return string length (excluding NUL character)
[.Lexit:]
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strncpy_from_user)
diff -urN linux-2.4.13/arch/ia64/lib/strnlen_user.S linux-2.4.13-lia/arch/ia64/lib/strnlen_user.S
--- linux-2.4.13/arch/ia64/lib/strnlen_user.S Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strnlen_user.S Thu Oct 4 00:21:40 2001
@@ -33,7 +33,7 @@
add r9=1,r9
;;
cmp.eq p6,p0=r8,r0
-(p6) br.dpnt.few .Lexit
+(p6) br.cond.dpnt .Lexit
br.cloop.dptk.few .Loop1
add r9=1,in1 // NUL not found---return N+1
@@ -41,5 +41,5 @@
.Lexit:
mov r8=r9
mov ar.lc=r16 // restore ar.lc
- br.ret.sptk.few rp
+ br.ret.sptk.many rp
END(__strnlen_user)
diff -urN linux-2.4.13/arch/ia64/mm/fault.c linux-2.4.13-lia/arch/ia64/mm/fault.c
--- linux-2.4.13/arch/ia64/mm/fault.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/mm/fault.c Thu Oct 4 00:21:40 2001
@@ -1,8 +1,8 @@
/*
* MMU fault handling support.
*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/sched.h>
#include <linux/kernel.h>
@@ -16,7 +16,7 @@
#include <asm/uaccess.h>
#include <asm/hardirq.h>
-extern void die_if_kernel (char *, struct pt_regs *, long);
+extern void die (char *, struct pt_regs *, long);
/*
* This routine is analogous to expand_stack() but instead grows the
@@ -46,16 +46,15 @@
void
ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
{
+ int signal = SIGSEGV, code = SEGV_MAPERR;
+ struct vm_area_struct *vma, *prev_vma;
struct mm_struct *mm = current->mm;
struct exception_fixup fix;
- struct vm_area_struct *vma, *prev_vma;
struct siginfo si;
- int signal = SIGSEGV;
unsigned long mask;
/*
- * If we're in an interrupt or have no user
- * context, we must not take the fault..
+ * If we're in an interrupt or have no user context, we must not take the fault..
*/
if (in_interrupt() || !mm)
goto no_context;
@@ -71,6 +70,8 @@
goto check_expansion;
good_area:
+ code = SEGV_ACCERR;
+
/* OK, we've got a good vm_area for this memory area. Check the access permissions: */
# define VM_READ_BIT 0
@@ -89,12 +90,13 @@
if ((vma->vm_flags & mask) != mask)
goto bad_area;
+ survive:
/*
* If for any reason at all we couldn't handle the fault, make
* sure we exit gracefully rather than endlessly redo the
* fault.
*/
- switch (handle_mm_fault(mm, vma, address, mask) != 0) {
+ switch (handle_mm_fault(mm, vma, address, mask)) {
case 1:
++current->min_flt;
break;
@@ -147,7 +149,7 @@
if (user_mode(regs)) {
si.si_signo = signal;
si.si_errno = 0;
- si.si_code = SI_KERNEL;
+ si.si_code = code;
si.si_addr = (void *) address;
force_sig_info(signal, &si, current);
return;
@@ -174,17 +176,29 @@
}
/*
- * Oops. The kernel tried to access some bad page. We'll have
- * to terminate things with extreme prejudice.
+ * Oops. The kernel tried to access some bad page. We'll have to terminate things
+ * with extreme prejudice.
*/
- printk(KERN_ALERT "Unable to handle kernel paging request at "
- "virtual address %016lx\n", address);
- die_if_kernel("Oops", regs, isr);
+ bust_spinlocks(1);
+
+ if (address < PAGE_SIZE)
+ printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk(KERN_ALERT "Unable to handle kernel paging request at "
+ "virtual address %016lx\n", address);
+ die("Oops", regs, isr);
+ bust_spinlocks(0);
do_exit(SIGKILL);
return;
out_of_memory:
up_read(&mm->mmap_sem);
+ if (current->pid == 1) {
+ current->policy |= SCHED_YIELD;
+ schedule();
+ down_read(&mm->mmap_sem);
+ goto survive;
+ }
printk("VM: killing process %s\n", current->comm);
if (user_mode(regs))
do_exit(SIGKILL);
diff -urN linux-2.4.13/arch/ia64/mm/init.c linux-2.4.13-lia/arch/ia64/mm/init.c
--- linux-2.4.13/arch/ia64/mm/init.c Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/ia64/mm/init.c Wed Oct 10 17:43:54 2001
@@ -167,13 +167,40 @@
}
void
-show_mem (void)
+show_mem(void)
{
int i, total = 0, reserved = 0;
int shared = 0, cached = 0;
printk("Mem-info:\n");
show_free_areas();
+
+#ifdef CONFIG_DISCONTIGMEM
+ {
+ pg_data_t *pgdat = pgdat_list;
+
+ printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
+ do {
+ printk("Node ID: %d\n", pgdat->node_id);
+ for(i = 0; i < pgdat->node_size; i++) {
+ if (PageReserved(pgdat->node_mem_map+i))
+ reserved++;
+ else if (PageSwapCache(pgdat->node_mem_map+i))
+ cached++;
+ else if (page_count(pgdat->node_mem_map + i))
+ shared += page_count(pgdat->node_mem_map + i) - 1;
+ }
+ printk("\t%d pages of RAM\n", pgdat->node_size);
+ printk("\t%d reserved pages\n", reserved);
+ printk("\t%d pages shared\n", shared);
+ printk("\t%d pages swap cached\n", cached);
+ pgdat = pgdat->node_next;
+ } while (pgdat);
+ printk("Total of %ld pages in page table cache\n", pgtable_cache_size);
+ show_buffers();
+ printk("%d free buffer pages\n", nr_free_buffer_pages());
+ }
+#else /* !CONFIG_DISCONTIGMEM */
printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
i = max_mapnr;
while (i-- > 0) {
@@ -191,6 +218,7 @@
printk("%d pages swap cached\n", cached);
printk("%ld pages in page table cache\n", pgtable_cache_size);
show_buffers();
+#endif /* !CONFIG_DISCONTIGMEM */
}
/*
diff -urN linux-2.4.13/arch/ia64/mm/tlb.c linux-2.4.13-lia/arch/ia64/mm/tlb.c
--- linux-2.4.13/arch/ia64/mm/tlb.c Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/mm/tlb.c Wed Oct 10 17:45:07 2001
@@ -2,7 +2,7 @@
* TLB support routines.
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang <davidm@hpl.hp.com>
+ * David Mosberger-Tang <davidm@hpl.hp.com>
*
* 08/02/00 A. Mallick <asit.k.mallick@intel.com>
* Modified RID allocation for SMP
@@ -41,89 +41,6 @@
};
/*
- * Seralize usage of ptc.g
- */
-spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED; /* see <asm/pgtable.h> */
-
-#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
-
-#include <linux/irq.h>
-
-unsigned long flush_end, flush_start, flush_nbits, flush_rid;
-atomic_t flush_cpu_count;
-
-/*
- * flush_tlb_no_ptcg is called with ptcg_lock locked
- */
-static inline void
-flush_tlb_no_ptcg (unsigned long start, unsigned long end, unsigned long nbits)
-{
- extern void smp_send_flush_tlb (void);
- unsigned long saved_tpr = 0;
- unsigned long flags;
-
- /*
- * Some times this is called with interrupts disabled and causes
- * dead-lock; to avoid this we enable interrupt and raise the TPR
- * to enable ONLY IPI.
- */
- __save_flags(flags);
- if (!(flags & IA64_PSR_I)) {
- saved_tpr = ia64_get_tpr();
- ia64_srlz_d();
- ia64_set_tpr(IA64_IPI_VECTOR - 16);
- ia64_srlz_d();
- local_irq_enable();
- }
-
- spin_lock(&ptcg_lock);
- flush_rid = ia64_get_rr(start);
- ia64_srlz_d();
- flush_start = start;
- flush_end = end;
- flush_nbits = nbits;
- atomic_set(&flush_cpu_count, smp_num_cpus - 1);
- smp_send_flush_tlb();
- /*
- * Purge local TLB entries. ALAT invalidation is done in ia64_leave_kernel.
- */
- do {
- asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
- start += (1UL << nbits);
- } while (start < end);
-
- ia64_srlz_i(); /* srlz.i implies srlz.d */
-
- /*
- * Wait for other CPUs to finish purging entries.
- */
-#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
- {
- extern void smp_resend_flush_tlb (void);
- unsigned long start = ia64_get_itc();
-
- while (atomic_read(&flush_cpu_count) > 0) {
- if ((ia64_get_itc() - start) > 400000UL) {
- smp_resend_flush_tlb();
- start = ia64_get_itc();
- }
- }
- }
-#else
- while (atomic_read(&flush_cpu_count)) {
- /* Nothing */
- }
-#endif
- if (!(flags & IA64_PSR_I)) {
- local_irq_disable();
- ia64_set_tpr(saved_tpr);
- ia64_srlz_d();
- }
-}
-
-#endif /* CONFIG_SMP && !CONFIG_ITANIUM_PTCG */
-
-/*
* Acquire the ia64_ctx.lock before calling this function!
*/
void
@@ -162,6 +79,26 @@
flush_tlb_all();
}
+static void
+ia64_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits)
+{
+ static spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;
+
+ /* HW requires global serialization of ptc.ga. */
+ spin_lock(&ptcg_lock);
+ {
+ do {
+ /*
+ * Flush ALAT entries also.
+ */
+ asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2)
+ : "memory");
+ start += (1UL << nbits);
+ } while (start < end);
+ }
+ spin_unlock(&ptcg_lock);
+}
+
void
__flush_tlb_all (void)
{
@@ -222,23 +159,15 @@
}
start &= ~((1UL << nbits) - 1);
-#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
- flush_tlb_no_ptcg(start, end, nbits);
-#else
- spin_lock(&ptcg_lock);
- do {
# ifdef CONFIG_SMP
- /*
- * Flush ALAT entries also.
- */
- asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2) : "memory");
+ platform_global_tlb_purge(start, end, nbits);
# else
+ do {
asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
-# endif
start += (1UL << nbits);
} while (start < end);
-#endif /* CONFIG_SMP && !defined(CONFIG_ITANIUM_PTCG) */
- spin_unlock(&ptcg_lock);
+# endif
+
ia64_insn_group_barrier();
ia64_srlz_i(); /* srlz.i implies srlz.d */
ia64_insn_group_barrier();
diff -urN linux-2.4.13/arch/ia64/sn/sn1/llsc4.c linux-2.4.13-lia/arch/ia64/sn/sn1/llsc4.c
--- linux-2.4.13/arch/ia64/sn/sn1/llsc4.c Thu Apr 5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/sn/sn1/llsc4.c Thu Oct 4 00:21:40 2001
@@ -35,16 +35,6 @@
static int inttest=0;
#endif
-#ifdef IA64_SEMFIX_INSN
-#undef IA64_SEMFIX_INSN
-#endif
-#ifdef IA64_SEMFIX
-#undef IA64_SEMFIX
-#endif
-# define IA64_SEMFIX_INSN
-# define IA64_SEMFIX ""
-
-
/*
* Test parameter table for AUTOTEST
*/
@@ -192,7 +182,6 @@
printk (" llscfail \t%s\tForce a failure to test the trigger & error messages\n", fail_enabled ? "on" : "off");
printk (" llscselt \t%s\tSelective triger on failures\n", selective_trigger ? "on" : "off");
printk (" llscblkadr \t%s\tDump data block addresses\n", dump_block_addrs_opt ? "on" : "off");
- printk (" SEMFIX: %s\n", IA64_SEMFIX);
printk ("\n");
}
__setup("autotest", autotest_enable);
diff -urN linux-2.4.13/arch/ia64/tools/print_offsets.c linux-2.4.13-lia/arch/ia64/tools/print_offsets.c
--- linux-2.4.13/arch/ia64/tools/print_offsets.c Tue Jul 31 10:30:09 2001
+++ linux-2.4.13-lia/arch/ia64/tools/print_offsets.c Thu Oct 4 00:21:52 2001
@@ -57,11 +57,8 @@
{ "IA64_TASK_PROCESSOR_OFFSET", offsetof (struct task_struct, processor) },
{ "IA64_TASK_THREAD_OFFSET", offsetof (struct task_struct, thread) },
{ "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) },
-#ifdef CONFIG_IA32_SUPPORT
- { "IA64_TASK_THREAD_SIGMASK_OFFSET",offsetof (struct task_struct, thread.un.sigmask) },
-#endif
#ifdef CONFIG_PERFMON
- { "IA64_TASK_PFM_NOTIFY_OFFSET", offsetof(struct task_struct, thread.pfm_pend_notify) },
+ { "IA64_TASK_PFM_MUST_BLOCK_OFFSET",offsetof(struct task_struct, thread.pfm_must_block) },
#endif
{ "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
{ "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) },
@@ -165,17 +162,18 @@
{ "IA64_SIGCONTEXT_FR6_OFFSET", offsetof (struct sigcontext, sc_fr[6]) },
{ "IA64_SIGCONTEXT_PR_OFFSET", offsetof (struct sigcontext, sc_pr) },
{ "IA64_SIGCONTEXT_R12_OFFSET", offsetof (struct sigcontext, sc_gr[12]) },
+ { "IA64_SIGCONTEXT_RBS_BASE_OFFSET",offsetof (struct sigcontext, sc_rbs_base) },
+ { "IA64_SIGCONTEXT_LOADRS_OFFSET", offsetof (struct sigcontext, sc_loadrs) },
{ "IA64_SIGFRAME_ARG0_OFFSET", offsetof (struct sigframe, arg0) },
{ "IA64_SIGFRAME_ARG1_OFFSET", offsetof (struct sigframe, arg1) },
{ "IA64_SIGFRAME_ARG2_OFFSET", offsetof (struct sigframe, arg2) },
- { "IA64_SIGFRAME_RBS_BASE_OFFSET", offsetof (struct sigframe, rbs_base) },
{ "IA64_SIGFRAME_HANDLER_OFFSET", offsetof (struct sigframe, handler) },
{ "IA64_SIGFRAME_SIGCONTEXT_OFFSET", offsetof (struct sigframe, sc) },
{ "IA64_CLONE_VFORK", CLONE_VFORK },
{ "IA64_CLONE_VM", CLONE_VM },
{ "IA64_CPU_IRQ_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.irq_count) },
{ "IA64_CPU_BH_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.bh_count) },
- { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET", offsetof (struct cpuinfo_ia64, phys_stacked_size_p8) },
+ { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET",offsetof (struct cpuinfo_ia64, phys_stacked_size_p8)},
};
static const char *tabs = "\t\t\t\t\t\t\t\t\t\t";
diff -urN linux-2.4.13/arch/parisc/kernel/traps.c linux-2.4.13-lia/arch/parisc/kernel/traps.c
--- linux-2.4.13/arch/parisc/kernel/traps.c Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/parisc/kernel/traps.c Wed Oct 24 18:17:29 2001
@@ -43,7 +43,6 @@
static inline void console_verbose(void)
{
- extern int console_loglevel;
console_loglevel = 15;
}
diff -urN linux-2.4.13/drivers/acpi/acpiconf.c linux-2.4.13-lia/drivers/acpi/acpiconf.c
--- linux-2.4.13/drivers/acpi/acpiconf.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/acpiconf.c Wed Oct 10 17:47:00 2001
@@ -0,0 +1,593 @@
+/*
+ * acpiconf.c - ACPI based kernel configuration
+ *
+ * Copyright (C) 2000-2001 Intel Corp.
+ * Copyright (C) 2000-2001 J.I. Lee <Jung-Ik.Lee@intel.com>
+ *
+ * Revision History:
+ * 9/15/2000 J.I.
+ * Major revision: for new ACPI initialization requirements
+ * 11/15/2000 J.I.
+ * Major revision: ACPI 2.0 tables support
+ * 04/23/2001 J.I.
+ * Rewrote functions to support multiple _PRTs of child P2Ps
+ * under root pci bus
+ */
+
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <asm/system.h>
+#include <asm/iosapic.h>
+#include <asm/efi.h>
+#include <asm/acpikcfg.h>
+#include "acpi.h"
+#include "osconf.h"
+#include "acpiconf.h"
+
+
+static int acpi_cf_initialized __initdata = 0;
+
+acpi_status __init
+acpi_cf_init (
+ void * rsdp
+ )
+{
+ acpi_status status;
+
+ acpi_os_bind_osd(ACPI_CF_PHASE_BOOTTIME);
+
+ status = acpi_initialize_subsystem ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:initialize_subsystem error=0x%x\n", status);
+ return status;
+ }
+ dprintk(("Acpi cfg:initialize_subsystem pass\n"));
+
+ status = acpi_load_tables ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:load firmware tables error=0x%x\n", status);
+ acpi_terminate();
+ return status;
+ }
+ dprintk(("Acpi cfg:load firmware tables pass\n"));
+
+ status = acpi_enable_subsystem (ACPI_FULL_INITIALIZATION);
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:enable_subsystem error=0x%x\n", status);
+ acpi_terminate();
+ return status;
+ }
+ dprintk(("Acpi cfg:enable_subsystem pass\n"));
+
+ acpi_cf_initialized++;
+
+ return AE_OK;
+}
+
+
+acpi_status __init
+acpi_cf_terminate ( void )
+{
+ acpi_status status;
+
+ if (! ACPI_CF_INITIALIZED()) {
+ acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
+ return AE_ERROR;
+ }
+
+ status = acpi_disable ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:disable fail=0x%x\n", status);
+ /* fall thru...*/
+ }
+
+ status = acpi_terminate ();
+ if (ACPI_FAILURE(status)) {
+ printk ("Acpi cfg:acpi terminate error=0x%x\n", status);
+ /* fall thru...*/
+ }
+
+ acpi_cf_cleanup();
+ acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
+
+ acpi_cf_initialized--;
+
+ return status;
+}
+
+
+acpi_status __init
+acpi_cf_get_pci_vectors (
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ )
+{
+ acpi_status status;
+ void *prts;
+
+ if (! ACPI_CF_INITIALIZED()) {
+ status = acpi_cf_init((void *)efi.acpi);
+ if (ACPI_FAILURE (status))
+ return status;
+ }
+
+ *vectors = NULL;
+ *num_pci_vectors = 0;
+
+ status = acpi_cf_get_prt (&prts);
+ if (ACPI_FAILURE (status)) {
+ printk("Acpi cfg: get prt fail\n");
+ return status;
+ }
+
+ status = acpi_cf_convert_prt_to_vectors (prts, vectors, num_pci_vectors);
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ if (ACPI_SUCCESS(status)) {
+ acpi_cf_print_pci_vectors (*vectors, *num_pci_vectors);
+ }
+#endif
+ printk("Acpi cfg: get PCI interrupt vectors %s\n",
+ (ACPI_SUCCESS(status))?"pass":"fail");
+
+ return status;
+}
+
+
+static pci_routing_table *pci_routing_tables[PCI_MAX_BUS] __initdata = {NULL};
+
+
+typedef struct _acpi_rpb {
+ NATIVE_UINT rpb_busnum;
+ NATIVE_UINT lastbusnum;
+ acpi_handle rpb_handle;
+} acpi_rpb_t;
+
+
+static acpi_status __init
+acpi_cf_evaluate_method (
+ acpi_handle handle,
+ UINT8 *method_name,
+ NATIVE_UINT *nuint
+ )
+{
+ UINT32 tnuint = 0;
+ acpi_status status;
+
+ acpi_buffer ret_buf;
+ acpi_object *ext_obj;
+ UINT8 buf[PATHNAME_MAX];
+
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) buf;
+
+ status = acpi_evaluate_object(handle, method_name, NULL, &ret_buf);
+ if (ACPI_FAILURE(status)) {
+ if (status == AE_NOT_FOUND) {
+ printk("Acpi cfg: no %s found\n", method_name);
+ } else {
+ printk("Acpi cfg: %s fail=0x%x\n", method_name, status);
+ }
+ } else {
+ ext_obj = (acpi_object *) ret_buf.pointer;
+
+ switch (ext_obj->type) {
+ case ACPI_TYPE_INTEGER:
+ tnuint = (NATIVE_UINT) ext_obj->integer.value;
+ break;
+ default:
+ printk("Acpi cfg: %s obj type incorrect\n", method_name);
+ status = AE_TYPE;
+ break;
+ }
+ }
+
+ *nuint = tnuint;
+ return (status);
+}
+
+
+static acpi_status __init
+acpi_cf_evaluate_PRT (
+ acpi_handle handle,
+ pci_routing_table **prt
+ )
+{
+ acpi_buffer acpi_buffer;
+ acpi_status status;
+
+ acpi_buffer.length = 0;
+ acpi_buffer.pointer = NULL;
+
+ status = acpi_get_irq_routing_table (handle, &acpi_buffer);
+
+ switch (status) {
+ case AE_BUFFER_OVERFLOW:
+ dprintk(("Acpi cfg: _PRT found. need %d bytes\n",
+ acpi_buffer.length));
+ break; /* found */
+ default:
+ printk("Acpi cfg: _PRT fail=0x%x\n", status);
+ case AE_NOT_FOUND:
+ return status;
+ }
+
+ *prt = (pci_routing_table *) acpi_os_callocate (acpi_buffer.length);
+ if (!*prt) {
+ printk("Acpi cfg: callocate %d bytes for _PRT fail\n",
+ acpi_buffer.length);
+ return AE_NO_MEMORY;
+ }
+ acpi_buffer.pointer = (void *) *prt;
+
+ status = acpi_get_irq_routing_table (handle, &acpi_buffer);
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg: _PRT fail=0x%x.\n", status);
+ acpi_os_free(prt);
+ }
+
+ return status;
+}
+
+static acpi_status __init
+acpi_cf_get_root_pci_callback (
+ acpi_handle handle,
+ UINT32 Level,
+ void *context,
+ void **retval
+ )
+{
+ NATIVE_UINT busnum = 0;
+ acpi_status status;
+ acpi_rpb_t rpb;
+ pci_routing_table *prt;
+
+ UINT8 path_name[PATHNAME_MAX];
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ acpi_buffer ret_buf;
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ /*
+ * get bus number of this pci root bridge
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__BBN, &busnum);
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s evaluate _BBN fail=0x%x\n",
+ path_name, status);
+ return (status);
+ }
+ printk("Acpi cfg:%s ROOT PCI bus %ld\n", path_name, busnum);
+
+ /*
+ * evaluate root pci bridge's _CRS for Bus number range for child P2P
+ * (bus min/max/len) - not yet.
+ */
+
+ /*
+ * get immediate _PRT of this root pci bridge if any
+ */
+ status = acpi_cf_evaluate_PRT (handle, &prt);
+ switch(status) {
+ case AE_NOT_FOUND:
+ break;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _PRT fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ dprintk(("Acpi cfg:%s bus %ld got _PRT\n", path_name, busnum));
+ acpi_cf_add_to_pci_routing_tables (busnum, prt);
+ break;
+ }
+
+
+ /*
+ * walk down this root pci bridge to get _PRTs if any
+ */
+ rpb.rpb_busnum = rpb.lastbusnum = busnum;
+ rpb.rpb_handle = handle;
+ status = acpi_walk_namespace ( ACPI_TYPE_DEVICE,
+ handle,
+ ACPI_UINT32_MAX,
+ acpi_cf_get_prt_callback,
+ &rpb,
+ NULL );
+ if (ACPI_FAILURE(status))
+ printk("Acpi cfg:%s walk namespace for _PRT error=0x%x\n",
+ path_name, status);
+
+ return (status);
+}
+
+
+/*
+ * handle _PRTs of immediate P2Ps of root pci.
+ */
+static acpi_status __init
+acpi_cf_associate_prt_to_bus (
+ acpi_handle handle,
+ acpi_rpb_t *rpb,
+ NATIVE_UINT *retbusnum,
+ NATIVE_UINT depth
+ )
+{
+ acpi_status status;
+ UINT32 segbus;
+ NATIVE_UINT devfn;
+ UINT8 bn;
+
+ UINT8 path_name[PATHNAME_MAX];
+ acpi_pci_id pci_id;
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ acpi_buffer ret_buf;
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ /*
+ * get devfn from _ADR
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__ADR, &devfn);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s _ADR fail=0x%x. Set busnum to %ld\n",
+ path_name, status, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s _ADR =0x%x\n", path_name, (UINT32)devfn));
+
+
+ /*
+ * access pci config space for bus number
+ * segbus = from rpb, devfn = from _ADR
+ */
+ pci_id.segment = 0;
+ pci_id.bus = (u16)(rpb->rpb_busnum & 0xffffffff);
+ pci_id.device = (u16)((devfn >> 16) & 0xffff);
+ pci_id.function = (u16)(devfn & 0xffff);
+
+ status = acpi_os_read_pci_configuration(&pci_id, PCI_PRIMARY_BUS,
+ &bn, 8);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+ path_name, status, segbus, (UINT32)devfn,
+ PCI_PRIMARY_BUS);
+ printk("Acpi cfg:%s Set busnum to %ld\n",
+ path_name, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s pribus %d\n", path_name, bn));
+
+
+ status = acpi_os_read_pci_configuration(&pci_id, PCI_SECONDARY_BUS,
+ &bn, 8);
+ if (ACPI_FAILURE(status)) {
+ *retbusnum = rpb->rpb_busnum + 1;
+ printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+ path_name, status, segbus, (UINT32)devfn,
+ PCI_SECONDARY_BUS);
+ printk("Acpi cfg:%s Set busnum to %ld\n",
+ path_name, *retbusnum);
+ return AE_OK;
+ }
+ dprintk(("Acpi cfg:%s busnum %d\n", path_name, bn));
+
+ *retbusnum = (NATIVE_UINT)bn;
+ return AE_OK;
+}
+
+
+static acpi_status __init
+acpi_cf_get_prt (
+ void **prts
+ )
+{
+ acpi_status status;
+
+ status = acpi_get_devices ( PCI_ROOT_HID_STRING,
+ acpi_cf_get_root_pci_callback,
+ NULL,
+ NULL );
+
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:get_device PCI ROOT HID error=0x%x\n", status);
+ }
+
+ *prts = (void *)pci_routing_tables;
+
+ return status;
+}
+
+static acpi_status __init
+acpi_cf_get_prt_callback (
+ acpi_handle handle,
+ UINT32 Level,
+ void *context,
+ void **retval
+ )
+{
+ pci_routing_table *prt;
+ NATIVE_UINT busnum = 0;
+ NATIVE_UINT temp = 0x0F;
+ acpi_status status;
+
+ UINT8 path_name[PATHNAME_MAX];
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+ acpi_buffer ret_buf;
+
+ ret_buf.length = PATHNAME_MAX;
+ ret_buf.pointer = (void *) path_name;
+
+ status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+ memset(path_name, 0, sizeof (path_name));
+#endif
+
+ status = acpi_cf_evaluate_PRT (handle, &prt);
+ switch(status) {
+ case AE_NOT_FOUND:
+ return AE_OK;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _PRT fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ }
+
+ /*
+ * evaluate _STA in case this device does not exist
+ */
+ status = acpi_cf_evaluate_method(handle, METHOD_NAME__STA, &temp);
+ switch(status) {
+ case AE_NOT_FOUND:
+ break;
+ default:
+ if (ACPI_FAILURE(status)) {
+ printk("Acpi cfg:%s _STA fail=0x%x\n",
+ path_name, status);
+ return status;
+ }
+ if (!(temp & ACPI_STA_DEVICE_PRESENT)) {
+ dprintk(("Acpi cfg:%s not exist. _PRT discarded\n",
+ path_name));
+ acpi_os_free(prt);
+ return AE_OK;
+ }
+ break;
+ }
+
+ /*
+ * associate a bus number to this _PRT since
+ * this _PRT is not on root pci bridge
+ */
+ acpi_cf_associate_prt_to_bus(handle, context, &busnum, 0);
+
+ printk("Acpi cfg:%s busnum %ld got _PRT\n", path_name, busnum);
+ acpi_cf_add_to_pci_routing_tables (busnum, prt);
+
+ return AE_OK;
+}
+
+
+static void __init
+acpi_cf_add_to_pci_routing_tables (
+ NATIVE_UINT busnum,
+ pci_routing_table *prt
+ )
+{
+ if ( busnum >= PCI_MAX_BUS ) {
+ printk("Acpi cfg:invalid pci bus number %ld\n", busnum);
+ acpi_os_free(prt);
+ return;
+ }
+
+ if (pci_routing_tables[busnum]) {
+ printk("Acpi cfg:duplicate PRT for pci bus %ld. overiding...\n", busnum);
+ acpi_os_free(pci_routing_tables[busnum]);
+ }
+
+ pci_routing_tables[busnum] = prt;
+}
+
+
+#define DUMPVECTOR(pv) printk("PCI bus=0x%x id=0x%x pin=0x%x irq=0x%x\n", pv->bus, pv->pci_id, pv->pin, pv->irq);
+
+static acpi_status __init
+acpi_cf_convert_prt_to_vectors (
+ void *prts,
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ )
+{
+ struct pci_vector_struct *pvec;
+ pci_routing_table **pprts, *prt, *prtf;
+ int nvec = 0;
+ int i;
+
+
+ pprts = (pci_routing_table **)prts;
+
+ for ( i = 0; i < PCI_MAX_BUS; i++) {
+ prt = *pprts++;
+ if (prt) {
+ for ( ; prt->length > 0; nvec++) {
+ prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+ }
+ }
+ }
+
+ *num_pci_vectors = nvec;
+ *vectors = acpi_os_callocate (sizeof(struct pci_vector_struct) * nvec);
+ if (!*vectors) {
+ printk("Acpi cfg: callocate for pci_vector error\n");
+ return AE_NO_MEMORY;
+ }
+
+ pvec = *vectors;
+ pprts = (pci_routing_table **)prts;
+
+ for ( i = 0; i < PCI_MAX_BUS; i++) {
+ prt = prtf = *pprts++;
+ if (prt) {
+ for ( ; prt->length > 0; pvec++) {
+ pvec->bus = (UINT16)i;
+ pvec->pci_id = prt->address;
+ pvec->pin = (UINT8)prt->pin;
+ pvec->irq = (UINT8)prt->source_index;
+
+ prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+ }
+ acpi_os_free((void *)prtf);
+ }
+ }
+
+ return AE_OK;
+}
+
+
+void __init
+acpi_cf_cleanup ( void )
+{
+ /* nothing to free, pci_vectors are used by the kernel */
+}
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+void __init
+acpi_cf_print_pci_vectors (
+ struct pci_vector_struct *vectors,
+ int num_pci_vectors
+ )
+{
+ struct pci_vector_struct *pvec;
+ int i;
+
+ printk("number of PCI interrupt vectors = %d\n", num_pci_vectors);
+
+ pvec = vectors;
+ for (i = 0; i < num_pci_vectors; i++) {
+ DUMPVECTOR(pvec);
+ pvec++;
+ }
+}
+#endif
diff -urN linux-2.4.13/drivers/acpi/acpiconf.h linux-2.4.13-lia/drivers/acpi/acpiconf.h
--- linux-2.4.13/drivers/acpi/acpiconf.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/acpiconf.h Fri Oct 12 09:03:25 2001
@@ -0,0 +1,63 @@
+/*
+ * acpiconf.h - ACPI based kernel configuration
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ */
+
+#include <linux/init.h>
+
+#define PCI_MAX_BUS 0x100
+#define ACPI_STA_DEVICE_PRESENT 0x01
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+#define ACPI_CF_INITIALIZED() (acpi_cf_initialized > 0)
+#undef dprintk
+#define dprintk(a) printk a
+#else
+#define ACPI_CF_INITIALIZED() 1
+#undef dprintk
+#define dprintk(a)
+#endif
+
+
+extern
+void __init
+acpi_os_bind_osd(int acpi_phase);
+
+
+static
+acpi_status __init
+acpi_cf_get_prt (void **prts);
+
+
+static
+acpi_status __init
+acpi_cf_get_prt_callback (
+ acpi_handle handle,
+ UINT32 level,
+ void *context,
+ void **retval
+ );
+
+
+static
+void __init
+acpi_cf_add_to_pci_routing_tables (
+ NATIVE_UINT busnum,
+ pci_routing_table *prt
+ );
+
+
+static
+acpi_status __init
+acpi_cf_convert_prt_to_vectors (
+ void *prts,
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ );
+
+
+void __init
+acpi_cf_cleanup ( void );
+
diff -urN linux-2.4.13/drivers/acpi/hardware/hwacpi.c linux-2.4.13-lia/drivers/acpi/hardware/hwacpi.c
--- linux-2.4.13/drivers/acpi/hardware/hwacpi.c Mon Sep 24 15:06:41 2001
+++ linux-2.4.13-lia/drivers/acpi/hardware/hwacpi.c Thu Oct 4 00:21:40 2001
@@ -196,6 +196,7 @@
{
acpi_status status = AE_NO_HARDWARE_RESPONSE;
+ u32 retries = 20;
FUNCTION_TRACE ("Hw_set_mode");
@@ -220,11 +221,14 @@
/* Give the platform some time to react */
- acpi_os_stall (5000);
+ while (retries-- > 0) {
+ acpi_os_stall (5000);
- if (acpi_hw_get_mode () == mode) {
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
- status = AE_OK;
+ if (acpi_hw_get_mode () == mode) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
+ status = AE_OK;
+ break;
+ }
}
return_ACPI_STATUS (status);
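The hwacpi.c change above replaces a single 5 ms stall with a bounded poll: retry up to 20 times, stalling between reads, until the hardware reports the requested mode. A standalone sketch of that pattern (the `fake_*` functions are hypothetical stand-ins, not the ACPI CA code):

```c
/* Hypothetical stand-ins for acpi_hw_get_mode()/acpi_os_stall(): the real
 * functions poll chipset state; here a counter models slow hardware that
 * switches mode after the third read. */
static int hw_reads;
static int fake_get_mode(void) { return (++hw_reads >= 3) ? 1 : 0; }
static void fake_stall_us(unsigned us) { (void) us; }

/* Bounded poll: retry up to 'retries' times, stalling 5000 us between
 * reads, until the hardware reports 'mode'.  0 on success, -1 on timeout. */
int wait_for_mode(int mode, unsigned retries)
{
	while (retries-- > 0) {
		fake_stall_us(5000);
		if (fake_get_mode() == mode)
			return 0;
	}
	return -1;
}
```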
diff -urN linux-2.4.13/drivers/acpi/include/actypes.h linux-2.4.13-lia/drivers/acpi/include/actypes.h
--- linux-2.4.13/drivers/acpi/include/actypes.h Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/actypes.h Thu Oct 4 00:21:40 2001
@@ -60,6 +60,7 @@
typedef int INT32;
typedef unsigned int UINT32;
typedef COMPILER_DEPENDENT_UINT64 UINT64;
+typedef long INT64;
typedef UINT64 NATIVE_UINT;
typedef INT64 NATIVE_INT;
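The new INT64 typedef relies on `long` being 64 bits wide, which holds on LP64 targets such as ia64 but not on ILP32 ones. A compile-time guard for that assumption could look like this (illustrative names, using C11 `_Static_assert`):

```c
/* LP64 assumption check: on ia64 (and other LP64 ABIs) 'long' is 64 bits,
 * so 'typedef long INT64' is valid.  These typedef names are illustrative
 * only; the real headers use INT64/UINT64. */
typedef long my_int64;
typedef unsigned long my_uint64;

_Static_assert(sizeof(my_int64) == 8, "INT64 requires an LP64 target");
_Static_assert(sizeof(my_uint64) == 8, "UINT64 requires an LP64 target");
```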
diff -urN linux-2.4.13/drivers/acpi/include/acutils.h linux-2.4.13-lia/drivers/acpi/include/acutils.h
--- linux-2.4.13/drivers/acpi/include/acutils.h Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/acutils.h Wed Oct 24 18:17:40 2001
@@ -383,6 +383,7 @@
/* Method name strings */
#define METHOD_NAME__HID "_HID"
+#define METHOD_NAME__CID "_CID"
#define METHOD_NAME__UID "_UID"
#define METHOD_NAME__ADR "_ADR"
#define METHOD_NAME__STA "_STA"
@@ -396,6 +397,11 @@
NATIVE_CHAR *object_name,
acpi_namespace_node *device_node,
acpi_integer *address);
+
+acpi_status
+acpi_ut_execute_CID (
+ acpi_namespace_node *device_node,
+ ACPI_DEVICE_ID *cid);
acpi_status
acpi_ut_execute_HID (
diff -urN linux-2.4.13/drivers/acpi/include/platform/acgcc.h linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h
--- linux-2.4.13/drivers/acpi/include/platform/acgcc.h Wed Oct 24 10:17:44 2001
+++ linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h Wed Oct 24 18:17:50 2001
@@ -42,11 +42,32 @@
/*! [Begin] no source code translation */
+#include <linux/interrupt.h>
+
+#include <asm/processor.h>
#include <asm/pal.h>
#define halt() ia64_pal_halt_light() /* PAL_HALT[_LIGHT] */
#define safe_halt() ia64_pal_halt(1) /* PAL_HALT */
+static inline void
+wbinvd (void)
+{
+ unsigned long flags, vector, position = 0;
+ long status;
+
+ do {
+ ia64_clear_ic(flags);
+ status = ia64_pal_cache_flush(0x3, (PAL_CACHE_FLUSH_INVALIDATE
+ | PAL_CACHE_FLUSH_CHK_INTRS),
+ &position, &vector);
+ local_irq_restore(flags);
+ if (status == 1) {
+ ia64_eoi();
+ hw_resend_irq(NULL, vector);
+ }
+ } while (status == 1);
+}
#define ACPI_ACQUIRE_GLOBAL_LOCK(GLptr, Acq) \
do { \
diff -urN linux-2.4.13/drivers/acpi/namespace/nsxfobj.c linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c
--- linux-2.4.13/drivers/acpi/namespace/nsxfobj.c Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c Wed Oct 24 18:18:06 2001
@@ -588,6 +588,7 @@
acpi_namespace_node *node;
u32 flags;
ACPI_DEVICE_ID device_id;
+ ACPI_DEVICE_ID compatible_id;
ACPI_GET_DEVICES_INFO *info;
@@ -628,7 +629,17 @@
}
if (STRNCMP (device_id.buffer, info->hid, sizeof (device_id.buffer)) != 0) {
- return (AE_OK);
+ status = acpi_ut_execute_CID (node, &compatible_id);
+ if (status == AE_NOT_FOUND) {
+ return (AE_OK);
+ }
+ else if (ACPI_FAILURE (status)) {
+ return (AE_CTRL_DEPTH);
+ }
+
+ if (STRNCMP (compatible_id.buffer, info->hid, sizeof (compatible_id.buffer)) != 0) {
+ return (AE_OK);
+ }
}
}
diff -urN linux-2.4.13/drivers/acpi/os.c linux-2.4.13-lia/drivers/acpi/os.c
--- linux-2.4.13/drivers/acpi/os.c Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/os.c Thu Oct 4 00:21:40 2001
@@ -31,6 +31,8 @@
* - Fixed improper kernel_thread parameters
*/
+#include <linux/config.h>
+
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mm.h>
@@ -48,7 +50,8 @@
#ifdef _IA64
#include <asm/hw_irq.h>
-#endif
+#include <asm/delay.h>
+#endif
#define _COMPONENT ACPI_OS_SERVICES
MODULE_NAME ("os")
@@ -61,6 +64,33 @@
/*****************************************************************************
+ * Function Binding
+ *****************************************************************************/
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+#include "osconf.h"
+
+struct acpi_osd acpi_osd_rt = {
+ /* these are runtime osd entries that differ from boottime entries */
+ acpi_os_allocate_rt,
+ acpi_os_callocate_rt,
+ acpi_os_free_rt,
+ acpi_os_queue_for_execution_rt,
+ acpi_os_read_pci_configuration_rt,
+ acpi_os_write_pci_configuration_rt,
+ acpi_os_stall_rt
+};
+#else
+#define acpi_os_allocate_rt acpi_os_allocate
+#define acpi_os_callocate_rt acpi_os_callocate
+#define acpi_os_free_rt acpi_os_free
+#define acpi_os_queue_for_execution_rt acpi_os_queue_for_execution
+#define acpi_os_read_pci_configuration_rt acpi_os_read_pci_configuration
+#define acpi_os_write_pci_configuration_rt acpi_os_write_pci_configuration
+#define acpi_os_stall_rt acpi_os_stall
+#endif
+
+/*****************************************************************************
* Debugger Stuff
*****************************************************************************/
@@ -137,13 +167,13 @@
}
void *
-acpi_os_allocate(u32 size)
+acpi_os_allocate_rt(u32 size)
{
return kmalloc(size, GFP_KERNEL);
}
void *
-acpi_os_callocate(u32 size)
+acpi_os_callocate_rt(u32 size)
{
void *ptr = acpi_os_allocate(size);
if (ptr)
@@ -153,7 +183,7 @@
}
void
-acpi_os_free(void *ptr)
+acpi_os_free_rt(void *ptr)
{
kfree(ptr);
}
@@ -233,12 +263,105 @@
(*acpi_irq_handler)(acpi_irq_context);
}
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+struct irqaction acpiirqaction;
+/*
+ * Code adapted from request_irq() and free_irq().
+ */
acpi_status
acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
{
-#ifdef _IA64
+ struct irqaction *act;
+ int retval;
+
+ if (irq >= NR_IRQS) {
+ printk("ACPI: install SCI handler fail: invalid irq%d\n", irq);
+ return AE_ERROR;
+ }
+
+ if (!handler) {
+ printk("ACPI: install SCI handler fail: invalid handler\n");
+ return AE_ERROR;
+ }
+
+ act = & acpiirqaction;
+
irq = isa_irq_to_vector(irq);
-#endif /*_IA64*/
+ acpi_irq_irq = irq;
+ acpi_irq_handler = handler;
+ acpi_irq_context = context;
+
+ act->handler = acpi_irq;
+ act->flags = SA_INTERRUPT | SA_SHIRQ;
+ act->mask = 0;
+ act->name = "acpi";
+ act->next = NULL;
+ act->dev_id = acpi_irq;
+
+ retval = setup_irq(irq, act);
+ if (retval) {
+ printk("ACPI: install SCI handler fail: setup_irq\n");
+ acpi_irq_handler = NULL;
+ return AE_ERROR;
+ }
+ printk("ACPI: install SCI %d handler pass\n", irq);
+
+ return AE_OK;
+}
+
+acpi_status
+acpi_os_remove_interrupt_handler(u32 irq, OSD_HANDLER handler)
+{
+ irq_desc_t *desc;
+ struct irqaction **p;
+ unsigned long flags;
+
+ if (!acpi_irq_handler)
+ return AE_OK;
+
+ irq = isa_irq_to_vector(irq);
+ if (irq != acpi_irq_irq) return AE_ERROR;
+
+ acpi_irq_handler = NULL;
+
+ desc = irq_desc(irq);
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ for (;;) {
+ struct irqaction * action = *p;
+ if (action) {
+ struct irqaction **pp = p;
+ p = &action->next;
+ if (action->dev_id != acpi_irq)
+ continue;
+
+ /* Found it - now remove it from the list of entries */
+ *pp = action->next;
+ if (!desc->action) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->shutdown(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+
+#ifdef CONFIG_SMP
+ /* Wait to make sure it's not being used on another CPU */
+ while (desc->status & IRQ_INPROGRESS)
+ barrier();
+#endif
+ return AE_OK;
+ }
+ printk("ACPI: Trying to free free IRQ%d\n",irq);
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return AE_OK;
+ }
+
+ return AE_OK;
+}
+
+#else
+acpi_status
+acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
+{
acpi_irq_irq = irq;
acpi_irq_handler = handler;
acpi_irq_context = context;
@@ -267,6 +390,7 @@
return AE_OK;
}
+#endif
/*
* Running in interpreter thread context, safe to sleep
@@ -280,7 +404,7 @@
}
void
-acpi_os_stall(u32 us)
+acpi_os_stall_rt(u32 us)
{
if (us > 10000) {
mdelay(us / 1000);
@@ -322,7 +446,7 @@
acpi_status
acpi_os_write_port(
ACPI_IO_ADDRESS port,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
switch (width)
@@ -375,7 +499,7 @@
acpi_status
acpi_os_write_memory(
ACPI_PHYSICAL_ADDRESS phys_addr,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
switch (width)
@@ -468,7 +592,7 @@
#else /*CONFIG_ACPI_PCI*/
acpi_status
-acpi_os_read_pci_configuration (
+acpi_os_read_pci_configuration_rt (
acpi_pci_id *pci_id,
u32 reg,
void *value,
@@ -502,10 +626,10 @@
}
acpi_status
-acpi_os_write_pci_configuration (
+acpi_os_write_pci_configuration_rt (
acpi_pci_id *pci_id,
u32 reg,
- u32 value,
+ NATIVE_UINT value,
u32 width)
{
int devfn = PCI_DEVFN(pci_id->device, pci_id->function);
@@ -620,6 +744,22 @@
acpi_os_free(dpc);
}
}
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+/*
+ * Queue for interpreter thread
+ */
+
+acpi_status
+acpi_os_queue_for_execution_rt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context)
+{
+ (*callback)(context);
+ return AE_OK;
+}
+#endif
acpi_status
acpi_os_queue_for_execution(
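The CONFIG_ACPI_KERNEL_CONFIG changes in os.c route allocation, stall, and PCI config-space accesses through a function-pointer table (`struct acpi_osd`) so that boot-time and run-time backends can be swapped with a single rebind. A minimal user-space sketch of that dispatch pattern (names hypothetical; malloc/free stand in for alloc_bootmem and kmalloc/kfree):

```c
#include <stdlib.h>

/* Ops table: one slot per OS-dependent service, mirroring struct acpi_osd. */
struct osd_ops {
	void *(*allocate)(unsigned size);
	void  (*free_)(void *ptr);
};

/* Two backends; in the kernel these would differ (bootmem vs. slab). */
static void *boot_alloc(unsigned size) { return malloc(size); }
static void  boot_free(void *p)        { free(p); }
static void *rt_alloc(unsigned size)   { return malloc(size); }
static void  rt_free(void *p)          { free(p); }

static const struct osd_ops boot_ops = { boot_alloc, boot_free };
static const struct osd_ops rt_ops   = { rt_alloc,  rt_free  };
static const struct osd_ops *osd = &boot_ops;

/* Callers always go through the indirection... */
void *osd_allocate(unsigned size) { return osd->allocate(size); }
void  osd_free(void *p)           { osd->free_(p); }

/* ...and the backend is rebound once, when boot configuration finishes,
 * like acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME) does in the patch. */
void osd_bind_runtime(void) { osd = &rt_ops; }
```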
diff -urN linux-2.4.13/drivers/acpi/osconf.c linux-2.4.13-lia/drivers/acpi/osconf.c
--- linux-2.4.13/drivers/acpi/osconf.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/osconf.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,286 @@
+/*
+ * osconf.c - ACPI OS-dependent functions for Kernel Boot/Configuration time
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/pci.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/sal.h>
+#include <asm/delay.h>
+
+#include "acpi.h"
+#include "osconf.h"
+
+
+static void * __init acpi_os_allocate_bt(u32 size);
+static void * __init acpi_os_callocate_bt(u32 size);
+static void __init acpi_os_free_bt(void *ptr);
+static void __init acpi_os_stall_bt(u32 us);
+
+static acpi_status __init
+acpi_os_queue_for_execution_bt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context
+ );
+
+static acpi_status __init
+acpi_os_read_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+
+static acpi_status __init
+acpi_os_write_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+
+
+extern struct acpi_osd acpi_osd_rt;
+static struct acpi_osd acpi_osd_bt __initdata = {
+ /* these are boottime osd entries that differ from runtime entries */
+ acpi_os_allocate_bt,
+ acpi_os_callocate_bt,
+ acpi_os_free_bt,
+ acpi_os_queue_for_execution_bt,
+ acpi_os_read_pci_configuration_bt,
+ acpi_os_write_pci_configuration_bt,
+ acpi_os_stall_bt
+};
+static struct acpi_osd *acpi_osd = &acpi_osd_rt;
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+static void __init
+acpi_cf_bm_statistics( void );
+#endif
+
+void __init
+acpi_os_bind_osd(int acpi_phase)
+{
+ switch (acpi_phase) {
+ case ACPI_CF_PHASE_BOOTTIME:
+ acpi_osd = &acpi_osd_bt;
+ printk("Acpi cfg:bind to Boot time Acpi OSD\n");
+ break;
+ case ACPI_CF_PHASE_RUNTIME:
+ default:
+ acpi_osd = &acpi_osd_rt;
+ printk("Acpi cfg:bind to Run time Acpi OSD\n");
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+ acpi_cf_bm_statistics();
+#endif
+ break;
+ }
+}
+
+void *
+acpi_os_allocate(u32 size)
+{
+ return acpi_osd->allocate(size);
+}
+
+void *
+acpi_os_callocate(u32 size)
+{
+ return acpi_osd->callocate(size);
+}
+
+void
+acpi_os_free(void *ptr)
+{
+ acpi_osd->free(ptr);
+ return;
+}
+
+void
+acpi_os_stall(u32 us)
+{
+ acpi_osd->stall(us);
+ return;
+}
+
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width)
+{
+ return acpi_osd->read_pci_configuration(pci_id, reg, value, width);
+}
+
+
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width)
+{
+ return acpi_osd->write_pci_configuration(pci_id, reg, value, width);
+}
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+/*
+ * Let's profile bootmem usage to see how much we consume. J.I.
+ */
+static unsigned long bm_alloc_size __initdata = 0;
+static unsigned long bm_alloc_size_max __initdata = 0;
+static unsigned long bm_alloc_count_max __initdata = 0;
+static unsigned long bm_free_count_max __initdata = 0;
+
+static void __init
+acpi_cf_bm_checkin(void *ptr, u32 size)
+{
+ bm_alloc_count_max++;
+ bm_alloc_size += size;
+ if (bm_alloc_size > bm_alloc_size_max)
+ bm_alloc_size_max = bm_alloc_size;
+};
+
+static void __init
+acpi_cf_bm_checkout(void *ptr, u32 size)
+{
+ bm_free_count_max++;
+ bm_alloc_size -= size;
+};
+
+static void __init
+acpi_cf_bm_statistics( void )
+{
+ printk("Acpi cfg:bm_alloc_size_max =%ld bytes\n", bm_alloc_size_max);
+ printk("Acpi cfg:bm_alloc_count_max=%ld\n", bm_alloc_count_max);
+ printk("Acpi cfg:bm_free_count_max =%ld\n", bm_free_count_max);
+}
+#endif
+
+
+static void * __init
+acpi_os_allocate_bt(u32 size)
+{
+ void *ptr;
+
+ size += sizeof(unsigned long);
+ ptr = alloc_bootmem(size);
+
+ if (ptr) {
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+ acpi_cf_bm_checkin(ptr, size);
+#endif
+ *((unsigned long *)ptr) = (unsigned long)size;
+ ptr += sizeof(unsigned long);
+ }
+
+ return ptr;
+}
+
+static void * __init
+acpi_os_callocate_bt(u32 size)
+{
+ void *ptr = acpi_os_allocate_bt(size);
+
+ return ptr;
+}
+
+static void __init
+acpi_os_free_bt(void *ptr)
+{
+ unsigned long size;
+
+ ptr -= sizeof(size);
+ size = *((unsigned long *)ptr);
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+ acpi_cf_bm_checkout(ptr, (unsigned long)size);
+#endif
+ //if (size)
+ free_bootmem (__pa((unsigned long)ptr), (u32)size);
+}
+
+
+static void __init
+acpi_os_stall_bt(u32 us)
+{
+ unsigned long start = ia64_get_itc();
+ unsigned long cycles = us*733; /* XXX: 733 or 800 */
+ while (ia64_get_itc() - start < cycles)
+ /* skip */;
+}
+
+
+static acpi_status __init
+acpi_os_queue_for_execution_bt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context)
+{
+ /*
+ * run callback immediately
+ */
+ (*callback)(context);
+ return AE_OK;
+}
+
+
+static acpi_status __init
+acpi_os_read_pci_configuration_bt (
+ acpi_pci_id *pci_id,
+ u32 reg,
+ void *value,
+ u32 width)
+{
+ unsigned int devfn;
+ s64 status;
+ u64 lval;
+
+ devfn = PCI_DEVFN(pci_id->device, pci_id->function);
+
+ switch (width)
+ {
+ case 8:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, &lval);
+ *(u8*)value = (u8)lval;
+ break;
+ case 16:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, &lval);
+ *(u16*)value = (u16)lval;
+ break;
+ case 32:
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, &lval);
+ *(u32*)value = (u32)lval;
+ break;
+ default:
+ BUG();
+ }
+
+ return status;
+}
+
+
+static acpi_status __init
+acpi_os_write_pci_configuration_bt (
+ acpi_pci_id *pci_id,
+ u32 reg,
+ NATIVE_UINT value,
+ u32 width)
+{
+ unsigned int devfn;
+ s64 status;
+
+ devfn = PCI_DEVFN(pci_id->device, pci_id->function);
+
+ switch (width)
+ {
+ case 8:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, value);
+ break;
+ case 16:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, value);
+ break;
+ case 32:
+ status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, value);
+ break;
+ default:
+ BUG();
+ }
+
+ return status;
+}
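Because `free_bootmem()` needs the size of the block being freed, `acpi_os_allocate_bt()` above prepends the allocation size to every block and `acpi_os_free_bt()` steps back over it. A user-space sketch of the same header trick (malloc stands in for alloc_bootmem; function names are hypothetical):

```c
#include <stdlib.h>

/* Allocate size + one header word; record the total size in the header
 * and hand the caller the memory just past it. */
void *sized_alloc(unsigned long size)
{
	unsigned long total = size + sizeof(unsigned long);
	unsigned long *hdr = malloc(total);

	if (!hdr)
		return NULL;
	*hdr = total;
	return hdr + 1;
}

/* Step back over the header to recover the total size the underlying
 * allocator needs (what free_bootmem() would be given), then free. */
unsigned long sized_free(void *ptr)
{
	unsigned long *hdr = (unsigned long *) ptr - 1;
	unsigned long total = *hdr;

	free(hdr);
	return total;
}
```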
diff -urN linux-2.4.13/drivers/acpi/osconf.h linux-2.4.13-lia/drivers/acpi/osconf.h
--- linux-2.4.13/drivers/acpi/osconf.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/osconf.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,57 @@
+/*
+ * osconf.h - ACPI OS-dependent headers for Kernel Boot/Configuration time
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <Jung-Ik.Lee@intel.com>
+ */
+
+
+struct acpi_osd {
+ void * (*allocate)(u32 size);
+ void * (*callocate)(u32 size);
+ void (*free)(void *ptr);
+ acpi_status (*queue_for_exec)(u32 pri, OSD_EXECUTION_CALLBACK cb, void *context);
+ acpi_status (*read_pci_configuration)(acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+ acpi_status (*write_pci_configuration)(acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+ void (*stall)(u32 us);
+};
+
+
+#define PCI_CONFIG_ADDRESS(bus, devfn, where) \
+ (((u64) bus << 16) | ((u64) (devfn & 0xff) << 8) | (where & 0xff))
+
+#define ACPI_CF_PHASE_BOOTTIME 0x00
+#define ACPI_CF_PHASE_RUNTIME 0x01
+
+
+/* acpi_osd functions */
+void * acpi_os_allocate(u32 size);
+void * acpi_os_callocate(u32 size);
+void acpi_os_free(void *ptr);
+void acpi_os_stall(u32 us);
+
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
+
+
+/* acpi_osd_rt functions */
+extern void * acpi_os_allocate_rt(u32 size);
+extern void * acpi_os_callocate_rt(u32 size);
+extern void acpi_os_free_rt(void *ptr);
+extern void acpi_os_stall_rt(u32 us);
+
+extern acpi_status
+acpi_os_queue_for_execution_rt(
+ u32 priority,
+ OSD_EXECUTION_CALLBACK callback,
+ void *context
+ );
+
+extern acpi_status
+acpi_os_read_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+
+extern acpi_status
+acpi_os_write_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
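The PCI_CONFIG_ADDRESS macro in osconf.h packs bus, devfn, and register offset into the single 64-bit address that the SAL config-read/write calls take: bus above bit 16, devfn in bits 15:8, offset in bits 7:0. A quick function-form check of that packing (names hypothetical):

```c
#include <stdint.h>

/* Same packing as the PCI_CONFIG_ADDRESS macro above. */
static uint64_t pci_config_address(uint64_t bus, uint64_t devfn, uint64_t where)
{
	return (bus << 16) | ((devfn & 0xff) << 8) | (where & 0xff);
}

/* PCI_DEVFN equivalent: device number in bits 7:3, function in bits 2:0. */
static unsigned pci_devfn(unsigned dev, unsigned fn)
{
	return (dev << 3) | (fn & 0x07);
}
```

For example, bus 1, device 2, function 3, offset 0x40 packs to 0x11340.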
diff -urN linux-2.4.13/drivers/acpi/ospm/include/ec.h linux-2.4.13-lia/drivers/acpi/ospm/include/ec.h
--- linux-2.4.13/drivers/acpi/ospm/include/ec.h Mon Sep 24 15:06:44 2001
+++ linux-2.4.13-lia/drivers/acpi/ospm/include/ec.h Thu Oct 4 00:21:40 2001
@@ -167,14 +167,14 @@
acpi_status
ec_io_read (
EC_CONTEXT *ec,
- u32 io_port,
+ ACPI_IO_ADDRESS io_port,
u8 *data,
EC_EVENT wait_event);
acpi_status
ec_io_write (
EC_CONTEXT *ec,
- u32 io_port,
+ ACPI_IO_ADDRESS io_port,
u8 data,
EC_EVENT wait_event);
diff -urN linux-2.4.13/drivers/acpi/ospm/system/sm_osl.c linux-2.4.13-lia/drivers/acpi/ospm/system/sm_osl.c
--- linux-2.4.13/drivers/acpi/ospm/system/sm_osl.c Mon Sep 24 15:06:44 2001
+++ linux-2.4.13-lia/drivers/acpi/ospm/system/sm_osl.c Thu Oct 4 00:21:40 2001
@@ -33,7 +33,9 @@
#include <asm/uaccess.h>
#include <linux/acpi.h>
#include <asm/io.h>
+#ifndef __ia64__
#include <linux/mc146818rtc.h>
+#endif
#include <linux/delay.h>
#include <acpi.h>
@@ -278,6 +280,7 @@
int *eof,
void *context)
{
+#ifndef _IA64
char *str = page;
int len;
u32 sec,min,hr;
@@ -351,6 +354,9 @@
*start = page;
return len;
+#else
+ return 0;
+#endif
}
static int get_date_field(char **str, u32 *value)
@@ -381,6 +387,7 @@
unsigned long count,
void *data)
{
+#ifndef _IA64
char buf[30];
char *str = buf;
u32 sec,min,hr;
@@ -520,6 +527,9 @@
error = 0;
out:
return error ? error : count;
+#else
+ return 0;
+#endif
}
static int
diff -urN linux-2.4.13/drivers/acpi/utilities/uteval.c linux-2.4.13-lia/drivers/acpi/utilities/uteval.c
--- linux-2.4.13/drivers/acpi/utilities/uteval.c Mon Sep 24 15:06:47 2001
+++ linux-2.4.13-lia/drivers/acpi/utilities/uteval.c Wed Oct 24 18:18:19 2001
@@ -115,6 +115,93 @@
/*******************************************************************************
*
+ * FUNCTION: Acpi_ut_execute_CID
+ *
+ * PARAMETERS: Device_node - Node for the device
+ * *Cid - Where the CID is returned
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Executes the _CID control method that returns the compatible
+ * ID of the device.
+ *
+ * NOTE: Internal function, no parameter validation
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ut_execute_CID (
+ acpi_namespace_node *device_node,
+ ACPI_DEVICE_ID *cid)
+{
+ acpi_operand_object *obj_desc;
+ acpi_status status;
+
+
+ FUNCTION_TRACE ("Ut_execute_CID");
+
+
+ /* Execute the method */
+
+ status = acpi_ns_evaluate_relative (device_node,
+ METHOD_NAME__CID, NULL, &obj_desc);
+ if (ACPI_FAILURE (status)) {
+ if (status == AE_NOT_FOUND) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "_CID on %4.4s was not found\n",
+ &device_node->name));
+ }
+
+ else {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "_CID on %4.4s failed %s\n",
+ &device_node->name, acpi_format_exception (status)));
+ }
+
+ return_ACPI_STATUS (status);
+ }
+
+ /* Did we get a return object? */
+
+ if (!obj_desc) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "No object was returned from _CID\n"));
+ return_ACPI_STATUS (AE_TYPE);
+ }
+
+ /*
+ * A _CID can return either a Number (32 bit compressed EISA ID) or
+ * a string
+ */
+ if ((obj_desc->common.type != ACPI_TYPE_INTEGER) &&
+ (obj_desc->common.type != ACPI_TYPE_STRING)) {
+ status = AE_TYPE;
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Type returned from _CID not a number or string: %s(%X) \n",
+ acpi_ut_get_type_name (obj_desc->common.type), obj_desc->common.type));
+ }
+
+ else {
+ if (obj_desc->common.type == ACPI_TYPE_INTEGER) {
+ /* Convert the Numeric CID to string */
+
+ acpi_ex_eisa_id_to_string ((u32) obj_desc->integer.value, cid->buffer);
+ }
+
+ else {
+ /* Copy the String CID from the returned object */
+
+ STRNCPY(cid->buffer, obj_desc->string.pointer, sizeof(cid->buffer));
+ }
+ }
+
+
+ /* On exit, we must delete the return object */
+
+ acpi_ut_remove_reference (obj_desc);
+
+ return_ACPI_STATUS (status);
+}
+
+/*******************************************************************************
+ *
* FUNCTION: Acpi_ut_execute_HID
*
* PARAMETERS: Device_node - Node for the device
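When _CID returns an integer, the new code above calls `acpi_ex_eisa_id_to_string()` to expand the 32-bit compressed EISA ID into its 7-character text form: three 5-bit letters followed by four hex digits. An independent reimplementation of that decoding as a sketch (not the ACPI CA source):

```c
#include <stdint.h>

/* Decode a compressed EISA ID (stored little-endian, as _CID/_HID return
 * it) into its 7-character text form, e.g. 0x030AD041 -> "PNP0A03".
 * 'out' must have room for 8 bytes. */
void eisa_id_to_string(uint32_t id, char out[8])
{
	static const char hex[] = "0123456789ABCDEF";
	/* Byte-swap first: the compressed ID is defined big-endian. */
	uint32_t sw = ((id & 0xff) << 24) | ((id & 0xff00) << 8) |
		      ((id >> 8) & 0xff00) | (id >> 24);

	out[0] = '@' + ((sw >> 26) & 0x1f);	/* three compressed letters */
	out[1] = '@' + ((sw >> 21) & 0x1f);
	out[2] = '@' + ((sw >> 16) & 0x1f);
	out[3] = hex[(sw >> 12) & 0xf];		/* four hex digits */
	out[4] = hex[(sw >> 8) & 0xf];
	out[5] = hex[(sw >> 4) & 0xf];
	out[6] = hex[sw & 0xf];
	out[7] = '\0';
}
```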
diff -urN linux-2.4.13/drivers/char/Config.in linux-2.4.13-lia/drivers/char/Config.in
--- linux-2.4.13/drivers/char/Config.in Wed Oct 24 10:17:45 2001
+++ linux-2.4.13-lia/drivers/char/Config.in Wed Oct 24 10:21:08 2001
@@ -207,6 +207,9 @@
dep_tristate '/dev/agpgart (AGP Support)' CONFIG_AGP $CONFIG_DRM_AGP
if [ "$CONFIG_AGP" != "n" ]; then
bool ' Intel 440LX/BX/GX and I815/I830M/I840/I850 support' CONFIG_AGP_INTEL
+ if [ "$CONFIG_IA64" != "n" ]; then
+ bool ' Intel 460GX support' CONFIG_AGP_I460
+ fi
bool ' Intel I810/I815/I830M (on-board) support' CONFIG_AGP_I810
bool ' VIA chipset support' CONFIG_AGP_VIA
bool ' AMD Irongate, 761, and 762 support' CONFIG_AGP_AMD
@@ -215,7 +218,17 @@
bool ' Serverworks LE/HE support' CONFIG_AGP_SWORKS
fi
-source drivers/char/drm/Config.in
+bool 'Direct Rendering Manager (XFree86 DRI support)' CONFIG_DRM
+
+if [ "$CONFIG_DRM" = "y" ]; then
+ bool ' Build drivers for new (XFree 4.1) DRM' CONFIG_DRM_NEW
+ if [ "$CONFIG_DRM_NEW" = "y" ]; then
+ source drivers/char/drm/Config.in
+ else
+ define_bool CONFIG_DRM_OLD y
+ source drivers/char/drm-4.0/Config.in
+ fi
+fi
if [ "$CONFIG_HOTPLUG" = "y" -a "$CONFIG_PCMCIA" != "n" ]; then
source drivers/char/pcmcia/Config.in
diff -urN linux-2.4.13/drivers/char/Makefile linux-2.4.13-lia/drivers/char/Makefile
--- linux-2.4.13/drivers/char/Makefile Wed Oct 24 10:17:45 2001
+++ linux-2.4.13-lia/drivers/char/Makefile Wed Oct 24 10:21:08 2001
@@ -25,7 +25,7 @@
misc.o pty.o random.o selection.o serial.o \
sonypi.o tty_io.o tty_ioctl.o generic_serial.o
-mod-subdirs := joystick ftape drm pcmcia
+mod-subdirs := joystick ftape drm pcmcia drm-4.0
list-multi :=
@@ -138,6 +138,7 @@
obj-$(CONFIG_MAGIC_SYSRQ) += sysrq.o
obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
+obj-$(CONFIG_SIM_SERIAL) += simserial.o
obj-$(CONFIG_ROCKETPORT) += rocket.o
obj-$(CONFIG_MOXA_SMARTIO) += mxser.o
obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
@@ -198,7 +199,8 @@
obj-$(CONFIG_QIC02_TAPE) += tpqic02.o
subdir-$(CONFIG_FTAPE) += ftape
-subdir-$(CONFIG_DRM) += drm
+subdir-$(CONFIG_DRM_NEW) += drm
+subdir-$(CONFIG_DRM_OLD) += drm-4.0
subdir-$(CONFIG_PCMCIA) += pcmcia
subdir-$(CONFIG_AGP) += agp
diff -urN linux-2.4.13/drivers/char/agp/agp.h linux-2.4.13-lia/drivers/char/agp/agp.h
--- linux-2.4.13/drivers/char/agp/agp.h Wed Oct 10 16:31:46 2001
+++ linux-2.4.13-lia/drivers/char/agp/agp.h Wed Oct 10 16:33:17 2001
@@ -84,8 +84,8 @@
void *dev_private_data;
struct pci_dev *dev;
gatt_mask *masks;
- unsigned long *gatt_table;
- unsigned long *gatt_table_real;
+ u32 *gatt_table;
+ u32 *gatt_table_real;
unsigned long scratch_page;
unsigned long gart_bus_addr;
unsigned long gatt_bus_addr;
@@ -111,6 +111,7 @@
void (*cleanup) (void);
void (*tlb_flush) (agp_memory *);
unsigned long (*mask_memory) (unsigned long, int);
+ unsigned long (*unmask_memory) (unsigned long);
void (*cache_flush) (void);
int (*create_gatt_table) (void);
int (*free_gatt_table) (void);
@@ -150,6 +151,10 @@
#define A_IDXFIX() (A_SIZE_FIX(agp_bridge.aperture_sizes) + i)
#define MAXKEY (4096 * 32)
+#ifndef max
+#define max(a,b) (((a)>(b))?(a):(b))
+#endif
+
#define AGPGART_MODULE_NAME "agpgart"
#define PFX AGPGART_MODULE_NAME ": "
@@ -209,6 +214,9 @@
#ifndef PCI_DEVICE_ID_INTEL_82443GX_1
#define PCI_DEVICE_ID_INTEL_82443GX_1 0x71a1
#endif
+#ifndef PCI_DEVICE_ID_INTEL_460GX
+#define PCI_DEVICE_ID_INTEL_460GX 0x84ea
+#endif
#ifndef PCI_DEVICE_ID_AMD_IRONGATE_0
#define PCI_DEVICE_ID_AMD_IRONGATE_0 0x7006
#endif
@@ -250,6 +258,15 @@
#define INTEL_AGPCTRL 0xb0
#define INTEL_NBXCFG 0x50
#define INTEL_ERRSTS 0x91
+
+/* Intel 460GX Registers */
+#define INTEL_I460_APBASE 0x10
+#define INTEL_I460_BAPBASE 0x98
+#define INTEL_I460_GXBCTL 0xa0
+#define INTEL_I460_AGPSIZ 0xa2
+#define INTEL_I460_ATTBASE 0xfe200000
+#define INTEL_I460_GATT_VALID (1UL << 24)
+#define INTEL_I460_GATT_COHERENT (1UL << 25)
/* intel i840 registers */
#define INTEL_I840_MCHCFG 0x50
diff -urN linux-2.4.13/drivers/char/agp/agpgart_be.c linux-2.4.13-lia/drivers/char/agp/agpgart_be.c
--- linux-2.4.13/drivers/char/agp/agpgart_be.c Wed Oct 10 16:31:46 2001
+++ linux-2.4.13-lia/drivers/char/agp/agpgart_be.c Wed Oct 10 16:33:17 2001
@@ -22,6 +22,7 @@
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
+ * 460GX support by Chris Ahna <christopher.j.ahna@intel.com>
*/
#include <linux/config.h>
#include <linux/version.h>
@@ -43,6 +44,9 @@
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/smplock.h>
#include <linux/agp_backend.h>
#include "agp.h"
@@ -60,7 +64,7 @@
EXPORT_SYMBOL(agp_backend_release);
static void flush_cache(void);
-
+
static struct agp_bridge_data agp_bridge;
static int agp_try_unsupported __initdata = 0;
@@ -205,19 +209,56 @@
agp_bridge.free_by_type(curr);
return;
}
- if (curr->page_count != 0) {
- for (i = 0; i < curr->page_count; i++) {
- curr->memory[i] &= ~(0x00000fff);
- agp_bridge.agp_destroy_page((unsigned long)
- phys_to_virt(curr->memory[i]));
+ if(agp_bridge.cant_use_aperture == 0) {
+ if (curr->page_count != 0) {
+ for (i = 0; i < curr->page_count; i++) {
+ curr->memory[i] = agp_bridge.unmask_memory(
+ curr->memory[i]);
+ agp_bridge.agp_destroy_page((unsigned long)
+ phys_to_virt(curr->memory[i]));
+ }
}
+ } else {
+ vfree(curr->vmptr);
}
+
agp_free_key(curr->key);
vfree(curr->memory);
kfree(curr);
MOD_DEC_USE_COUNT;
}
+#define IN_VMALLOC(_x) (((_x) >= VMALLOC_START) && ((_x) < VMALLOC_END))
+
+/*
+ * Look up and return the pte corresponding to addr. We only do this for
+ * agp_ioremap'ed addresses.
+ */
+static pte_t * agp_lookup_pte(unsigned long addr) {
+
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ if(!IN_VMALLOC(addr))
+ return NULL;
+
+ dir = pgd_offset_k(addr);
+ pmd = pmd_offset(dir, addr);
+
+ if(pmd) {
+ pte = pte_offset(pmd, addr);
+
+ if(pte) {
+ return pte;
+ } else {
+ return NULL;
+ }
+ } else {
+ return NULL;
+ }
+}
+
#define ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(unsigned long))
agp_memory *agp_allocate_memory(size_t page_count, u32 type)
@@ -247,24 +288,60 @@
scratch_pages = (page_count + ENTRIES_PER_PAGE - 1) / ENTRIES_PER_PAGE;
new = agp_create_memory(scratch_pages);
-
if (new == NULL) {
MOD_DEC_USE_COUNT;
return NULL;
}
- for (i = 0; i < page_count; i++) {
- new->memory[i] = agp_bridge.agp_alloc_page();
- if (new->memory[i] == 0) {
- /* Free this structure */
- agp_free_memory(new);
+ if(agp_bridge.cant_use_aperture == 0) {
+ for (i = 0; i < page_count; i++) {
+ new->memory[i] = agp_bridge.agp_alloc_page();
+
+ if (new->memory[i] == 0) {
+ /* Free this structure */
+ agp_free_memory(new);
+ return NULL;
+ }
+ new->memory[i] = agp_bridge.mask_memory(
+ virt_to_phys((void *) new->memory[i]),
+ type);
+ new->page_count++;
+ }
+ } else {
+ void *vmblock;
+ unsigned long vaddr, paddr;
+ pte_t *pte;
+
+ vmblock = __vmalloc(page_count << PAGE_SHIFT, GFP_KERNEL,
+#ifdef __ia64__
+ pgprot_writecombine(PAGE_KERNEL));
+#else
+ PAGE_KERNEL);
+#endif
+ if(vmblock == NULL) {
+ MOD_DEC_USE_COUNT;
return NULL;
}
- new->memory[i] = agp_bridge.mask_memory(
- virt_to_phys((void *) new->memory[i]),
- type);
- new->page_count++;
+
+ new->vmptr = vmblock;
+ vaddr = (unsigned long) vmblock;
+
+ for(i = 0; i < page_count; i++, vaddr += PAGE_SIZE) {
+ pte = agp_lookup_pte(vaddr);
+ if(pte == NULL) {
+ MOD_DEC_USE_COUNT;
+ return NULL;
+ }
+#ifdef __ia64__
+ paddr = pte_val(*pte) & _PFN_MASK;
+#else
+ paddr = pte_val(*pte) & PAGE_MASK;
+#endif
+ new->memory[i] = agp_bridge.mask_memory(paddr, type);
+ }
+
+ new->page_count = page_count;
}
return new;
@@ -353,12 +430,13 @@
curr->is_flushed = TRUE;
}
ret_val = agp_bridge.insert_memory(curr, pg_start, curr->type);
-
+
if (ret_val != 0) {
return ret_val;
}
curr->is_bound = TRUE;
curr->pg_start = pg_start;
+
return 0;
}
@@ -377,6 +455,7 @@
if (ret_val != 0) {
return ret_val;
}
+
curr->is_bound = FALSE;
curr->pg_start = 0;
return 0;
@@ -387,9 +466,9 @@
/*
* Driver routines - start
* Currently this module supports the following chipsets:
- * i810, i815, 440lx, 440bx, 440gx, i840, i850, via vp3, via mvp3,
- * via kx133, via kt133, amd irongate, amd 761, amd 762, ALi M1541,
- * and generic support for the SiS chipsets.
+ * i810, 440lx, 440bx, 440gx, 460gx, i840, i850, via vp3, via mvp3, via kx133,
+ * via kt133, amd irongate, ALi M1541, and generic support for the SiS
+ * chipsets.
*/
/* Generic Agp routines - Start */
@@ -614,7 +693,7 @@
for (page = virt_to_page(table); page <= virt_to_page(table_end); page++)
set_bit(PG_reserved, &page->flags);
- agp_bridge.gatt_table_real = (unsigned long *) table;
+ agp_bridge.gatt_table_real = (u32 *) table;
CACHE_FLUSH();
agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
(PAGE_SIZE * (1 << page_order)));
@@ -832,6 +911,11 @@
agp_bridge.agp_enable(mode);
}
+static unsigned long agp_generic_unmask_memory(unsigned long addr)
+{
+ return addr & ~(0x00000fff);
+}
+
/* End - Generic Agp routines */
#ifdef CONFIG_AGP_I810
@@ -1096,6 +1180,7 @@
agp_bridge.cleanup = intel_i810_cleanup;
agp_bridge.tlb_flush = intel_i810_tlbflush;
agp_bridge.mask_memory = intel_i810_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = intel_i810_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1399,6 +1484,633 @@
#endif /* CONFIG_AGP_I810 */
+#ifdef CONFIG_AGP_I460
+
+/* BIOS configures the chipset so that one of two apbase registers is used */
+static u8 intel_i460_dynamic_apbase = 0x10;
+
+/* 460 supports multiple GART page sizes, so GART pageshift is dynamic */
+static u8 intel_i460_pageshift = 12;
+
+/* Keep track of which is larger, chipset or kernel page size. */
+static u32 intel_i460_cpk = 1;
+
+/* Structure for tracking partial use of 4MB GART pages */
+static u32 **i460_pg_detail = NULL;
+static u32 *i460_pg_count = NULL;
+
+#define I460_CPAGES_PER_KPAGE (PAGE_SIZE >> intel_i460_pageshift)
+#define I460_KPAGES_PER_CPAGE ((1 << intel_i460_pageshift) >> PAGE_SHIFT)
+
+#define I460_SRAM_IO_DISABLE (1 << 4)
+#define I460_BAPBASE_ENABLE (1 << 3)
+#define I460_AGPSIZ_MASK 0x7
+#define I460_4M_PS (1 << 1)
+
+#define log2(x) ffz(~(x))
+
+static int intel_i460_fetch_size(void)
+{
+ int i;
+ u8 temp;
+ aper_size_info_8 *values;
+
+ /* Determine the GART page size */
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &temp);
+ intel_i460_pageshift = (temp & I460_4M_PS) ? 22 : 12;
+
+ values = A_SIZE_8(agp_bridge.aperture_sizes);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+
+ /* Exit now if the IO drivers for the GART SRAMS are turned off */
+ if(temp & I460_SRAM_IO_DISABLE) {
+ printk("[agpgart] GART SRAMS disabled on 460GX chipset\n");
+ printk("[agpgart] AGPGART operation not possible\n");
+ return 0;
+ }
+
+ /* Make sure we don't try to create a 2 ^ 23 entry GATT */
+ if((intel_i460_pageshift == 0) && ((temp & I460_AGPSIZ_MASK) == 4)) {
+ printk("[agpgart] We can't have a 32GB aperture with 4KB"
+ " GART pages\n");
+ return 0;
+ }
+
+ /* Determine the proper APBASE register */
+ if(temp & I460_BAPBASE_ENABLE)
+ intel_i460_dynamic_apbase = INTEL_I460_BAPBASE;
+ else intel_i460_dynamic_apbase = INTEL_I460_APBASE;
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+
+ /*
+ * Dynamically calculate the proper num_entries and page_order
+ * values for the defined aperture sizes. Take care not to
+ * shift off the end of values[i].size.
+ */
+ values[i].num_entries = (values[i].size << 8) >>
+ (intel_i460_pageshift - 12);
+ values[i].page_order = log2((sizeof(u32)*values[i].num_entries)
+ >> PAGE_SHIFT);
+ }
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+ /* Neglect control bits when matching up size_value */
+ if ((temp & I460_AGPSIZ_MASK) == values[i].size_value) {
+ agp_bridge.previous_size = agp_bridge.current_size = (void *) (values + i);
+ agp_bridge.aperture_size_idx = i;
+ return values[i].size;
+ }
+ }
+
+ return 0;
+}
+
+/* There isn't anything to do here since 460 has no GART TLB. */
+static void intel_i460_tlb_flush(agp_memory * mem)
+{
+ return;
+}
+
+/*
+ * This utility function is needed to prevent corruption of the control bits
+ * which are stored along with the aperture size in 460's AGPSIZ register
+ */
+static void intel_i460_write_agpsiz(u8 size_value)
+{
+ u8 temp;
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ,
+ ((temp & ~I460_AGPSIZ_MASK) | size_value));
+}
+
+static void intel_i460_cleanup(void)
+{
+ aper_size_info_8 *previous_size;
+
+ previous_size = A_SIZE_8(agp_bridge.previous_size);
+ intel_i460_write_agpsiz(previous_size->size_value);
+
+ if(intel_i460_cpk == 0)
+ {
+ vfree(i460_pg_detail);
+ vfree(i460_pg_count);
+ }
+}
+
+
+/* Control bits for Out-Of-GART coherency and Burst Write Combining */
+#define I460_GXBCTL_OOG (1UL << 0)
+#define I460_GXBCTL_BWC (1UL << 2)
+
+static int intel_i460_configure(void)
+{
+ union {
+ u32 small[2];
+ u64 large;
+ } temp;
+ u8 scratch;
+ int i;
+
+ aper_size_info_8 *current_size;
+
+ temp.large = 0;
+
+ current_size = A_SIZE_8(agp_bridge.current_size);
+ intel_i460_write_agpsiz(current_size->size_value);
+
+ /*
+ * Do the necessary rigmarole to read all eight bytes of APBASE.
+ * This has to be done since the AGP aperture can be above 4GB on
+ * 460 based systems.
+ */
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase,
+ &(temp.small[0]));
+ pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase + 4,
+ &(temp.small[1]));
+
+ /* Clear BAR control bits */
+ agp_bridge.gart_bus_addr = temp.large & ~((1UL << 3) - 1);
+
+ pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &scratch);
+ pci_write_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL,
+ (scratch & 0x02) | I460_GXBCTL_OOG | I460_GXBCTL_BWC);
+
+ /*
+ * Initialize partial allocation trackers if a GART page is bigger than
+ * a kernel page.
+ */
+ if(I460_CPAGES_PER_KPAGE >= 1) {
+ intel_i460_cpk = 1;
+ } else {
+ intel_i460_cpk = 0;
+
+ i460_pg_detail = (void *) vmalloc(sizeof(*i460_pg_detail) *
+ current_size->num_entries);
+ i460_pg_count = (void *) vmalloc(sizeof(*i460_pg_count) *
+ current_size->num_entries);
+
+ for (i = 0; i < current_size->num_entries; i++) {
+ i460_pg_count[i] = 0;
+ i460_pg_detail[i] = NULL;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_create_gatt_table(void) {
+
+ char *table;
+ int i;
+ int page_order;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ /*
+ * Load up the fixed address of the GART SRAMS which hold our
+ * GATT table.
+ */
+ table = (char *) __va(INTEL_I460_ATTBASE);
+
+ temp = agp_bridge.current_size;
+ page_order = A_SIZE_8(temp)->page_order;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ agp_bridge.gatt_table_real = (u32 *) table;
+ agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
+ (PAGE_SIZE * (1 << page_order)));
+ agp_bridge.gatt_bus_addr = virt_to_phys(agp_bridge.gatt_table_real);
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ return 0;
+}
+
+static int intel_i460_free_gatt_table(void)
+{
+ int num_entries;
+ int i;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ for (i = 0; i < num_entries; i++) {
+ agp_bridge.gatt_table[i] = 0;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ iounmap(agp_bridge.gatt_table);
+
+ return 0;
+}
+
+/* These functions are called when PAGE_SIZE exceeds the GART page size */
+
+static int intel_i460_insert_memory_cpk(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, j, k, num_entries;
+ void *temp;
+ unsigned int hold;
+ unsigned int read_back;
+
+ /*
+ * The rest of the kernel will compute page offsets in terms of
+ * PAGE_SIZE.
+ */
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ if((pg_start + I460_CPAGES_PER_KPAGE * mem->page_count) > num_entries) {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ j = pg_start;
+ while (j < (pg_start + I460_CPAGES_PER_KPAGE * mem->page_count)) {
+ if (!PGE_EMPTY(agp_bridge.gatt_table[j])) {
+ return -EBUSY;
+ }
+ j++;
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for (i = 0, j = pg_start; i < mem->page_count; i++) {
+
+ hold = (unsigned int) (mem->memory[i]);
+
+ for (k = 0; k < I460_CPAGES_PER_KPAGE; k++, j++, hold++)
+ agp_bridge.gatt_table[j] = hold;
+ }
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[j - 1];
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_cpk(agp_memory * mem, off_t pg_start,
+ int type)
+{
+ int i;
+ unsigned int read_back;
+
+ pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+ for (i = pg_start; i < (pg_start + I460_CPAGES_PER_KPAGE *
+ mem->page_count); i++)
+ agp_bridge.gatt_table[i] = 0;
+
+ /*
+ * The 460 spec says we have to read the last location written to
+ * make sure that all writes have taken effect
+ */
+ read_back = agp_bridge.gatt_table[i - 1];
+
+ return 0;
+}
+
+/*
+ * These functions are called when the GART page size exceeds PAGE_SIZE.
+ *
+ * This situation is interesting since AGP memory allocations that are
+ * smaller than a single GART page are possible. The structures i460_pg_count
+ * and i460_pg_detail track partial allocation of the large GART pages to
+ * work around this issue.
+ *
+ * i460_pg_count[pg_num] tracks the number of kernel pages in use within
+ * GART page pg_num. i460_pg_detail[pg_num] is an array containing a
+ * pseudo-GART entry for each of the aforementioned kernel pages. The whole
+ * of i460_pg_detail is equivalent to a giant GATT with page size equal to
+ * that of the kernel.
+ */
+
+static void *intel_i460_alloc_large_page(int pg_num)
+{
+ int i;
+ void *bp, *bp_end;
+ struct page *page;
+
+ i460_pg_detail[pg_num] = (void *) vmalloc(sizeof(u32) *
+ I460_KPAGES_PER_CPAGE);
+ if(i460_pg_detail[pg_num] == NULL) {
+ printk("[agpgart] Out of memory, we're in trouble...\n");
+ return NULL;
+ }
+
+ for(i = 0; i < I460_KPAGES_PER_CPAGE; i++)
+ i460_pg_detail[pg_num][i] = 0;
+
+ bp = (void *) __get_free_pages(GFP_KERNEL,
+ intel_i460_pageshift - PAGE_SHIFT);
+ if(bp == NULL) {
+ printk("[agpgart] Couldn't alloc 4M GART page...\n");
+ return NULL;
+ }
+
+ bp_end = bp + ((PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT))) - 1);
+
+ for (page = virt_to_page(bp); page <= virt_to_page(bp_end); page++)
+ {
+ atomic_inc(&page->count);
+ set_bit(PG_locked, &page->flags);
+ atomic_inc(&agp_bridge.current_memory_agp);
+ }
+
+ return bp;
+}
+
+static void intel_i460_free_large_page(int pg_num, unsigned long addr)
+{
+ struct page *page;
+ void *bp, *bp_end;
+
+ bp = (void *) __va(addr);
+ bp_end = bp + (PAGE_SIZE *
+ (1 << (intel_i460_pageshift - PAGE_SHIFT)));
+
+ vfree(i460_pg_detail[pg_num]);
+ i460_pg_detail[pg_num] = NULL;
+
+ for (page = virt_to_page(bp); page < virt_to_page(bp_end); page++)
+ {
+ atomic_dec(&page->count);
+ clear_bit(PG_locked, &page->flags);
+ wake_up(&page->wait);
+ atomic_dec(&agp_bridge.current_memory_agp);
+ }
+
+ free_pages((unsigned long) bp, intel_i460_pageshift - PAGE_SHIFT);
+}
+
+static int intel_i460_insert_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ if(end_pg > num_entries)
+ {
+ printk("[agpgart] Looks like we're out of AGP memory\n");
+ return -EINVAL;
+ }
+
+ /* Check if the requested region of the aperture is free */
+ for(pg = start_pg; pg <= end_pg; pg++)
+ {
+ /* Allocate new GART pages if necessary */
+ if(i460_pg_detail[pg] == NULL) {
+ temp = intel_i460_alloc_large_page(pg);
+ if(temp == NULL)
+ return -ENOMEM;
+ agp_bridge.gatt_table[pg] = agp_bridge.mask_memory(
+ (unsigned long) temp, 0);
+ read_back = agp_bridge.gatt_table[pg];
+ }
+
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++)
+ {
+ if(i460_pg_detail[pg][idx] != 0)
+ return -EBUSY;
+ }
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for(pg = start_pg, i = 0; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ i460_pg_detail[pg][idx] = agp_bridge.gatt_table[pg] +
+ ((idx * PAGE_SIZE) >> 12);
+ i460_pg_count[pg]++;
+
+ /* Finally we fill in mem->memory... */
+ mem->memory[i] = ((unsigned long) (0xffffff &
+ i460_pg_detail[pg][idx])) << 12;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_i460_remove_memory_kpc(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+ int num_entries;
+ void *temp;
+ unsigned int read_back;
+ unsigned long addr;
+
+ temp = agp_bridge.current_size;
+ num_entries = A_SIZE_8(temp)->num_entries;
+
+ /* Figure out what pg_start means in terms of our large GART pages */
+ start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+ start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+ end_pg = (pg_start + mem->page_count - 1) /
+ I460_KPAGES_PER_CPAGE;
+ end_offset = (pg_start + mem->page_count - 1) %
+ I460_KPAGES_PER_CPAGE;
+
+ for(i = 0, pg = start_pg; pg <= end_pg; pg++)
+ {
+ for(idx = ((pg == start_pg) ? start_offset : 0);
+ idx < ((pg == end_pg) ? (end_offset + 1)
+ : I460_KPAGES_PER_CPAGE);
+ idx++, i++)
+ {
+ mem->memory[i] = 0;
+ i460_pg_detail[pg][idx] = 0;
+ i460_pg_count[pg]--;
+ }
+
+ /* Free GART pages if they are unused */
+ if(i460_pg_count[pg] == 0) {
+ addr = (0xffffffUL & (unsigned long)
+ (agp_bridge.gatt_table[pg])) << 12;
+
+ agp_bridge.gatt_table[pg] = 0;
+ read_back = agp_bridge.gatt_table[pg];
+
+ intel_i460_free_large_page(pg, addr);
+ }
+ }
+
+ return 0;
+}
+
+/* Dummy routines to call the appropriate {cpk,kpc} function */
+
+static int intel_i460_insert_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_insert_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_insert_memory_kpc(mem, pg_start, type);
+}
+
+static int intel_i460_remove_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ if(intel_i460_cpk)
+ return intel_i460_remove_memory_cpk(mem, pg_start, type);
+ else
+ return intel_i460_remove_memory_kpc(mem, pg_start, type);
+}
+
+/*
+ * If the kernel page size is smaller than the chipset page size, we don't
+ * want to allocate memory until we know where it is to be bound in the
+ * aperture (a multi-kernel-page alloc might fit inside of an already
+ * allocated GART page). Consequently, don't allocate or free anything
+ * if i460_cpk (meaning chipset pages per kernel page) isn't set.
+ *
+ * Let's just hope nobody counts on the allocated AGP memory being there
+ * before bind time (I don't think current drivers do)...
+ */
+static unsigned long intel_i460_alloc_page(void)
+{
+ if(intel_i460_cpk)
+ return agp_generic_alloc_page();
+
+ /* Returning NULL would cause problems */
+ return ((unsigned long) ~0UL);
+}
+
+static void intel_i460_destroy_page(unsigned long page)
+{
+ if(intel_i460_cpk)
+ agp_generic_destroy_page(page);
+}
+
+static gatt_mask intel_i460_masks[] =
+{
+ {
+ INTEL_I460_GATT_VALID,
+ 0
+ }
+};
+
+static unsigned long intel_i460_mask_memory(unsigned long addr, int type)
+{
+ /* Make sure the returned address is a valid GATT entry */
+ return (agp_bridge.masks[0].mask | (((addr &
+ ~((1 << intel_i460_pageshift) - 1)) & 0xffffff000) >> 12));
+}
+
+static unsigned long intel_i460_unmask_memory(unsigned long addr)
+{
+ /* Turn a GATT entry into a physical address */
+ return ((addr & 0xffffff) << 12);
+}
+
+static aper_size_info_8 intel_i460_sizes[3] =
+{
+ /*
+ * The 32GB aperture is only available with a 4M GART page size.
+ * Due to the dynamic GART page size, we can't figure out page_order
+ * or num_entries until runtime.
+ */
+ {32768, 0, 0, 4},
+ {1024, 0, 0, 2},
+ {256, 0, 0, 1}
+};
+
+static int __init intel_i460_setup (struct pci_dev *pdev)
+{
+
+ agp_bridge.masks = intel_i460_masks;
+ agp_bridge.num_of_masks = 1;
+ agp_bridge.aperture_sizes = (void *) intel_i460_sizes;
+ agp_bridge.size_type = U8_APER_SIZE;
+ agp_bridge.num_aperture_sizes = 3;
+ agp_bridge.dev_private_data = NULL;
+ agp_bridge.needs_scratch_page = FALSE;
+ agp_bridge.configure = intel_i460_configure;
+ agp_bridge.fetch_size = intel_i460_fetch_size;
+ agp_bridge.cleanup = intel_i460_cleanup;
+ agp_bridge.tlb_flush = intel_i460_tlb_flush;
+ agp_bridge.mask_memory = intel_i460_mask_memory;
+ agp_bridge.unmask_memory = intel_i460_unmask_memory;
+ agp_bridge.agp_enable = agp_generic_agp_enable;
+ agp_bridge.cache_flush = global_cache_flush;
+ agp_bridge.create_gatt_table = intel_i460_create_gatt_table;
+ agp_bridge.free_gatt_table = intel_i460_free_gatt_table;
+ agp_bridge.insert_memory = intel_i460_insert_memory;
+ agp_bridge.remove_memory = intel_i460_remove_memory;
+ agp_bridge.alloc_by_type = agp_generic_alloc_by_type;
+ agp_bridge.free_by_type = agp_generic_free_by_type;
+ agp_bridge.agp_alloc_page = intel_i460_alloc_page;
+ agp_bridge.agp_destroy_page = intel_i460_destroy_page;
+#if 0
+ agp_bridge.suspend = ??;
+ agp_bridge.resume = ??;
+#endif
+ agp_bridge.cant_use_aperture = 1;
+
+ return 0;
+
+ (void) pdev; /* unused */
+}
+
+#endif /* CONFIG_AGP_I460 */
+
#ifdef CONFIG_AGP_INTEL
static int intel_fetch_size(void)
@@ -1579,6 +2291,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1612,6 +2325,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1645,6 +2359,7 @@
agp_bridge.cleanup = intel_cleanup;
agp_bridge.tlb_flush = intel_tlbflush;
agp_bridge.mask_memory = intel_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1765,6 +2480,7 @@
agp_bridge.cleanup = via_cleanup;
agp_bridge.tlb_flush = via_tlbflush;
agp_bridge.mask_memory = via_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1879,6 +2595,7 @@
agp_bridge.cleanup = sis_cleanup;
agp_bridge.tlb_flush = sis_tlbflush;
agp_bridge.mask_memory = sis_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1901,8 +2618,8 @@
#ifdef CONFIG_AGP_AMD
typedef struct _amd_page_map {
- unsigned long *real;
- unsigned long *remapped;
+ u32 *real;
+ u32 *remapped;
} amd_page_map;
static struct _amd_irongate_private {
@@ -1915,7 +2632,7 @@
{
int i;
- page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL);
+ page_map->real = (u32 *) __get_free_page(GFP_KERNEL);
if (page_map->real == NULL) {
return -ENOMEM;
}
@@ -2170,7 +2887,7 @@
off_t pg_start, int type)
{
int i, j, num_entries;
- unsigned long *cur_gatt;
+ u32 *cur_gatt;
unsigned long addr;
num_entries = A_SIZE_LVL2(agp_bridge.current_size)->num_entries;
@@ -2210,7 +2927,7 @@
int type)
{
int i;
- unsigned long *cur_gatt;
+ u32 *cur_gatt;
unsigned long addr;
if (type != 0 || mem->type != 0) {
@@ -2257,6 +2974,7 @@
agp_bridge.cleanup = amd_irongate_cleanup;
agp_bridge.tlb_flush = amd_irongate_tlbflush;
agp_bridge.mask_memory = amd_irongate_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = global_cache_flush;
agp_bridge.create_gatt_table = amd_create_gatt_table;
@@ -2505,6 +3223,7 @@
agp_bridge.cleanup = ali_cleanup;
agp_bridge.tlb_flush = ali_tlbflush;
agp_bridge.mask_memory = ali_mask_memory;
+ agp_bridge.unmask_memory = agp_generic_unmask_memory;
agp_bridge.agp_enable = agp_generic_agp_enable;
agp_bridge.cache_flush = ali_cache_flush;
agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -3287,6 +4006,15 @@
#endif /* CONFIG_AGP_INTEL */
+#ifdef CONFIG_AGP_I460
+ { PCI_DEVICE_ID_INTEL_460GX,
+ PCI_VENDOR_ID_INTEL,
+ INTEL_460GX,
+ "Intel",
+ "460GX",
+ intel_i460_setup },
+#endif
+
#ifdef CONFIG_AGP_SIS
{ PCI_DEVICE_ID_SI_630,
PCI_VENDOR_ID_SI,
@@ -3455,6 +4183,18 @@
return -ENODEV;
}
+static int agp_check_supported_device(struct pci_dev *dev) {
+
+ int i;
+
+ for(i = 0; i < ARRAY_SIZE (agp_bridge_info); i++) {
+ if(dev->vendor == agp_bridge_info[i].vendor_id &&
+ dev->device == agp_bridge_info[i].device_id)
+ return 1;
+ }
+
+ return 0;
+}
/* Supported Device Scanning routine */
@@ -3464,8 +4204,14 @@
u8 cap_ptr = 0x00;
u32 cap_id, scratch;
- if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, NULL)) == NULL)
- return -ENODEV;
+ /*
+ * Some systems have multiple host bridges (i.e. BigSur), so
+ * we can't just use the first one we find.
+ */
+ do {
+ if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, dev)) == NULL)
+ return -ENODEV;
+ } while(!agp_check_supported_device(dev));
agp_bridge.dev = dev;
diff -urN linux-2.4.13/drivers/char/drm/Config.in linux-2.4.13-lia/drivers/char/drm/Config.in
--- linux-2.4.13/drivers/char/drm/Config.in Wed Aug 8 09:42:10 2001
+++ linux-2.4.13-lia/drivers/char/drm/Config.in Thu Oct 4 00:21:40 2001
@@ -5,12 +5,9 @@
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
#
-bool 'Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)' CONFIG_DRM
-if [ "$CONFIG_DRM" != "n" ]; then
- tristate ' 3dfx Banshee/Voodoo3+' CONFIG_DRM_TDFX
- tristate ' 3dlabs GMX 2000' CONFIG_DRM_GAMMA
- tristate ' ATI Rage 128' CONFIG_DRM_R128
- dep_tristate ' ATI Radeon' CONFIG_DRM_RADEON $CONFIG_AGP
- dep_tristate ' Intel I810' CONFIG_DRM_I810 $CONFIG_AGP
- dep_tristate ' Matrox g200/g400' CONFIG_DRM_MGA $CONFIG_AGP
-fi
+tristate ' 3dfx Banshee/Voodoo3+' CONFIG_DRM_TDFX
+tristate ' 3dlabs GMX 2000' CONFIG_DRM_GAMMA
+tristate ' ATI Rage 128' CONFIG_DRM_R128
+dep_tristate ' ATI Radeon' CONFIG_DRM_RADEON $CONFIG_AGP
+dep_tristate ' Intel I810' CONFIG_DRM_I810 $CONFIG_AGP
+dep_tristate ' Matrox g200/g400' CONFIG_DRM_MGA $CONFIG_AGP
diff -urN linux-2.4.13/drivers/char/drm/ati_pcigart.h linux-2.4.13-lia/drivers/char/drm/ati_pcigart.h
--- linux-2.4.13/drivers/char/drm/ati_pcigart.h Mon Sep 24 15:06:57 2001
+++ linux-2.4.13-lia/drivers/char/drm/ati_pcigart.h Thu Oct 4 00:21:40 2001
@@ -30,7 +30,10 @@
#define __NO_VERSION__
#include "drmP.h"
-#if PAGE_SIZE == 8192
+#if PAGE_SIZE == 16384
+# define ATI_PCIGART_TABLE_ORDER 1
+# define ATI_PCIGART_TABLE_PAGES (1 << 1)
+#elif PAGE_SIZE == 8192
# define ATI_PCIGART_TABLE_ORDER 2
# define ATI_PCIGART_TABLE_PAGES (1 << 2)
#elif PAGE_SIZE == 4096
@@ -103,6 +106,7 @@
goto done;
}
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
if ( !dev->pdev ) {
DRM_ERROR( "PCI device unknown!\n" );
goto done;
@@ -117,6 +121,9 @@
address = 0;
goto done;
}
+#else
+ bus_address = virt_to_bus( (void *)address );
+#endif
pci_gart = (u32 *)address;
@@ -126,6 +133,7 @@
memset( pci_gart, 0, ATI_MAX_PCIGART_PAGES * sizeof(u32) );
for ( i = 0 ; i < pages ; i++ ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
/* we need to support large memory configurations */
entry->busaddr[i] = pci_map_single(dev->pdev,
page_address( entry->pagelist[i] ),
@@ -139,7 +147,9 @@
goto done;
}
page_base = (u32) entry->busaddr[i];
-
+#else
+ page_base = page_to_bus( entry->pagelist[i] );
+#endif
for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) {
*pci_gart++ = cpu_to_le32( page_base );
page_base += ATI_PCIGART_PAGE_SIZE;
@@ -164,6 +174,7 @@
unsigned long addr,
dma_addr_t bus_addr)
{
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
drm_sg_mem_t *entry = dev->sg;
unsigned long pages;
int i;
@@ -188,6 +199,8 @@
PAGE_SIZE, PCI_DMA_TODEVICE);
}
}
+
+#endif
if ( addr ) {
DRM(ati_free_pcigart_table)( addr );
diff -urN linux-2.4.13/drivers/char/drm/drmP.h linux-2.4.13-lia/drivers/char/drm/drmP.h
--- linux-2.4.13/drivers/char/drm/drmP.h Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drmP.h Thu Oct 4 00:21:52 2001
@@ -366,13 +366,13 @@
if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
/* Mapping helper macros */
-#define DRM_IOREMAP(map) \
- (map)->handle = DRM(ioremap)( (map)->offset, (map)->size )
+#define DRM_IOREMAP(map, dev) \
+ (map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
-#define DRM_IOREMAPFREE(map) \
+#define DRM_IOREMAPFREE(map, dev) \
do { \
if ( (map)->handle && (map)->size ) \
- DRM(ioremapfree)( (map)->handle, (map)->size ); \
+ DRM(ioremapfree)( (map)->handle, (map)->size, (dev) ); \
} while (0)
#define DRM_FIND_MAP(_map, _o) \
@@ -826,8 +826,8 @@
extern unsigned long DRM(alloc_pages)(int order, int area);
extern void DRM(free_pages)(unsigned long address, int order,
int area);
-extern void *DRM(ioremap)(unsigned long offset, unsigned long size);
-extern void DRM(ioremapfree)(void *pt, unsigned long size);
+extern void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev);
+extern void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev);
#if __REALLY_HAVE_AGP
extern agp_memory *DRM(alloc_agp)(int pages, u32 type);
diff -urN linux-2.4.13/drivers/char/drm/drm_agpsupport.h linux-2.4.13-lia/drivers/char/drm/drm_agpsupport.h
--- linux-2.4.13/drivers/char/drm/drm_agpsupport.h Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_agpsupport.h Thu Oct 4 00:21:40 2001
@@ -275,6 +275,7 @@
case INTEL_I815: head->chipset = "Intel i815"; break;
case INTEL_I840: head->chipset = "Intel i840"; break;
case INTEL_I850: head->chipset = "Intel i850"; break;
+ case INTEL_460GX: head->chipset = "Intel 460GX"; break;
#endif
case VIA_GENERIC: head->chipset = "VIA"; break;
diff -urN linux-2.4.13/drivers/char/drm/drm_bufs.h linux-2.4.13-lia/drivers/char/drm/drm_bufs.h
--- linux-2.4.13/drivers/char/drm/drm_bufs.h Fri Aug 10 18:14:41 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_bufs.h Thu Oct 4 00:21:40 2001
@@ -107,7 +107,7 @@
switch ( map->type ) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
-#if !defined(__sparc__) && !defined(__alpha__)
+#if !defined(__sparc__) && !defined(__alpha__) && !defined(__ia64__)
if ( map->offset + map->size < map->offset ||
map->offset < virt_to_phys(high_memory) ) {
DRM(free)( map, sizeof(*map), DRM_MEM_MAPS );
@@ -124,7 +124,7 @@
MTRR_TYPE_WRCOMB, 1 );
}
#endif
- map->handle = DRM(ioremap)( map->offset, map->size );
+ map->handle = DRM(ioremap)( map->offset, map->size, dev );
break;
case _DRM_SHM:
@@ -249,7 +249,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
diff -urN linux-2.4.13/drivers/char/drm/drm_drv.h linux-2.4.13-lia/drivers/char/drm/drm_drv.h
--- linux-2.4.13/drivers/char/drm/drm_drv.h Wed Oct 24 10:17:46 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_drv.h Wed Oct 24 10:21:09 2001
@@ -439,7 +439,7 @@
DRM_DEBUG( "mtrr_del=%d\n", retcode );
}
#endif
- DRM(ioremapfree)( map->handle, map->size );
+ DRM(ioremapfree)( map->handle, map->size, dev );
break;
case _DRM_SHM:
vfree(map->handle);
diff -urN linux-2.4.13/drivers/char/drm/drm_memory.h linux-2.4.13-lia/drivers/char/drm/drm_memory.h
--- linux-2.4.13/drivers/char/drm/drm_memory.h Fri Aug 10 18:14:41 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_memory.h Thu Oct 4 00:21:40 2001
@@ -306,9 +306,14 @@
}
}
-void *DRM(ioremap)(unsigned long offset, unsigned long size)
+void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
{
void *pt;
+#if __REALLY_HAVE_AGP
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+#endif
if (!size) {
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
@@ -316,12 +321,51 @@
return NULL;
}
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 0)
+ goto standard_ioremap;
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = (drm_map_list_t *)list;
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+ if(map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
+ break;
+ }
+
+ if(agpmem == NULL)
+ goto ioremap_failure;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
+ }
+
+standard_ioremap:
+#endif
if (!(pt = ioremap(offset, size))) {
+#if __REALLY_HAVE_AGP
+ioremap_failure:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
spin_unlock(&DRM(mem_lock));
return NULL;
}
+#if __REALLY_HAVE_AGP
+ioremap_success:
+#endif
spin_lock(&DRM(mem_lock));
++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count;
DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -329,7 +373,7 @@
return pt;
}
-void DRM(ioremapfree)(void *pt, unsigned long size)
+void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev)
{
int alloc_count;
int free_count;
@@ -337,7 +381,11 @@
if (!pt)
DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
"Attempt to free NULL pointer\n");
+#if __REALLY_HAVE_AGP
+ else if(dev->agp->cant_use_aperture == 0)
+#else
else
+#endif
iounmap(pt);
spin_lock(&DRM(mem_lock));
diff -urN linux-2.4.13/drivers/char/drm/drm_scatter.h linux-2.4.13-lia/drivers/char/drm/drm_scatter.h
--- linux-2.4.13/drivers/char/drm/drm_scatter.h Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_scatter.h Thu Oct 4 00:21:40 2001
@@ -47,9 +47,11 @@
vfree( entry->virtual );
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
DRM(free)( entry->busaddr,
entry->pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
+#endif
DRM(free)( entry->pagelist,
entry->pages * sizeof(*entry->pagelist),
DRM_MEM_PAGES );
@@ -97,6 +99,7 @@
return -ENOMEM;
}
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
entry->busaddr = DRM(alloc)( pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
if ( !entry->busaddr ) {
@@ -109,12 +112,15 @@
return -ENOMEM;
}
memset( (void *)entry->busaddr, 0, pages * sizeof(*entry->busaddr) );
+#endif
entry->virtual = vmalloc_32( pages << PAGE_SHIFT );
if ( !entry->virtual ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
DRM(free)( entry->busaddr,
entry->pages * sizeof(*entry->busaddr),
DRM_MEM_PAGES );
+#endif
DRM(free)( entry->pagelist,
entry->pages * sizeof(*entry->pagelist),
DRM_MEM_PAGES );
diff -urN linux-2.4.13/drivers/char/drm/drm_vm.h linux-2.4.13-lia/drivers/char/drm/drm_vm.h
--- linux-2.4.13/drivers/char/drm/drm_vm.h Wed Oct 24 10:17:48 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_vm.h Wed Oct 24 10:21:09 2001
@@ -89,7 +89,7 @@
if (map && map->type == _DRM_AGP) {
unsigned long offset = address - vma->vm_start;
- unsigned long baddr = VM_OFFSET(vma) + offset;
+ unsigned long baddr = VM_OFFSET(vma) + offset, paddr;
struct drm_agp_mem *agpmem;
struct page *page;
@@ -115,8 +115,19 @@
* Get the page, inc the use count, and return it
*/
offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
- agpmem->memory->memory[offset] &= dev->agp->page_mask;
- page = virt_to_page(__va(agpmem->memory->memory[offset]));
+
+ /*
+ * This is bad. What we really want to do here is unmask
+ * the GART table entry held in the agp_memory structure.
+ * There isn't a convenient way to call agp_bridge.unmask_
+ * memory from here, so hard code it for now.
+ */
+#if defined(__ia64__)
+ paddr = (agpmem->memory->memory[offset] & 0xffffff) << 12;
+#else
+ paddr = agpmem->memory->memory[offset] & dev->agp->page_mask;
+#endif
+ page = virt_to_page(__va(paddr));
get_page(page);
DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx\n",
@@ -255,7 +266,7 @@
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
#endif
- DRM(ioremapfree)(map->handle, map->size);
+ DRM(ioremapfree)(map->handle, map->size, dev);
break;
case _DRM_SHM:
vfree(map->handle);
@@ -502,15 +513,21 @@
switch (map->type) {
case _DRM_AGP:
-#if defined(__alpha__)
- /*
- * On Alpha we can't talk to bus dma address from the
- * CPU, so for memory of type DRM_AGP, we'll deal with
- * sorting out the real physical pages and mappings
- * in nopage()
- */
- vma->vm_ops = &DRM(vm_ops);
- break;
+#if __REALLY_HAVE_AGP
+ if(dev->agp->cant_use_aperture == 1) {
+ /*
+ * On some systems we can't talk to bus dma address from
+ * the CPU, so for memory of type DRM_AGP, we'll deal
+ * with sorting out the real physical pages and mappings
+ * in nopage()
+ */
+ vma->vm_ops = &DRM(vm_ops);
+#if defined(__ia64__)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+#endif
+ goto mapswitch_out;
+ }
#endif
/* fall through to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
@@ -522,8 +539,7 @@
pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
}
#elif defined(__ia64__)
- if (map->type != _DRM_AGP)
- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
#elif defined(__powerpc__)
pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED;
@@ -572,6 +588,9 @@
default:
return -EINVAL; /* This should never happen. */
}
+#if __REALLY_HAVE_AGP
+mapswitch_out:
+#endif
vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
#if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */
diff -urN linux-2.4.13/drivers/char/drm/i810_dma.c linux-2.4.13-lia/drivers/char/drm/i810_dma.c
--- linux-2.4.13/drivers/char/drm/i810_dma.c Wed Aug 8 09:42:15 2001
+++ linux-2.4.13-lia/drivers/char/drm/i810_dma.c Thu Oct 4 00:21:40 2001
@@ -315,7 +315,7 @@
if(dev_priv->ring.virtual_start) {
DRM(ioremapfree)((void *) dev_priv->ring.virtual_start,
- dev_priv->ring.Size);
+ dev_priv->ring.Size, dev);
}
if(dev_priv->hw_status_page != 0UL) {
i810_free_page(dev, dev_priv->hw_status_page);
@@ -329,7 +329,8 @@
for (i = 0; i < dma->buf_count; i++) {
drm_buf_t *buf = dma->buflist[ i ];
drm_i810_buf_priv_t *buf_priv = buf->dev_private;
- DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total);
+ DRM(ioremapfree)(buf_priv->kernel_virtual,
+ buf->total, dev);
}
}
return 0;
@@ -402,7 +403,7 @@
*buf_priv->in_use = I810_BUF_FREE;
buf_priv->kernel_virtual = DRM(ioremap)(buf->bus_address,
- buf->total);
+ buf->total, dev);
}
return 0;
}
@@ -458,7 +459,7 @@
dev_priv->ring.virtual_start = DRM(ioremap)(dev->agp->base +
init->ring_start,
- init->ring_size);
+ init->ring_size, dev);
if (dev_priv->ring.virtual_start == NULL) {
dev->dev_private = (void *) dev_priv;
diff -urN linux-2.4.13/drivers/char/drm/mga_dma.c linux-2.4.13-lia/drivers/char/drm/mga_dma.c
--- linux-2.4.13/drivers/char/drm/mga_dma.c Wed Aug 8 09:42:15 2001
+++ linux-2.4.13-lia/drivers/char/drm/mga_dma.c Thu Oct 4 00:21:40 2001
@@ -557,9 +557,9 @@
(drm_mga_sarea_t *)((u8 *)dev_priv->sarea->handle +
init->sarea_priv_offset);
- DRM_IOREMAP( dev_priv->warp );
- DRM_IOREMAP( dev_priv->primary );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->warp, dev );
+ DRM_IOREMAP( dev_priv->primary, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->warp->handle ||
!dev_priv->primary->handle ||
@@ -647,9 +647,9 @@
if ( dev->dev_private ) {
drm_mga_private_t *dev_priv = dev->dev_private;
- DRM_IOREMAPFREE( dev_priv->warp );
- DRM_IOREMAPFREE( dev_priv->primary );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->warp, dev );
+ DRM_IOREMAPFREE( dev_priv->primary, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
if ( dev_priv->head != NULL ) {
mga_freelist_cleanup( dev );
diff -urN linux-2.4.13/drivers/char/drm/r128_cce.c linux-2.4.13-lia/drivers/char/drm/r128_cce.c
--- linux-2.4.13/drivers/char/drm/r128_cce.c Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/r128_cce.c Thu Oct 4 00:21:52 2001
@@ -216,7 +216,22 @@
int i;
for ( i = 0 ; i < dev_priv->usec_timeout ; i++ ) {
+#ifndef CONFIG_AGP_I460
if ( GET_RING_HEAD( &dev_priv->ring ) == dev_priv->ring.tail ) {
+#else
+ /*
+ * XXX - this is (I think) a 460GX specific hack
+ *
+ * When doing texturing, ring.tail sometimes gets ahead of
+ * PM4_BUFFER_DL_WPTR by 2; consequently, the card processes
+ * its whole quota of instructions and *ring.head is still 2
+ * short of ring.tail. Work around this for now in lieu of
+ * a better solution.
+ */
+ if ( GET_RING_HEAD( &dev_priv->ring ) == dev_priv->ring.tail ||
+ ( dev_priv->ring.tail -
+ GET_RING_HEAD( &dev_priv->ring ) ) == 2 ) {
+#endif
int pm4stat = R128_READ( R128_PM4_STAT );
if ( ( (pm4stat & R128_PM4_FIFOCNT_MASK) >= dev_priv->cce_fifo_size ) &&
@@ -341,8 +356,27 @@
SET_RING_HEAD( &dev_priv->ring, 0 );
if ( !dev_priv->is_pci ) {
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * XXX - This is a 460GX specific hack
+ *
+ * We have to hack this right now. 460GX isn't claiming PCI
+ * writes from the card into the AGP aperture. Because of this,
+ * we have to get space outside of the aperture for RPTR_ADDR.
+ */
+ if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off;
+
+ alt_rh_off = __get_free_page(GFP_KERNEL | GFP_DMA);
+ atomic_inc(&virt_to_page(alt_rh_off)->count);
+ set_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+
+ dev_priv->ring.head = (__volatile__ u32 *) alt_rh_off;
+ SET_RING_HEAD( &dev_priv->ring, 0 );
+ }
+#endif
R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
- dev_priv->ring_rptr->offset );
+ __pa( dev_priv->ring.head ) );
} else {
drm_sg_mem_t *entry = dev->sg;
unsigned long tmp_ofs, page_ofs;
@@ -350,11 +384,20 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
entry->busaddr[page_ofs]);
DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
+#else
+ R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
+ page_to_bus(entry->pagelist[page_ofs]));
+
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ page_to_bus(entry->pagelist[page_ofs]),
+ entry->handle + tmp_ofs );
+#endif
}
/* Set watermark control */
@@ -550,9 +593,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cce_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cce_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cce_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -624,9 +667,9 @@
drm_r128_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cce_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cce_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
@@ -634,6 +677,21 @@
DRM_ERROR( "failed to cleanup PCI GART!\n" );
}
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ if( !dev_priv->is_pci && dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off = (unsigned long) dev_priv->ring.head;
+
+ atomic_dec(&virt_to_page(alt_rh_off)->count);
+ clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+ wake_up(&virt_to_page(alt_rh_off)->wait);
+ free_page(alt_rh_off);
+ }
+#endif
+
DRM(free)( dev->dev_private, sizeof(drm_r128_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN linux-2.4.13/drivers/char/drm/radeon_cp.c linux-2.4.13-lia/drivers/char/drm/radeon_cp.c
--- linux-2.4.13/drivers/char/drm/radeon_cp.c Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/radeon_cp.c Thu Oct 4 00:21:52 2001
@@ -612,8 +612,27 @@
dev_priv->ring.tail = cur_read_ptr;
if ( !dev_priv->is_pci ) {
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * XXX - This is a 460GX specific hack
+ *
+ * We have to hack this right now. 460GX isn't claiming PCI
+ * writes from the card into the AGP aperture. Because of this,
+ * we have to get space outside of the aperture for RPTR_ADDR.
+ */
+ if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off;
+
+ alt_rh_off = __get_free_page(GFP_KERNEL | GFP_DMA);
+ atomic_inc(&virt_to_page(alt_rh_off)->count);
+ set_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+
+ dev_priv->ring.head = (__volatile__ u32 *) alt_rh_off;
+ *dev_priv->ring.head = cur_read_ptr;
+ }
+#endif
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
- dev_priv->ring_rptr->offset );
+ __pa( dev_priv->ring.head ) );
} else {
drm_sg_mem_t *entry = dev->sg;
unsigned long tmp_ofs, page_ofs;
@@ -621,11 +640,19 @@
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
page_ofs = tmp_ofs >> PAGE_SHIFT;
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
entry->busaddr[page_ofs]);
DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
entry->busaddr[page_ofs],
entry->handle + tmp_ofs );
+#else
+ RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
+ page_to_bus(entry->pagelist[page_ofs]));
+ DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+ page_to_bus(entry->pagelist[page_ofs]),
+ entry->handle + tmp_ofs );
+#endif
}
/* Set ring buffer size */
@@ -836,9 +863,9 @@
init->sarea_priv_offset);
if ( !dev_priv->is_pci ) {
- DRM_IOREMAP( dev_priv->cp_ring );
- DRM_IOREMAP( dev_priv->ring_rptr );
- DRM_IOREMAP( dev_priv->buffers );
+ DRM_IOREMAP( dev_priv->cp_ring, dev );
+ DRM_IOREMAP( dev_priv->ring_rptr, dev );
+ DRM_IOREMAP( dev_priv->buffers, dev );
if(!dev_priv->cp_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev_priv->buffers->handle) {
@@ -983,9 +1010,9 @@
drm_radeon_private_t *dev_priv = dev->dev_private;
if ( !dev_priv->is_pci ) {
- DRM_IOREMAPFREE( dev_priv->cp_ring );
- DRM_IOREMAPFREE( dev_priv->ring_rptr );
- DRM_IOREMAPFREE( dev_priv->buffers );
+ DRM_IOREMAPFREE( dev_priv->cp_ring, dev );
+ DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+ DRM_IOREMAPFREE( dev_priv->buffers, dev );
} else {
if (!DRM(ati_pcigart_cleanup)( dev,
dev_priv->phys_pci_gart,
@@ -993,6 +1020,21 @@
DRM_ERROR( "failed to cleanup PCI GART!\n" );
}
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+ /*
+ * Free the page we grabbed for RPTR_ADDR
+ */
+ if( !dev_priv->is_pci && dev->agp->agp_info.chipset == INTEL_460GX ) {
+ unsigned long alt_rh_off = (unsigned long) dev_priv->ring.head;
+
+ atomic_dec(&virt_to_page(alt_rh_off)->count);
+ clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+ wake_up(&virt_to_page(alt_rh_off)->wait);
+ free_page(alt_rh_off);
+ }
+#endif
+
DRM(free)( dev->dev_private, sizeof(drm_radeon_private_t),
DRM_MEM_DRIVER );
dev->dev_private = NULL;
diff -urN linux-2.4.13/drivers/char/drm-4.0/Config.in linux-2.4.13-lia/drivers/char/drm-4.0/Config.in
--- linux-2.4.13/drivers/char/drm-4.0/Config.in Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/Config.in Thu Oct 4 00:21:40 2001
@@ -0,0 +1,13 @@
+#
+# drm device configuration
+#
+# This driver provides support for the
+# Direct Rendering Infrastructure (DRI) in XFree86 4.x.
+#
+
+tristate ' 3dfx Banshee/Voodoo3+' CONFIG_DRM40_TDFX
+tristate ' 3dlabs GMX 2000' CONFIG_DRM40_GAMMA
+dep_tristate ' ATI Rage 128' CONFIG_DRM40_R128 $CONFIG_AGP
+dep_tristate ' ATI Radeon' CONFIG_DRM40_RADEON $CONFIG_AGP
+dep_tristate ' Intel I810' CONFIG_DRM40_I810 $CONFIG_AGP
+dep_tristate ' Matrox g200/g400' CONFIG_DRM40_MGA $CONFIG_AGP
diff -urN linux-2.4.13/drivers/char/drm-4.0/Makefile linux-2.4.13-lia/drivers/char/drm-4.0/Makefile
--- linux-2.4.13/drivers/char/drm-4.0/Makefile Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/Makefile Thu Oct 4 00:21:40 2001
@@ -0,0 +1,104 @@
+#
+# Makefile for the drm device driver. This driver provides support for
+# the Direct Rendering Infrastructure (DRI) in XFree86 4.x.
+#
+
+O_TARGET := drm.o
+
+export-objs := gamma_drv.o tdfx_drv.o r128_drv.o ffb_drv.o mga_drv.o \
+ i810_drv.o
+
+# lib-objs are included in every module so that radical changes to the
+# architecture of the DRM support library can be made at a later time.
+#
+# The downside is that each module is larger, and a system that uses
+# more than one module (i.e., a dual-head system) will use more memory
+# (but a system that uses exactly one module will use the same amount of
+# memory).
+#
+# The upside is that if the DRM support library ever becomes insufficient
+# for new families of cards, a new library can be implemented for those new
+# cards without impacting the drivers for the old cards. This is significant,
+# because testing architectural changes to old cards may be impossible, and
+# may delay the implementation of a better architecture. We've traded slight
+# memory waste (in the dual-head case) for greatly improved long-term
+# maintainability.
+#
+# NOTE: lib-objs will be eliminated in future versions, thereby
+# eliminating the need to compile the .o files into every module, but
+# for now we still need them.
+#
+
+lib-objs := init.o memory.o proc.o auth.o context.o drawable.o bufs.o
+lib-objs += lists.o lock.o ioctl.o fops.o vm.o dma.o ctxbitmap.o
+
+ifeq ($(CONFIG_AGP),y)
+ lib-objs += agpsupport.o
+else
+ ifeq ($(CONFIG_AGP),m)
+ lib-objs += agpsupport.o
+ endif
+endif
+
+list-multi := gamma.o tdfx.o r128.o ffb.o mga.o i810.o
+gamma-objs := gamma_drv.o gamma_dma.o
+tdfx-objs := tdfx_drv.o tdfx_context.o
+r128-objs := r128_drv.o r128_cce.o r128_context.o r128_bufs.o r128_state.o
+ffb-objs := ffb_drv.o ffb_context.o
+mga-objs := mga_drv.o mga_dma.o mga_context.o mga_bufs.o mga_state.o
+i810-objs := i810_drv.o i810_dma.o i810_context.o i810_bufs.o
+radeon-objs := radeon_drv.o radeon_cp.o radeon_context.o radeon_bufs.o radeon_state.o
+
+obj-$(CONFIG_DRM40_GAMMA) += gamma.o
+obj-$(CONFIG_DRM40_TDFX) += tdfx.o
+obj-$(CONFIG_DRM40_R128) += r128.o
+obj-$(CONFIG_DRM40_RADEON)+= radeon.o
+obj-$(CONFIG_DRM40_FFB) += ffb.o
+obj-$(CONFIG_DRM40_MGA) += mga.o
+obj-$(CONFIG_DRM40_I810) += i810.o
+
+
+# When linking into the kernel, link the library just once.
+# If making modules, we include the library into each module
+
+lib-objs-mod := $(patsubst %.o,%-mod.o,$(lib-objs))
+
+ifdef MAKING_MODULES
+ lib = drmlib-mod.a
+else
+ obj-y += drmlib.a
+endif
+
+include $(TOPDIR)/Rules.make
+
+$(patsubst %.o,%.c,$(lib-objs-mod)):
+ @ln -sf $(subst -mod,,$@) $@
+
+drmlib-mod.a: $(lib-objs-mod)
+ rm -f $@
+ $(AR) $(EXTRA_ARFLAGS) rcs $@ $(lib-objs-mod)
+
+drmlib.a: $(lib-objs)
+ rm -f $@
+ $(AR) $(EXTRA_ARFLAGS) rcs $@ $(lib-objs)
+
+gamma.o: $(gamma-objs) $(lib)
+ $(LD) -r -o $@ $(gamma-objs) $(lib)
+
+tdfx.o: $(tdfx-objs) $(lib)
+ $(LD) -r -o $@ $(tdfx-objs) $(lib)
+
+mga.o: $(mga-objs) $(lib)
+ $(LD) -r -o $@ $(mga-objs) $(lib)
+
+i810.o: $(i810-objs) $(lib)
+ $(LD) -r -o $@ $(i810-objs) $(lib)
+
+r128.o: $(r128-objs) $(lib)
+ $(LD) -r -o $@ $(r128-objs) $(lib)
+
+radeon.o: $(radeon-objs) $(lib)
+ $(LD) -r -o $@ $(radeon-objs) $(lib)
+
+ffb.o: $(ffb-objs) $(lib)
+ $(LD) -r -o $@ $(ffb-objs) $(lib)
diff -urN linux-2.4.13/drivers/char/drm-4.0/README.drm linux-2.4.13-lia/drivers/char/drm-4.0/README.drm
--- linux-2.4.13/drivers/char/drm-4.0/README.drm Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/README.drm Thu Oct 4 00:21:40 2001
@@ -0,0 +1,46 @@
+************************************************************
+* For the very latest on DRI development, please see: *
+* http://dri.sourceforge.net/ *
+************************************************************
+
+The Direct Rendering Manager (drm) is a device-independent kernel-level
+device driver that provides support for the XFree86 Direct Rendering
+Infrastructure (DRI).
+
+The DRM supports the Direct Rendering Infrastructure (DRI) in four major
+ways:
+
+ 1. The DRM provides synchronized access to the graphics hardware via
+ the use of an optimized two-tiered lock.
+
+ 2. The DRM enforces the DRI security policy for access to the graphics
+ hardware by only allowing authenticated X11 clients access to
+ restricted regions of memory.
+
+ 3. The DRM provides a generic DMA engine, complete with multiple
+ queues and the ability to detect the need for an OpenGL context
+ switch.
+
+ 4. The DRM is extensible via the use of small device-specific modules
+ that rely extensively on the API exported by the DRM module.
+
+
+Documentation on the DRI is available from:
+ http://precisioninsight.com/piinsights.html
+
+For specific information about kernel-level support, see:
+
+ The Direct Rendering Manager, Kernel Support for the Direct Rendering
+ Infrastructure
+ http://precisioninsight.com/dr/drm.html
+
+ Hardware Locking for the Direct Rendering Infrastructure
+ http://precisioninsight.com/dr/locking.html
+
+ A Security Analysis of the Direct Rendering Infrastructure
+ http://precisioninsight.com/dr/security.html
+
+************************************************************
+* For the very latest on DRI development, please see: *
+* http://dri.sourceforge.net/ *
+************************************************************
diff -urN linux-2.4.13/drivers/char/drm-4.0/agpsupport.c linux-2.4.13-lia/drivers/char/drm-4.0/agpsupport.c
--- linux-2.4.13/drivers/char/drm-4.0/agpsupport.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/agpsupport.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,349 @@
+/* agpsupport.c -- DRM support for AGP/GART backend -*- linux-c -*-
+ * Created: Mon Dec 13 09:56:45 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include <linux/config.h>
+#include <linux/module.h>
+#if LINUX_VERSION_CODE < 0x020400
+#include "agpsupport-pre24.h"
+#else
+#define DRM_AGP_GET (drm_agp_t *)inter_module_get("drm_agp")
+#define DRM_AGP_PUT inter_module_put("drm_agp")
+#endif
+
+static const drm_agp_t *drm_agp = NULL;
+
+int drm_agp_info(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ agp_kern_info *kern;
+ drm_agp_info_t info;
+
+ if (!dev->agp->acquired || !drm_agp->copy_info) return -EINVAL;
+
+ kern = &dev->agp->agp_info;
+ info.agp_version_major = kern->version.major;
+ info.agp_version_minor = kern->version.minor;
+ info.mode = kern->mode;
+ info.aperture_base = kern->aper_base;
+ info.aperture_size = kern->aper_size * 1024 * 1024;
+ info.memory_allowed = kern->max_memory << PAGE_SHIFT;
+ info.memory_used = kern->current_memory << PAGE_SHIFT;
+ info.id_vendor = kern->device->vendor;
+ info.id_device = kern->device->device;
+
+ if (copy_to_user((drm_agp_info_t *)arg, &info, sizeof(info)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_agp_acquire(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode;
+
+ if (dev->agp->acquired || !drm_agp->acquire) return -EINVAL;
+ if ((retcode = drm_agp->acquire())) return retcode;
+ dev->agp->acquired = 1;
+ return 0;
+}
+
+int drm_agp_release(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ if (!dev->agp->acquired || !drm_agp->release) return -EINVAL;
+ drm_agp->release();
+ dev->agp->acquired = 0;
+ return 0;
+
+}
+
+void _drm_agp_release(void)
+{
+ if (drm_agp->release) drm_agp->release();
+}
+
+int drm_agp_enable(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_mode_t mode;
+
+ if (!dev->agp->acquired || !drm_agp->enable) return -EINVAL;
+
+ if (copy_from_user(&mode, (drm_agp_mode_t *)arg, sizeof(mode)))
+ return -EFAULT;
+
+ dev->agp->mode = mode.mode;
+ drm_agp->enable(mode.mode);
+ dev->agp->base = dev->agp->agp_info.aper_base;
+ dev->agp->enabled = 1;
+ return 0;
+}
+
+int drm_agp_alloc(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_buffer_t request;
+ drm_agp_mem_t *entry;
+ agp_memory *memory;
+ unsigned long pages;
+ u32 type;
+ if (!dev->agp->acquired) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_buffer_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_alloc(sizeof(*entry), DRM_MEM_AGPLISTS)))
+ return -ENOMEM;
+
+ memset(entry, 0, sizeof(*entry));
+
+ pages = (request.size + PAGE_SIZE - 1) / PAGE_SIZE;
+ type = (u32) request.type;
+
+ if (!(memory = drm_alloc_agp(pages, type))) {
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return -ENOMEM;
+ }
+
+ entry->handle = (unsigned long)memory->memory;
+ entry->memory = memory;
+ entry->bound = 0;
+ entry->pages = pages;
+ entry->prev = NULL;
+ entry->next = dev->agp->memory;
+ if (dev->agp->memory) dev->agp->memory->prev = entry;
+ dev->agp->memory = entry;
+
+ request.handle = entry->handle;
+ request.physical = memory->physical;
+
+ if (copy_to_user((drm_agp_buffer_t *)arg, &request, sizeof(request))) {
+ dev->agp->memory = entry->next;
+ dev->agp->memory->prev = NULL;
+ drm_free_agp(memory, pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return -EFAULT;
+ }
+ return 0;
+}
+
+static drm_agp_mem_t *drm_agp_lookup_entry(drm_device_t *dev,
+ unsigned long handle)
+{
+ drm_agp_mem_t *entry;
+
+ for (entry = dev->agp->memory; entry; entry = entry->next) {
+ if (entry->handle == handle) return entry;
+ }
+ return NULL;
+}
+
+int drm_agp_unbind(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_binding_t request;
+ drm_agp_mem_t *entry;
+
+ if (!dev->agp->acquired) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_binding_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (!entry->bound) return -EINVAL;
+ return drm_unbind_agp(entry->memory);
+}
+
+int drm_agp_bind(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_binding_t request;
+ drm_agp_mem_t *entry;
+ int retcode;
+ int page;
+
+ if (!dev->agp->acquired || !drm_agp->bind_memory) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_binding_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (entry->bound) return -EINVAL;
+ page = (request.offset + PAGE_SIZE - 1) / PAGE_SIZE;
+ if ((retcode = drm_bind_agp(entry->memory, page))) return retcode;
+ entry->bound = dev->agp->base + (page << PAGE_SHIFT);
+ DRM_DEBUG("base = 0x%lx entry->bound = 0x%lx\n",
+ dev->agp->base, entry->bound);
+ return 0;
+}
+
+int drm_agp_free(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_buffer_t request;
+ drm_agp_mem_t *entry;
+
+ if (!dev->agp->acquired) return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_buffer_t *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (entry->bound) drm_unbind_agp(entry->memory);
+
+ if (entry->prev) entry->prev->next = entry->next;
+ else dev->agp->memory = entry->next;
+ if (entry->next) entry->next->prev = entry->prev;
+ drm_free_agp(entry->memory, entry->pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return 0;
+}
+
+drm_agp_head_t *drm_agp_init(void)
+{
+ drm_agp_head_t *head = NULL;
+
+ drm_agp = DRM_AGP_GET;
+ if (drm_agp) {
+ if (!(head = drm_alloc(sizeof(*head), DRM_MEM_AGPLISTS)))
+ return NULL;
+ memset((void *)head, 0, sizeof(*head));
+ drm_agp->copy_info(&head->agp_info);
+ if (head->agp_info.chipset == NOT_SUPPORTED) {
+ drm_free(head, sizeof(*head), DRM_MEM_AGPLISTS);
+ return NULL;
+ }
+ head->memory = NULL;
+ switch (head->agp_info.chipset) {
+ case INTEL_GENERIC: head->chipset = "Intel"; break;
+ case INTEL_LX: head->chipset = "Intel 440LX"; break;
+ case INTEL_BX: head->chipset = "Intel 440BX"; break;
+ case INTEL_GX: head->chipset = "Intel 440GX"; break;
+ case INTEL_I810: head->chipset = "Intel i810"; break;
+
+#if LINUX_VERSION_CODE >= 0x020400
+ case INTEL_I840: head->chipset = "Intel i840"; break;
+#endif
+ case INTEL_460GX: head->chipset = "Intel 460GX"; break;
+
+ case VIA_GENERIC: head->chipset = "VIA"; break;
+ case VIA_VP3: head->chipset = "VIA VP3"; break;
+ case VIA_MVP3: head->chipset = "VIA MVP3"; break;
+
+#if LINUX_VERSION_CODE >= 0x020400
+ case VIA_MVP4: head->chipset = "VIA MVP4"; break;
+ case VIA_APOLLO_KX133: head->chipset = "VIA Apollo KX133";
+ break;
+ case VIA_APOLLO_KT133: head->chipset = "VIA Apollo KT133";
+ break;
+#endif
+
+ case VIA_APOLLO_PRO: head->chipset = "VIA Apollo Pro";
+ break;
+ case SIS_GENERIC: head->chipset = "SiS"; break;
+ case AMD_GENERIC: head->chipset = "AMD"; break;
+ case AMD_IRONGATE: head->chipset = "AMD Irongate"; break;
+ case ALI_GENERIC: head->chipset = "ALi"; break;
+ case ALI_M1541: head->chipset = "ALi M1541"; break;
+ case ALI_M1621: head->chipset = "ALi M1621"; break;
+ case ALI_M1631: head->chipset = "ALi M1631"; break;
+ case ALI_M1632: head->chipset = "ALi M1632"; break;
+ case ALI_M1641: head->chipset = "ALi M1641"; break;
+ case ALI_M1647: head->chipset = "ALi M1647"; break;
+ case ALI_M1651: head->chipset = "ALi M1651"; break;
+ case SVWRKS_GENERIC: head->chipset = "Serverworks Generic";
+ break;
+ case SVWRKS_HE: head->chipset = "Serverworks HE"; break;
+ case SVWRKS_LE: head->chipset = "Serverworks LE"; break;
+
+ default: head->chipset = "Unknown"; break;
+ }
+#if LINUX_VERSION_CODE <= 0x020408
+ head->cant_use_aperture = 0;
+ head->page_mask = ~(0xfff);
+#else
+ head->cant_use_aperture = head->agp_info.cant_use_aperture;
+ head->page_mask = head->agp_info.page_mask;
+#endif
+
+ DRM_INFO("AGP %d.%d on %s @ 0x%08lx %ZuMB\n",
+ head->agp_info.version.major,
+ head->agp_info.version.minor,
+ head->chipset,
+ head->agp_info.aper_base,
+ head->agp_info.aper_size);
+ }
+ return head;
+}
+
+void drm_agp_uninit(void)
+{
+ DRM_AGP_PUT;
+ drm_agp = NULL;
+}
+
+agp_memory *drm_agp_allocate_memory(size_t pages, u32 type)
+{
+ if (!drm_agp->allocate_memory) return NULL;
+ return drm_agp->allocate_memory(pages, type);
+}
+
+int drm_agp_free_memory(agp_memory *handle)
+{
+ if (!handle || !drm_agp->free_memory) return 0;
+ drm_agp->free_memory(handle);
+ return 1;
+}
+
+int drm_agp_bind_memory(agp_memory *handle, off_t start)
+{
+ if (!handle || !drm_agp->bind_memory) return -EINVAL;
+ return drm_agp->bind_memory(handle, start);
+}
+
+int drm_agp_unbind_memory(agp_memory *handle)
+{
+ if (!handle || !drm_agp->unbind_memory) return -EINVAL;
+ return drm_agp->unbind_memory(handle);
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/auth.c linux-2.4.13-lia/drivers/char/drm-4.0/auth.c
--- linux-2.4.13/drivers/char/drm-4.0/auth.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/auth.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,162 @@
+/* auth.c -- IOCTLs for authentication -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+static int drm_hash_magic(drm_magic_t magic)
+{
+ return magic & (DRM_HASH_SIZE-1);
+}
+
+static drm_file_t *drm_find_file(drm_device_t *dev, drm_magic_t magic)
+{
+ drm_file_t *retval = NULL;
+ drm_magic_entry_t *pt;
+ int hash = drm_hash_magic(magic);
+
+ down(&dev->struct_sem);
+ for (pt = dev->magiclist[hash].head; pt; pt = pt->next) {
+ if (pt->magic == magic) {
+ retval = pt->priv;
+ break;
+ }
+ }
+ up(&dev->struct_sem);
+ return retval;
+}
+
+int drm_add_magic(drm_device_t *dev, drm_file_t *priv, drm_magic_t magic)
+{
+ int hash;
+ drm_magic_entry_t *entry;
+
+ DRM_DEBUG("%d\n", magic);
+
+ hash = drm_hash_magic(magic);
+ entry = drm_alloc(sizeof(*entry), DRM_MEM_MAGIC);
+ if (!entry) return -ENOMEM;
+ entry->magic = magic;
+ entry->priv = priv;
+ entry->next = NULL;
+
+ down(&dev->struct_sem);
+ if (dev->magiclist[hash].tail) {
+ dev->magiclist[hash].tail->next = entry;
+ dev->magiclist[hash].tail = entry;
+ } else {
+ dev->magiclist[hash].head = entry;
+ dev->magiclist[hash].tail = entry;
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+int drm_remove_magic(drm_device_t *dev, drm_magic_t magic)
+{
+ drm_magic_entry_t *prev = NULL;
+ drm_magic_entry_t *pt;
+ int hash;
+
+ DRM_DEBUG("%d\n", magic);
+ hash = drm_hash_magic(magic);
+
+ down(&dev->struct_sem);
+ for (pt = dev->magiclist[hash].head; pt; prev = pt, pt = pt->next) {
+ if (pt->magic == magic) {
+ if (dev->magiclist[hash].head == pt) {
+ dev->magiclist[hash].head = pt->next;
+ }
+ if (dev->magiclist[hash].tail == pt) {
+ dev->magiclist[hash].tail = prev;
+ }
+ if (prev) {
+ prev->next = pt->next;
+ }
+ up(&dev->struct_sem);
+ return 0;
+ }
+ }
+ up(&dev->struct_sem);
+
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+
+ return -EINVAL;
+}
+
+int drm_getmagic(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ static drm_magic_t sequence = 0;
+ static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_auth_t auth;
+
+ /* Find unique magic */
+ if (priv->magic) {
+ auth.magic = priv->magic;
+ } else {
+ do {
+ spin_lock(&lock);
+ if (!sequence) ++sequence; /* reserve 0 */
+ auth.magic = sequence++;
+ spin_unlock(&lock);
+ } while (drm_find_file(dev, auth.magic));
+ priv->magic = auth.magic;
+ drm_add_magic(dev, priv, auth.magic);
+ }
+
+ DRM_DEBUG("%u\n", auth.magic);
+ if (copy_to_user((drm_auth_t *)arg, &auth, sizeof(auth)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_authmagic(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_auth_t auth;
+ drm_file_t *file;
+
+ if (copy_from_user(&auth, (drm_auth_t *)arg, sizeof(auth)))
+ return -EFAULT;
+ DRM_DEBUG("%u\n", auth.magic);
+ if ((file = drm_find_file(dev, auth.magic))) {
+ file->authenticated = 1;
+ drm_remove_magic(dev, auth.magic);
+ return 0;
+ }
+ return -EINVAL;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/bufs.c linux-2.4.13-lia/drivers/char/drm-4.0/bufs.c
--- linux-2.4.13/drivers/char/drm-4.0/bufs.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/bufs.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,543 @@
+/* bufs.c -- IOCTLs to manage buffers -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include <linux/config.h>
+#include "drmP.h"
+#include "linux/un.h"
+
+ /* Compute order. Can be made faster. */
+int drm_order(unsigned long size)
+{
+ int order;
+ unsigned long tmp;
+
+ for (order = 0, tmp = size; tmp >>= 1; ++order);
+ if (size & ~(1 << order)) ++order;
+ return order;
+}
+
+int drm_addmap(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map;
+
+ if (!(filp->f_mode & 3)) return -EACCES; /* Require read/write */
+
+ map = drm_alloc(sizeof(*map), DRM_MEM_MAPS);
+ if (!map) return -ENOMEM;
+ if (copy_from_user(map, (drm_map_t *)arg, sizeof(*map))) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EFAULT;
+ }
+
+ DRM_DEBUG("offset = 0x%08lx, size = 0x%08lx, type = %d\n",
+ map->offset, map->size, map->type);
+ if ((map->offset & (~PAGE_MASK)) || (map->size & (~PAGE_MASK))) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+ map->mtrr = -1;
+ map->handle = 0;
+
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#if !defined(__sparc__) && !defined(__ia64__)
+ if (map->offset + map->size < map->offset
+ || map->offset < virt_to_phys(high_memory)) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+#endif
+#ifdef CONFIG_MTRR
+ if (map->type == _DRM_FRAME_BUFFER
+ || (map->flags & _DRM_WRITE_COMBINING)) {
+ map->mtrr = mtrr_add(map->offset, map->size,
+ MTRR_TYPE_WRCOMB, 1);
+ }
+#endif
+ map->handle = drm_ioremap(map->offset, map->size, dev);
+ break;
+
+
+ case _DRM_SHM:
+ map->handle = (void *)drm_alloc_pages(drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ DRM_DEBUG("%ld %d %p\n", map->size, drm_order(map->size),
+ map->handle);
+ if (!map->handle) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -ENOMEM;
+ }
+ map->offset = (unsigned long)map->handle;
+ if (map->flags & _DRM_CONTAINS_LOCK) {
+ dev->lock.hw_lock = map->handle; /* Pointer to lock */
+ }
+ break;
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+ case _DRM_AGP:
+ map->offset = map->offset + dev->agp->base;
+ break;
+#endif
+ default:
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+
+ down(&dev->struct_sem);
+ if (dev->maplist) {
+ ++dev->map_count;
+ dev->maplist = drm_realloc(dev->maplist,
+ (dev->map_count-1)
+ * sizeof(*dev->maplist),
+ dev->map_count
+ * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ } else {
+ dev->map_count = 1;
+ dev->maplist = drm_alloc(dev->map_count*sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ }
+ dev->maplist[dev->map_count-1] = map;
+ up(&dev->struct_sem);
+
+ if (copy_to_user((drm_map_t *)arg, map, sizeof(*map)))
+ return -EFAULT;
+ if (map->type != _DRM_SHM) {
+ if (copy_to_user(&((drm_map_t *)arg)->handle,
+ &map->offset,
+ sizeof(map->offset)))
+ return -EFAULT;
+ }
+ return 0;
+}
+
+int drm_addbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int count;
+ int order;
+ int size;
+ int total;
+ int page_order;
+ drm_buf_entry_t *entry;
+ unsigned long page;
+ drm_buf_t *buf;
+ int alignment;
+ unsigned long offset;
+ int i;
+ int byte_count;
+ int page_count;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+
+ DRM_DEBUG("count = %d, size = %d (%d), order = %d, queue_count = %d\n",
+ request.count, request.size, size, order, dev->queue_count);
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if(count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->seglist = drm_alloc(count * sizeof(*entry->seglist),
+ DRM_MEM_SEGS);
+ if (!entry->seglist) {
+ drm_free(entry->buflist,
+ count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->seglist, 0, count * sizeof(*entry->seglist));
+
+ dma->pagelist = drm_realloc(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ DRM_DEBUG("pagelist: %d entries\n",
+ dma->page_count + (count << page_order));
+
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ byte_count = 0;
+ page_count = 0;
+ while (entry->buf_count < count) {
+ if (!(page = drm_alloc_pages(page_order, DRM_MEM_DMA))) break;
+ entry->seglist[entry->seg_count++] = page;
+ for (i = 0; i < (1 << page_order); i++) {
+ DRM_DEBUG("page %d @ 0x%08lx\n",
+ dma->page_count + page_count,
+ page + PAGE_SIZE * i);
+ dma->pagelist[dma->page_count + page_count++]
+ = page + PAGE_SIZE * i;
+ }
+ for (offset = 0;
+ offset + size <= total && entry->buf_count < count;
+ offset += alignment, ++entry->buf_count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = (dma->byte_count + byte_count + offset);
+ buf->address = (void *)(page + offset);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ DRM_DEBUG("buffer %d @ %p\n",
+ entry->buf_count, buf->address);
+ }
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+ dma->seg_count += entry->seg_count;
+ dma->page_count += entry->seg_count << page_order;
+ dma->byte_count += PAGE_SIZE * (entry->seg_count << page_order);
+
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+ return 0;
+}
+
+int drm_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ int i;
+ int count;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_info_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) ++count;
+ }
+
+ DRM_DEBUG("count = %d\n", count);
+
+ if (request.count >= count) {
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) {
+ if (copy_to_user(&request.list[count].count,
+ &dma->bufs[i].buf_count,
+ sizeof(dma->bufs[0]
+ .buf_count)) ||
+ copy_to_user(&request.list[count].size,
+ &dma->bufs[i].buf_size,
+ sizeof(dma->bufs[0].buf_size)) ||
+ copy_to_user(&request.list[count].low_mark,
+ &dma->bufs[i]
+ .freelist.low_mark,
+ sizeof(dma->bufs[0]
+ .freelist.low_mark)) ||
+ copy_to_user(&request.list[count]
+ .high_mark,
+ &dma->bufs[i]
+ .freelist.high_mark,
+ sizeof(dma->bufs[0]
+ .freelist.high_mark)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d %d %d %d %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].buf_size,
+ dma->bufs[i].freelist.low_mark,
+ dma->bufs[i].freelist.high_mark);
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if (copy_to_user((drm_buf_info_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int drm_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d, %d, %d\n",
+ request.size, request.low_mark, request.high_mark);
+ order = drm_order(request.size);
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+ return -EINVAL;
+ if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+int drm_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_free_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", request.count);
+ for (i = 0; i < request.count; i++) {
+ if (copy_from_user(&idx,
+ &request.list[i],
+ sizeof(idx)))
+ return -EFAULT;
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ idx, dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d freeing buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ drm_free_buffer(dev, buf);
+ }
+
+ return 0;
+}
+
+int drm_mapbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ int retcode = 0;
+ const int zero = 0;
+ unsigned long virtual;
+ unsigned long address;
+ drm_buf_map_t request;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ DRM_DEBUG("\n");
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_map_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if (request.count >= dma->buf_count) {
+ down_write(&current->mm->mmap_sem);
+ virtual = do_mmap(filp, 0, dma->byte_count,
+ PROT_READ|PROT_WRITE, MAP_SHARED, 0);
+ up_write(&current->mm->mmap_sem);
+ if (virtual > -1024UL) {
+ /* Real error */
+ retcode = (signed long)virtual;
+ goto done;
+ }
+ request.virtual = (void *)virtual;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ if (copy_to_user(&request.list[i].idx,
+ &dma->buflist[i]->idx,
+ sizeof(request.list[0].idx))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].total,
+ &dma->buflist[i]->total,
+ sizeof(request.list[0].total))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].used,
+ &zero,
+ sizeof(zero))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ address = virtual + dma->buflist[i]->offset;
+ if (copy_to_user(&request.list[i].address,
+ &address,
+ sizeof(address))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ }
+ }
+done:
+ request.count = dma->buf_count;
+ DRM_DEBUG("%d buffers, retcode = %d\n", request.count, retcode);
+
+ if (copy_to_user((drm_buf_map_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return retcode;
+}
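[Editor's note: drm_order() in bufs.c above computes the smallest power-of-two order that covers a requested buffer size, i.e. the ceiling of log2(size). A standalone user-space re-implementation of the same loop (hypothetical name `order_of`, same logic):]

```c
/* Smallest 'order' such that (1UL << order) >= size; mirrors drm_order(). */
static int order_of(unsigned long size)
{
	int order = 0;
	unsigned long tmp;

	for (tmp = size; tmp >>= 1; ++order)
		;			/* order = floor(log2(size)) */
	if (size & ~(1UL << order))
		++order;		/* round up when size is not a power of two */
	return order;
}
```

drm_addbufs() uses this to bucket allocations: a request of 5000 bytes lands in the 8 KiB (order 13) bucket, and `order - PAGE_SHIFT` gives the page order passed to the page allocator.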
diff -urN linux-2.4.13/drivers/char/drm-4.0/context.c linux-2.4.13-lia/drivers/char/drm-4.0/context.c
--- linux-2.4.13/drivers/char/drm-4.0/context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,321 @@
+/* context.c -- IOCTLs for contexts and DMA queues -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+static int drm_init_queue(drm_device_t *dev, drm_queue_t *q, drm_ctx_t *ctx)
+{
+ DRM_DEBUG("\n");
+
+ if (atomic_read(&q->use_count) != 1
+ || atomic_read(&q->finalization)
+ || atomic_read(&q->block_count)) {
+ DRM_ERROR("New queue is already in use: u%d f%d b%d\n",
+ atomic_read(&q->use_count),
+ atomic_read(&q->finalization),
+ atomic_read(&q->block_count));
+ }
+
+ atomic_set(&q->finalization, 0);
+ atomic_set(&q->block_count, 0);
+ atomic_set(&q->block_read, 0);
+ atomic_set(&q->block_write, 0);
+ atomic_set(&q->total_queued, 0);
+ atomic_set(&q->total_flushed, 0);
+ atomic_set(&q->total_locks, 0);
+
+ init_waitqueue_head(&q->write_queue);
+ init_waitqueue_head(&q->read_queue);
+ init_waitqueue_head(&q->flush_queue);
+
+ q->flags = ctx->flags;
+
+ drm_waitlist_create(&q->waitlist, dev->dma->buf_count);
+
+ return 0;
+}
+
+
+/* drm_alloc_queue:
+PRE: 1) dev->queuelist[0..dev->queue_count] is allocated and will not
+ disappear (so all deallocation must be done after IOCTLs are off)
+ 2) dev->queue_count < dev->queue_slots
+ 3) dev->queuelist[i].use_count = 0 and
+ dev->queuelist[i].finalization = 0 if i not in use
+POST: 1) dev->queuelist[i].use_count = 1
+ 2) dev->queue_count < dev->queue_slots */
+
+static int drm_alloc_queue(drm_device_t *dev)
+{
+ int i;
+ drm_queue_t *queue;
+ int oldslots;
+ int newslots;
+ /* Check for a free queue */
+ for (i = 0; i < dev->queue_count; i++) {
+ atomic_inc(&dev->queuelist[i]->use_count);
+ if (atomic_read(&dev->queuelist[i]->use_count) == 1
+ && !atomic_read(&dev->queuelist[i]->finalization)) {
+ DRM_DEBUG("%d (free)\n", i);
+ return i;
+ }
+ atomic_dec(&dev->queuelist[i]->use_count);
+ }
+ /* Allocate a new queue */
+
+ queue = drm_alloc(sizeof(*queue), DRM_MEM_QUEUES);
+ if(queue == NULL)
+ return -ENOMEM;
+
+ memset(queue, 0, sizeof(*queue));
+ down(&dev->struct_sem);
+ atomic_set(&queue->use_count, 1);
+
+ ++dev->queue_count;
+ if (dev->queue_count >= dev->queue_slots) {
+ oldslots = dev->queue_slots * sizeof(*dev->queuelist);
+ if (!dev->queue_slots) dev->queue_slots = 1;
+ dev->queue_slots *= 2;
+ newslots = dev->queue_slots * sizeof(*dev->queuelist);
+
+ dev->queuelist = drm_realloc(dev->queuelist,
+ oldslots,
+ newslots,
+ DRM_MEM_QUEUES);
+ if (!dev->queuelist) {
+ up(&dev->struct_sem);
+ DRM_DEBUG("out of memory\n");
+ return -ENOMEM;
+ }
+ }
+ dev->queuelist[dev->queue_count-1] = queue;
+
+ up(&dev->struct_sem);
+ DRM_DEBUG("%d (new)\n", dev->queue_count - 1);
+ return dev->queue_count - 1;
+}
+
+int drm_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+
+int drm_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ if ((ctx.handle = drm_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+ /* Init kernel's context and get a new one. */
+ drm_init_queue(dev, dev->queuelist[ctx.handle], &ctx);
+ ctx.handle = drm_alloc_queue(dev);
+ }
+ drm_init_queue(dev, dev->queuelist[ctx.handle], &ctx);
+ DRM_DEBUG("%d\n", ctx.handle);
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ drm_queue_t *q;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ if (ctx.handle < 0 || ctx.handle >= dev->queue_count) return -EINVAL;
+ q = dev->queuelist[ctx.handle];
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ /* No longer in use */
+ atomic_dec(&q->use_count);
+ return -EINVAL;
+ }
+
+ if (DRM_BUFCOUNT(&q->waitlist)) {
+ atomic_dec(&q->use_count);
+ return -EBUSY;
+ }
+
+ q->flags = ctx.flags;
+
+ atomic_dec(&q->use_count);
+ return 0;
+}
+
+int drm_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ drm_queue_t *q;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ if (ctx.handle >= dev->queue_count) return -EINVAL;
+ q = dev->queuelist[ctx.handle];
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ /* No longer in use */
+ atomic_dec(&q->use_count);
+ return -EINVAL;
+ }
+
+ ctx.flags = q->flags;
+ atomic_dec(&q->use_count);
+
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int drm_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return drm_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int drm_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ drm_context_switch_complete(dev, ctx.handle);
+
+ return 0;
+}
+
+int drm_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ drm_queue_t *q;
+ drm_buf_t *buf;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ if (ctx.handle >= dev->queue_count) return -EINVAL;
+ q = dev->queuelist[ctx.handle];
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ /* No longer in use */
+ atomic_dec(&q->use_count);
+ return -EINVAL;
+ }
+
+ atomic_inc(&q->finalization); /* Mark queue in finalization state */
+ atomic_sub(2, &q->use_count); /* Mark queue as unused (pending
+ finalization) */
+
+ while (test_and_set_bit(0, &dev->interrupt_flag)) {
+ schedule();
+ if (signal_pending(current)) {
+ clear_bit(0, &dev->interrupt_flag);
+ return -EINTR;
+ }
+ }
+ /* Remove queued buffers */
+ while ((buf = drm_waitlist_get(&q->waitlist))) {
+ drm_free_buffer(dev, buf);
+ }
+ clear_bit(0, &dev->interrupt_flag);
+
+ /* Wakeup blocked processes */
+ wake_up_interruptible(&q->read_queue);
+ wake_up_interruptible(&q->write_queue);
+ wake_up_interruptible(&q->flush_queue);
+
+ /* Finalization over. Queue is made
+ available when both use_count and
+ finalization become 0, which won't
+ happen until all the waiting processes
+ stop waiting. */
+ atomic_dec(&q->finalization);
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ctxbitmap.c linux-2.4.13-lia/drivers/char/drm-4.0/ctxbitmap.c
--- linux-2.4.13/drivers/char/drm-4.0/ctxbitmap.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ctxbitmap.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,85 @@
+/* ctxbitmap.c -- Context bitmap management -*- linux-c -*-
+ * Created: Thu Jan 6 03:56:42 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+void drm_ctxbitmap_free(drm_device_t *dev, int ctx_handle)
+{
+ if (ctx_handle < 0) goto failed;
+
+ if (ctx_handle < DRM_MAX_CTXBITMAP) {
+ clear_bit(ctx_handle, dev->ctx_bitmap);
+ return;
+ }
+failed:
+ DRM_ERROR("Attempt to free invalid context handle: %d\n",
+ ctx_handle);
+ return;
+}
+
+int drm_ctxbitmap_next(drm_device_t *dev)
+{
+ int bit;
+
+ bit = find_first_zero_bit(dev->ctx_bitmap, DRM_MAX_CTXBITMAP);
+ if (bit < DRM_MAX_CTXBITMAP) {
+ set_bit(bit, dev->ctx_bitmap);
+ DRM_DEBUG("drm_ctxbitmap_next bit : %d\n", bit);
+ return bit;
+ }
+ return -1;
+}
+
+int drm_ctxbitmap_init(drm_device_t *dev)
+{
+ int i;
+ int temp;
+
+ dev->ctx_bitmap = (unsigned long *) drm_alloc(PAGE_SIZE,
+ DRM_MEM_CTXBITMAP);
+ if(dev->ctx_bitmap == NULL) {
+ return -ENOMEM;
+ }
+ memset((void *) dev->ctx_bitmap, 0, PAGE_SIZE);
+ for(i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ temp = drm_ctxbitmap_next(dev);
+ DRM_DEBUG("drm_ctxbitmap_init : %d\n", temp);
+ }
+
+ return 0;
+}
+
+void drm_ctxbitmap_cleanup(drm_device_t *dev)
+{
+ drm_free((void *)dev->ctx_bitmap, PAGE_SIZE,
+ DRM_MEM_CTXBITMAP);
+}
+
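[Editor's note: the ctxbitmap.c allocator above keeps one bit per context handle: allocate = first zero bit, free = clear the bit, with the first DRM_RESERVED_CONTEXTS bits pre-set at init. A user-space sketch of the same scheme (hypothetical names, a single 64-bit word standing in for the page-sized kernel bitmap):]

```c
#define MAX_CTX 64			/* 64 handles fit in one word here */

static unsigned long ctx_bitmap;

/* First-zero-bit allocation, like drm_ctxbitmap_next(). */
static int ctx_alloc(void)
{
	int bit;

	for (bit = 0; bit < MAX_CTX; bit++) {
		if (!(ctx_bitmap & (1UL << bit))) {
			ctx_bitmap |= 1UL << bit;
			return bit;
		}
	}
	return -1;			/* bitmap full */
}

/* Clear the handle's bit, like drm_ctxbitmap_free(). */
static void ctx_free(int handle)
{
	if (handle >= 0 && handle < MAX_CTX)
		ctx_bitmap &= ~(1UL << handle);
}
```

Because allocation always takes the lowest free bit, drm_ctxbitmap_init() can reserve the kernel context simply by allocating DRM_RESERVED_CONTEXTS handles up front.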
diff -urN linux-2.4.13/drivers/char/drm-4.0/dma.c linux-2.4.13-lia/drivers/char/drm-4.0/dma.c
--- linux-2.4.13/drivers/char/drm-4.0/dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,546 @@
+/* dma.c -- DMA IOCTL and function support -*- linux-c -*-
+ * Created: Fri Mar 19 14:30:16 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+void drm_dma_setup(drm_device_t *dev)
+{
+ int i;
+
+ if (!(dev->dma = drm_alloc(sizeof(*dev->dma), DRM_MEM_DRIVER))) {
+ printk(KERN_ERR "drm_dma_setup: can't drm_alloc dev->dma");
+ return;
+ }
+ memset(dev->dma, 0, sizeof(*dev->dma));
+ for (i = 0; i <= DRM_MAX_ORDER; i++)
+ memset(&dev->dma->bufs[i], 0, sizeof(dev->dma->bufs[0]));
+}
+
+void drm_dma_takedown(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i, j;
+
+ if (!dma) return;
+
+ /* Clear dma buffers */
+ for (i = 0; i <= DRM_MAX_ORDER; i++) {
+ if (dma->bufs[i].seg_count) {
+ DRM_DEBUG("order %d: buf_count = %d,"
+ " seg_count = %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].seg_count);
+ for (j = 0; j < dma->bufs[i].seg_count; j++) {
+ drm_free_pages(dma->bufs[i].seglist[j],
+ dma->bufs[i].page_order,
+ DRM_MEM_DMA);
+ }
+ drm_free(dma->bufs[i].seglist,
+ dma->bufs[i].seg_count
+ * sizeof(*dma->bufs[0].seglist),
+ DRM_MEM_SEGS);
+ }
+ if(dma->bufs[i].buf_count) {
+ for(j = 0; j < dma->bufs[i].buf_count; j++) {
+ if(dma->bufs[i].buflist[j].dev_private) {
+ drm_free(dma->bufs[i].buflist[j].dev_private,
+ dma->bufs[i].buflist[j].dev_priv_size,
+ DRM_MEM_BUFS);
+ }
+ }
+ drm_free(dma->bufs[i].buflist,
+ dma->bufs[i].buf_count *
+ sizeof(*dma->bufs[0].buflist),
+ DRM_MEM_BUFS);
+ drm_freelist_destroy(&dma->bufs[i].freelist);
+ }
+ }
+
+ if (dma->buflist) {
+ drm_free(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ }
+
+ if (dma->pagelist) {
+ drm_free(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ }
+ drm_free(dev->dma, sizeof(*dev->dma), DRM_MEM_DRIVER);
+ dev->dma = NULL;
+}
+
+#if DRM_DMA_HISTOGRAM
+/* This is slow, but is useful for debugging. */
+int drm_histogram_slot(unsigned long count)
+{
+ int value = DRM_DMA_HISTOGRAM_INITIAL;
+ int slot;
+
+ for (slot = 0;
+ slot < DRM_DMA_HISTOGRAM_SLOTS;
+ ++slot, value = DRM_DMA_HISTOGRAM_NEXT(value)) {
+ if (count < value) return slot;
+ }
+ return DRM_DMA_HISTOGRAM_SLOTS - 1;
+}
+
+void drm_histogram_compute(drm_device_t *dev, drm_buf_t *buf)
+{
+ cycles_t queued_to_dispatched;
+ cycles_t dispatched_to_completed;
+ cycles_t completed_to_freed;
+ int q2d, d2c, c2f, q2c, q2f;
+
+ if (buf->time_queued) {
+ queued_to_dispatched = (buf->time_dispatched
+ - buf->time_queued);
+ dispatched_to_completed = (buf->time_completed
+ - buf->time_dispatched);
+ completed_to_freed = (buf->time_freed
+ - buf->time_completed);
+
+ q2d = drm_histogram_slot(queued_to_dispatched);
+ d2c = drm_histogram_slot(dispatched_to_completed);
+ c2f = drm_histogram_slot(completed_to_freed);
+
+ q2c = drm_histogram_slot(queued_to_dispatched
+ + dispatched_to_completed);
+ q2f = drm_histogram_slot(queued_to_dispatched
+ + dispatched_to_completed
+ + completed_to_freed);
+
+ atomic_inc(&dev->histo.total);
+ atomic_inc(&dev->histo.queued_to_dispatched[q2d]);
+ atomic_inc(&dev->histo.dispatched_to_completed[d2c]);
+ atomic_inc(&dev->histo.completed_to_freed[c2f]);
+
+ atomic_inc(&dev->histo.queued_to_completed[q2c]);
+ atomic_inc(&dev->histo.queued_to_freed[q2f]);
+
+ }
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+}
+#endif
+
+void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ if (!buf) return;
+
+ buf->waiting = 0;
+ buf->pending = 0;
+ buf->pid = 0;
+ buf->used = 0;
+#if DRM_DMA_HISTOGRAM
+ buf->time_completed = get_cycles();
+#endif
+ if (waitqueue_active(&buf->dma_wait)) {
+ wake_up_interruptible(&buf->dma_wait);
+ } else {
+ /* If processes are waiting, the last one
+ to wake will put the buffer on the free
+ list. If no processes are waiting, we
+ put the buffer on the freelist here. */
+ drm_freelist_put(dev, &dma->bufs[buf->order].freelist, buf);
+ }
+}
+
+void drm_reclaim_buffers(drm_device_t *dev, pid_t pid)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+
+ if (!dma) return;
+ for (i = 0; i < dma->buf_count; i++) {
+ if (dma->buflist[i]->pid == pid) {
+ switch (dma->buflist[i]->list) {
+ case DRM_LIST_NONE:
+ drm_free_buffer(dev, dma->buflist[i]);
+ break;
+ case DRM_LIST_WAIT:
+ dma->buflist[i]->list = DRM_LIST_RECLAIM;
+ break;
+ default:
+ /* Buffer already on hardware. */
+ break;
+ }
+ }
+ }
+}
+
+int drm_context_switch(drm_device_t *dev, int old, int new)
+{
+ char buf[64];
+ drm_queue_t *q;
+
+ atomic_inc(&dev->total_ctx);
+
+ if (test_and_set_bit(0, &dev->context_flag)) {
+ DRM_ERROR("Reentering -- FIXME\n");
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new >= dev->queue_count) {
+ clear_bit(0, &dev->context_flag);
+ return -EINVAL;
+ }
+
+ if (new == dev->last_context) {
+ clear_bit(0, &dev->context_flag);
+ return 0;
+ }
+
+ q = dev->queuelist[new];
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) == 1) {
+ atomic_dec(&q->use_count);
+ clear_bit(0, &dev->context_flag);
+ return -EINVAL;
+ }
+
+ if (drm_flags & DRM_FLAG_NOCTX) {
+ drm_context_switch_complete(dev, new);
+ } else {
+ sprintf(buf, "C %d %d\n", old, new);
+ drm_write_string(dev, buf);
+ }
+
+ atomic_dec(&q->use_count);
+
+ return 0;
+}
+
+int drm_context_switch_complete(drm_device_t *dev, int new)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
+ dev->last_switch = jiffies;
+
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Lock isn't held after context switch\n");
+ }
+
+ if (!dma || !(dma->next_buffer && dma->next_buffer->while_locked)) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("Cannot free lock\n");
+ }
+ }
+
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+ - dev->ctx_start)]);
+
+#endif
+ clear_bit(0, &dev->context_flag);
+ wake_up_interruptible(&dev->context_wait);
+
+ return 0;
+}
+
+void drm_clear_next_buffer(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ dma->next_buffer = NULL;
+ if (dma->next_queue && !DRM_BUFCOUNT(&dma->next_queue->waitlist)) {
+ wake_up_interruptible(&dma->next_queue->flush_queue);
+ }
+ dma->next_queue = NULL;
+}
+
+
+int drm_select_queue(drm_device_t *dev, void (*wrapper)(unsigned long))
+{
+ int i;
+ int candidate = -1;
+ int j = jiffies;
+
+ if (!dev) {
+ DRM_ERROR("No device\n");
+ return -1;
+ }
+ if (!dev->queuelist || !dev->queuelist[DRM_KERNEL_CONTEXT]) {
+ /* This only happens between the time the
+ interrupt is initialized and the time
+ the queues are initialized. */
+ return -1;
+ }
+
+ /* Doing "while locked" DMA? */
+ if (DRM_WAITCOUNT(dev, DRM_KERNEL_CONTEXT)) {
+ return DRM_KERNEL_CONTEXT;
+ }
+
+ /* If there are buffers on the last_context
+ queue, and we have not been executing
+ this context very long, continue to
+ execute this context. */
+ if (dev->last_switch <= j
+ && dev->last_switch + DRM_TIME_SLICE > j
+ && DRM_WAITCOUNT(dev, dev->last_context)) {
+ return dev->last_context;
+ }
+
+ /* Otherwise, find a candidate */
+ for (i = dev->last_checked + 1; i < dev->queue_count; i++) {
+ if (DRM_WAITCOUNT(dev, i)) {
+ candidate = dev->last_checked = i;
+ break;
+ }
+ }
+
+ if (candidate < 0) {
+ for (i = 0; i < dev->queue_count; i++) {
+ if (DRM_WAITCOUNT(dev, i)) {
+ candidate = dev->last_checked = i;
+ break;
+ }
+ }
+ }
+
+ if (wrapper
+ && candidate >= 0
+ && candidate != dev->last_context
+ && dev->last_switch <= j
+ && dev->last_switch + DRM_TIME_SLICE > j) {
+ if (dev->timer.expires != dev->last_switch + DRM_TIME_SLICE) {
+ del_timer(&dev->timer);
+ dev->timer.function = wrapper;
+ dev->timer.data = (unsigned long)dev;
+ dev->timer.expires = dev->last_switch+DRM_TIME_SLICE;
+ add_timer(&dev->timer);
+ }
+ return -1;
+ }
+
+ return candidate;
+}
+
+
+int drm_dma_enqueue(drm_device_t *dev, drm_dma_t *d)
+{
+ int i;
+ drm_queue_t *q;
+ drm_buf_t *buf;
+ int idx;
+ int while_locked = 0;
+ drm_device_dma_t *dma = dev->dma;
+ DECLARE_WAITQUEUE(entry, current);
+
+ DRM_DEBUG("%d\n", d->send_count);
+
+ if (d->flags & _DRM_DMA_WHILE_LOCKED) {
+ int context = dev->lock.hw_lock->lock;
+
+ if (!_DRM_LOCK_IS_HELD(context)) {
+ DRM_ERROR("No lock held during \"while locked\""
+ " request\n");
+ return -EINVAL;
+ }
+ if (d->context != _DRM_LOCKING_CONTEXT(context)
+ && _DRM_LOCKING_CONTEXT(context) != DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Lock held by %d while %d makes"
+ " \"while locked\" request\n",
+ _DRM_LOCKING_CONTEXT(context),
+ d->context);
+ return -EINVAL;
+ }
+ q = dev->queuelist[DRM_KERNEL_CONTEXT];
+ while_locked = 1;
+ } else {
+ q = dev->queuelist[d->context];
+ }
+
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->block_write)) {
+ add_wait_queue(&q->write_queue, &entry);
+ atomic_inc(&q->block_count);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!atomic_read(&q->block_write)) break;
+ schedule();
+ if (signal_pending(current)) {
+ atomic_dec(&q->use_count);
+ remove_wait_queue(&q->write_queue, &entry);
+ return -EINTR;
+ }
+ }
+ atomic_dec(&q->block_count);
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&q->write_queue, &entry);
+ }
+
+ for (i = 0; i < d->send_count; i++) {
+ idx = d->send_indices[i];
+ if (idx < 0 || idx >= dma->buf_count) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Index %d (of %d max)\n",
+ d->send_indices[i], dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[ idx ];
+ if (buf->pid != current->pid) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Process %d using buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ if (buf->list != DRM_LIST_NONE) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Process %d using buffer %d on list %d\n",
+ current->pid, buf->idx, buf->list);
+ }
+ buf->used = d->send_sizes[i];
+ buf->while_locked = while_locked;
+ buf->context = d->context;
+ if (!buf->used) {
+ DRM_ERROR("Queueing 0 length buffer\n");
+ }
+ if (buf->pending) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Queueing pending buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ return -EINVAL;
+ }
+ if (buf->waiting) {
+ atomic_dec(&q->use_count);
+ DRM_ERROR("Queueing waiting buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ return -EINVAL;
+ }
+ buf->waiting = 1;
+ if (atomic_read(&q->use_count) == 1
+ || atomic_read(&q->finalization)) {
+ drm_free_buffer(dev, buf);
+ } else {
+ drm_waitlist_put(&q->waitlist, buf);
+ atomic_inc(&q->total_queued);
+ }
+ }
+ atomic_dec(&q->use_count);
+
+ return 0;
+}
+
+static int drm_dma_get_buffers_of_order(drm_device_t *dev, drm_dma_t *d,
+ int order)
+{
+ int i;
+ drm_buf_t *buf;
+ drm_device_dma_t *dma = dev->dma;
+
+ for (i = d->granted_count; i < d->request_count; i++) {
+ buf = drm_freelist_get(&dma->bufs[order].freelist,
+ d->flags & _DRM_DMA_WAIT);
+ if (!buf) break;
+ if (buf->pending || buf->waiting) {
+ DRM_ERROR("Free buffer %d in use by %d (w%d, p%d)\n",
+ buf->idx,
+ buf->pid,
+ buf->waiting,
+ buf->pending);
+ }
+ buf->pid = current->pid;
+ if (copy_to_user(&d->request_indices[i],
+ &buf->idx,
+ sizeof(buf->idx)))
+ return -EFAULT;
+
+ if (copy_to_user(&d->request_sizes[i],
+ &buf->total,
+ sizeof(buf->total)))
+ return -EFAULT;
+
+ ++d->granted_count;
+ }
+ return 0;
+}
+
+
+int drm_dma_get_buffers(drm_device_t *dev, drm_dma_t *dma)
+{
+ int order;
+ int retcode = 0;
+ int tmp_order;
+
+ order = drm_order(dma->request_size);
+
+ dma->granted_count = 0;
+ retcode = drm_dma_get_buffers_of_order(dev, dma, order);
+
+ if (dma->granted_count < dma->request_count
+ && (dma->flags & _DRM_DMA_SMALLER_OK)) {
+ for (tmp_order = order - 1;
+ !retcode
+ && dma->granted_count < dma->request_count
+ && tmp_order >= DRM_MIN_ORDER;
+ --tmp_order) {
+
+ retcode = drm_dma_get_buffers_of_order(dev, dma,
+ tmp_order);
+ }
+ }
+
+ if (dma->granted_count < dma->request_count
+ && (dma->flags & _DRM_DMA_LARGER_OK)) {
+ for (tmp_order = order + 1;
+ !retcode
+ && dma->granted_count < dma->request_count
+ && tmp_order <= DRM_MAX_ORDER;
+ ++tmp_order) {
+
+ retcode = drm_dma_get_buffers_of_order(dev, dma,
+ tmp_order);
+ }
+ }
+ return 0;
+}
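The queue scheduler in drm_select_queue() above picks the next context with a two-pass scan: start just after the queue it checked last, and if nothing past that point has waiting buffers, wrap around and rescan from the beginning. A minimal userspace sketch of that round-robin pattern, with `waitcount` standing in for the `DRM_WAITCOUNT()` macro (the function name here is illustrative, not kernel code):

```c
#include <assert.h>

/* Two-pass round-robin scan: resume after *last_checked, wrap to 0,
 * return the first queue index with waiting buffers, or -1 if none.
 * Mirrors the candidate-selection loops in drm_select_queue(). */
static int select_queue_sketch(const int *waitcount, int queue_count,
                               int *last_checked)
{
        int i;

        for (i = *last_checked + 1; i < queue_count; i++) {
                if (waitcount[i])
                        return *last_checked = i;
        }
        for (i = 0; i < queue_count; i++) {
                if (waitcount[i])
                        return *last_checked = i;
        }
        return -1;
}
```

Remembering `last_checked` between calls is what gives each queue a fair turn even when several contexts always have buffers pending.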
diff -urN linux-2.4.13/drivers/char/drm-4.0/drawable.c linux-2.4.13-lia/drivers/char/drm-4.0/drawable.c
--- linux-2.4.13/drivers/char/drm-4.0/drawable.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drawable.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,51 @@
+/* drawable.c -- IOCTLs for drawables -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_adddraw(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_draw_t draw;
+
+ draw.handle = 0; /* NOOP */
+ DRM_DEBUG("%d\n", draw.handle);
+ if (copy_to_user((drm_draw_t *)arg, &draw, sizeof(draw)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_rmdraw(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ return 0; /* NOOP */
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/drm.h linux-2.4.13-lia/drivers/char/drm-4.0/drm.h
--- linux-2.4.13/drivers/char/drm-4.0/drm.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drm.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,414 @@
+/* drm.h -- Header for Direct Rendering Manager -*- linux-c -*-
+ * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ * Acknowledgements:
+ * Dec 1999, Richard Henderson <rth@twiddle.net>, move to generic cmpxchg.
+ *
+ */
+
+#ifndef _DRM_H_
+#define _DRM_H_
+
+#include <linux/config.h>
+#if defined(__linux__)
+#include <asm/ioctl.h> /* For _IO* macros */
+#define DRM_IOCTL_NR(n) _IOC_NR(n)
+#elif defined(__FreeBSD__)
+#include <sys/ioccom.h>
+#define DRM_IOCTL_NR(n) ((n) & 0xff)
+#endif
+
+#define DRM_PROC_DEVICES "/proc/devices"
+#define DRM_PROC_MISC "/proc/misc"
+#define DRM_PROC_DRM "/proc/drm"
+#define DRM_DEV_DRM "/dev/drm"
+#define DRM_DEV_MODE (S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP)
+#define DRM_DEV_UID 0
+#define DRM_DEV_GID 0
+
+
+#define DRM_NAME "drm" /* Name in kernel, /dev, and /proc */
+#define DRM_MIN_ORDER 5 /* At least 2^5 bytes = 32 bytes */
+#define DRM_MAX_ORDER 22 /* Up to 2^22 bytes = 4MB */
+#define DRM_RAM_PERCENT 10 /* How much system ram can we lock? */
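DMA buffers are grouped by power-of-two "orders" between DRM_MIN_ORDER and DRM_MAX_ORDER; drm_dma_get_buffers() in dma.c converts a requested byte size to an order with drm_order(). An illustrative reimplementation of that rounding (smallest order with 2^order >= size) — not the kernel's exact code:

```c
#include <assert.h>

/* Smallest order such that (1UL << order) >= size.  Count the
 * position of the highest set bit, then round up by one if size
 * is not an exact power of two. */
static int drm_order_sketch(unsigned long size)
{
        int order;
        unsigned long tmp;

        for (order = 0, tmp = size; tmp >>= 1; ++order)
                ;
        if (size & ((1UL << order) - 1))
                ++order;
        return order;
}
```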
+
+#define _DRM_LOCK_HELD 0x80000000 /* Hardware lock is held */
+#define _DRM_LOCK_CONT 0x40000000 /* Hardware lock is contended */
+#define _DRM_LOCK_IS_HELD(lock) ((lock) & _DRM_LOCK_HELD)
+#define _DRM_LOCK_IS_CONT(lock) ((lock) & _DRM_LOCK_CONT)
+#define _DRM_LOCKING_CONTEXT(lock) ((lock) & ~(_DRM_LOCK_HELD|_DRM_LOCK_CONT))
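The hardware-lock word packs the holding context's number into the low bits, with the HELD and CONT flags in the top two bits. A standalone check of that encoding, mirroring the macros above (names shortened here for illustration):

```c
#include <assert.h>

/* Same bit layout as the _DRM_LOCK_* macros: bit 31 = held,
 * bit 30 = contended, remaining bits = locking context number. */
#define LOCK_HELD        0x80000000UL
#define LOCK_CONT        0x40000000UL
#define LOCK_IS_HELD(l)  ((l) & LOCK_HELD)
#define LOCK_IS_CONT(l)  ((l) & LOCK_CONT)
#define LOCK_CONTEXT(l)  ((l) & ~(LOCK_HELD | LOCK_CONT))
```

Because flags and context share one word, a single cmpxchg can atomically take the lock and record who holds it.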
+
+typedef unsigned long drm_handle_t;
+typedef unsigned int drm_context_t;
+typedef unsigned int drm_drawable_t;
+typedef unsigned int drm_magic_t;
+
+/* Warning: If you change this structure, make sure you change
+ * XF86DRIClipRectRec in the server as well */
+
+typedef struct drm_clip_rect {
+ unsigned short x1;
+ unsigned short y1;
+ unsigned short x2;
+ unsigned short y2;
+} drm_clip_rect_t;
+
+/* Separate include files for the i810/mga/r128 specific structures */
+#include "mga_drm.h"
+#include "i810_drm.h"
+#include "r128_drm.h"
+#include "radeon_drm.h"
+#ifdef CONFIG_DRM40_SIS
+#include "sis_drm.h"
+#endif
+
+typedef struct drm_version {
+ int version_major; /* Major version */
+ int version_minor; /* Minor version */
+ int version_patchlevel;/* Patch level */
+ size_t name_len; /* Length of name buffer */
+ char *name; /* Name of driver */
+ size_t date_len; /* Length of date buffer */
+ char *date; /* User-space buffer to hold date */
+ size_t desc_len; /* Length of desc buffer */
+ char *desc; /* User-space buffer to hold desc */
+} drm_version_t;
+
+typedef struct drm_unique {
+ size_t unique_len; /* Length of unique */
+ char *unique; /* Unique name for driver instantiation */
+} drm_unique_t;
+
+typedef struct drm_list {
+ int count; /* Length of user-space structures */
+ drm_version_t *version;
+} drm_list_t;
+
+typedef struct drm_block {
+ int unused;
+} drm_block_t;
+
+typedef struct drm_control {
+ enum {
+ DRM_ADD_COMMAND,
+ DRM_RM_COMMAND,
+ DRM_INST_HANDLER,
+ DRM_UNINST_HANDLER
+ } func;
+ int irq;
+} drm_control_t;
+
+typedef enum drm_map_type {
+ _DRM_FRAME_BUFFER = 0, /* WC (no caching), no core dump */
+ _DRM_REGISTERS = 1, /* no caching, no core dump */
+ _DRM_SHM = 2, /* shared, cached */
+ _DRM_AGP = 3 /* AGP/GART */
+} drm_map_type_t;
+
+typedef enum drm_map_flags {
+ _DRM_RESTRICTED = 0x01, /* Cannot be mapped to user-virtual */
+ _DRM_READ_ONLY = 0x02,
+ _DRM_LOCKED = 0x04, /* shared, cached, locked */
+ _DRM_KERNEL = 0x08, /* kernel requires access */
+ _DRM_WRITE_COMBINING = 0x10, /* use write-combining if available */
+ _DRM_CONTAINS_LOCK = 0x20 /* SHM page that contains lock */
+} drm_map_flags_t;
+
+typedef struct drm_map {
+ unsigned long offset; /* Requested physical address (0 for SAREA)*/
+ unsigned long size; /* Requested physical size (bytes) */
+ drm_map_type_t type; /* Type of memory to map */
+ drm_map_flags_t flags; /* Flags */
+ void *handle; /* User-space: "Handle" to pass to mmap */
+ /* Kernel-space: kernel-virtual address */
+ int mtrr; /* MTRR slot used */
+ /* Private data */
+} drm_map_t;
+
+typedef enum drm_lock_flags {
+ _DRM_LOCK_READY = 0x01, /* Wait until hardware is ready for DMA */
+ _DRM_LOCK_QUIESCENT = 0x02, /* Wait until hardware quiescent */
+ _DRM_LOCK_FLUSH = 0x04, /* Flush this context's DMA queue first */
+ _DRM_LOCK_FLUSH_ALL = 0x08, /* Flush all DMA queues first */
+ /* These *HALT* flags aren't supported yet
+ -- they will be used to support the
+ full-screen DGA-like mode. */
+ _DRM_HALT_ALL_QUEUES = 0x10, /* Halt all current and future queues */
+ _DRM_HALT_CUR_QUEUES = 0x20 /* Halt all current queues */
+} drm_lock_flags_t;
+
+typedef struct drm_lock {
+ int context;
+ drm_lock_flags_t flags;
+} drm_lock_t;
+
+typedef enum drm_dma_flags { /* These values *MUST* match xf86drm.h */
+ /* Flags for DMA buffer dispatch */
+ _DRM_DMA_BLOCK = 0x01, /* Block until buffer dispatched.
+ Note, the buffer may not yet have
+ been processed by the hardware --
+ getting a hardware lock with the
+ hardware quiescent will ensure
+ that the buffer has been
+ processed. */
+ _DRM_DMA_WHILE_LOCKED = 0x02, /* Dispatch while lock held */
+ _DRM_DMA_PRIORITY = 0x04, /* High priority dispatch */
+
+ /* Flags for DMA buffer request */
+ _DRM_DMA_WAIT = 0x10, /* Wait for free buffers */
+ _DRM_DMA_SMALLER_OK = 0x20, /* Smaller-than-requested buffers ok */
+ _DRM_DMA_LARGER_OK = 0x40 /* Larger-than-requested buffers ok */
+} drm_dma_flags_t;
+
+typedef struct drm_buf_desc {
+ int count; /* Number of buffers of this size */
+ int size; /* Size in bytes */
+ int low_mark; /* Low water mark */
+ int high_mark; /* High water mark */
+ enum {
+ _DRM_PAGE_ALIGN = 0x01, /* Align on page boundaries for DMA */
+ _DRM_AGP_BUFFER = 0x02 /* Buffer is in agp space */
+ } flags;
+ unsigned long agp_start; /* Start address of where the agp buffers
+ * are in the agp aperture */
+} drm_buf_desc_t;
+
+typedef struct drm_buf_info {
+ int count; /* Entries in list */
+ drm_buf_desc_t *list;
+} drm_buf_info_t;
+
+typedef struct drm_buf_free {
+ int count;
+ int *list;
+} drm_buf_free_t;
+
+typedef struct drm_buf_pub {
+ int idx; /* Index into master buflist */
+ int total; /* Buffer size */
+ int used; /* Amount of buffer in use (for DMA) */
+ void *address; /* Address of buffer */
+} drm_buf_pub_t;
+
+typedef struct drm_buf_map {
+ int count; /* Length of buflist */
+ void *virtual; /* Mmaped area in user-virtual */
+ drm_buf_pub_t *list; /* Buffer information */
+} drm_buf_map_t;
+
+typedef struct drm_dma {
+ /* Indices here refer to the offset into
+ buflist in drm_buf_get_t. */
+ int context; /* Context handle */
+ int send_count; /* Number of buffers to send */
+ int *send_indices; /* List of handles to buffers */
+ int *send_sizes; /* Lengths of data to send */
+ drm_dma_flags_t flags; /* Flags */
+ int request_count; /* Number of buffers requested */
+ int request_size; /* Desired size for buffers */
+ int *request_indices; /* Buffer information */
+ int *request_sizes;
+ int granted_count; /* Number of buffers granted */
+} drm_dma_t;
+
+typedef enum {
+ _DRM_CONTEXT_PRESERVED = 0x01,
+ _DRM_CONTEXT_2DONLY = 0x02
+} drm_ctx_flags_t;
+
+typedef struct drm_ctx {
+ drm_context_t handle;
+ drm_ctx_flags_t flags;
+} drm_ctx_t;
+
+typedef struct drm_ctx_res {
+ int count;
+ drm_ctx_t *contexts;
+} drm_ctx_res_t;
+
+typedef struct drm_draw {
+ drm_drawable_t handle;
+} drm_draw_t;
+
+typedef struct drm_auth {
+ drm_magic_t magic;
+} drm_auth_t;
+
+typedef struct drm_irq_busid {
+ int irq;
+ int busnum;
+ int devnum;
+ int funcnum;
+} drm_irq_busid_t;
+
+typedef struct drm_agp_mode {
+ unsigned long mode;
+} drm_agp_mode_t;
+
+ /* For drm_agp_alloc -- allocated a buffer */
+typedef struct drm_agp_buffer {
+ unsigned long size; /* In bytes -- will round to page boundary */
+ unsigned long handle; /* Used for BIND/UNBIND ioctls */
+ unsigned long type; /* Type of memory to allocate */
+ unsigned long physical; /* Physical used by i810 */
+} drm_agp_buffer_t;
+
+ /* For drm_agp_bind */
+typedef struct drm_agp_binding {
+ unsigned long handle; /* From drm_agp_buffer */
+ unsigned long offset; /* In bytes -- will round to page boundary */
+} drm_agp_binding_t;
+
+typedef struct drm_agp_info {
+ int agp_version_major;
+ int agp_version_minor;
+ unsigned long mode;
+ unsigned long aperture_base; /* physical address */
+ unsigned long aperture_size; /* bytes */
+ unsigned long memory_allowed; /* bytes */
+ unsigned long memory_used;
+
+ /* PCI information */
+ unsigned short id_vendor;
+ unsigned short id_device;
+} drm_agp_info_t;
+
+#define DRM_IOCTL_BASE 'd'
+#define DRM_IO(nr) _IO(DRM_IOCTL_BASE,nr)
+#define DRM_IOR(nr,size) _IOR(DRM_IOCTL_BASE,nr,size)
+#define DRM_IOW(nr,size) _IOW(DRM_IOCTL_BASE,nr,size)
+#define DRM_IOWR(nr,size) _IOWR(DRM_IOCTL_BASE,nr,size)
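The DRM_IO* wrappers build each command from the standard Linux ioctl encoding: an 8-bit number, the 8-bit base type ('d'), a 14-bit argument size, and 2 direction bits. A self-contained sketch of that packing (the SK_* names are placeholders; the real definitions live in asm/ioctl.h):

```c
#include <assert.h>

/* Illustrative mirror of the Linux _IOC layout:
 * bits 0-7 = nr, 8-15 = type, 16-29 = size, 30-31 = direction. */
#define SK_IOC_WRITE  1U
#define SK_IOC(dir, type, nr, size)                        \
        (((unsigned)(dir)  << 30) |                        \
         ((unsigned)(size) << 16) |                        \
         ((unsigned)(type) <<  8) |                        \
          (unsigned)(nr))
#define SK_IOC_NR(cmd)    ((cmd) & 0xffu)
#define SK_IOC_TYPE(cmd)  (((cmd) >> 8) & 0xffu)
```

So DRM_IOW(0x2a, drm_lock_t) yields a command whose low byte is 0x2a and whose type byte is 'd', which is what DRM_IOCTL_NR() recovers on dispatch.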
+
+
+#define DRM_IOCTL_VERSION DRM_IOWR(0x00, drm_version_t)
+#define DRM_IOCTL_GET_UNIQUE DRM_IOWR(0x01, drm_unique_t)
+#define DRM_IOCTL_GET_MAGIC DRM_IOR( 0x02, drm_auth_t)
+#define DRM_IOCTL_IRQ_BUSID DRM_IOWR(0x03, drm_irq_busid_t)
+
+#define DRM_IOCTL_SET_UNIQUE DRM_IOW( 0x10, drm_unique_t)
+#define DRM_IOCTL_AUTH_MAGIC DRM_IOW( 0x11, drm_auth_t)
+#define DRM_IOCTL_BLOCK DRM_IOWR(0x12, drm_block_t)
+#define DRM_IOCTL_UNBLOCK DRM_IOWR(0x13, drm_block_t)
+#define DRM_IOCTL_CONTROL DRM_IOW( 0x14, drm_control_t)
+#define DRM_IOCTL_ADD_MAP DRM_IOWR(0x15, drm_map_t)
+#define DRM_IOCTL_ADD_BUFS DRM_IOWR(0x16, drm_buf_desc_t)
+#define DRM_IOCTL_MARK_BUFS DRM_IOW( 0x17, drm_buf_desc_t)
+#define DRM_IOCTL_INFO_BUFS DRM_IOWR(0x18, drm_buf_info_t)
+#define DRM_IOCTL_MAP_BUFS DRM_IOWR(0x19, drm_buf_map_t)
+#define DRM_IOCTL_FREE_BUFS DRM_IOW( 0x1a, drm_buf_free_t)
+
+#define DRM_IOCTL_ADD_CTX DRM_IOWR(0x20, drm_ctx_t)
+#define DRM_IOCTL_RM_CTX DRM_IOWR(0x21, drm_ctx_t)
+#define DRM_IOCTL_MOD_CTX DRM_IOW( 0x22, drm_ctx_t)
+#define DRM_IOCTL_GET_CTX DRM_IOWR(0x23, drm_ctx_t)
+#define DRM_IOCTL_SWITCH_CTX DRM_IOW( 0x24, drm_ctx_t)
+#define DRM_IOCTL_NEW_CTX DRM_IOW( 0x25, drm_ctx_t)
+#define DRM_IOCTL_RES_CTX DRM_IOWR(0x26, drm_ctx_res_t)
+#define DRM_IOCTL_ADD_DRAW DRM_IOWR(0x27, drm_draw_t)
+#define DRM_IOCTL_RM_DRAW DRM_IOWR(0x28, drm_draw_t)
+#define DRM_IOCTL_DMA DRM_IOWR(0x29, drm_dma_t)
+#define DRM_IOCTL_LOCK DRM_IOW( 0x2a, drm_lock_t)
+#define DRM_IOCTL_UNLOCK DRM_IOW( 0x2b, drm_lock_t)
+#define DRM_IOCTL_FINISH DRM_IOW( 0x2c, drm_lock_t)
+
+#define DRM_IOCTL_AGP_ACQUIRE DRM_IO( 0x30)
+#define DRM_IOCTL_AGP_RELEASE DRM_IO( 0x31)
+#define DRM_IOCTL_AGP_ENABLE DRM_IOW( 0x32, drm_agp_mode_t)
+#define DRM_IOCTL_AGP_INFO DRM_IOR( 0x33, drm_agp_info_t)
+#define DRM_IOCTL_AGP_ALLOC DRM_IOWR(0x34, drm_agp_buffer_t)
+#define DRM_IOCTL_AGP_FREE DRM_IOW( 0x35, drm_agp_buffer_t)
+#define DRM_IOCTL_AGP_BIND DRM_IOW( 0x36, drm_agp_binding_t)
+#define DRM_IOCTL_AGP_UNBIND DRM_IOW( 0x37, drm_agp_binding_t)
+
+/* Mga specific ioctls */
+#define DRM_IOCTL_MGA_INIT DRM_IOW( 0x40, drm_mga_init_t)
+#define DRM_IOCTL_MGA_SWAP DRM_IOW( 0x41, drm_mga_swap_t)
+#define DRM_IOCTL_MGA_CLEAR DRM_IOW( 0x42, drm_mga_clear_t)
+#define DRM_IOCTL_MGA_ILOAD DRM_IOW( 0x43, drm_mga_iload_t)
+#define DRM_IOCTL_MGA_VERTEX DRM_IOW( 0x44, drm_mga_vertex_t)
+#define DRM_IOCTL_MGA_FLUSH DRM_IOW( 0x45, drm_lock_t )
+#define DRM_IOCTL_MGA_INDICES DRM_IOW( 0x46, drm_mga_indices_t)
+#define DRM_IOCTL_MGA_BLIT DRM_IOW( 0x47, drm_mga_blit_t)
+
+/* I810 specific ioctls */
+#define DRM_IOCTL_I810_INIT DRM_IOW( 0x40, drm_i810_init_t)
+#define DRM_IOCTL_I810_VERTEX DRM_IOW( 0x41, drm_i810_vertex_t)
+#define DRM_IOCTL_I810_CLEAR DRM_IOW( 0x42, drm_i810_clear_t)
+#define DRM_IOCTL_I810_FLUSH DRM_IO( 0x43)
+#define DRM_IOCTL_I810_GETAGE DRM_IO( 0x44)
+#define DRM_IOCTL_I810_GETBUF DRM_IOWR(0x45, drm_i810_dma_t)
+#define DRM_IOCTL_I810_SWAP DRM_IO( 0x46)
+#define DRM_IOCTL_I810_COPY DRM_IOW( 0x47, drm_i810_copy_t)
+#define DRM_IOCTL_I810_DOCOPY DRM_IO( 0x48)
+
+/* Rage 128 specific ioctls */
+#define DRM_IOCTL_R128_INIT DRM_IOW( 0x40, drm_r128_init_t)
+#define DRM_IOCTL_R128_CCE_START DRM_IO( 0x41)
+#define DRM_IOCTL_R128_CCE_STOP DRM_IOW( 0x42, drm_r128_cce_stop_t)
+#define DRM_IOCTL_R128_CCE_RESET DRM_IO( 0x43)
+#define DRM_IOCTL_R128_CCE_IDLE DRM_IO( 0x44)
+#define DRM_IOCTL_R128_RESET DRM_IO( 0x46)
+#define DRM_IOCTL_R128_SWAP DRM_IO( 0x47)
+#define DRM_IOCTL_R128_CLEAR DRM_IOW( 0x48, drm_r128_clear_t)
+#define DRM_IOCTL_R128_VERTEX DRM_IOW( 0x49, drm_r128_vertex_t)
+#define DRM_IOCTL_R128_INDICES DRM_IOW( 0x4a, drm_r128_indices_t)
+#define DRM_IOCTL_R128_BLIT DRM_IOW( 0x4b, drm_r128_blit_t)
+#define DRM_IOCTL_R128_DEPTH DRM_IOW( 0x4c, drm_r128_depth_t)
+#define DRM_IOCTL_R128_STIPPLE DRM_IOW( 0x4d, drm_r128_stipple_t)
+#define DRM_IOCTL_R128_PACKET DRM_IOWR(0x4e, drm_r128_packet_t)
+
+/* Radeon specific ioctls */
+#define DRM_IOCTL_RADEON_CP_INIT DRM_IOW( 0x40, drm_radeon_init_t)
+#define DRM_IOCTL_RADEON_CP_START DRM_IO( 0x41)
+#define DRM_IOCTL_RADEON_CP_STOP DRM_IOW( 0x42, drm_radeon_cp_stop_t)
+#define DRM_IOCTL_RADEON_CP_RESET DRM_IO( 0x43)
+#define DRM_IOCTL_RADEON_CP_IDLE DRM_IO( 0x44)
+#define DRM_IOCTL_RADEON_RESET DRM_IO( 0x45)
+#define DRM_IOCTL_RADEON_FULLSCREEN DRM_IOW( 0x46, drm_radeon_fullscreen_t)
+#define DRM_IOCTL_RADEON_SWAP DRM_IO( 0x47)
+#define DRM_IOCTL_RADEON_CLEAR DRM_IOW( 0x48, drm_radeon_clear_t)
+#define DRM_IOCTL_RADEON_VERTEX DRM_IOW( 0x49, drm_radeon_vertex_t)
+#define DRM_IOCTL_RADEON_INDICES DRM_IOW( 0x4a, drm_radeon_indices_t)
+#define DRM_IOCTL_RADEON_BLIT DRM_IOW( 0x4b, drm_radeon_blit_t)
+#define DRM_IOCTL_RADEON_STIPPLE DRM_IOW( 0x4c, drm_radeon_stipple_t)
+#define DRM_IOCTL_RADEON_INDIRECT DRM_IOWR(0x4d, drm_radeon_indirect_t)
+
+#ifdef CONFIG_DRM40_SIS
+/* SiS specific ioctls */
+#define SIS_IOCTL_FB_ALLOC DRM_IOWR(0x44, drm_sis_mem_t)
+#define SIS_IOCTL_FB_FREE DRM_IOW( 0x45, drm_sis_mem_t)
+#define SIS_IOCTL_AGP_INIT DRM_IOWR(0x53, drm_sis_agp_t)
+#define SIS_IOCTL_AGP_ALLOC DRM_IOWR(0x54, drm_sis_mem_t)
+#define SIS_IOCTL_AGP_FREE DRM_IOW( 0x55, drm_sis_mem_t)
+#define SIS_IOCTL_FLIP DRM_IOW( 0x48, drm_sis_flip_t)
+#define SIS_IOCTL_FLIP_INIT DRM_IO( 0x49)
+#define SIS_IOCTL_FLIP_FINAL DRM_IO( 0x50)
+#endif
+
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/drmP.h linux-2.4.13-lia/drivers/char/drm-4.0/drmP.h
--- linux-2.4.13/drivers/char/drm-4.0/drmP.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drmP.h Wed Oct 24 18:34:24 2001
@@ -0,0 +1,839 @@
+/* drmP.h -- Private header for Direct Rendering Manager -*- linux-c -*-
+ * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#ifndef _DRM_P_H_
+#define _DRM_P_H_
+
+#ifdef __KERNEL__
+#ifdef __alpha__
+/* add include of current.h so that "current" is defined
+ * before static inline funcs in wait.h. Doing this so we
+ * can build the DRM (part of PI DRI). 4/21/2000 S + B */
+#include <asm/current.h>
+#endif /* __alpha__ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/miscdevice.h>
+#include <linux/major.h>
+#include <linux/fs.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <linux/file.h>
+#include <linux/pci.h>
+#include <linux/wrapper.h>
+#include <linux/version.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h> /* For (un)lock_kernel */
+#include <linux/mm.h>
+#ifdef __alpha__
+#include <asm/pgtable.h> /* For pte_wrprotect */
+#endif
+#include <asm/io.h>
+#include <asm/mman.h>
+#include <asm/uaccess.h>
+#ifdef CONFIG_MTRR
+#include <asm/mtrr.h>
+#endif
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+#include <linux/types.h>
+#include <linux/agp_backend.h>
+#endif
+#if LINUX_VERSION_CODE >= 0x020100 /* KERNEL_VERSION(2,1,0) */
+#include <linux/tqueue.h>
+#include <linux/poll.h>
+#endif
+#if LINUX_VERSION_CODE < 0x020400
+#include "compat-pre24.h"
+#endif
+#include "drm.h"
+
+#define DRM_DEBUG_CODE 2 /* Include debugging code (if > 1, then
+ also include looping detection). */
+#define DRM_DMA_HISTOGRAM 1 /* Make histogram of DMA latency. */
+
+#define DRM_HASH_SIZE 16 /* Size of key hash table */
+#define DRM_KERNEL_CONTEXT 0 /* Change drm_resctx if changed */
+#define DRM_RESERVED_CONTEXTS 1 /* Change drm_resctx if changed */
+#define DRM_LOOPING_LIMIT 5000000
+#define DRM_BSZ 1024 /* Buffer size for /dev/drm? output */
+#define DRM_TIME_SLICE (HZ/20) /* Time slice for GLXContexts */
+#define DRM_LOCK_SLICE 1 /* Time slice for lock, in jiffies */
+
+#define DRM_FLAG_DEBUG 0x01
+#define DRM_FLAG_NOCTX 0x02
+
+#define DRM_MEM_DMA 0
+#define DRM_MEM_SAREA 1
+#define DRM_MEM_DRIVER 2
+#define DRM_MEM_MAGIC 3
+#define DRM_MEM_IOCTLS 4
+#define DRM_MEM_MAPS 5
+#define DRM_MEM_VMAS 6
+#define DRM_MEM_BUFS 7
+#define DRM_MEM_SEGS 8
+#define DRM_MEM_PAGES 9
+#define DRM_MEM_FILES 10
+#define DRM_MEM_QUEUES 11
+#define DRM_MEM_CMDS 12
+#define DRM_MEM_MAPPINGS 13
+#define DRM_MEM_BUFLISTS 14
+#define DRM_MEM_AGPLISTS 15
+#define DRM_MEM_TOTALAGP 16
+#define DRM_MEM_BOUNDAGP 17
+#define DRM_MEM_CTXBITMAP 18
+
+#define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8)
+
+ /* Backward compatibility section */
+ /* _PAGE_WT changed to _PAGE_PWT in 2.2.6 */
+#ifndef _PAGE_PWT
+#define _PAGE_PWT _PAGE_WT
+#endif
+ /* Wait queue declarations changed in 2.3.1 */
+#ifndef DECLARE_WAITQUEUE
+#define DECLARE_WAITQUEUE(w,c) struct wait_queue w = { c, NULL }
+typedef struct wait_queue *wait_queue_head_t;
+#define init_waitqueue_head(q) *q = NULL;
+#endif
+
+ /* _PAGE_4M changed to _PAGE_PSE in 2.3.23 */
+#ifndef _PAGE_PSE
+#define _PAGE_PSE _PAGE_4M
+#endif
+
+ /* vm_offset changed to vm_pgoff in 2.3.25 */
+#if LINUX_VERSION_CODE < 0x020319
+#define VM_OFFSET(vma) ((vma)->vm_offset)
+#else
+#define VM_OFFSET(vma) ((vma)->vm_pgoff << PAGE_SHIFT)
+#endif
+
+ /* *_nopage return values defined in 2.3.26 */
+#ifndef NOPAGE_SIGBUS
+#define NOPAGE_SIGBUS 0
+#endif
+#ifndef NOPAGE_OOM
+#define NOPAGE_OOM 0
+#endif
+
+ /* module_init/module_exit added in 2.3.13 */
+#ifndef module_init
+#define module_init(x) int init_module(void) { return x(); }
+#endif
+#ifndef module_exit
+#define module_exit(x) void cleanup_module(void) { x(); }
+#endif
+
+ /* Generic cmpxchg added in 2.3.x */
+#ifndef __HAVE_ARCH_CMPXCHG
+ /* Include this here so that driver can be
+ used with older kernels. */
+#if defined(__alpha__)
+static __inline__ unsigned long
+__cmpxchg_u32(volatile int *m, int old, int new)
+{
+ unsigned long prev, cmp;
+
+ __asm__ __volatile__(
+ "1: ldl_l %0,%2\n"
+ " cmpeq %0,%3,%1\n"
+ " beq %1,2f\n"
+ " mov %4,%1\n"
+ " stl_c %1,%2\n"
+ " beq %1,3f\n"
+ "2: mb\n"
+ ".subsection 2\n"
+ "3: br 1b\n"
+ ".previous"
+ : "=&r"(prev), "=&r"(cmp), "=m"(*m)
+ : "r"((long) old), "r"(new), "m"(*m));
+
+ return prev;
+}
+
+static __inline__ unsigned long
+__cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new)
+{
+ unsigned long prev, cmp;
+
+ __asm__ __volatile__(
+ "1: ldq_l %0,%2\n"
+ " cmpeq %0,%3,%1\n"
+ " beq %1,2f\n"
+ " mov %4,%1\n"
+ " stq_c %1,%2\n"
+ " beq %1,3f\n"
+ "2: mb\n"
+ ".subsection 2\n"
+ "3: br 1b\n"
+ ".previous"
+ : "=&r"(prev), "=&r"(cmp), "=m"(*m)
+ : "r"((long) old), "r"(new), "m"(*m));
+
+ return prev;
+}
+
+static __inline__ unsigned long
+__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
+{
+ switch (size) {
+ case 4:
+ return __cmpxchg_u32(ptr, old, new);
+ case 8:
+ return __cmpxchg_u64(ptr, old, new);
+ }
+ return old;
+}
+#define cmpxchg(ptr,o,n) \
+ ({ \
+ __typeof__(*(ptr)) _o_ = (o); \
+ __typeof__(*(ptr)) _n_ = (n); \
+ (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
+ (unsigned long)_n_, sizeof(*(ptr))); \
+ })
+
+#elif __i386__
+static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+ unsigned long new, int size)
+{
+ unsigned long prev;
+ switch (size) {
+ case 1:
+ __asm__ __volatile__(LOCK_PREFIX "cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
+ : "memory");
+ return prev;
+ case 2:
+ __asm__ __volatile__(LOCK_PREFIX "cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
+ : "memory");
+ return prev;
+ case 4:
+ __asm__ __volatile__(LOCK_PREFIX "cmpxchgl %1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
+ : "memory");
+ return prev;
+ }
+ return old;
+}
+
+#define cmpxchg(ptr,o,n) \
+ ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o), \
+ (unsigned long)(n),sizeof(*(ptr))))
+#endif /* i386 & alpha */
+#endif
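The compat shim above supplies cmpxchg() for kernels predating the generic version. For readers unfamiliar with the contract it emulates, here is a minimal user-space sketch: atomically install a new value only if the location still holds the expected old value, and always return the prior contents. GCC's __sync_val_compare_and_swap builtin stands in for the inline-assembly implementations; this is an illustration of the semantics, not the kernel code.

```c
#include <assert.h>

/* Sketch of the compare-and-swap contract provided by the compat
 * cmpxchg() above: atomically set *ptr to new_val only if *ptr still
 * equals old, and return whatever *ptr held beforehand.  The GCC
 * builtin used here has the same contract as the ldl_l/stl_c (alpha)
 * and lock cmpxchg (i386) sequences in the shim. */
static unsigned long demo_cmpxchg(unsigned long *ptr, unsigned long old,
                                  unsigned long new_val)
{
	return __sync_val_compare_and_swap(ptr, old, new_val);
}
```

A successful call returns the old value and performs the swap; a failed call (the location changed underneath) returns the current value and leaves it untouched, which is how lock-free retry loops detect contention.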
+
+ /* Macros to make printk easier */
+#define DRM_ERROR(fmt, arg...) \
+ printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ "] *ERROR* " fmt , ##arg)
+#define DRM_MEM_ERROR(area, fmt, arg...) \
+ printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ ":%s] *ERROR* " fmt , \
+ drm_mem_stats[area].name , ##arg)
+#define DRM_INFO(fmt, arg...) printk(KERN_INFO "[" DRM_NAME "] " fmt , ##arg)
+
+#if DRM_DEBUG_CODE
+#define DRM_DEBUG(fmt, arg...) \
+ do { \
+ if (drm_flags&DRM_FLAG_DEBUG) \
+ printk(KERN_DEBUG \
+ "[" DRM_NAME ":" __FUNCTION__ "] " fmt , \
+ ##arg); \
+ } while (0)
+#else
+#define DRM_DEBUG(fmt, arg...) do { } while (0)
+#endif
+
+#define DRM_PROC_LIMIT (PAGE_SIZE-80)
+
+#define DRM_PROC_PRINT(fmt, arg...) \
+ len += sprintf(&buf[len], fmt , ##arg); \
+ if (len > DRM_PROC_LIMIT) return len;
+
+#define DRM_PROC_PRINT_RET(ret, fmt, arg...) \
+ len += sprintf(&buf[len], fmt , ##arg); \
+ if (len > DRM_PROC_LIMIT) { ret; return len; }
+
+ /* Internal types and structures */
+#define DRM_ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))
+#define DRM_MIN(a,b) ((a)<(b)?(a):(b))
+#define DRM_MAX(a,b) ((a)>(b)?(a):(b))
+
+#define DRM_LEFTCOUNT(x) (((x)->rp + (x)->count - (x)->wp) % ((x)->count + 1))
+#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
+#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
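DRM_LEFTCOUNT/DRM_BUFCOUNT above implement the classic ring-buffer occupancy arithmetic: the waitlist's bufs array is allocated one slot longer than its capacity so that "full" and "empty" are distinguishable without an extra flag. A sketch with plain indices in place of the driver's read/write pointers (an illustration of the arithmetic, not the driver code):

```c
/* Ring with count usable entries stored in count+1 slots; rp/wp are
 * read/write indices in [0, count].  Mirrors the DRM_LEFTCOUNT and
 * DRM_BUFCOUNT macro arithmetic. */
struct ring {
	int count;	/* usable capacity */
	int rp, wp;	/* read/write indices */
};

static int ring_left(const struct ring *r)	/* free slots */
{
	return (r->rp + r->count - r->wp) % (r->count + 1);
}

static int ring_used(const struct ring *r)	/* queued entries */
{
	return r->count - ring_left(r);
}
```

With count = 4 (five slots), queuing three entries then consuming two leaves ring_used() == 1, matching what DRM_BUFCOUNT would report for the same pointer positions.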
+
+typedef int drm_ioctl_t(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+typedef struct drm_ioctl_desc {
+ drm_ioctl_t *func;
+ int auth_needed;
+ int root_only;
+} drm_ioctl_desc_t;
+
+typedef struct drm_devstate {
+ pid_t owner; /* X server pid holding x_lock */
+
+} drm_devstate_t;
+
+typedef struct drm_magic_entry {
+ drm_magic_t magic;
+ struct drm_file *priv;
+ struct drm_magic_entry *next;
+} drm_magic_entry_t;
+
+typedef struct drm_magic_head {
+ struct drm_magic_entry *head;
+ struct drm_magic_entry *tail;
+} drm_magic_head_t;
+
+typedef struct drm_vma_entry {
+ struct vm_area_struct *vma;
+ struct drm_vma_entry *next;
+ pid_t pid;
+} drm_vma_entry_t;
+
+typedef struct drm_buf {
+ int idx; /* Index into master buflist */
+ int total; /* Buffer size */
+ int order; /* log-base-2(total) */
+ int used; /* Amount of buffer in use (for DMA) */
+ unsigned long offset; /* Byte offset (used internally) */
+ void *address; /* Address of buffer */
+ unsigned long bus_address; /* Bus address of buffer */
+ struct drm_buf *next; /* Kernel-only: used for free list */
+ __volatile__ int waiting; /* On kernel DMA queue */
+ __volatile__ int pending; /* On hardware DMA queue */
+ wait_queue_head_t dma_wait; /* Processes waiting */
+ pid_t pid; /* PID of holding process */
+ int context; /* Kernel queue for this buffer */
+ int while_locked;/* Dispatch this buffer while locked */
+ enum {
+ DRM_LIST_NONE = 0,
+ DRM_LIST_FREE = 1,
+ DRM_LIST_WAIT = 2,
+ DRM_LIST_PEND = 3,
+ DRM_LIST_PRIO = 4,
+ DRM_LIST_RECLAIM = 5
+ } list; /* Which list we're on */
+
+#if DRM_DMA_HISTOGRAM
+ cycles_t time_queued; /* Queued to kernel DMA queue */
+ cycles_t time_dispatched; /* Dispatched to hardware */
+ cycles_t time_completed; /* Completed by hardware */
+ cycles_t time_freed; /* Back on freelist */
+#endif
+
+ int dev_priv_size; /* Size of buffer private storage */
+ void *dev_private; /* Per-buffer private storage */
+} drm_buf_t;
+
+#if DRM_DMA_HISTOGRAM
+#define DRM_DMA_HISTOGRAM_SLOTS 9
+#define DRM_DMA_HISTOGRAM_INITIAL 10
+#define DRM_DMA_HISTOGRAM_NEXT(current) ((current)*10)
+typedef struct drm_histogram {
+ atomic_t total;
+
+ atomic_t queued_to_dispatched[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t dispatched_to_completed[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t completed_to_freed[DRM_DMA_HISTOGRAM_SLOTS];
+
+ atomic_t queued_to_completed[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t queued_to_freed[DRM_DMA_HISTOGRAM_SLOTS];
+
+ atomic_t dma[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t schedule[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t ctx[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t lacq[DRM_DMA_HISTOGRAM_SLOTS];
+ atomic_t lhld[DRM_DMA_HISTOGRAM_SLOTS];
+} drm_histogram_t;
+#endif
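The histogram macros above define logarithmic buckets: boundaries start at DRM_DMA_HISTOGRAM_INITIAL and each successive bound is ten times the last, so nine slots cover several orders of magnitude of cycle counts. A sketch of the bucketing this implies (the real drm_histogram_slot() lives in gen_dma.c; the loop below is an assumed illustration of the idea, not that function):

```c
#define SLOTS	9
#define INITIAL	10
#define NEXT(c)	((c) * 10)

/* Map a cycle count to a histogram slot: slot 0 holds counts below 10,
 * slot 1 below 100, and so on; anything past the last boundary lands
 * in the final overflow slot. */
static int histogram_slot(unsigned long count)
{
	unsigned long bound = INITIAL;
	int slot;

	for (slot = 0; slot < SLOTS - 1; slot++, bound = NEXT(bound)) {
		if (count < bound)
			return slot;
	}
	return SLOTS - 1;
}
```

Logarithmic buckets keep the table small while still separating fast-path completions (tens of cycles) from pathological ones (hundreds of millions), which is what matters when profiling DMA latencies.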
+
+ /* bufs is one longer than it has to be */
+typedef struct drm_waitlist {
+ int count; /* Number of possible buffers */
+ drm_buf_t **bufs; /* List of pointers to buffers */
+ drm_buf_t **rp; /* Read pointer */
+ drm_buf_t **wp; /* Write pointer */
+ drm_buf_t **end; /* End pointer */
+ spinlock_t read_lock;
+ spinlock_t write_lock;
+} drm_waitlist_t;
+
+typedef struct drm_freelist {
+ int initialized; /* Freelist in use */
+ atomic_t count; /* Number of free buffers */
+ drm_buf_t *next; /* End pointer */
+
+ wait_queue_head_t waiting; /* Processes waiting on free bufs */
+ int low_mark; /* Low water mark */
+ int high_mark; /* High water mark */
+ atomic_t wfh; /* If waiting for high mark */
+ spinlock_t lock;
+} drm_freelist_t;
+
+typedef struct drm_buf_entry {
+ int buf_size;
+ int buf_count;
+ drm_buf_t *buflist;
+ int seg_count;
+ int page_order;
+ unsigned long *seglist;
+
+ drm_freelist_t freelist;
+} drm_buf_entry_t;
+
+typedef struct drm_hw_lock {
+ __volatile__ unsigned int lock;
+ char padding[60]; /* Pad to cache line */
+} drm_hw_lock_t;
+
+typedef struct drm_file {
+ int authenticated;
+ int minor;
+ pid_t pid;
+ uid_t uid;
+ drm_magic_t magic;
+ unsigned long ioctl_count;
+ struct drm_file *next;
+ struct drm_file *prev;
+ struct drm_device *dev;
+ int remove_auth_on_close;
+} drm_file_t;
+
+
+typedef struct drm_queue {
+ atomic_t use_count; /* Outstanding uses (+1) */
+ atomic_t finalization; /* Finalization in progress */
+ atomic_t block_count; /* Count of processes waiting */
+ atomic_t block_read; /* Queue blocked for reads */
+ wait_queue_head_t read_queue; /* Processes waiting on block_read */
+ atomic_t block_write; /* Queue blocked for writes */
+ wait_queue_head_t write_queue; /* Processes waiting on block_write */
+ atomic_t total_queued; /* Total queued statistic */
+ atomic_t total_flushed;/* Total flushes statistic */
+ atomic_t total_locks; /* Total locks statistics */
+ drm_ctx_flags_t flags; /* Context preserving and 2D-only */
+ drm_waitlist_t waitlist; /* Pending buffers */
+ wait_queue_head_t flush_queue; /* Processes waiting until flush */
+} drm_queue_t;
+
+typedef struct drm_lock_data {
+ drm_hw_lock_t *hw_lock; /* Hardware lock */
+ pid_t pid; /* PID of lock holder (0=kernel) */
+ wait_queue_head_t lock_queue; /* Queue of blocked processes */
+ unsigned long lock_time; /* Time of last lock in jiffies */
+} drm_lock_data_t;
+
+typedef struct drm_device_dma {
+ /* Performance Counters */
+ atomic_t total_prio; /* Total DRM_DMA_PRIORITY */
+ atomic_t total_bytes; /* Total bytes DMA'd */
+ atomic_t total_dmas; /* Total DMA buffers dispatched */
+
+ atomic_t total_missed_dma; /* Missed drm_do_dma */
+ atomic_t total_missed_lock; /* Missed lock in drm_do_dma */
+ atomic_t total_missed_free; /* Missed drm_free_this_buffer */
+ atomic_t total_missed_sched;/* Missed drm_dma_schedule */
+
+ atomic_t total_tried; /* Tried next_buffer */
+ atomic_t total_hit; /* Sent next_buffer */
+ atomic_t total_lost; /* Lost interrupt */
+
+ drm_buf_entry_t bufs[DRM_MAX_ORDER+1];
+ int buf_count;
+ drm_buf_t **buflist; /* Vector of pointers into bufs */
+ int seg_count;
+ int page_count;
+ unsigned long *pagelist;
+ unsigned long byte_count;
+ enum {
+ _DRM_DMA_USE_AGP = 0x01
+ } flags;
+
+ /* DMA support */
+ drm_buf_t *this_buffer; /* Buffer being sent */
+ drm_buf_t *next_buffer; /* Selected buffer to send */
+ drm_queue_t *next_queue; /* Queue from which buffer selected*/
+ wait_queue_head_t waiting; /* Processes waiting on free bufs */
+} drm_device_dma_t;
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+typedef struct drm_agp_mem {
+ unsigned long handle;
+ agp_memory *memory;
+ unsigned long bound; /* address */
+ int pages;
+ struct drm_agp_mem *prev;
+ struct drm_agp_mem *next;
+} drm_agp_mem_t;
+
+typedef struct drm_agp_head {
+ agp_kern_info agp_info;
+ const char *chipset;
+ drm_agp_mem_t *memory;
+ unsigned long mode;
+ int enabled;
+ int acquired;
+ unsigned long base;
+ int agp_mtrr;
+ int cant_use_aperture;
+ unsigned long page_mask;
+} drm_agp_head_t;
+#endif
+
+typedef struct drm_sigdata {
+ int context;
+ drm_hw_lock_t *lock;
+} drm_sigdata_t;
+
+typedef struct drm_device {
+ const char *name; /* Simple driver name */
+ char *unique; /* Unique identifier: e.g., busid */
+ int unique_len; /* Length of unique field */
+ dev_t device; /* Device number for mknod */
+ char *devname; /* For /proc/interrupts */
+
+ int blocked; /* Blocked due to VC switch? */
+ struct proc_dir_entry *root; /* Root for this device's entries */
+
+ /* Locks */
+ spinlock_t count_lock; /* For inuse, open_count, buf_use */
+ struct semaphore struct_sem; /* For others */
+
+ /* Usage Counters */
+ int open_count; /* Outstanding files open */
+ atomic_t ioctl_count; /* Outstanding IOCTLs pending */
+ atomic_t vma_count; /* Outstanding vma areas open */
+ int buf_use; /* Buffers in use -- cannot alloc */
+ atomic_t buf_alloc; /* Buffer allocation in progress */
+
+ /* Performance Counters */
+ atomic_t total_open;
+ atomic_t total_close;
+ atomic_t total_ioctl;
+ atomic_t total_irq; /* Total interruptions */
+ atomic_t total_ctx; /* Total context switches */
+
+ atomic_t total_locks;
+ atomic_t total_unlocks;
+ atomic_t total_contends;
+ atomic_t total_sleeps;
+
+ /* Authentication */
+ drm_file_t *file_first;
+ drm_file_t *file_last;
+ drm_magic_head_t magiclist[DRM_HASH_SIZE];
+
+ /* Memory management */
+ drm_map_t **maplist; /* Vector of pointers to regions */
+ int map_count; /* Number of mappable regions */
+
+ drm_vma_entry_t *vmalist; /* List of vmas (for debugging) */
+ drm_lock_data_t lock; /* Information on hardware lock */
+
+ /* DMA queues (contexts) */
+ int queue_count; /* Number of active DMA queues */
+ int queue_reserved; /* Number of reserved DMA queues */
+ int queue_slots; /* Actual length of queuelist */
+ drm_queue_t **queuelist; /* Vector of pointers to DMA queues */
+ drm_device_dma_t *dma; /* Optional pointer for DMA support */
+
+ /* Context support */
+ int irq; /* Interrupt used by board */
+ __volatile__ long context_flag; /* Context swapping flag */
+ __volatile__ long interrupt_flag; /* Interruption handler flag */
+ __volatile__ long dma_flag; /* DMA dispatch flag */
+ struct timer_list timer; /* Timer for delaying ctx switch */
+ wait_queue_head_t context_wait; /* Processes waiting on ctx switch */
+ int last_checked; /* Last context checked for DMA */
+ int last_context; /* Last current context */
+ unsigned long last_switch; /* jiffies at last context switch */
+ struct tq_struct tq;
+ cycles_t ctx_start;
+ cycles_t lck_start;
+#if DRM_DMA_HISTOGRAM
+ drm_histogram_t histo;
+#endif
+
+ /* Callback to X server for context switch
+ and for heavy-handed reset. */
+ char buf[DRM_BSZ]; /* Output buffer */
+ char *buf_rp; /* Read pointer */
+ char *buf_wp; /* Write pointer */
+ char *buf_end; /* End pointer */
+ struct fasync_struct *buf_async;/* Processes waiting for SIGIO */
+ wait_queue_head_t buf_readers; /* Processes waiting to read */
+ wait_queue_head_t buf_writers; /* Processes waiting to ctx switch */
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+ drm_agp_head_t *agp;
+#endif
+ unsigned long *ctx_bitmap;
+ void *dev_private;
+ drm_sigdata_t sigdata; /* For block_all_signals */
+ sigset_t sigmask;
+} drm_device_t;
+
+ /* Internal function definitions */
+
+ /* Misc. support (init.c) */
+extern int drm_flags;
+extern void drm_parse_options(char *s);
+extern int drm_cpu_valid(void);
+
+
+ /* Device support (fops.c) */
+extern int drm_open_helper(struct inode *inode, struct file *filp,
+ drm_device_t *dev);
+extern int drm_flush(struct file *filp);
+extern int drm_release(struct inode *inode, struct file *filp);
+extern int drm_fasync(int fd, struct file *filp, int on);
+extern ssize_t drm_read(struct file *filp, char *buf, size_t count,
+ loff_t *off);
+extern int drm_write_string(drm_device_t *dev, const char *s);
+extern unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait);
+
+ /* Mapping support (vm.c) */
+#if LINUX_VERSION_CODE < 0x020317
+extern unsigned long drm_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern unsigned long drm_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern unsigned long drm_vm_shm_nopage_lock(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern unsigned long drm_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+#else
+ /* Return type changed in 2.3.23 */
+extern struct page *drm_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern struct page *drm_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern struct page *drm_vm_shm_nopage_lock(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+extern struct page *drm_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int write_access);
+#endif
+extern void drm_vm_open(struct vm_area_struct *vma);
+extern void drm_vm_close(struct vm_area_struct *vma);
+extern int drm_mmap_dma(struct file *filp,
+ struct vm_area_struct *vma);
+extern int drm_mmap(struct file *filp, struct vm_area_struct *vma);
+
+
+ /* Proc support (proc.c) */
+extern int drm_proc_init(drm_device_t *dev);
+extern int drm_proc_cleanup(void);
+
+ /* Memory management support (memory.c) */
+extern void drm_mem_init(void);
+extern int drm_mem_info(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data);
+extern void *drm_alloc(size_t size, int area);
+extern void *drm_realloc(void *oldpt, size_t oldsize, size_t size,
+ int area);
+extern char *drm_strdup(const char *s, int area);
+extern void drm_strfree(const char *s, int area);
+extern void drm_free(void *pt, size_t size, int area);
+extern unsigned long drm_alloc_pages(int order, int area);
+extern void drm_free_pages(unsigned long address, int order,
+ int area);
+extern void *drm_ioremap(unsigned long offset, unsigned long size,
+ drm_device_t *dev);
+extern void drm_ioremapfree(void *pt, unsigned long size,
+ drm_device_t *dev);
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+extern agp_memory *drm_alloc_agp(int pages, u32 type);
+extern int drm_free_agp(agp_memory *handle, int pages);
+extern int drm_bind_agp(agp_memory *handle, unsigned int start);
+extern int drm_unbind_agp(agp_memory *handle);
+#endif
+
+
+ /* Buffer management support (bufs.c) */
+extern int drm_order(unsigned long size);
+extern int drm_addmap(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_addbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_infobufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_markbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_freebufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_mapbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Buffer list management support (lists.c) */
+extern int drm_waitlist_create(drm_waitlist_t *bl, int count);
+extern int drm_waitlist_destroy(drm_waitlist_t *bl);
+extern int drm_waitlist_put(drm_waitlist_t *bl, drm_buf_t *buf);
+extern drm_buf_t *drm_waitlist_get(drm_waitlist_t *bl);
+
+extern int drm_freelist_create(drm_freelist_t *bl, int count);
+extern int drm_freelist_destroy(drm_freelist_t *bl);
+extern int drm_freelist_put(drm_device_t *dev, drm_freelist_t *bl,
+ drm_buf_t *buf);
+extern drm_buf_t *drm_freelist_get(drm_freelist_t *bl, int block);
+
+ /* DMA support (gen_dma.c) */
+extern void drm_dma_setup(drm_device_t *dev);
+extern void drm_dma_takedown(drm_device_t *dev);
+extern void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf);
+extern void drm_reclaim_buffers(drm_device_t *dev, pid_t pid);
+extern int drm_context_switch(drm_device_t *dev, int old, int new);
+extern int drm_context_switch_complete(drm_device_t *dev, int new);
+extern void drm_clear_next_buffer(drm_device_t *dev);
+extern int drm_select_queue(drm_device_t *dev,
+ void (*wrapper)(unsigned long));
+extern int drm_dma_enqueue(drm_device_t *dev, drm_dma_t *dma);
+extern int drm_dma_get_buffers(drm_device_t *dev, drm_dma_t *dma);
+#if DRM_DMA_HISTOGRAM
+extern int drm_histogram_slot(unsigned long count);
+extern void drm_histogram_compute(drm_device_t *dev, drm_buf_t *buf);
+#endif
+
+
+ /* Misc. IOCTL support (ioctl.c) */
+extern int drm_irq_busid(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_getunique(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_setunique(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Context IOCTL support (context.c) */
+extern int drm_resctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_addctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_modctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_getctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_switchctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_newctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_rmctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Drawable IOCTL support (drawable.c) */
+extern int drm_adddraw(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_rmdraw(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Authentication IOCTL support (auth.c) */
+extern int drm_add_magic(drm_device_t *dev, drm_file_t *priv,
+ drm_magic_t magic);
+extern int drm_remove_magic(drm_device_t *dev, drm_magic_t magic);
+extern int drm_getmagic(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_authmagic(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+
+ /* Locking IOCTL support (lock.c) */
+extern int drm_block(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_unblock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_lock_take(__volatile__ unsigned int *lock,
+ unsigned int context);
+extern int drm_lock_transfer(drm_device_t *dev,
+ __volatile__ unsigned int *lock,
+ unsigned int context);
+extern int drm_lock_free(drm_device_t *dev,
+ __volatile__ unsigned int *lock,
+ unsigned int context);
+extern int drm_finish(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_flush_unblock(drm_device_t *dev, int context,
+ drm_lock_flags_t flags);
+extern int drm_flush_block_and_flush(drm_device_t *dev, int context,
+ drm_lock_flags_t flags);
+extern int drm_notifier(void *priv);
+
+ /* Context Bitmap support (ctxbitmap.c) */
+extern int drm_ctxbitmap_init(drm_device_t *dev);
+extern void drm_ctxbitmap_cleanup(drm_device_t *dev);
+extern int drm_ctxbitmap_next(drm_device_t *dev);
+extern void drm_ctxbitmap_free(drm_device_t *dev, int ctx_handle);
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+ /* AGP/GART support (agpsupport.c) */
+extern drm_agp_head_t *drm_agp_init(void);
+extern void drm_agp_uninit(void);
+extern int drm_agp_acquire(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern void _drm_agp_release(void);
+extern int drm_agp_release(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_enable(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_info(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_alloc(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_free(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_unbind(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int drm_agp_bind(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern agp_memory *drm_agp_allocate_memory(size_t pages, u32 type);
+extern int drm_agp_free_memory(agp_memory *handle);
+extern int drm_agp_bind_memory(agp_memory *handle, off_t start);
+extern int drm_agp_unbind_memory(agp_memory *handle);
+#endif
+#endif
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_context.c linux-2.4.13-lia/drivers/char/drm-4.0/ffb_context.c
--- linux-2.4.13/drivers/char/drm-4.0/ffb_context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,540 @@
+/* $Id: ffb_context.c,v 1.4 2000/08/29 07:01:55 davem Exp $
+ * ffb_context.c: Creator/Creator3D DRI/DRM context switching.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ *
+ * Almost entirely stolen from tdfx_context.c, see there
+ * for authors.
+ */
+
+#include <linux/sched.h>
+#include <asm/upa.h>
+
+#include "drmP.h"
+
+#include "ffb_drv.h"
+
+static int ffb_alloc_queue(drm_device_t *dev, int is_2d_only)
+{
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ int i;
+
+ for (i = 0; i < FFB_MAX_CTXS; i++) {
+ if (fpriv->hw_state[i] == NULL)
+ break;
+ }
+ if (i == FFB_MAX_CTXS)
+ return -1;
+
+ fpriv->hw_state[i] = kmalloc(sizeof(struct ffb_hw_context), GFP_KERNEL);
+ if (fpriv->hw_state[i] == NULL)
+ return -1;
+
+ fpriv->hw_state[i]->is_2d_only = is_2d_only;
+
+ /* Plus one because 0 is the special DRM_KERNEL_CONTEXT. */
+ return i + 1;
+}
+
+static void ffb_save_context(ffb_dev_priv_t *fpriv, int idx)
+{
+ ffb_fbcPtr ffb = fpriv->regs;
+ struct ffb_hw_context *ctx;
+ int i;
+
+ ctx = fpriv->hw_state[idx - 1];
+ if (idx == 0 || ctx == NULL)
+ return;
+
+ if (ctx->is_2d_only) {
+ /* 2D applications only care about certain pieces
+ * of state.
+ */
+ ctx->drawop = upa_readl(&ffb->drawop);
+ ctx->ppc = upa_readl(&ffb->ppc);
+ ctx->wid = upa_readl(&ffb->wid);
+ ctx->fg = upa_readl(&ffb->fg);
+ ctx->bg = upa_readl(&ffb->bg);
+ ctx->xclip = upa_readl(&ffb->xclip);
+ ctx->fbc = upa_readl(&ffb->fbc);
+ ctx->rop = upa_readl(&ffb->rop);
+ ctx->cmp = upa_readl(&ffb->cmp);
+ ctx->matchab = upa_readl(&ffb->matchab);
+ ctx->magnab = upa_readl(&ffb->magnab);
+ ctx->pmask = upa_readl(&ffb->pmask);
+ ctx->xpmask = upa_readl(&ffb->xpmask);
+ ctx->lpat = upa_readl(&ffb->lpat);
+ ctx->fontxy = upa_readl(&ffb->fontxy);
+ ctx->fontw = upa_readl(&ffb->fontw);
+ ctx->fontinc = upa_readl(&ffb->fontinc);
+
+ /* stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ ctx->stencil = upa_readl(&ffb->stencil);
+ ctx->stencilctl = upa_readl(&ffb->stencilctl);
+ }
+
+ for (i = 0; i < 32; i++)
+ ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
+ ctx->ucsr = upa_readl(&ffb->ucsr);
+ return;
+ }
+
+ /* Fetch drawop. */
+ ctx->drawop = upa_readl(&ffb->drawop);
+
+ /* If we were saving the vertex registers, this is where
+ * we would do it. We would save 32 32-bit words starting
+ * at ffb->suvtx.
+ */
+
+ /* Capture rendering attributes. */
+
+ ctx->ppc = upa_readl(&ffb->ppc); /* Pixel Processor Control */
+ ctx->wid = upa_readl(&ffb->wid); /* Current WID */
+ ctx->fg = upa_readl(&ffb->fg); /* Constant FG color */
+ ctx->bg = upa_readl(&ffb->bg); /* Constant BG color */
+ ctx->consty = upa_readl(&ffb->consty); /* Constant Y */
+ ctx->constz = upa_readl(&ffb->constz); /* Constant Z */
+ ctx->xclip = upa_readl(&ffb->xclip); /* X plane clip */
+ ctx->dcss = upa_readl(&ffb->dcss); /* Depth Cue Scale Slope */
+ ctx->vclipmin = upa_readl(&ffb->vclipmin); /* Primary XY clip, minimum */
+ ctx->vclipmax = upa_readl(&ffb->vclipmax); /* Primary XY clip, maximum */
+ ctx->vclipzmin = upa_readl(&ffb->vclipzmin); /* Primary Z clip, minimum */
+ ctx->vclipzmax = upa_readl(&ffb->vclipzmax); /* Primary Z clip, maximum */
+ ctx->dcsf = upa_readl(&ffb->dcsf); /* Depth Cue Scale Front Bound */
+ ctx->dcsb = upa_readl(&ffb->dcsb); /* Depth Cue Scale Back Bound */
+ ctx->dczf = upa_readl(&ffb->dczf); /* Depth Cue Scale Z Front */
+ ctx->dczb = upa_readl(&ffb->dczb); /* Depth Cue Scale Z Back */
+ ctx->blendc = upa_readl(&ffb->blendc); /* Alpha Blend Control */
+ ctx->blendc1 = upa_readl(&ffb->blendc1); /* Alpha Blend Color 1 */
+ ctx->blendc2 = upa_readl(&ffb->blendc2); /* Alpha Blend Color 2 */
+ ctx->fbc = upa_readl(&ffb->fbc); /* Frame Buffer Control */
+ ctx->rop = upa_readl(&ffb->rop); /* Raster Operation */
+ ctx->cmp = upa_readl(&ffb->cmp); /* Compare Controls */
+ ctx->matchab = upa_readl(&ffb->matchab); /* Buffer A/B Match Ops */
+ ctx->matchc = upa_readl(&ffb->matchc); /* Buffer C Match Ops */
+ ctx->magnab = upa_readl(&ffb->magnab); /* Buffer A/B Magnitude Ops */
+ ctx->magnc = upa_readl(&ffb->magnc); /* Buffer C Magnitude Ops */
+ ctx->pmask = upa_readl(&ffb->pmask); /* RGB Plane Mask */
+ ctx->xpmask = upa_readl(&ffb->xpmask); /* X Plane Mask */
+ ctx->ypmask = upa_readl(&ffb->ypmask); /* Y Plane Mask */
+ ctx->zpmask = upa_readl(&ffb->zpmask); /* Z Plane Mask */
+
+ /* Auxiliary Clips. */
+ ctx->auxclip0min = upa_readl(&ffb->auxclip[0].min);
+ ctx->auxclip0max = upa_readl(&ffb->auxclip[0].max);
+ ctx->auxclip1min = upa_readl(&ffb->auxclip[1].min);
+ ctx->auxclip1max = upa_readl(&ffb->auxclip[1].max);
+ ctx->auxclip2min = upa_readl(&ffb->auxclip[2].min);
+ ctx->auxclip2max = upa_readl(&ffb->auxclip[2].max);
+ ctx->auxclip3min = upa_readl(&ffb->auxclip[3].min);
+ ctx->auxclip3max = upa_readl(&ffb->auxclip[3].max);
+
+ ctx->lpat = upa_readl(&ffb->lpat); /* Line Pattern */
+ ctx->fontxy = upa_readl(&ffb->fontxy); /* XY Font Coordinate */
+ ctx->fontw = upa_readl(&ffb->fontw); /* Font Width */
+ ctx->fontinc = upa_readl(&ffb->fontinc); /* Font X/Y Increment */
+
+ /* These registers/features only exist on FFB2 and later chips. */
+ if (fpriv->ffb_type >= ffb2_prototype) {
+ ctx->dcss1 = upa_readl(&ffb->dcss1); /* Depth Cue Scale Slope 1 */
+ ctx->dcss2 = upa_readl(&ffb->dcss2); /* Depth Cue Scale Slope 2 */
+ ctx->dcss3 = upa_readl(&ffb->dcss3); /* Depth Cue Scale Slope 3 */
+ ctx->dcs2 = upa_readl(&ffb->dcs2); /* Depth Cue Scale 2 */
+ ctx->dcs3 = upa_readl(&ffb->dcs3); /* Depth Cue Scale 3 */
+ ctx->dcs4 = upa_readl(&ffb->dcs4); /* Depth Cue Scale 4 */
+ ctx->dcd2 = upa_readl(&ffb->dcd2); /* Depth Cue Depth 2 */
+ ctx->dcd3 = upa_readl(&ffb->dcd3); /* Depth Cue Depth 3 */
+ ctx->dcd4 = upa_readl(&ffb->dcd4); /* Depth Cue Depth 4 */
+
+ /* And stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ ctx->stencil = upa_readl(&ffb->stencil);
+ ctx->stencilctl = upa_readl(&ffb->stencilctl);
+ }
+ }
+
+ /* Save the 32x32 area pattern. */
+ for (i = 0; i < 32; i++)
+ ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
+
+ /* Finally, stash away the User Control/Status Register. */
+ ctx->ucsr = upa_readl(&ffb->ucsr);
+}
+
+static void ffb_restore_context(ffb_dev_priv_t *fpriv, int old, int idx)
+{
+ ffb_fbcPtr ffb = fpriv->regs;
+ struct ffb_hw_context *ctx;
+ int i;
+
+ ctx = fpriv->hw_state[idx - 1];
+ if (idx == 0 || ctx == NULL)
+ return;
+
+ if (ctx->is_2d_only) {
+ /* 2D applications only care about certain pieces
+ * of state.
+ */
+ upa_writel(ctx->drawop, &ffb->drawop);
+
+ /* If we were restoring the vertex registers, this is where
+ * we would do it. We would restore 32 32-bit words starting
+ * at ffb->suvtx.
+ */
+
+ upa_writel(ctx->ppc, &ffb->ppc);
+ upa_writel(ctx->wid, &ffb->wid);
+ upa_writel(ctx->fg, &ffb->fg);
+ upa_writel(ctx->bg, &ffb->bg);
+ upa_writel(ctx->xclip, &ffb->xclip);
+ upa_writel(ctx->fbc, &ffb->fbc);
+ upa_writel(ctx->rop, &ffb->rop);
+ upa_writel(ctx->cmp, &ffb->cmp);
+ upa_writel(ctx->matchab, &ffb->matchab);
+ upa_writel(ctx->magnab, &ffb->magnab);
+ upa_writel(ctx->pmask, &ffb->pmask);
+ upa_writel(ctx->xpmask, &ffb->xpmask);
+ upa_writel(ctx->lpat, &ffb->lpat);
+ upa_writel(ctx->fontxy, &ffb->fontxy);
+ upa_writel(ctx->fontw, &ffb->fontw);
+ upa_writel(ctx->fontinc, &ffb->fontinc);
+
+ /* stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+ if (fpriv->ffb_type == ffb2_vertical_plus ||
+ fpriv->ffb_type == ffb2_horizontal_plus) {
+ upa_writel(ctx->stencil, &ffb->stencil);
+ upa_writel(ctx->stencilctl, &ffb->stencilctl);
+ upa_writel(0x80000000, &ffb->fbc);
+ upa_writel((ctx->stencilctl | 0x80000),
+ &ffb->rawstencilctl);
+ upa_writel(ctx->fbc, &ffb->fbc);
+ }
+
+ for (i = 0; i < 32; i++)
+ upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
+ upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
+ return;
+ }
+
+ /* Restore drawop. */
+ upa_writel(ctx->drawop, &ffb->drawop);
+
+ /* If we were restoring the vertex registers, this is where
+ * we would do it. We would restore 32 32-bit words starting
+ * at ffb->suvtx.
+ */
+
+ /* Restore rendering attributes. */
+
+ upa_writel(ctx->ppc, &ffb->ppc); /* Pixel Processor Control */
+ upa_writel(ctx->wid, &ffb->wid); /* Current WID */
+ upa_writel(ctx->fg, &ffb->fg); /* Constant FG color */
+ upa_writel(ctx->bg, &ffb->bg); /* Constant BG color */
+ upa_writel(ctx->consty, &ffb->consty); /* Constant Y */
+ upa_writel(ctx->constz, &ffb->constz); /* Constant Z */
+ upa_writel(ctx->xclip, &ffb->xclip); /* X plane clip */
+ upa_writel(ctx->dcss, &ffb->dcss); /* Depth Cue Scale Slope */
+ upa_writel(ctx->vclipmin, &ffb->vclipmin); /* Primary XY clip, minimum */
+ upa_writel(ctx->vclipmax, &ffb->vclipmax); /* Primary XY clip, maximum */
+ upa_writel(ctx->vclipzmin, &ffb->vclipzmin); /* Primary Z clip, minimum */
+ upa_writel(ctx->vclipzmax, &ffb->vclipzmax); /* Primary Z clip, maximum */
+ upa_writel(ctx->dcsf, &ffb->dcsf); /* Depth Cue Scale Front Bound */
+ upa_writel(ctx->dcsb, &ffb->dcsb); /* Depth Cue Scale Back Bound */
+ upa_writel(ctx->dczf, &ffb->dczf); /* Depth Cue Scale Z Front */
+ upa_writel(ctx->dczb, &ffb->dczb); /* Depth Cue Scale Z Back */
+ upa_writel(ctx->blendc, &ffb->blendc); /* Alpha Blend Control */
+ upa_writel(ctx->blendc1, &ffb->blendc1); /* Alpha Blend Color 1 */
+ upa_writel(ctx->blendc2, &ffb->blendc2); /* Alpha Blend Color 2 */
+ upa_writel(ctx->fbc, &ffb->fbc); /* Frame Buffer Control */
+ upa_writel(ctx->rop, &ffb->rop); /* Raster Operation */
+ upa_writel(ctx->cmp, &ffb->cmp); /* Compare Controls */
+ upa_writel(ctx->matchab, &ffb->matchab); /* Buffer A/B Match Ops */
+ upa_writel(ctx->matchc, &ffb->matchc); /* Buffer C Match Ops */
+ upa_writel(ctx->magnab, &ffb->magnab); /* Buffer A/B Magnitude Ops */
+ upa_writel(ctx->magnc, &ffb->magnc); /* Buffer C Magnitude Ops */
+ upa_writel(ctx->pmask, &ffb->pmask); /* RGB Plane Mask */
+ upa_writel(ctx->xpmask, &ffb->xpmask); /* X Plane Mask */
+ upa_writel(ctx->ypmask, &ffb->ypmask); /* Y Plane Mask */
+ upa_writel(ctx->zpmask, &ffb->zpmask); /* Z Plane Mask */
+
+ /* Auxiliary Clips. */
+ upa_writel(ctx->auxclip0min, &ffb->auxclip[0].min);
+ upa_writel(ctx->auxclip0max, &ffb->auxclip[0].max);
+ upa_writel(ctx->auxclip1min, &ffb->auxclip[1].min);
+ upa_writel(ctx->auxclip1max, &ffb->auxclip[1].max);
+ upa_writel(ctx->auxclip2min, &ffb->auxclip[2].min);
+ upa_writel(ctx->auxclip2max, &ffb->auxclip[2].max);
+ upa_writel(ctx->auxclip3min, &ffb->auxclip[3].min);
+ upa_writel(ctx->auxclip3max, &ffb->auxclip[3].max);
+
+ upa_writel(ctx->lpat, &ffb->lpat); /* Line Pattern */
+ upa_writel(ctx->fontxy, &ffb->fontxy); /* XY Font Coordinate */
+ upa_writel(ctx->fontw, &ffb->fontw); /* Font Width */
+ upa_writel(ctx->fontinc, &ffb->fontinc); /* Font X/Y Increment */
+
+ /* These registers/features only exist on FFB2 and later chips. */
+ if (fpriv->ffb_type >= ffb2_prototype) {
+ upa_writel(ctx->dcss1, &ffb->dcss1); /* Depth Cue Scale Slope 1 */
+ upa_writel(ctx->dcss2, &ffb->dcss2); /* Depth Cue Scale Slope 2 */
+		upa_writel(ctx->dcss3, &ffb->dcss3);	/* Depth Cue Scale Slope 3 */
+ upa_writel(ctx->dcs2, &ffb->dcs2); /* Depth Cue Scale 2 */
+ upa_writel(ctx->dcs3, &ffb->dcs3); /* Depth Cue Scale 3 */
+ upa_writel(ctx->dcs4, &ffb->dcs4); /* Depth Cue Scale 4 */
+ upa_writel(ctx->dcd2, &ffb->dcd2); /* Depth Cue Depth 2 */
+ upa_writel(ctx->dcd3, &ffb->dcd3); /* Depth Cue Depth 3 */
+ upa_writel(ctx->dcd4, &ffb->dcd4); /* Depth Cue Depth 4 */
+
+ /* And stencil/stencilctl only exists on FFB2+ and later
+ * due to the introduction of 3DRAM-III.
+ */
+		if (fpriv->ffb_type == ffb2_vertical_plus ||
+		    fpriv->ffb_type == ffb2_horizontal_plus) {
+ /* Unfortunately, there is a hardware bug on
+ * the FFB2+ chips which prevents a normal write
+ * to the stencil control register from working
+ * as it should.
+ *
+ * The state controlled by the FFB stencilctl register
+ * really gets transferred to the per-buffer instances
+ * of the stencilctl register in the 3DRAM chips.
+ *
+ * The bug is that FFB does not update buffer C correctly,
+ * so we have to do it by hand for them.
+ */
+
+ /* This will update buffers A and B. */
+ upa_writel(ctx->stencil, &ffb->stencil);
+ upa_writel(ctx->stencilctl, &ffb->stencilctl);
+
+ /* Force FFB to use buffer C 3dram regs. */
+ upa_writel(0x80000000, &ffb->fbc);
+ upa_writel((ctx->stencilctl | 0x80000),
+ &ffb->rawstencilctl);
+
+ /* Now restore the correct FBC controls. */
+ upa_writel(ctx->fbc, &ffb->fbc);
+ }
+ }
+
+ /* Restore the 32x32 area pattern. */
+ for (i = 0; i < 32; i++)
+ upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
+
+	/* Finally, stash away the User Control/Status Register.
+ * The only state we really preserve here is the picking
+ * control.
+ */
+ upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
+}
+
+#define FFB_UCSR_FB_BUSY 0x01000000
+#define FFB_UCSR_RP_BUSY 0x02000000
+#define FFB_UCSR_ALL_BUSY (FFB_UCSR_RP_BUSY|FFB_UCSR_FB_BUSY)
+
+static void FFBWait(ffb_fbcPtr ffb)
+{
+ int limit = 100000;
+
+ do {
+ u32 regval = upa_readl(&ffb->ucsr);
+
+		if ((regval & FFB_UCSR_ALL_BUSY) == 0)
+ break;
+ } while (--limit);
+}
+
+int ffb_context_switch(drm_device_t *dev, int old, int new)
+{
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+
+ atomic_inc(&dev->total_ctx);
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+	if (new == dev->last_context ||
+	    dev->last_context == 0) {
+ dev->last_context = new;
+ return 0;
+ }
+
+ FFBWait(fpriv->regs);
+ ffb_save_context(fpriv, old);
+ ffb_restore_context(fpriv, old, new);
+ FFBWait(fpriv->regs);
+
+ dev->last_context = new;
+
+ return 0;
+}
+
+int ffb_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+
+int ffb_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ idx = ffb_alloc_queue(dev, (ctx.flags & _DRM_CONTEXT_2DONLY));
+ if (idx < 0)
+ return -ENFILE;
+
+ DRM_DEBUG("%d\n", ctx.handle);
+ ctx.handle = idx;
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int ffb_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ struct ffb_hw_context *hwctx;
+ drm_ctx_t ctx;
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ idx = ctx.handle;
+ if (idx <= 0 || idx >= FFB_MAX_CTXS)
+ return -EINVAL;
+
+ hwctx = fpriv->hw_state[idx - 1];
+	if (hwctx == NULL)
+ return -EINVAL;
+
+	if ((ctx.flags & _DRM_CONTEXT_2DONLY) == 0)
+ hwctx->is_2d_only = 0;
+ else
+ hwctx->is_2d_only = 1;
+
+ return 0;
+}
+
+int ffb_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ struct ffb_hw_context *hwctx;
+ drm_ctx_t ctx;
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+
+ idx = ctx.handle;
+ if (idx <= 0 || idx >= FFB_MAX_CTXS)
+ return -EINVAL;
+
+ hwctx = fpriv->hw_state[idx - 1];
+	if (hwctx == NULL)
+ return -EINVAL;
+
+ if (hwctx->is_2d_only != 0)
+ ctx.flags = _DRM_CONTEXT_2DONLY;
+ else
+ ctx.flags = 0;
+
+ if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int ffb_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return ffb_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int ffb_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ return 0;
+}
+
+int ffb_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ int idx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+
+ idx = ctx.handle - 1;
+ if (idx < 0 || idx >= FFB_MAX_CTXS)
+ return -EINVAL;
+
+ if (fpriv->hw_state[idx] != NULL) {
+ kfree(fpriv->hw_state[idx]);
+ fpriv->hw_state[idx] = NULL;
+ }
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/ffb_drv.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,951 @@
+/* $Id: ffb_drv.c,v 1.14 2001/05/24 12:01:47 davem Exp $
+ * ffb_drv.c: Creator/Creator3D direct rendering driver.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ */
+
+#include "drmP.h"
+
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <asm/shmparam.h>
+#include <asm/oplib.h>
+#include <asm/upa.h>
+
+#include "ffb_drv.h"
+
+#define FFB_NAME "ffb"
+#define FFB_DESC "Creator/Creator3D"
+#define FFB_DATE "20000517"
+#define FFB_MAJOR 0
+#define FFB_MINOR 0
+#define FFB_PATCHLEVEL 1
+
+/* Forward declarations. */
+int ffb_init(void);
+void ffb_cleanup(void);
+static int ffb_version(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_open(struct inode *inode, struct file *filp);
+static int ffb_release(struct inode *inode, struct file *filp);
+static int ffb_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_lock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_unlock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+static int ffb_mmap(struct file *filp, struct vm_area_struct *vma);
+static unsigned long ffb_get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+
+/* From ffb_context.c */
+extern int ffb_resctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_addctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_modctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_getctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_switchctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_newctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_rmctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_context_switch(drm_device_t *, int, int);
+
+static struct file_operations ffb_fops = {
+ owner: THIS_MODULE,
+ open: ffb_open,
+ flush: drm_flush,
+ release: ffb_release,
+ ioctl: ffb_ioctl,
+ mmap: ffb_mmap,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+ get_unmapped_area: ffb_get_unmapped_area,
+};
+
+/* This is just a template, we make a new copy for each FFB
+ * we discover at init time so that each one gets a unique
+ * misc device minor number.
+ */
+static struct miscdevice ffb_misc = {
+ minor: MISC_DYNAMIC_MINOR,
+ name: FFB_NAME,
+ fops: &ffb_fops,
+};
+
+static drm_ioctl_desc_t ffb_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { ffb_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 }, /* XXX */
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+
+ /* The implementation is currently a nop just like on tdfx.
+ * Later we can do something more clever. -DaveM
+ */
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { ffb_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { ffb_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { ffb_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { ffb_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { ffb_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { ffb_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { ffb_resctx, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { ffb_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { ffb_unlock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 },
+};
+#define FFB_IOCTL_COUNT DRM_ARRAY_SIZE(ffb_ioctls)
+
+#ifdef MODULE
+static char *ffb = NULL;
+#endif
+
+MODULE_AUTHOR("David S. Miller (davem@redhat.com)");
+MODULE_DESCRIPTION("Sun Creator/Creator3D DRI");
+
+static int ffb_takedown(drm_device_t *dev)
+{
+ int i;
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_vma_entry_t *vma, *vma_next;
+
+ DRM_DEBUG("\n");
+
+ down(&dev->struct_sem);
+ del_timer(&dev->timer);
+
+ if (dev->devname) {
+ drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+ dev->devname = NULL;
+ }
+
+ if (dev->unique) {
+ drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+
+ /* Clear pid list */
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ for (pt = dev->magiclist[i].head; pt; pt = next) {
+ next = pt->next;
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+
+ /* Clear vma list (only built for debugging) */
+ if (dev->vmalist) {
+ for (vma = dev->vmalist; vma; vma = vma_next) {
+ vma_next = vma->next;
+ drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+ }
+ dev->vmalist = NULL;
+ }
+
+ /* Clear map area information */
+ if (dev->maplist) {
+ for (i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+
+ case _DRM_SHM:
+ drm_free_pages((unsigned long)map->handle,
+ drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ break;
+
+ default:
+ break;
+ };
+
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+
+ drm_free(dev->maplist,
+ dev->map_count * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ }
+
+ if (dev->lock.hw_lock) {
+ dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.pid = 0;
+ wake_up_interruptible(&dev->lock.lock_queue);
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+drm_device_t **ffb_dev_table;
+static int ffb_dev_table_size;
+
+static void get_ffb_type(ffb_dev_priv_t *ffb_priv, int instance)
+{
+ volatile unsigned char *strap_bits;
+ unsigned char val;
+
+ strap_bits = (volatile unsigned char *)
+ (ffb_priv->card_phys_base + 0x00200000UL);
+
+ /* Don't ask, you have to read the value twice for whatever
+ * reason to get correct contents.
+ */
+ val = upa_readb(strap_bits);
+ val = upa_readb(strap_bits);
+ switch (val & 0x78) {
+ case (0x0 << 5) | (0x0 << 3):
+ ffb_priv->ffb_type = ffb1_prototype;
+ printk("ffb%d: Detected FFB1 pre-FCS prototype\n", instance);
+ break;
+ case (0x0 << 5) | (0x1 << 3):
+ ffb_priv->ffb_type = ffb1_standard;
+ printk("ffb%d: Detected FFB1\n", instance);
+ break;
+ case (0x0 << 5) | (0x3 << 3):
+ ffb_priv->ffb_type = ffb1_speedsort;
+ printk("ffb%d: Detected FFB1-SpeedSort\n", instance);
+ break;
+ case (0x1 << 5) | (0x0 << 3):
+ ffb_priv->ffb_type = ffb2_prototype;
+ printk("ffb%d: Detected FFB2/vertical pre-FCS prototype\n", instance);
+ break;
+ case (0x1 << 5) | (0x1 << 3):
+ ffb_priv->ffb_type = ffb2_vertical;
+ printk("ffb%d: Detected FFB2/vertical\n", instance);
+ break;
+ case (0x1 << 5) | (0x2 << 3):
+ ffb_priv->ffb_type = ffb2_vertical_plus;
+ printk("ffb%d: Detected FFB2+/vertical\n", instance);
+ break;
+ case (0x2 << 5) | (0x0 << 3):
+ ffb_priv->ffb_type = ffb2_horizontal;
+ printk("ffb%d: Detected FFB2/horizontal\n", instance);
+ break;
+ case (0x2 << 5) | (0x2 << 3):
+ ffb_priv->ffb_type = ffb2_horizontal;
+ printk("ffb%d: Detected FFB2+/horizontal\n", instance);
+ break;
+ default:
+ ffb_priv->ffb_type = ffb2_vertical;
+ printk("ffb%d: Unknown boardID[%08x], assuming FFB2\n", instance, val);
+ break;
+ };
+}
+
+static void __init ffb_apply_upa_parent_ranges(int parent, struct linux_prom64_registers *regs)
+{
+ struct linux_prom64_ranges ranges[PROMREG_MAX];
+ char name[128];
+ int len, i;
+
+ prom_getproperty(parent, "name", name, sizeof(name));
+ if (strcmp(name, "upa") != 0)
+ return;
+
+ len = prom_getproperty(parent, "ranges", (void *) ranges, sizeof(ranges));
+ if (len <= 0)
+ return;
+
+ len /= sizeof(struct linux_prom64_ranges);
+ for (i = 0; i < len; i++) {
+ struct linux_prom64_ranges *rng = &ranges[i];
+ u64 phys_addr = regs->phys_addr;
+
+ if (phys_addr >= rng->ot_child_base &&
+ phys_addr < (rng->ot_child_base + rng->or_size)) {
+ regs->phys_addr -= rng->ot_child_base;
+ regs->phys_addr += rng->ot_parent_base;
+ return;
+ }
+ }
+
+ return;
+}
+
+static int __init ffb_init_one(int prom_node, int parent_node, int instance)
+{
+ struct linux_prom64_registers regs[2*PROMREG_MAX];
+ drm_device_t *dev;
+ ffb_dev_priv_t *ffb_priv;
+ int ret, i;
+
+ dev = kmalloc(sizeof(drm_device_t) + sizeof(ffb_dev_priv_t), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ memset(dev, 0, sizeof(*dev));
+ spin_lock_init(&dev->count_lock);
+ sema_init(&dev->struct_sem, 1);
+
+ ffb_priv = (ffb_dev_priv_t *) (dev + 1);
+ ffb_priv->prom_node = prom_node;
+ if (prom_getproperty(ffb_priv->prom_node, "reg",
+ (void *)regs, sizeof(regs)) <= 0) {
+ kfree(dev);
+ return -EINVAL;
+ }
+	ffb_apply_upa_parent_ranges(parent_node, &regs[0]);
+ ffb_priv->card_phys_base = regs[0].phys_addr;
+ ffb_priv->regs = (ffb_fbcPtr)
+ (regs[0].phys_addr + 0x00600000UL);
+ get_ffb_type(ffb_priv, instance);
+ for (i = 0; i < FFB_MAX_CTXS; i++)
+ ffb_priv->hw_state[i] = NULL;
+
+ ffb_dev_table[instance] = dev;
+
+#ifdef MODULE
+ drm_parse_options(ffb);
+#endif
+
+ memcpy(&ffb_priv->miscdev, &ffb_misc, sizeof(ffb_misc));
+ ret = misc_register(&ffb_priv->miscdev);
+ if (ret) {
+ ffb_dev_table[instance] = NULL;
+ kfree(dev);
+ return ret;
+ }
+
+ dev->device = MKDEV(MISC_MAJOR, ffb_priv->miscdev.minor);
+ dev->name = FFB_NAME;
+
+ drm_mem_init();
+ drm_proc_init(dev);
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d at %016lx\n",
+ FFB_NAME,
+ FFB_MAJOR,
+ FFB_MINOR,
+ FFB_PATCHLEVEL,
+ FFB_DATE,
+ ffb_priv->miscdev.minor,
+ ffb_priv->card_phys_base);
+
+ return 0;
+}
+
+static int __init ffb_count_siblings(int root)
+{
+ int node, child, count = 0;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb"))
+ count++;
+
+ return count;
+}
+
+static int __init ffb_init_dev_table(void)
+{
+ int root, total;
+
+ total = ffb_count_siblings(prom_root_node);
+ root = prom_getchild(prom_root_node);
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ total += ffb_count_siblings(root);
+
+ if (!total)
+ return -ENODEV;
+
+ ffb_dev_table = kmalloc(sizeof(drm_device_t *) * total, GFP_KERNEL);
+ if (!ffb_dev_table)
+ return -ENOMEM;
+
+ ffb_dev_table_size = total;
+
+ return 0;
+}
+
+static int __init ffb_scan_siblings(int root, int instance)
+{
+ int node, child;
+
+ child = prom_getchild(root);
+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node;
+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) {
+ ffb_init_one(node, root, instance);
+ instance++;
+ }
+
+ return instance;
+}
+
+int __init ffb_init(void)
+{
+ int root, instance, ret;
+
+ ret = ffb_init_dev_table();
+ if (ret)
+ return ret;
+
+ instance = ffb_scan_siblings(prom_root_node, 0);
+
+ root = prom_getchild(prom_root_node);
+ for (root = prom_searchsiblings(root, "upa"); root;
+ root = prom_searchsiblings(prom_getsibling(root), "upa"))
+ instance = ffb_scan_siblings(root, instance);
+
+ return 0;
+}
+
+void __exit ffb_cleanup(void)
+{
+ int instance;
+
+ DRM_DEBUG("\n");
+
+ drm_proc_cleanup();
+ for (instance = 0; instance < ffb_dev_table_size; instance++) {
+ drm_device_t *dev = ffb_dev_table[instance];
+ ffb_dev_priv_t *ffb_priv;
+
+ if (!dev)
+ continue;
+
+ ffb_priv = (ffb_dev_priv_t *) (dev + 1);
+ if (misc_deregister(&ffb_priv->miscdev)) {
+ DRM_ERROR("Cannot unload module\n");
+ } else {
+ DRM_INFO("Module unloaded\n");
+ }
+ ffb_takedown(dev);
+ kfree(dev);
+ ffb_dev_table[instance] = NULL;
+ }
+ kfree(ffb_dev_table);
+ ffb_dev_table = NULL;
+ ffb_dev_table_size = 0;
+}
+
+static int ffb_version(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ drm_version_t version;
+ int len, ret;
+
+ ret = copy_from_user(&version, (drm_version_t *)arg, sizeof(version));
+ if (ret)
+ return -EFAULT;
+
+ version.version_major = FFB_MAJOR;
+ version.version_minor = FFB_MINOR;
+ version.version_patchlevel = FFB_PATCHLEVEL;
+
+ len = strlen(FFB_NAME);
+ if (len > version.name_len)
+ len = version.name_len;
+ version.name_len = len;
+ if (len && version.name) {
+ ret = copy_to_user(version.name, FFB_NAME, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ len = strlen(FFB_DATE);
+ if (len > version.date_len)
+ len = version.date_len;
+ version.date_len = len;
+ if (len && version.date) {
+ ret = copy_to_user(version.date, FFB_DATE, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ len = strlen(FFB_DESC);
+ if (len > version.desc_len)
+ len = version.desc_len;
+ version.desc_len = len;
+ if (len && version.desc) {
+ ret = copy_to_user(version.desc, FFB_DESC, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ ret = copy_to_user((drm_version_t *) arg, &version, sizeof(version));
+ if (ret)
+ ret = -EFAULT;
+
+ return ret;
+}
+
+static int ffb_setup(drm_device_t *dev)
+{
+ int i;
+
+ atomic_set(&dev->ioctl_count, 0);
+ atomic_set(&dev->vma_count, 0);
+ dev->buf_use = 0;
+ atomic_set(&dev->buf_alloc, 0);
+
+ atomic_set(&dev->total_open, 0);
+ atomic_set(&dev->total_close, 0);
+ atomic_set(&dev->total_ioctl, 0);
+ atomic_set(&dev->total_irq, 0);
+ atomic_set(&dev->total_ctx, 0);
+ atomic_set(&dev->total_locks, 0);
+ atomic_set(&dev->total_unlocks, 0);
+ atomic_set(&dev->total_contends, 0);
+ atomic_set(&dev->total_sleeps, 0);
+
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ dev->vmalist = NULL;
+ dev->lock.hw_lock = NULL;
+ init_waitqueue_head(&dev->lock.lock_queue);
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_timer(&dev->timer);
+ init_waitqueue_head(&dev->context_wait);
+
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head(&dev->buf_readers);
+ init_waitqueue_head(&dev->buf_writers);
+
+ return 0;
+}
+
+static int ffb_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev;
+ int minor, i;
+ int ret = 0;
+
+ minor = MINOR(inode->i_rdev);
+ for (i = 0; i < ffb_dev_table_size; i++) {
+ ffb_dev_priv_t *ffb_priv;
+
+ ffb_priv = (ffb_dev_priv_t *) (ffb_dev_table[i] + 1);
+
+		if (ffb_priv->miscdev.minor == minor)
+ break;
+ }
+
+ if (i >= ffb_dev_table_size)
+ return -EINVAL;
+
+ dev = ffb_dev_table[i];
+ if (!dev)
+ return -EINVAL;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ ret = drm_open_helper(inode, filp, dev);
+ if (!ret) {
+ atomic_inc(&dev->total_open);
+ spin_lock(&dev->count_lock);
+ if (!dev->open_count++) {
+ spin_unlock(&dev->count_lock);
+ return ffb_setup(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+
+ return ret;
+}
+
+static int ffb_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int ret = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (dev->lock.hw_lock != NULL
+ && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+	    && dev->lock.pid == current->pid) {
+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+ int context = _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock);
+ int idx;
+
+ /* We have to free up the rogue hw context state
+ * holding error or else we will leak it.
+ */
+ idx = context - 1;
+ if (fpriv->hw_state[idx] != NULL) {
+ kfree(fpriv->hw_state[idx]);
+ fpriv->hw_state[idx] = NULL;
+ }
+ }
+
+ ret = drm_release(inode, filp);
+
+ if (!ret) {
+ atomic_inc(&dev->total_close);
+ spin_lock(&dev->count_lock);
+ if (!--dev->open_count) {
+ if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+ DRM_ERROR("Device busy: %d %d\n",
+ atomic_read(&dev->ioctl_count),
+ dev->blocked);
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock(&dev->count_lock);
+ ret = ffb_takedown(dev);
+ unlock_kernel();
+ return ret;
+ }
+ spin_unlock(&dev->count_lock);
+ }
+
+ unlock_kernel();
+ return ret;
+}
+
+static int ffb_ioctl(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ int nr = DRM_IOCTL_NR(cmd);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+ int ret;
+
+ atomic_inc(&dev->ioctl_count);
+ atomic_inc(&dev->total_ioctl);
+ ++priv->ioctl_count;
+
+ DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+ current->pid, cmd, nr, dev->device, priv->authenticated);
+
+ if (nr >= FFB_IOCTL_COUNT) {
+ ret = -EINVAL;
+ } else {
+ ioctl = &ffb_ioctls[nr];
+ func = ioctl->func;
+
+ if (!func) {
+ DRM_DEBUG("no function\n");
+ ret = -EINVAL;
+ } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+ || (ioctl->auth_needed && !priv->authenticated)) {
+ ret = -EACCES;
+ } else {
+ ret = (func)(inode, filp, cmd, arg);
+ }
+ }
+
+ atomic_dec(&dev->ioctl_count);
+
+ return ret;
+}
+
+static int ffb_lock(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_lock_t lock;
+
+ ret = copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock));
+ if (ret)
+ return -EFAULT;
+
+	if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid, dev->lock.hw_lock->lock,
+ lock.flags);
+
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ lock.context)) {
+ dev->lock.pid = current->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ current->state = TASK_INTERRUPTIBLE;
+ current->policy |= SCHED_YIELD;
+ schedule();
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+
+ if (!ret) {
+ sigemptyset(&dev->sigmask);
+ sigaddset(&dev->sigmask, SIGSTOP);
+ sigaddset(&dev->sigmask, SIGTSTP);
+ sigaddset(&dev->sigmask, SIGTTIN);
+ sigaddset(&dev->sigmask, SIGTTOU);
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+ if (dev->last_context != lock.context)
+ ffb_context_switch(dev, dev->last_context, lock.context);
+ }
+
+ DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+
+ return ret;
+}
+
+int ffb_unlock(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+ unsigned int old, new, prev, ctx;
+ int ret;
+
+ ret = copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock));
+ if (ret)
+ return -EFAULT;
+
+	if ((ctx = lock.context) == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d frees lock (%d holds)\n",
+ lock.context,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ atomic_inc(&dev->total_unlocks);
+ if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+ atomic_inc(&dev->total_contends);
+
+ /* We no longer really hold it, but if we are the next
+ * agent to request it then we should just be able to
+ * take it immediately and not eat the ioctl.
+ */
+ dev->lock.pid = 0;
+ {
+ __volatile__ unsigned int *plock = &dev->lock.hw_lock->lock;
+
+ do {
+ old = *plock;
+ new = ctx;
+ prev = cmpxchg(plock, old, new);
+ } while (prev != old);
+ }
+
+ wake_up_interruptible(&dev->lock.lock_queue);
+
+ unblock_all_signals();
+ return 0;
+}
+
+extern struct vm_operations_struct drm_vm_ops;
+extern struct vm_operations_struct drm_vm_shm_ops;
+extern struct vm_operations_struct drm_vm_shm_lock_ops;
+
+static int ffb_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map = NULL;
+ ffb_dev_priv_t *ffb_priv;
+ int i, minor;
+
+ DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n",
+ vma->vm_start, vma->vm_end, VM_OFFSET(vma));
+
+ minor = MINOR(filp->f_dentry->d_inode->i_rdev);
+ ffb_priv = NULL;
+ for (i = 0; i < ffb_dev_table_size; i++) {
+ ffb_priv = (ffb_dev_priv_t *) (ffb_dev_table[i] + 1);
+		if (ffb_priv->miscdev.minor == minor)
+ break;
+ }
+ if (i >= ffb_dev_table_size)
+ return -EINVAL;
+
+ /* We don't support/need dma mappings, so... */
+ if (!VM_OFFSET(vma))
+ return -EINVAL;
+
+ for (i = 0; i < dev->map_count; i++) {
+ unsigned long off;
+
+ map = dev->maplist[i];
+
+ /* Ok, a little hack to make 32-bit apps work. */
+ off = (map->offset & 0xffffffff);
+		if (off == VM_OFFSET(vma))
+ break;
+ }
+
+ if (i >= dev->map_count)
+ return -EINVAL;
+
+ if (!map ||
+ ((map->flags & _DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN)))
+ return -EPERM;
+
+ if (map->size != (vma->vm_end - vma->vm_start))
+ return -EINVAL;
+
+ /* Set read-only attribute before mappings are created
+ * so it works for fb/reg maps too.
+ */
+ if (map->flags & _DRM_READ_ONLY)
+ vma->vm_page_prot = __pgprot(pte_val(pte_wrprotect(
+ __pte(pgprot_val(vma->vm_page_prot)))));
+
+ switch (map->type) {
+ case _DRM_FRAME_BUFFER:
+ /* FALLTHROUGH */
+
+ case _DRM_REGISTERS:
+ /* In order to handle 32-bit drm apps/xserver we
+ * play a trick. The mappings only really specify
+ * the 32-bit offset from the cards 64-bit base
+ * address, and we just add in the base here.
+ */
+ vma->vm_flags |= VM_IO;
+ if (io_remap_page_range(vma->vm_start,
+ ffb_priv->card_phys_base + VM_OFFSET(vma),
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot, 0))
+ return -EAGAIN;
+
+ vma->vm_ops = &drm_vm_ops;
+ break;
+ case _DRM_SHM:
+ if (map->flags & _DRM_CONTAINS_LOCK)
+ vma->vm_ops = &drm_vm_shm_lock_ops;
+ else {
+ vma->vm_ops = &drm_vm_shm_ops;
+ vma->vm_private_data = (void *) map;
+ }
+
+ /* Don't let this area swap. Change when
+ * DRM_KERNEL advisory is supported.
+ */
+ vma->vm_flags |= VM_LOCKED;
+ break;
+ default:
+ return -EINVAL; /* This should never happen. */
+ };
+
+ vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
+
+ vma->vm_file = filp; /* Needed for drm_vm_open() */
+ drm_vm_open(vma);
+ return 0;
+}
+
+static drm_map_t *ffb_find_map(struct file *filp, unsigned long off)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ drm_map_t *map;
+ int i;
+
+	if (!priv || (dev = priv->dev) == NULL)
+ return NULL;
+
+ for (i = 0; i < dev->map_count; i++) {
+ unsigned long uoff;
+
+ map = dev->maplist[i];
+
+ /* Ok, a little hack to make 32-bit apps work. */
+ uoff = (map->offset & 0xffffffff);
+		if (uoff == off)
+ return map;
+ }
+ return NULL;
+}
+
+static unsigned long ffb_get_unmapped_area(struct file *filp, unsigned long hint, unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+ drm_map_t *map = ffb_find_map(filp, pgoff << PAGE_SHIFT);
+ unsigned long addr = -ENOMEM;
+
+ if (!map)
+ return get_unmapped_area(NULL, hint, len, pgoff, flags);
+
+	if (map->type == _DRM_FRAME_BUFFER ||
+	    map->type == _DRM_REGISTERS) {
+#ifdef HAVE_ARCH_FB_UNMAPPED_AREA
+ addr = get_fb_unmapped_area(filp, hint, len, pgoff, flags);
+#else
+ addr = get_unmapped_area(NULL, hint, len, pgoff, flags);
+#endif
+	} else if (map->type == _DRM_SHM && SHMLBA > PAGE_SIZE) {
+ unsigned long slack = SHMLBA - PAGE_SIZE;
+
+ addr = get_unmapped_area(NULL, hint, len + slack, pgoff, flags);
+ if (!(addr & ~PAGE_MASK)) {
+ unsigned long kvirt = (unsigned long) map->handle;
+
+ if ((kvirt & (SHMLBA - 1)) != (addr & (SHMLBA - 1))) {
+ unsigned long koff, aoff;
+
+ koff = kvirt & (SHMLBA - 1);
+ aoff = addr & (SHMLBA - 1);
+ if (koff < aoff)
+ koff += SHMLBA;
+
+ addr += (koff - aoff);
+ }
+ }
+ } else {
+ addr = get_unmapped_area(NULL, hint, len, pgoff, flags);
+ }
+
+ return addr;
+}
+
+module_init(ffb_init);
+module_exit(ffb_cleanup);
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.h
--- linux-2.4.13/drivers/char/drm-4.0/ffb_drv.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,276 @@
+/* $Id: ffb_drv.h,v 1.1 2000/06/01 04:24:39 davem Exp $
+ * ffb_drv.h: Creator/Creator3D direct rendering driver.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ */
+
+/* Auxilliary clips. */
+typedef struct {
+ volatile unsigned int min;
+ volatile unsigned int max;
+} ffb_auxclip, *ffb_auxclipPtr;
+
+/* FFB register set. */
+typedef struct _ffb_fbc {
+ /* Next vertex registers, on the right we list which drawops
+ * use said register and the logical name the register has in
+ * that context.
+ */ /* DESCRIPTION DRAWOP(NAME) */
+/*0x00*/unsigned int pad1[3]; /* Reserved */
+/*0x0c*/volatile unsigned int alpha; /* ALPHA Transparency */
+/*0x10*/volatile unsigned int red; /* RED */
+/*0x14*/volatile unsigned int green; /* GREEN */
+/*0x18*/volatile unsigned int blue; /* BLUE */
+/*0x1c*/volatile unsigned int z; /* DEPTH */
+/*0x20*/volatile unsigned int y; /* Y triangle(DOYF) */
+ /* aadot(DYF) */
+ /* ddline(DYF) */
+ /* aaline(DYF) */
+/*0x24*/volatile unsigned int x; /* X triangle(DOXF) */
+ /* aadot(DXF) */
+ /* ddline(DXF) */
+ /* aaline(DXF) */
+/*0x28*/unsigned int pad2[2]; /* Reserved */
+/*0x30*/volatile unsigned int ryf; /* Y (alias to DOYF) ddline(RYF) */
+ /* aaline(RYF) */
+ /* triangle(RYF) */
+/*0x34*/volatile unsigned int rxf; /* X ddline(RXF) */
+ /* aaline(RXF) */
+ /* triangle(RXF) */
+/*0x38*/unsigned int pad3[2]; /* Reserved */
+/*0x40*/volatile unsigned int dmyf; /* Y (alias to DOYF) triangle(DMYF) */
+/*0x44*/volatile unsigned int dmxf; /* X triangle(DMXF) */
+/*0x48*/unsigned int pad4[2]; /* Reserved */
+/*0x50*/volatile unsigned int ebyi; /* Y (alias to RYI) polygon(EBYI) */
+/*0x54*/volatile unsigned int ebxi; /* X polygon(EBXI) */
+/*0x58*/unsigned int pad5[2]; /* Reserved */
+/*0x60*/volatile unsigned int by; /* Y brline(RYI) */
+ /* fastfill(OP) */
+ /* polygon(YI) */
+ /* rectangle(YI) */
+ /* bcopy(SRCY) */
+ /* vscroll(SRCY) */
+/*0x64*/volatile unsigned int bx; /* X brline(RXI) */
+ /* polygon(XI) */
+ /* rectangle(XI) */
+ /* bcopy(SRCX) */
+ /* vscroll(SRCX) */
+ /* fastfill(GO) */
+/*0x68*/volatile unsigned int dy; /* destination Y fastfill(DSTY) */
+ /* bcopy(DSRY) */
+ /* vscroll(DSRY) */
+/*0x6c*/volatile unsigned int dx; /* destination X fastfill(DSTX) */
+ /* bcopy(DSTX) */
+ /* vscroll(DSTX) */
+/*0x70*/volatile unsigned int bh; /* Y (alias to RYI) brline(DYI) */
+ /* dot(DYI) */
+ /* polygon(ETYI) */
+ /* Height fastfill(H) */
+ /* bcopy(H) */
+ /* vscroll(H) */
+ /* Y count fastfill(NY) */
+/*0x74*/volatile unsigned int bw; /* X dot(DXI) */
+ /* brline(DXI) */
+ /* polygon(ETXI) */
+ /* fastfill(W) */
+ /* bcopy(W) */
+ /* vscroll(W) */
+ /* fastfill(NX) */
+/*0x78*/unsigned int pad6[2]; /* Reserved */
+/*0x80*/unsigned int pad7[32]; /* Reserved */
+
+ /* Setup Unit's vertex state register */
+/*100*/ volatile unsigned int suvtx;
+/*104*/ unsigned int pad8[63]; /* Reserved */
+
+ /* Frame Buffer Control Registers */
+/*200*/ volatile unsigned int ppc; /* Pixel Processor Control */
+/*204*/ volatile unsigned int wid; /* Current WID */
+/*208*/ volatile unsigned int fg; /* FG data */
+/*20c*/ volatile unsigned int bg; /* BG data */
+/*210*/ volatile unsigned int consty; /* Constant Y */
+/*214*/ volatile unsigned int constz; /* Constant Z */
+/*218*/ volatile unsigned int xclip; /* X Clip */
+/*21c*/ volatile unsigned int dcss; /* Depth Cue Scale Slope */
+/*220*/ volatile unsigned int vclipmin; /* Viewclip XY Min Bounds */
+/*224*/ volatile unsigned int vclipmax; /* Viewclip XY Max Bounds */
+/*228*/ volatile unsigned int vclipzmin; /* Viewclip Z Min Bounds */
+/*22c*/ volatile unsigned int vclipzmax; /* Viewclip Z Max Bounds */
+/*230*/ volatile unsigned int dcsf; /* Depth Cue Scale Front Bound */
+/*234*/ volatile unsigned int dcsb; /* Depth Cue Scale Back Bound */
+/*238*/ volatile unsigned int dczf; /* Depth Cue Z Front */
+/*23c*/ volatile unsigned int dczb; /* Depth Cue Z Back */
+/*240*/ unsigned int pad9; /* Reserved */
+/*244*/ volatile unsigned int blendc; /* Alpha Blend Control */
+/*248*/ volatile unsigned int blendc1; /* Alpha Blend Color 1 */
+/*24c*/ volatile unsigned int blendc2; /* Alpha Blend Color 2 */
+/*250*/ volatile unsigned int fbramitc; /* FB RAM Interleave Test Control */
+/*254*/ volatile unsigned int fbc; /* Frame Buffer Control */
+/*258*/ volatile unsigned int rop; /* Raster OPeration */
+/*25c*/ volatile unsigned int cmp; /* Frame Buffer Compare */
+/*260*/ volatile unsigned int matchab; /* Buffer AB Match Mask */
+/*264*/ volatile unsigned int matchc; /* Buffer C(YZ) Match Mask */
+/*268*/ volatile unsigned int magnab; /* Buffer AB Magnitude Mask */
+/*26c*/ volatile unsigned int magnc; /* Buffer C(YZ) Magnitude Mask */
+/*270*/ volatile unsigned int fbcfg0; /* Frame Buffer Config 0 */
+/*274*/ volatile unsigned int fbcfg1; /* Frame Buffer Config 1 */
+/*278*/ volatile unsigned int fbcfg2; /* Frame Buffer Config 2 */
+/*27c*/ volatile unsigned int fbcfg3; /* Frame Buffer Config 3 */
+/*280*/ volatile unsigned int ppcfg; /* Pixel Processor Config */
+/*284*/ volatile unsigned int pick; /* Picking Control */
+/*288*/ volatile unsigned int fillmode; /* FillMode */
+/*28c*/ volatile unsigned int fbramwac; /* FB RAM Write Address Control */
+/*290*/ volatile unsigned int pmask; /* RGB PlaneMask */
+/*294*/ volatile unsigned int xpmask; /* X PlaneMask */
+/*298*/ volatile unsigned int ypmask; /* Y PlaneMask */
+/*29c*/ volatile unsigned int zpmask; /* Z PlaneMask */
+/*2a0*/ ffb_auxclip auxclip[4]; /* Auxilliary Viewport Clip */
+
+ /* New 3dRAM III support regs */
+/*2c0*/ volatile unsigned int rawblend2;
+/*2c4*/ volatile unsigned int rawpreblend;
+/*2c8*/ volatile unsigned int rawstencil;
+/*2cc*/ volatile unsigned int rawstencilctl;
+/*2d0*/ volatile unsigned int threedram1;
+/*2d4*/ volatile unsigned int threedram2;
+/*2d8*/ volatile unsigned int passin;
+/*2dc*/ volatile unsigned int rawclrdepth;
+/*2e0*/ volatile unsigned int rawpmask;
+/*2e4*/ volatile unsigned int rawcsrc;
+/*2e8*/ volatile unsigned int rawmatch;
+/*2ec*/ volatile unsigned int rawmagn;
+/*2f0*/ volatile unsigned int rawropblend;
+/*2f4*/ volatile unsigned int rawcmp;
+/*2f8*/ volatile unsigned int rawwac;
+/*2fc*/ volatile unsigned int fbramid;
+
+/*300*/ volatile unsigned int drawop; /* Draw OPeration */
+/*304*/ unsigned int pad10[2]; /* Reserved */
+/*30c*/ volatile unsigned int lpat; /* Line Pattern control */
+/*310*/ unsigned int pad11; /* Reserved */
+/*314*/ volatile unsigned int fontxy; /* XY Font coordinate */
+/*318*/ volatile unsigned int fontw; /* Font Width */
+/*31c*/ volatile unsigned int fontinc; /* Font Increment */
+/*320*/ volatile unsigned int font; /* Font bits */
+/*324*/ unsigned int pad12[3]; /* Reserved */
+/*330*/ volatile unsigned int blend2;
+/*334*/ volatile unsigned int preblend;
+/*338*/ volatile unsigned int stencil;
+/*33c*/ volatile unsigned int stencilctl;
+
+/*340*/ unsigned int pad13[4]; /* Reserved */
+/*350*/ volatile unsigned int dcss1; /* Depth Cue Scale Slope 1 */
+/*354*/ volatile unsigned int dcss2; /* Depth Cue Scale Slope 2 */
+/*358*/ volatile unsigned int dcss3; /* Depth Cue Scale Slope 3 */
+/*35c*/ volatile unsigned int widpmask;
+/*360*/ volatile unsigned int dcs2;
+/*364*/ volatile unsigned int dcs3;
+/*368*/ volatile unsigned int dcs4;
+/*36c*/ unsigned int pad14; /* Reserved */
+/*370*/ volatile unsigned int dcd2;
+/*374*/ volatile unsigned int dcd3;
+/*378*/ volatile unsigned int dcd4;
+/*37c*/ unsigned int pad15; /* Reserved */
+/*380*/ volatile unsigned int pattern[32]; /* area Pattern */
+/*400*/ unsigned int pad16[8]; /* Reserved */
+/*420*/ volatile unsigned int reset; /* chip RESET */
+/*424*/ unsigned int pad17[247]; /* Reserved */
+/*800*/ volatile unsigned int devid; /* Device ID */
+/*804*/ unsigned int pad18[63]; /* Reserved */
+/*900*/ volatile unsigned int ucsr; /* User Control & Status Register */
+/*904*/ unsigned int pad19[31]; /* Reserved */
+/*980*/ volatile unsigned int mer; /* Mode Enable Register */
+/*984*/ unsigned int pad20[1439]; /* Reserved */
+} ffb_fbc, *ffb_fbcPtr;
+
+struct ffb_hw_context {
+ int is_2d_only;
+
+ unsigned int ppc;
+ unsigned int wid;
+ unsigned int fg;
+ unsigned int bg;
+ unsigned int consty;
+ unsigned int constz;
+ unsigned int xclip;
+ unsigned int dcss;
+ unsigned int vclipmin;
+ unsigned int vclipmax;
+ unsigned int vclipzmin;
+ unsigned int vclipzmax;
+ unsigned int dcsf;
+ unsigned int dcsb;
+ unsigned int dczf;
+ unsigned int dczb;
+ unsigned int blendc;
+ unsigned int blendc1;
+ unsigned int blendc2;
+ unsigned int fbc;
+ unsigned int rop;
+ unsigned int cmp;
+ unsigned int matchab;
+ unsigned int matchc;
+ unsigned int magnab;
+ unsigned int magnc;
+ unsigned int pmask;
+ unsigned int xpmask;
+ unsigned int ypmask;
+ unsigned int zpmask;
+ unsigned int auxclip0min;
+ unsigned int auxclip0max;
+ unsigned int auxclip1min;
+ unsigned int auxclip1max;
+ unsigned int auxclip2min;
+ unsigned int auxclip2max;
+ unsigned int auxclip3min;
+ unsigned int auxclip3max;
+ unsigned int drawop;
+ unsigned int lpat;
+ unsigned int fontxy;
+ unsigned int fontw;
+ unsigned int fontinc;
+ unsigned int area_pattern[32];
+ unsigned int ucsr;
+ unsigned int stencil;
+ unsigned int stencilctl;
+ unsigned int dcss1;
+ unsigned int dcss2;
+ unsigned int dcss3;
+ unsigned int dcs2;
+ unsigned int dcs3;
+ unsigned int dcs4;
+ unsigned int dcd2;
+ unsigned int dcd3;
+ unsigned int dcd4;
+ unsigned int mer;
+};
+
+#define FFB_MAX_CTXS 32
+
+enum ffb_chip_type {
+ ffb1_prototype = 0, /* Early pre-FCS FFB */
+ ffb1_standard, /* First FCS FFB, 100Mhz UPA, 66MHz gclk */
+ ffb1_speedsort, /* Second FCS FFB, 100Mhz UPA, 75MHz gclk */
+ ffb2_prototype, /* Early pre-FCS vertical FFB2 */
+ ffb2_vertical, /* First FCS FFB2/vertical, 100Mhz UPA, 100MHZ gclk,
+ 75(SingleBuffer)/83(DoubleBuffer) MHz fclk */
+ ffb2_vertical_plus, /* Second FCS FFB2/vertical, same timings */
+ ffb2_horizontal, /* First FCS FFB2/horizontal, same timings as FFB2/vert */
+ ffb2_horizontal_plus, /* Second FCS FFB2/horizontal, same timings */
+ afb_m3, /* FCS Elite3D, 3 float chips */
+ afb_m6 /* FCS Elite3D, 6 float chips */
+};
+
+typedef struct ffb_dev_priv {
+ /* Misc software state. */
+ int prom_node;
+ enum ffb_chip_type ffb_type;
+ u64 card_phys_base;
+ struct miscdevice miscdev;
+
+ /* Controller registers. */
+ ffb_fbcPtr regs;
+
+ /* Context table. */
+ struct ffb_hw_context *hw_state[FFB_MAX_CTXS];
+} ffb_dev_priv_t;
diff -urN linux-2.4.13/drivers/char/drm-4.0/fops.c linux-2.4.13-lia/drivers/char/drm-4.0/fops.c
--- linux-2.4.13/drivers/char/drm-4.0/fops.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/fops.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,253 @@
+/* fops.c -- File operations for DRM -*- linux-c -*-
+ * Created: Mon Jan 4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ * Daryll Strauss <daryll@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include <linux/poll.h>
+
+/* drm_open is called whenever a process opens /dev/drm. */
+
+int drm_open_helper(struct inode *inode, struct file *filp, drm_device_t *dev)
+{
+ kdev_t minor = MINOR(inode->i_rdev);
+ drm_file_t *priv;
+
+ if (filp->f_flags & O_EXCL) return -EBUSY; /* No exclusive opens */
+ if (!drm_cpu_valid()) return -EINVAL;
+
+ DRM_DEBUG("pid = %d, minor = %d\n", current->pid, minor);
+
+ priv = drm_alloc(sizeof(*priv), DRM_MEM_FILES);
+	if (priv == NULL)
+ return -ENOMEM;
+ memset(priv, 0, sizeof(*priv));
+
+ filp->private_data = priv;
+ priv->uid = current->euid;
+ priv->pid = current->pid;
+ priv->minor = minor;
+ priv->dev = dev;
+ priv->ioctl_count = 0;
+ priv->authenticated = capable(CAP_SYS_ADMIN);
+
+ down(&dev->struct_sem);
+ if (!dev->file_last) {
+ priv->next = NULL;
+ priv->prev = NULL;
+ dev->file_first = priv;
+ dev->file_last = priv;
+ } else {
+ priv->next = NULL;
+ priv->prev = dev->file_last;
+ dev->file_last->next = priv;
+ dev->file_last = priv;
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+int drm_flush(struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+ current->pid, dev->device, dev->open_count);
+ return 0;
+}
+
+/* drm_release is called whenever a process closes /dev/drm*. Linux calls
+ this only if any mappings have been closed. */
+
+int drm_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+ current->pid, dev->device, dev->open_count);
+
+ if (dev->lock.hw_lock
+ && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+	    && dev->lock.pid == current->pid) {
+ DRM_ERROR("Process %d dead, freeing lock for context %d\n",
+ current->pid,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ drm_lock_free(dev,
+ &dev->lock.hw_lock->lock,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+
+ /* FIXME: may require heavy-handed reset of
+ hardware at this point, possibly
+ processed via a callback to the X
+ server. */
+ }
+ drm_reclaim_buffers(dev, priv->pid);
+
+ drm_fasync(-1, filp, 0);
+
+ down(&dev->struct_sem);
+ if (priv->prev) priv->prev->next = priv->next;
+ else dev->file_first = priv->next;
+ if (priv->next) priv->next->prev = priv->prev;
+ else dev->file_last = priv->prev;
+ up(&dev->struct_sem);
+
+ drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+
+ return 0;
+}
+
+int drm_fasync(int fd, struct file *filp, int on)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode;
+
+ DRM_DEBUG("fd = %d, device = 0x%x\n", fd, dev->device);
+ retcode = fasync_helper(fd, filp, on, &dev->buf_async);
+ if (retcode < 0) return retcode;
+ return 0;
+}
+
+
+/* The drm_read and drm_write_string code (especially that which manages
+ the circular buffer), is based on Alessandro Rubini's LINUX DEVICE
+ DRIVERS (Cambridge: O'Reilly, 1998), pages 111-113. */
+
+ssize_t drm_read(struct file *filp, char *buf, size_t count, loff_t *off)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int left;
+ int avail;
+ int send;
+ int cur;
+
+ DRM_DEBUG("%p, %p\n", dev->buf_rp, dev->buf_wp);
+
+	while (dev->buf_rp == dev->buf_wp) {
+ DRM_DEBUG(" sleeping\n");
+ if (filp->f_flags & O_NONBLOCK) {
+ return -EAGAIN;
+ }
+ interruptible_sleep_on(&dev->buf_readers);
+ if (signal_pending(current)) {
+ DRM_DEBUG(" interrupted\n");
+ return -ERESTARTSYS;
+ }
+ DRM_DEBUG(" awake\n");
+ }
+
+ left = (dev->buf_rp + DRM_BSZ - dev->buf_wp) % DRM_BSZ;
+ avail = DRM_BSZ - left;
+ send = DRM_MIN(avail, count);
+
+ while (send) {
+ if (dev->buf_wp > dev->buf_rp) {
+ cur = DRM_MIN(send, dev->buf_wp - dev->buf_rp);
+ } else {
+ cur = DRM_MIN(send, dev->buf_end - dev->buf_rp);
+ }
+ if (copy_to_user(buf, dev->buf_rp, cur))
+ return -EFAULT;
+ dev->buf_rp += cur;
+		if (dev->buf_rp == dev->buf_end) dev->buf_rp = dev->buf;
+ send -= cur;
+ }
+
+ wake_up_interruptible(&dev->buf_writers);
+	return DRM_MIN(avail, count);
+}
+
+int drm_write_string(drm_device_t *dev, const char *s)
+{
+ int left = (dev->buf_rp + DRM_BSZ - dev->buf_wp) % DRM_BSZ;
+ int send = strlen(s);
+ int count;
+
+ DRM_DEBUG("%d left, %d to send (%p, %p)\n",
+ left, send, dev->buf_rp, dev->buf_wp);
+
+	if (left == 1 || dev->buf_wp != dev->buf_rp) {
+ DRM_ERROR("Buffer not empty (%d left, wp = %p, rp = %p)\n",
+ left,
+ dev->buf_wp,
+ dev->buf_rp);
+ }
+
+ while (send) {
+ if (dev->buf_wp >= dev->buf_rp) {
+ count = DRM_MIN(send, dev->buf_end - dev->buf_wp);
+			if (count == left) --count; /* Leave a hole */
+ } else {
+ count = DRM_MIN(send, dev->buf_rp - dev->buf_wp - 1);
+ }
+ strncpy(dev->buf_wp, s, count);
+ dev->buf_wp += count;
+		if (dev->buf_wp == dev->buf_end) dev->buf_wp = dev->buf;
+ send -= count;
+ }
+
+#if LINUX_VERSION_CODE < 0x020315 && !defined(KILLFASYNCHASTHREEPARAMETERS)
+ /* The extra parameter to kill_fasync was added in 2.3.21, and is
+ _not_ present in _stock_ 2.2.14 and 2.2.15. However, some
+ distributions patch 2.2.x kernels to add this parameter. The
+ Makefile.linux attempts to detect this addition and defines
+ KILLFASYNCHASTHREEPARAMETERS if three parameters are found. */
+ if (dev->buf_async) kill_fasync(dev->buf_async, SIGIO);
+#else
+
+ /* Parameter added in 2.3.21. */
+#if LINUX_VERSION_CODE < 0x020400
+ if (dev->buf_async) kill_fasync(dev->buf_async, SIGIO, POLL_IN);
+#else
+ /* Type of first parameter changed in
+ Linux 2.4.0-test2... */
+ if (dev->buf_async) kill_fasync(&dev->buf_async, SIGIO, POLL_IN);
+#endif
+#endif
+ DRM_DEBUG("waking\n");
+ wake_up_interruptible(&dev->buf_readers);
+ return 0;
+}
+
+unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ poll_wait(filp, &dev->buf_readers, wait);
+ if (dev->buf_wp != dev->buf_rp) return POLLIN | POLLRDNORM;
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/gamma_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/gamma_dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,836 @@
+/* gamma_dma.c -- DMA support for GMX 2000 -*- linux-c -*-
+ * Created: Fri Mar 19 14:30:16 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "gamma_drv.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+
+/* WARNING!!! MAGIC NUMBER!!! The number of regions already added to the
+ kernel must be specified here. Currently, the number is 2. This must
+   match the order the X server uses for instantiating register regions,
+ or must be passed in a new ioctl. */
+#define GAMMA_REG(reg) \
+ (2 \
+ + ((reg < 0x1000) \
+ ? 0 \
+ : ((reg < 0x10000) ? 1 : ((reg < 0x11000) ? 2 : 3))))
+
+#define GAMMA_OFF(reg) \
+ ((reg < 0x1000) \
+ ? reg \
+ : ((reg < 0x10000) \
+ ? (reg - 0x1000) \
+ : ((reg < 0x11000) \
+ ? (reg - 0x10000) \
+ : (reg - 0x11000))))
+
+#define GAMMA_BASE(reg) ((unsigned long)dev->maplist[GAMMA_REG(reg)]->handle)
+#define GAMMA_ADDR(reg) (GAMMA_BASE(reg) + GAMMA_OFF(reg))
+#define GAMMA_DEREF(reg) *(__volatile__ int *)GAMMA_ADDR(reg)
+#define GAMMA_READ(reg) GAMMA_DEREF(reg)
+#define GAMMA_WRITE(reg,val) do { GAMMA_DEREF(reg) = val; } while (0)
+
+#define GAMMA_BROADCASTMASK 0x9378
+#define GAMMA_COMMANDINTENABLE 0x0c48
+#define GAMMA_DMAADDRESS 0x0028
+#define GAMMA_DMACOUNT 0x0030
+#define GAMMA_FILTERMODE 0x8c00
+#define GAMMA_GCOMMANDINTFLAGS 0x0c50
+#define GAMMA_GCOMMANDMODE 0x0c40
+#define GAMMA_GCOMMANDSTATUS 0x0c60
+#define GAMMA_GDELAYTIMER 0x0c38
+#define GAMMA_GDMACONTROL 0x0060
+#define GAMMA_GINTENABLE 0x0808
+#define GAMMA_GINTFLAGS 0x0810
+#define GAMMA_INFIFOSPACE 0x0018
+#define GAMMA_OUTFIFOWORDS 0x0020
+#define GAMMA_OUTPUTFIFO 0x2000
+#define GAMMA_SYNC 0x8c40
+#define GAMMA_SYNC_TAG 0x0188
+
+static inline void gamma_dma_dispatch(drm_device_t *dev, unsigned long address,
+ unsigned long length)
+{
+ GAMMA_WRITE(GAMMA_DMAADDRESS, virt_to_phys((void *)address));
+ while (GAMMA_READ(GAMMA_GCOMMANDSTATUS) != 4)
+ ;
+ GAMMA_WRITE(GAMMA_DMACOUNT, length / 4);
+}
+
+static inline void gamma_dma_quiescent_single(drm_device_t *dev)
+{
+ while (GAMMA_READ(GAMMA_DMACOUNT))
+ ;
+ while (GAMMA_READ(GAMMA_INFIFOSPACE) < 3)
+ ;
+
+ GAMMA_WRITE(GAMMA_FILTERMODE, 1 << 10);
+ GAMMA_WRITE(GAMMA_SYNC, 0);
+
+ do {
+ while (!GAMMA_READ(GAMMA_OUTFIFOWORDS))
+ ;
+ } while (GAMMA_READ(GAMMA_OUTPUTFIFO) != GAMMA_SYNC_TAG);
+}
+
+static inline void gamma_dma_quiescent_dual(drm_device_t *dev)
+{
+ while (GAMMA_READ(GAMMA_DMACOUNT))
+ ;
+ while (GAMMA_READ(GAMMA_INFIFOSPACE) < 3)
+ ;
+
+ GAMMA_WRITE(GAMMA_BROADCASTMASK, 3);
+
+ GAMMA_WRITE(GAMMA_FILTERMODE, 1 << 10);
+ GAMMA_WRITE(GAMMA_SYNC, 0);
+
+ /* Read from first MX */
+ do {
+ while (!GAMMA_READ(GAMMA_OUTFIFOWORDS))
+ ;
+ } while (GAMMA_READ(GAMMA_OUTPUTFIFO) != GAMMA_SYNC_TAG);
+
+ /* Read from second MX */
+ do {
+ while (!GAMMA_READ(GAMMA_OUTFIFOWORDS + 0x10000))
+ ;
+ } while (GAMMA_READ(GAMMA_OUTPUTFIFO + 0x10000) != GAMMA_SYNC_TAG);
+}
+
+static inline void gamma_dma_ready(drm_device_t *dev)
+{
+ while (GAMMA_READ(GAMMA_DMACOUNT))
+ ;
+}
+
+static inline int gamma_dma_is_ready(drm_device_t *dev)
+{
+ return !GAMMA_READ(GAMMA_DMACOUNT);
+}
+
+static void gamma_dma_service(int irq, void *device, struct pt_regs *regs)
+{
+ drm_device_t *dev = (drm_device_t *)device;
+ drm_device_dma_t *dma = dev->dma;
+
+ atomic_inc(&dev->total_irq);
+ GAMMA_WRITE(GAMMA_GDELAYTIMER, 0xc350/2); /* 0x05S */
+ GAMMA_WRITE(GAMMA_GCOMMANDINTFLAGS, 8);
+ GAMMA_WRITE(GAMMA_GINTFLAGS, 0x2001);
+ if (gamma_dma_is_ready(dev)) {
+ /* Free previous buffer */
+ if (test_and_set_bit(0, &dev->dma_flag)) {
+ atomic_inc(&dma->total_missed_free);
+ return;
+ }
+ if (dma->this_buffer) {
+ drm_free_buffer(dev, dma->this_buffer);
+ dma->this_buffer = NULL;
+ }
+ clear_bit(0, &dev->dma_flag);
+
+ /* Dispatch new buffer */
+ queue_task(&dev->tq, &tq_immediate);
+ mark_bh(IMMEDIATE_BH);
+ }
+}
+
+/* Only called by gamma_dma_schedule. */
+static int gamma_do_dma(drm_device_t *dev, int locked)
+{
+ unsigned long address;
+ unsigned long length;
+ drm_buf_t *buf;
+ int retcode = 0;
+ drm_device_dma_t *dma = dev->dma;
+#if DRM_DMA_HISTOGRAM
+ cycles_t dma_start, dma_stop;
+#endif
+
+ if (test_and_set_bit(0, &dev->dma_flag)) {
+ atomic_inc(&dma->total_missed_dma);
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dma_start = get_cycles();
+#endif
+
+ if (!dma->next_buffer) {
+ DRM_ERROR("No next_buffer\n");
+ clear_bit(0, &dev->dma_flag);
+ return -EINVAL;
+ }
+
+ buf = dma->next_buffer;
+ address = (unsigned long)buf->address;
+ length = buf->used;
+
+ DRM_DEBUG("context %d, buffer %d (%ld bytes)\n",
+ buf->context, buf->idx, length);
+
+	if (buf->list == DRM_LIST_RECLAIM) {
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ clear_bit(0, &dev->dma_flag);
+ return -EINVAL;
+ }
+
+ if (!length) {
+ DRM_ERROR("0 length buffer\n");
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ clear_bit(0, &dev->dma_flag);
+ return 0;
+ }
+
+ if (!gamma_dma_is_ready(dev)) {
+ clear_bit(0, &dev->dma_flag);
+ return -EBUSY;
+ }
+
+ if (buf->while_locked) {
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Dispatching buffer %d from pid %d"
+ " \"while locked\", but no lock held\n",
+ buf->idx, buf->pid);
+ }
+ } else {
+ if (!locked && !drm_lock_take(&dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ atomic_inc(&dma->total_missed_lock);
+ clear_bit(0, &dev->dma_flag);
+ return -EBUSY;
+ }
+ }
+
+ if (dev->last_context != buf->context
+ && !(dev->queuelist[buf->context]->flags
+ & _DRM_CONTEXT_PRESERVED)) {
+ /* PRE: dev->last_context != buf->context */
+ if (drm_context_switch(dev, dev->last_context, buf->context)) {
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ }
+ retcode = -EBUSY;
+ goto cleanup;
+
+ /* POST: we will wait for the context
+ switch and will dispatch on a later call
+		   when dev->last_context == buf->context.
+ NOTE WE HOLD THE LOCK THROUGHOUT THIS
+ TIME! */
+ }
+
+ drm_clear_next_buffer(dev);
+ buf->pending = 1;
+ buf->waiting = 0;
+ buf->list = DRM_LIST_PEND;
+#if DRM_DMA_HISTOGRAM
+ buf->time_dispatched = get_cycles();
+#endif
+
+ gamma_dma_dispatch(dev, address, length);
+ drm_free_buffer(dev, dma->this_buffer);
+ dma->this_buffer = buf;
+
+ atomic_add(length, &dma->total_bytes);
+ atomic_inc(&dma->total_dmas);
+
+ if (!buf->while_locked && !dev->context_flag && !locked) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+cleanup:
+
+ clear_bit(0, &dev->dma_flag);
+
+#if DRM_DMA_HISTOGRAM
+ dma_stop = get_cycles();
+ atomic_inc(&dev->histo.dma[drm_histogram_slot(dma_stop - dma_start)]);
+#endif
+
+ return retcode;
+}
+
+static void gamma_dma_schedule_timer_wrapper(unsigned long dev)
+{
+ gamma_dma_schedule((drm_device_t *)dev, 0);
+}
+
+static void gamma_dma_schedule_tq_wrapper(void *dev)
+{
+ gamma_dma_schedule(dev, 0);
+}
+
+int gamma_dma_schedule(drm_device_t *dev, int locked)
+{
+ int next;
+ drm_queue_t *q;
+ drm_buf_t *buf;
+ int retcode = 0;
+ int processed = 0;
+ int missed;
+ int expire = 20;
+ drm_device_dma_t *dma = dev->dma;
+#if DRM_DMA_HISTOGRAM
+ cycles_t schedule_start;
+#endif
+
+ if (test_and_set_bit(0, &dev->interrupt_flag)) {
+ /* Not reentrant */
+ atomic_inc(&dma->total_missed_sched);
+ return -EBUSY;
+ }
+ missed = atomic_read(&dma->total_missed_sched);
+
+#if DRM_DMA_HISTOGRAM
+ schedule_start = get_cycles();
+#endif
+
+again:
+ if (dev->context_flag) {
+ clear_bit(0, &dev->interrupt_flag);
+ return -EBUSY;
+ }
+ if (dma->next_buffer) {
+ /* Unsent buffer that was previously
+ selected, but that couldn't be sent
+ because the lock could not be obtained
+ or the DMA engine wasn't ready. Try
+ again. */
+ atomic_inc(&dma->total_tried);
+ if (!(retcode = gamma_do_dma(dev, locked))) {
+ atomic_inc(&dma->total_hit);
+ ++processed;
+ }
+ } else {
+ do {
+ next = drm_select_queue(dev,
+ gamma_dma_schedule_timer_wrapper);
+ if (next >= 0) {
+ q = dev->queuelist[next];
+ buf = drm_waitlist_get(&q->waitlist);
+ dma->next_buffer = buf;
+ dma->next_queue = q;
+				if (buf && buf->list == DRM_LIST_RECLAIM) {
+ drm_clear_next_buffer(dev);
+ drm_free_buffer(dev, buf);
+ }
+ }
+ } while (next >= 0 && !dma->next_buffer);
+ if (dma->next_buffer) {
+ if (!(retcode = gamma_do_dma(dev, locked))) {
+ ++processed;
+ }
+ }
+ }
+
+ if (--expire) {
+ if (missed != atomic_read(&dma->total_missed_sched)) {
+ atomic_inc(&dma->total_lost);
+ if (gamma_dma_is_ready(dev)) goto again;
+ }
+ if (processed && gamma_dma_is_ready(dev)) {
+ atomic_inc(&dma->total_lost);
+ processed = 0;
+ goto again;
+ }
+ }
+
+ clear_bit(0, &dev->interrupt_flag);
+
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.schedule[drm_histogram_slot(get_cycles()
+ - schedule_start)]);
+#endif
+ return retcode;
+}
+
+static int gamma_dma_priority(drm_device_t *dev, drm_dma_t *d)
+{
+ unsigned long address;
+ unsigned long length;
+ int must_free = 0;
+ int retcode = 0;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+ drm_buf_t *last_buf = NULL;
+ drm_device_dma_t *dma = dev->dma;
+ DECLARE_WAITQUEUE(entry, current);
+
+ /* Turn off interrupt handling */
+ while (test_and_set_bit(0, &dev->interrupt_flag)) {
+ schedule();
+ if (signal_pending(current)) return -EINTR;
+ }
+ if (!(d->flags & _DRM_DMA_WHILE_LOCKED)) {
+ while (!drm_lock_take(&dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ schedule();
+ if (signal_pending(current)) {
+ clear_bit(0, &dev->interrupt_flag);
+ return -EINTR;
+ }
+ }
+ ++must_free;
+ }
+ atomic_inc(&dma->total_prio);
+
+ for (i = 0; i < d->send_count; i++) {
+ idx = d->send_indices[i];
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ d->send_indices[i], dma->buf_count - 1);
+ continue;
+ }
+ buf = dma->buflist[ idx ];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d using buffer owned by %d\n",
+ current->pid, buf->pid);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ if (buf->list != DRM_LIST_NONE) {
+ DRM_ERROR("Process %d using %d's buffer on list %d\n",
+ current->pid, buf->pid, buf->list);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ /* This isn't a race condition on
+ buf->list, since our concern is the
+ buffer reclaim during the time the
+ process closes the /dev/drm? handle, so
+ it can't also be doing DMA. */
+ buf->list = DRM_LIST_PRIO;
+ buf->used = d->send_sizes[i];
+ buf->context = d->context;
+ buf->while_locked = d->flags & _DRM_DMA_WHILE_LOCKED;
+ address = (unsigned long)buf->address;
+ length = buf->used;
+ if (!length) {
+ DRM_ERROR("0 length buffer\n");
+ }
+ if (buf->pending) {
+ DRM_ERROR("Sending pending buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ if (buf->waiting) {
+ DRM_ERROR("Sending waiting buffer:"
+ " buffer %d, offset %d\n",
+ d->send_indices[i], i);
+ retcode = -EINVAL;
+ goto cleanup;
+ }
+ buf->pending = 1;
+
+ if (dev->last_context != buf->context
+ && !(dev->queuelist[buf->context]->flags
+ & _DRM_CONTEXT_PRESERVED)) {
+ add_wait_queue(&dev->context_wait, &entry);
+ current->state = TASK_INTERRUPTIBLE;
+ /* PRE: dev->last_context != buf->context */
+ drm_context_switch(dev, dev->last_context,
+ buf->context);
+ /* POST: we will wait for the context
+ switch and will dispatch on a later call
+			   when dev->last_context == buf->context.
+ NOTE WE HOLD THE LOCK THROUGHOUT THIS
+ TIME! */
+ schedule();
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->context_wait, &entry);
+ if (signal_pending(current)) {
+ retcode = -EINTR;
+ goto cleanup;
+ }
+ if (dev->last_context != buf->context) {
+ DRM_ERROR("Context mismatch: %d %d\n",
+ dev->last_context,
+ buf->context);
+ }
+ }
+
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = get_cycles();
+ buf->time_dispatched = buf->time_queued;
+#endif
+ gamma_dma_dispatch(dev, address, length);
+ atomic_add(length, &dma->total_bytes);
+ atomic_inc(&dma->total_dmas);
+
+ if (last_buf) {
+ drm_free_buffer(dev, last_buf);
+ }
+ last_buf = buf;
+ }
+
+
+cleanup:
+ if (last_buf) {
+ gamma_dma_ready(dev);
+ drm_free_buffer(dev, last_buf);
+ }
+
+ if (must_free && !dev->context_flag) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+ clear_bit(0, &dev->interrupt_flag);
+ return retcode;
+}
+
+static int gamma_dma_send_buffers(drm_device_t *dev, drm_dma_t *d)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ drm_buf_t *last_buf = NULL;
+ int retcode = 0;
+ drm_device_dma_t *dma = dev->dma;
+
+ if (d->flags & _DRM_DMA_BLOCK) {
+ last_buf = dma->buflist[d->send_indices[d->send_count-1]];
+ add_wait_queue(&last_buf->dma_wait, &entry);
+ }
+
+ if ((retcode = drm_dma_enqueue(dev, d))) {
+ if (d->flags & _DRM_DMA_BLOCK)
+ remove_wait_queue(&last_buf->dma_wait, &entry);
+ return retcode;
+ }
+
+ gamma_dma_schedule(dev, 0);
+
+ if (d->flags & _DRM_DMA_BLOCK) {
+ DRM_DEBUG("%d waiting\n", current->pid);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!last_buf->waiting && !last_buf->pending)
+ break; /* finished */
+ schedule();
+ if (signal_pending(current)) {
+ retcode = -EINTR; /* Can't restart */
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ DRM_DEBUG("%d running\n", current->pid);
+ remove_wait_queue(&last_buf->dma_wait, &entry);
+		if (!retcode
+		    || (last_buf->list == DRM_LIST_PEND && !last_buf->pending)) {
+ if (!waitqueue_active(&last_buf->dma_wait)) {
+ drm_free_buffer(dev, last_buf);
+ }
+ }
+ if (retcode) {
+ DRM_ERROR("ctx%d w%d p%d c%d i%d l%d %d/%d\n",
+ d->context,
+ last_buf->waiting,
+ last_buf->pending,
+ DRM_WAITCOUNT(dev, d->context),
+ last_buf->idx,
+ last_buf->list,
+ last_buf->pid,
+ current->pid);
+ }
+ }
+ return retcode;
+}
+
+int gamma_dma(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ int retcode = 0;
+ drm_dma_t d;
+
+ if (copy_from_user(&d, (drm_dma_t *)arg, sizeof(d)))
+ return -EFAULT;
+ DRM_DEBUG("%d %d: %d send, %d req\n",
+ current->pid, d.context, d.send_count, d.request_count);
+
+	if (d.context == DRM_KERNEL_CONTEXT || d.context >= dev->queue_slots) {
+ DRM_ERROR("Process %d using context %d\n",
+ current->pid, d.context);
+ return -EINVAL;
+ }
+ if (d.send_count < 0 || d.send_count > dma->buf_count) {
+ DRM_ERROR("Process %d trying to send %d buffers (of %d max)\n",
+ current->pid, d.send_count, dma->buf_count);
+ return -EINVAL;
+ }
+ if (d.request_count < 0 || d.request_count > dma->buf_count) {
+ DRM_ERROR("Process %d trying to get %d buffers (of %d max)\n",
+ current->pid, d.request_count, dma->buf_count);
+ return -EINVAL;
+ }
+
+ if (d.send_count) {
+ if (d.flags & _DRM_DMA_PRIORITY)
+ retcode = gamma_dma_priority(dev, &d);
+ else
+ retcode = gamma_dma_send_buffers(dev, &d);
+ }
+
+ d.granted_count = 0;
+
+ if (!retcode && d.request_count) {
+ retcode = drm_dma_get_buffers(dev, &d);
+ }
+
+ DRM_DEBUG("%d returning, granted = %d\n",
+ current->pid, d.granted_count);
+ if (copy_to_user((drm_dma_t *)arg, &d, sizeof(d)))
+ return -EFAULT;
+
+ return retcode;
+}
+
+int gamma_irq_install(drm_device_t *dev, int irq)
+{
+ int retcode;
+
+ if (!irq) return -EINVAL;
+
+ down(&dev->struct_sem);
+ if (dev->irq) {
+ up(&dev->struct_sem);
+ return -EBUSY;
+ }
+ dev->irq = irq;
+ up(&dev->struct_sem);
+
+ DRM_DEBUG("%d\n", irq);
+
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+
+ dev->dma->next_buffer = NULL;
+ dev->dma->next_queue = NULL;
+ dev->dma->this_buffer = NULL;
+
+ INIT_LIST_HEAD(&dev->tq.list);
+ dev->tq.sync = 0;
+ dev->tq.routine = gamma_dma_schedule_tq_wrapper;
+ dev->tq.data = dev;
+
+
+ /* Before installing handler */
+ GAMMA_WRITE(GAMMA_GCOMMANDMODE, 0);
+ GAMMA_WRITE(GAMMA_GDMACONTROL, 0);
+
+ /* Install handler */
+ if ((retcode = request_irq(dev->irq,
+ gamma_dma_service,
+ 0,
+ dev->devname,
+ dev))) {
+ down(&dev->struct_sem);
+ dev->irq = 0;
+ up(&dev->struct_sem);
+ return retcode;
+ }
+
+ /* After installing handler */
+ GAMMA_WRITE(GAMMA_GINTENABLE, 0x2001);
+ GAMMA_WRITE(GAMMA_COMMANDINTENABLE, 0x0008);
+ GAMMA_WRITE(GAMMA_GDELAYTIMER, 0x39090);
+
+ return 0;
+}
+
+int gamma_irq_uninstall(drm_device_t *dev)
+{
+ int irq;
+
+ down(&dev->struct_sem);
+ irq = dev->irq;
+ dev->irq = 0;
+ up(&dev->struct_sem);
+
+ if (!irq) return -EINVAL;
+
+ DRM_DEBUG("%d\n", irq);
+
+ GAMMA_WRITE(GAMMA_GDELAYTIMER, 0);
+ GAMMA_WRITE(GAMMA_COMMANDINTENABLE, 0);
+ GAMMA_WRITE(GAMMA_GINTENABLE, 0);
+ free_irq(irq, dev);
+
+ return 0;
+}
+
+
+int gamma_control(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+ int retcode;
+
+ if (copy_from_user(&ctl, (drm_control_t *)arg, sizeof(ctl)))
+ return -EFAULT;
+
+ switch (ctl.func) {
+ case DRM_INST_HANDLER:
+ if ((retcode = gamma_irq_install(dev, ctl.irq)))
+ return retcode;
+ break;
+ case DRM_UNINST_HANDLER:
+ if ((retcode = gamma_irq_uninstall(dev)))
+ return retcode;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+int gamma_lock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_lock_t lock;
+ drm_queue_t *q;
+#if DRM_DMA_HISTOGRAM
+ cycles_t start;
+
+ dev->lck_start = start = get_cycles();
+#endif
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+	if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid, dev->lock.hw_lock->lock,
+ lock.flags);
+
+ if (lock.context < 0 || lock.context >= dev->queue_count)
+ return -EINVAL;
+ q = dev->queuelist[lock.context];
+
+ ret = drm_flush_block_and_flush(dev, lock.context, lock.flags);
+
+ if (!ret) {
+ if (_DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock)
+ != lock.context) {
+ long j = jiffies - dev->lock.lock_time;
+
+ if (j > 0 && j <= DRM_LOCK_SLICE) {
+ /* Can't take lock if we just had it and
+ there is contention. */
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(j);
+ }
+ }
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ lock.context)) {
+ dev->lock.pid = current->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ atomic_inc(&q->total_locks);
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ schedule();
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+ }
+
+ drm_flush_unblock(dev, lock.context, lock.flags); /* cleanup phase */
+
+ if (!ret) {
+ sigemptyset(&dev->sigmask);
+ sigaddset(&dev->sigmask, SIGSTOP);
+ sigaddset(&dev->sigmask, SIGTSTP);
+ sigaddset(&dev->sigmask, SIGTTIN);
+ sigaddset(&dev->sigmask, SIGTTOU);
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+ if (lock.flags & _DRM_LOCK_READY)
+ gamma_dma_ready(dev);
+ if (lock.flags & _DRM_LOCK_QUIESCENT) {
+			if (gamma_found() == 1) {
+ gamma_dma_quiescent_single(dev);
+ } else {
+ gamma_dma_quiescent_dual(dev);
+ }
+ }
+ }
+ DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.lacq[drm_histogram_slot(get_cycles() - start)]);
+#endif
+
+ return ret;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/gamma_drv.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,571 @@
+/* gamma.c -- 3dlabs GMX 2000 driver -*- linux-c -*-
+ * Created: Mon Jan 4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#include <linux/config.h>
+#include "drmP.h"
+#include "gamma_drv.h"
+
+#ifndef PCI_DEVICE_ID_3DLABS_GAMMA
+#define PCI_DEVICE_ID_3DLABS_GAMMA 0x0008
+#endif
+#ifndef PCI_DEVICE_ID_3DLABS_MX
+#define PCI_DEVICE_ID_3DLABS_MX 0x0006
+#endif
+
+#define GAMMA_NAME "gamma"
+#define GAMMA_DESC "3dlabs GMX 2000"
+#define GAMMA_DATE "20000910"
+#define GAMMA_MAJOR 1
+#define GAMMA_MINOR 0
+#define GAMMA_PATCHLEVEL 0
+
+static drm_device_t gamma_device;
+
+static struct file_operations gamma_fops = {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* This started being used during 2.4.0-test */
+ owner: THIS_MODULE,
+#endif
+ open: gamma_open,
+ flush: drm_flush,
+ release: gamma_release,
+ ioctl: gamma_ioctl,
+ mmap: drm_mmap,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+};
+
+static struct miscdevice gamma_misc = {
+ minor: MISC_DYNAMIC_MINOR,
+ name: GAMMA_NAME,
+ fops: &gamma_fops,
+};
+
+static drm_ioctl_desc_t gamma_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { gamma_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_CONTROL)] = { gamma_control, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)] = { drm_addbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)] = { drm_markbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)] = { drm_infobufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MAP_BUFS)] = { drm_mapbufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)] = { drm_freebufs, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { drm_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { drm_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { drm_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { drm_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { drm_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { drm_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { drm_resctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_DMA)] = { gamma_dma, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { gamma_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { gamma_unlock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 },
+};
+#define GAMMA_IOCTL_COUNT DRM_ARRAY_SIZE(gamma_ioctls)
+
+#ifdef MODULE
+static char *gamma = NULL;
+#endif
+static int devices = 0;
+
+MODULE_AUTHOR("VA Linux Systems, Inc.");
+MODULE_DESCRIPTION("3dlabs GMX 2000");
+MODULE_PARM(gamma, "s");
+MODULE_PARM(devices, "i");
+MODULE_PARM_DESC(devices,
+ "devices=x, where x is the number of MX chips on card\n");
+#ifndef MODULE
+/* gamma_options is called by the kernel to parse command-line options
+ * passed via the boot-loader (e.g., LILO). It calls the insmod option
+ * routine, drm_parse_options.
+ */
+
+
+static int __init gamma_options(char *str)
+{
+ drm_parse_options(str);
+ return 1;
+}
+
+__setup("gamma=", gamma_options);
+#endif
+
+static int gamma_setup(drm_device_t *dev)
+{
+ int i;
+
+ atomic_set(&dev->ioctl_count, 0);
+ atomic_set(&dev->vma_count, 0);
+ dev->buf_use = 0;
+ atomic_set(&dev->buf_alloc, 0);
+
+ drm_dma_setup(dev);
+
+ atomic_set(&dev->total_open, 0);
+ atomic_set(&dev->total_close, 0);
+ atomic_set(&dev->total_ioctl, 0);
+ atomic_set(&dev->total_irq, 0);
+ atomic_set(&dev->total_ctx, 0);
+ atomic_set(&dev->total_locks, 0);
+ atomic_set(&dev->total_unlocks, 0);
+ atomic_set(&dev->total_contends, 0);
+ atomic_set(&dev->total_sleeps, 0);
+
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ dev->vmalist = NULL;
+ dev->lock.hw_lock = NULL;
+ init_waitqueue_head(&dev->lock.lock_queue);
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_timer(&dev->timer);
+ init_waitqueue_head(&dev->context_wait);
+#if DRM_DMA_HISTOGRAM
+ memset(&dev->histo, 0, sizeof(dev->histo));
+#endif
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head(&dev->buf_readers);
+ init_waitqueue_head(&dev->buf_writers);
+
+ DRM_DEBUG("\n");
+
+ /* The kernel's context could be created here, but is now created
+ in drm_dma_enqueue. This is more resource-efficient for
+ hardware that does not do DMA, but may mean that
+ drm_select_queue fails between the time the interrupt is
+ initialized and the time the queues are initialized. */
+
+ return 0;
+}
+
+
+static int gamma_takedown(drm_device_t *dev)
+{
+ int i;
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_vma_entry_t *vma, *vma_next;
+
+ DRM_DEBUG("\n");
+
+ if (dev->irq) gamma_irq_uninstall(dev);
+
+ down(&dev->struct_sem);
+ del_timer(&dev->timer);
+
+ if (dev->devname) {
+ drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+ dev->devname = NULL;
+ }
+
+ if (dev->unique) {
+ drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+ /* Clear pid list */
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ for (pt = dev->magiclist[i].head; pt; pt = next) {
+ next = pt->next;
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+
+ /* Clear vma list (only built for debugging) */
+ if (dev->vmalist) {
+ for (vma = dev->vmalist; vma; vma = vma_next) {
+ vma_next = vma->next;
+ drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+ }
+ dev->vmalist = NULL;
+ }
+
+ /* Clear map area and mtrr information */
+ if (dev->maplist) {
+ for (i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#ifdef CONFIG_MTRR
+ if (map->mtrr >= 0) {
+ int retcode;
+ retcode = mtrr_del(map->mtrr,
+ map->offset,
+ map->size);
+ DRM_DEBUG("mtrr_del = %d\n", retcode);
+ }
+#endif
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+ case _DRM_SHM:
+ drm_free_pages((unsigned long)map->handle,
+ drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ break;
+ case _DRM_AGP:
+ /* Do nothing here, because this is all
+ handled in the AGP/GART driver. */
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ drm_free(dev->maplist,
+ dev->map_count * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ }
+
+ if (dev->queuelist) {
+ for (i = 0; i < dev->queue_count; i++) {
+ drm_waitlist_destroy(&dev->queuelist[i]->waitlist);
+ if (dev->queuelist[i]) {
+ drm_free(dev->queuelist[i],
+ sizeof(*dev->queuelist[0]),
+ DRM_MEM_QUEUES);
+ dev->queuelist[i] = NULL;
+ }
+ }
+ drm_free(dev->queuelist,
+ dev->queue_slots * sizeof(*dev->queuelist),
+ DRM_MEM_QUEUES);
+ dev->queuelist = NULL;
+ }
+
+ drm_dma_takedown(dev);
+
+ dev->queue_count = 0;
+ if (dev->lock.hw_lock) {
+ dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.pid = 0;
+ wake_up_interruptible(&dev->lock.lock_queue);
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+int gamma_found(void)
+{
+ return devices;
+}
+
+int gamma_find_devices(void)
+{
+ struct pci_dev *d = NULL, *one = NULL, *two = NULL;
+
+ d = pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_GAMMA,d);
+ if (!d) return 0;
+
+ one = pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_MX,d);
+ if (!one) return 0;
+
+ /* Make sure it's on the same card, if not - no MX's found */
+ if (PCI_SLOT(d->devfn) != PCI_SLOT(one->devfn)) return 0;
+
+ two = pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_MX,one);
+ if (!two) return 1;
+
+ /* Make sure it's on the same card, if not - only 1 MX found */
+ if (PCI_SLOT(d->devfn) != PCI_SLOT(two->devfn)) return 1;
+
+ /* Two MX's found - we don't currently support more than 2 */
+ return 2;
+}
+
+/* gamma_init is called via init_module at module load time, or via
+ * linux/init/main.c (this is not currently supported). */
+
+static int __init gamma_init(void)
+{
+ int retcode;
+ drm_device_t *dev = &gamma_device;
+
+ DRM_DEBUG("\n");
+
+ memset((void *)dev, 0, sizeof(*dev));
+ dev->count_lock = SPIN_LOCK_UNLOCKED;
+ sema_init(&dev->struct_sem, 1);
+
+#ifdef MODULE
+ drm_parse_options(gamma);
+#endif
+ devices = gamma_find_devices();
+	if (devices == 0) return -1;
+
+ if ((retcode = misc_register(&gamma_misc))) {
+ DRM_ERROR("Cannot register \"%s\"\n", GAMMA_NAME);
+ return retcode;
+ }
+ dev->device = MKDEV(MISC_MAJOR, gamma_misc.minor);
+ dev->name = GAMMA_NAME;
+
+ drm_mem_init();
+ drm_proc_init(dev);
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d with %d MX devices\n",
+ GAMMA_NAME,
+ GAMMA_MAJOR,
+ GAMMA_MINOR,
+ GAMMA_PATCHLEVEL,
+ GAMMA_DATE,
+ gamma_misc.minor,
+ devices);
+
+ return 0;
+}
+
+/* gamma_cleanup is called via cleanup_module at module unload time. */
+
+static void __exit gamma_cleanup(void)
+{
+ drm_device_t *dev = &gamma_device;
+
+ DRM_DEBUG("\n");
+
+ drm_proc_cleanup();
+ if (misc_deregister(&gamma_misc)) {
+ DRM_ERROR("Cannot unload module\n");
+ } else {
+ DRM_INFO("Module unloaded\n");
+ }
+ gamma_takedown(dev);
+}
+
+module_init(gamma_init);
+module_exit(gamma_cleanup);
+
+
+int gamma_version(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_version_t version;
+ int len;
+
+ if (copy_from_user(&version,
+ (drm_version_t *)arg,
+ sizeof(version)))
+ return -EFAULT;
+
+#define DRM_COPY(name,value) \
+ len = strlen(value); \
+ if (len > name##_len) len = name##_len; \
+ name##_len = strlen(value); \
+ if (len && name) { \
+ if (copy_to_user(name, value, len)) \
+ return -EFAULT; \
+ }
+
+ version.version_major = GAMMA_MAJOR;
+ version.version_minor = GAMMA_MINOR;
+ version.version_patchlevel = GAMMA_PATCHLEVEL;
+
+ DRM_COPY(version.name, GAMMA_NAME);
+ DRM_COPY(version.date, GAMMA_DATE);
+ DRM_COPY(version.desc, GAMMA_DESC);
+
+ if (copy_to_user((drm_version_t *)arg,
+ &version,
+ sizeof(version)))
+ return -EFAULT;
+ return 0;
+}
+
+int gamma_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev = &gamma_device;
+ int retcode = 0;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (!(retcode = drm_open_helper(inode, filp, dev))) {
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_open);
+ spin_lock(&dev->count_lock);
+ if (!dev->open_count++) {
+ spin_unlock(&dev->count_lock);
+ return gamma_setup(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+ return retcode;
+}
+
+int gamma_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int retcode = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (!(retcode = drm_release(inode, filp))) {
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_close);
+ spin_lock(&dev->count_lock);
+ if (!--dev->open_count) {
+ if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+ DRM_ERROR("Device busy: %d %d\n",
+ atomic_read(&dev->ioctl_count),
+ dev->blocked);
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return gamma_takedown(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+ unlock_kernel();
+ return retcode;
+}
+
+/* drm_ioctl is called whenever a process performs an ioctl on /dev/drm. */
+
+int gamma_ioctl(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ int nr = DRM_IOCTL_NR(cmd);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode = 0;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+
+ atomic_inc(&dev->ioctl_count);
+ atomic_inc(&dev->total_ioctl);
+ ++priv->ioctl_count;
+
+ DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+ current->pid, cmd, nr, dev->device, priv->authenticated);
+
+ if (nr >= GAMMA_IOCTL_COUNT) {
+ retcode = -EINVAL;
+ } else {
+ ioctl = &gamma_ioctls[nr];
+ func = ioctl->func;
+
+ if (!func) {
+ DRM_DEBUG("no function\n");
+ retcode = -EINVAL;
+ } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+ || (ioctl->auth_needed && !priv->authenticated)) {
+ retcode = -EACCES;
+ } else {
+ retcode = (func)(inode, filp, cmd, arg);
+ }
+ }
+
+ atomic_dec(&dev->ioctl_count);
+ return retcode;
+}
+
+
+int gamma_unlock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+	if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d frees lock (%d holds)\n",
+ lock.context,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ atomic_inc(&dev->total_unlocks);
+ if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+ atomic_inc(&dev->total_contends);
+ drm_lock_transfer(dev, &dev->lock.hw_lock->lock, DRM_KERNEL_CONTEXT);
+ gamma_dma_schedule(dev, 1);
+ if (!dev->context_flag) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.lhld[drm_histogram_slot(get_cycles()
+ - dev->lck_start)]);
+#endif
+
+ unblock_all_signals();
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.h
--- linux-2.4.13/drivers/char/drm-4.0/gamma_drv.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,58 @@
+/* gamma_drv.h -- Private header for 3dlabs GMX 2000 driver -*- linux-c -*-
+ * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#ifndef _GAMMA_DRV_H_
+#define _GAMMA_DRV_H_
+
+ /* gamma_drv.c */
+extern int gamma_version(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_open(struct inode *inode, struct file *filp);
+extern int gamma_release(struct inode *inode, struct file *filp);
+extern int gamma_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_lock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_unlock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* gamma_dma.c */
+extern int gamma_dma_schedule(drm_device_t *dev, int locked);
+extern int gamma_dma(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_irq_install(drm_device_t *dev, int irq);
+extern int gamma_irq_uninstall(drm_device_t *dev);
+extern int gamma_control(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int gamma_find_devices(void);
+extern int gamma_found(void);
+
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_bufs.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_bufs.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_bufs.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_bufs.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,339 @@
+/* i810_bufs.c -- IOCTLs to manage buffers -*- linux-c -*-
+ * Created: Thu Jan 6 01:47:26 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "i810_drv.h"
+#include "linux/un.h"
+
+int i810_addbufs_agp(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ drm_buf_entry_t *entry;
+ drm_buf_t *buf;
+ unsigned long offset;
+ unsigned long agp_offset;
+ int count;
+ int order;
+ int size;
+ int alignment;
+ int page_order;
+ int total;
+ int byte_count;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+ agp_offset = request.agp_start;
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size) :size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+ byte_count = 0;
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if(count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ offset = 0;
+
+ while(entry->buf_count < count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = offset;
+ buf->bus_address = dev->agp->base + agp_offset + offset;
+ buf->address = (void *)(agp_offset + offset + dev->agp->base);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+
+ buf->dev_private = drm_alloc(sizeof(drm_i810_buf_priv_t),
+ DRM_MEM_BUFS);
+ buf->dev_priv_size = sizeof(drm_i810_buf_priv_t);
+ memset(buf->dev_private, 0, sizeof(drm_i810_buf_priv_t));
+
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ offset = offset + alignment;
+ entry->buf_count++;
+ byte_count += PAGE_SIZE << page_order;
+
+ DRM_DEBUG("buffer %d @ %p\n",
+ entry->buf_count, buf->address);
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+ dma->byte_count += byte_count;
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+ dma->flags = _DRM_DMA_USE_AGP;
+ return 0;
+}
+
+int i810_addbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_buf_desc_t request;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if(request.flags & _DRM_AGP_BUFFER)
+ return i810_addbufs_agp(inode, filp, cmd, arg);
+ else
+ return -EINVAL;
+}
+
+int i810_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ int i;
+ int count;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_info_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) ++count;
+ }
+
+ DRM_DEBUG("count = %d\n", count);
+
+ if (request.count >= count) {
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) {
+ if (copy_to_user(&request.list[count].count,
+ &dma->bufs[i].buf_count,
+ sizeof(dma->bufs[0]
+ .buf_count)) ||
+ copy_to_user(&request.list[count].size,
+ &dma->bufs[i].buf_size,
+ sizeof(dma->bufs[0].buf_size)) ||
+ copy_to_user(&request.list[count].low_mark,
+ &dma->bufs[i]
+ .freelist.low_mark,
+ sizeof(dma->bufs[0]
+ .freelist.low_mark)) ||
+ copy_to_user(&request.list[count]
+ .high_mark,
+ &dma->bufs[i]
+ .freelist.high_mark,
+ sizeof(dma->bufs[0]
+ .freelist.high_mark)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d %d %d %d %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].buf_size,
+ dma->bufs[i].freelist.low_mark,
+ dma->bufs[i].freelist.high_mark);
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if (copy_to_user((drm_buf_info_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int i810_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d, %d, %d\n",
+ request.size, request.low_mark, request.high_mark);
+ order = drm_order(request.size);
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+ return -EINVAL;
+ if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+int i810_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_free_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("%d\n", request.count);
+ for (i = 0; i < request.count; i++) {
+ if (copy_from_user(&idx,
+ &request.list[i],
+ sizeof(idx)))
+ return -EFAULT;
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ idx, dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d freeing buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ drm_free_buffer(dev, buf);
+ }
+
+ return 0;
+}
+
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_context.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_context.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,212 @@
+/* i810_context.c -- IOCTLs for i810 contexts -*- linux-c -*-
+ * Created: Mon Dec 13 09:51:35 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "i810_drv.h"
+
+static int i810_alloc_queue(drm_device_t *dev)
+{
+ int temp = drm_ctxbitmap_next(dev);
+ DRM_DEBUG("i810_alloc_queue: %d\n", temp);
+ return temp;
+}
+
+int i810_context_switch(drm_device_t *dev, int old, int new)
+{
+ char buf[64];
+
+ atomic_inc(&dev->total_ctx);
+
+ if (test_and_set_bit(0, &dev->context_flag)) {
+ DRM_ERROR("Reentering -- FIXME\n");
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new == dev->last_context) {
+ clear_bit(0, &dev->context_flag);
+ return 0;
+ }
+
+ if (drm_flags & DRM_FLAG_NOCTX) {
+ i810_context_switch_complete(dev, new);
+ } else {
+ sprintf(buf, "C %d %d\n", old, new);
+ drm_write_string(dev, buf);
+ }
+
+ return 0;
+}
+
+int i810_context_switch_complete(drm_device_t *dev, int new)
+{
+ dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
+ dev->last_switch = jiffies;
+
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Lock isn't held after context switch\n");
+ }
+
+ /* If a context switch is ever initiated
+ when the kernel holds the lock, release
+ that lock here. */
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+ - dev->ctx_start)]);
+
+#endif
+ clear_bit(0, &dev->context_flag);
+ wake_up(&dev->context_wait);
+
+ return 0;
+}
+
+int i810_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ if ((ctx.handle = i810_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+ /* Skip kernel's context and get a new one. */
+ ctx.handle = i810_alloc_queue(dev);
+ }
+ if (ctx.handle == -1) {
+ DRM_DEBUG("Not enough free contexts.\n");
+ /* Should this return -EBUSY instead? */
+ return -ENOMEM;
+ }
+ DRM_DEBUG("%d\n", ctx.handle);
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ /* This does nothing for the i810 */
+ return 0;
+}
+
+int i810_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+ /* This is 0, because we don't handle any context flags */
+ ctx.flags = 0;
+ if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int i810_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return i810_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int i810_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ i810_context_switch_complete(dev, ctx.handle);
+
+ return 0;
+}
+
+int i810_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ if(ctx.handle != DRM_KERNEL_CONTEXT) {
+ drm_ctxbitmap_free(dev, ctx.handle);
+ }
+
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,1438 @@
+/* i810_dma.c -- DMA support for the i810 -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ * Keith Whitwell <keithw@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "i810_drv.h"
+#include <linux/interrupt.h> /* For task queue support */
+
+/* in case we don't have a 2.3.99-pre6 kernel or later: */
+#ifndef VM_DONTCOPY
+#define VM_DONTCOPY 0
+#endif
+
+#define I810_BUF_FREE 2
+#define I810_BUF_CLIENT 1
+#define I810_BUF_HARDWARE 0
+
+#define I810_BUF_UNMAPPED 0
+#define I810_BUF_MAPPED 1
+
+#define I810_REG(reg) 2
+#define I810_BASE(reg) ((unsigned long) \
+ dev->maplist[I810_REG(reg)]->handle)
+#define I810_ADDR(reg) (I810_BASE(reg) + reg)
+#define I810_DEREF(reg) *(__volatile__ int *)I810_ADDR(reg)
+#define I810_READ(reg) I810_DEREF(reg)
+#define I810_WRITE(reg,val) do { I810_DEREF(reg) = val; } while (0)
+#define I810_DEREF16(reg) *(__volatile__ u16 *)I810_ADDR(reg)
+#define I810_READ16(reg) I810_DEREF16(reg)
+#define I810_WRITE16(reg,val) do { I810_DEREF16(reg) = val; } while (0)
+
+#define RING_LOCALS unsigned int outring, ringmask; volatile char *virt;
+
+#define BEGIN_LP_RING(n) do { \
+ if (I810_VERBOSE) \
+ DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", \
+ n, __FUNCTION__); \
+ if (dev_priv->ring.space < n*4) \
+ i810_wait_ring(dev, n*4); \
+ dev_priv->ring.space -= n*4; \
+ outring = dev_priv->ring.tail; \
+ ringmask = dev_priv->ring.tail_mask; \
+ virt = dev_priv->ring.virtual_start; \
+} while (0)
+
+#define ADVANCE_LP_RING() do { \
+ if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \
+ dev_priv->ring.tail = outring; \
+ I810_WRITE(LP_RING + RING_TAIL, outring); \
+} while(0)
+
+#define OUT_RING(n) do { \
+ if (I810_VERBOSE) DRM_DEBUG(" OUT_RING %x\n", (int)(n)); \
+ *(volatile unsigned int *)(virt + outring) = n; \
+ outring += 4; \
+ outring &= ringmask; \
+} while (0)
+
+static inline void i810_print_status_page(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ u32 *temp = (u32 *)dev_priv->hw_status_page;
+ int i;
+
+ DRM_DEBUG( "hw_status: Interrupt Status : %x\n", temp[0]);
+ DRM_DEBUG( "hw_status: LpRing Head ptr : %x\n", temp[1]);
+ DRM_DEBUG( "hw_status: IRing Head ptr : %x\n", temp[2]);
+ DRM_DEBUG( "hw_status: Reserved : %x\n", temp[3]);
+ DRM_DEBUG( "hw_status: Driver Counter : %d\n", temp[5]);
+ for(i = 6; i < dma->buf_count + 6; i++) {
+ DRM_DEBUG( "buffer status idx : %d used: %d\n", i - 6, temp[i]);
+ }
+}
+
+static drm_buf_t *i810_freelist_get(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+ int used;
+
+ /* Linear search might not be the best solution */
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ /* In use is already a pointer */
+ used = cmpxchg(buf_priv->in_use, I810_BUF_FREE,
+ I810_BUF_CLIENT);
+ if(used == I810_BUF_FREE) {
+ return buf;
+ }
+ }
+ return NULL;
+}
+
+/* This should only be called if the buffer is not sent to the hardware
+ * yet; the hardware updates in_use for us once it's on the ring buffer.
+ */
+
+static int i810_freelist_put(drm_device_t *dev, drm_buf_t *buf)
+{
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ int used;
+
+ /* In use is already a pointer */
+ used = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT, I810_BUF_FREE);
+ if(used != I810_BUF_CLIENT) {
+ DRM_ERROR("Freeing buffer that's not in use : %d\n", buf->idx);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static struct file_operations i810_buffer_fops = {
+ open: i810_open,
+ flush: drm_flush,
+ release: i810_release,
+ ioctl: i810_ioctl,
+ mmap: i810_mmap_buffers,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+};
+
+int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ drm_i810_private_t *dev_priv;
+ drm_buf_t *buf;
+ drm_i810_buf_priv_t *buf_priv;
+
+ lock_kernel();
+ dev = priv->dev;
+ dev_priv = dev->dev_private;
+ buf = dev_priv->mmap_buffer;
+ buf_priv = buf->dev_private;
+
+ vma->vm_flags |= (VM_IO | VM_DONTCOPY);
+ vma->vm_file = filp;
+
+ buf_priv->currently_mapped = I810_BUF_MAPPED;
+ unlock_kernel();
+
+ if (remap_page_range(vma->vm_start,
+ VM_OFFSET(vma),
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot)) return -EAGAIN;
+ return 0;
+}
+
+static int i810_map_buffer(drm_buf_t *buf, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ struct file_operations *old_fops;
+ int retcode = 0;
+
+ if(buf_priv->currently_mapped == I810_BUF_MAPPED) return -EINVAL;
+
+ if(VM_DONTCOPY != 0) {
+ down_write(&current->mm->mmap_sem);
+ old_fops = filp->f_op;
+ filp->f_op = &i810_buffer_fops;
+ dev_priv->mmap_buffer = buf;
+ buf_priv->virtual = (void *)do_mmap(filp, 0, buf->total,
+ PROT_READ|PROT_WRITE,
+ MAP_SHARED,
+ buf->bus_address);
+ dev_priv->mmap_buffer = NULL;
+ filp->f_op = old_fops;
+ if ((unsigned long)buf_priv->virtual > -1024UL) {
+ /* Real error */
+ DRM_DEBUG("mmap error\n");
+ retcode = (signed int)buf_priv->virtual;
+ buf_priv->virtual = 0;
+ }
+ up_write(&current->mm->mmap_sem);
+ } else {
+ buf_priv->virtual = buf_priv->kernel_virtual;
+ buf_priv->currently_mapped = I810_BUF_MAPPED;
+ }
+ return retcode;
+}
+
+static int i810_unmap_buffer(drm_buf_t *buf)
+{
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ int retcode = 0;
+
+ if(VM_DONTCOPY != 0) {
+ if(buf_priv->currently_mapped != I810_BUF_MAPPED)
+ return -EINVAL;
+ down_write(&current->mm->mmap_sem);
+#if LINUX_VERSION_CODE < 0x020399
+ retcode = do_munmap((unsigned long)buf_priv->virtual,
+ (size_t) buf->total);
+#else
+ retcode = do_munmap(current->mm,
+ (unsigned long)buf_priv->virtual,
+ (size_t) buf->total);
+#endif
+ up_write(&current->mm->mmap_sem);
+ }
+ buf_priv->currently_mapped = I810_BUF_UNMAPPED;
+ buf_priv->virtual = 0;
+
+ return retcode;
+}
+
+static int i810_dma_get_buffer(drm_device_t *dev, drm_i810_dma_t *d,
+ struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_buf_t *buf;
+ drm_i810_buf_priv_t *buf_priv;
+ int retcode = 0;
+
+ buf = i810_freelist_get(dev);
+ if (!buf) {
+ retcode = -ENOMEM;
+ DRM_DEBUG("retcode=%d\n", retcode);
+ return retcode;
+ }
+
+ retcode = i810_map_buffer(buf, filp);
+ if(retcode) {
+ i810_freelist_put(dev, buf);
+ DRM_DEBUG("mapbuf failed, retcode %d\n", retcode);
+ return retcode;
+ }
+ buf->pid = priv->pid;
+ buf_priv = buf->dev_private;
+ d->granted = 1;
+ d->request_idx = buf->idx;
+ d->request_size = buf->total;
+ d->virtual = buf_priv->virtual;
+
+ return retcode;
+}
+
+static unsigned long i810_alloc_page(drm_device_t *dev)
+{
+ unsigned long address;
+
+ address = __get_free_page(GFP_KERNEL);
+ if(address == 0UL)
+ return 0;
+
+ atomic_inc(&virt_to_page(address)->count);
+ set_bit(PG_locked, &virt_to_page(address)->flags);
+
+ return address;
+}
+
+static void i810_free_page(drm_device_t *dev, unsigned long page)
+{
+ if(page == 0UL)
+ return;
+
+ atomic_dec(&virt_to_page(page)->count);
+ clear_bit(PG_locked, &virt_to_page(page)->flags);
+ wake_up(&virt_to_page(page)->wait);
+ free_page(page);
+ return;
+}
+
+static int i810_dma_cleanup(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ if(dev->dev_private) {
+ int i;
+ drm_i810_private_t *dev_priv =
+ (drm_i810_private_t *) dev->dev_private;
+
+ if(dev_priv->ring.virtual_start) {
+ drm_ioremapfree((void *) dev_priv->ring.virtual_start,
+ dev_priv->ring.Size, dev);
+ }
+ if(dev_priv->hw_status_page != 0UL) {
+ i810_free_page(dev, dev_priv->hw_status_page);
+ /* Need to rewrite hardware status page */
+ I810_WRITE(0x02080, 0x1ffff000);
+ }
+ drm_free(dev->dev_private, sizeof(drm_i810_private_t),
+ DRM_MEM_DRIVER);
+ dev->dev_private = NULL;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ drm_ioremapfree(buf_priv->kernel_virtual,
+ buf->total, dev);
+ }
+ }
+ return 0;
+}
+
+static int i810_wait_ring(drm_device_t *dev, int n)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_ring_buffer_t *ring = &(dev_priv->ring);
+ int iters = 0;
+ unsigned long end;
+ unsigned int last_head = I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
+
+ end = jiffies + (HZ*3);
+ while (ring->space < n) {
+ int i;
+
+ ring->head = I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
+ ring->space = ring->head - (ring->tail+8);
+ if (ring->space < 0) ring->space += ring->Size;
+
+ if (ring->head != last_head)
+ end = jiffies + (HZ*3);
+
+ iters++;
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("space: %d wanted %d\n", ring->space, n);
+ DRM_ERROR("lockup\n");
+ goto out_wait_ring;
+ }
+
+ for (i = 0 ; i < 2000 ; i++) ;
+ }
+
+out_wait_ring:
+ return iters;
+}
+
+static void i810_kernel_lost_context(drm_device_t *dev)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_ring_buffer_t *ring = &(dev_priv->ring);
+
+ ring->head = I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
+ ring->tail = I810_READ(LP_RING + RING_TAIL);
+ ring->space = ring->head - (ring->tail+8);
+ if (ring->space < 0) ring->space += ring->Size;
+}
+
+static int i810_freelist_init(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ int my_idx = 24;
+ u32 *hw_status = (u32 *)(dev_priv->hw_status_page + my_idx);
+ int i;
+
+ if(dma->buf_count > 1019) {
+ /* Not enough space in the status page for the freelist */
+ return -EINVAL;
+ }
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+ buf_priv->in_use = hw_status++;
+ buf_priv->my_use_idx = my_idx;
+ my_idx += 4;
+
+ *buf_priv->in_use = I810_BUF_FREE;
+
+ buf_priv->kernel_virtual = drm_ioremap(buf->bus_address,
+ buf->total, dev);
+ }
+ return 0;
+}
+
+static int i810_dma_initialize(drm_device_t *dev,
+ drm_i810_private_t *dev_priv,
+ drm_i810_init_t *init)
+{
+ drm_map_t *sarea_map;
+
+ dev->dev_private = (void *) dev_priv;
+ memset(dev_priv, 0, sizeof(drm_i810_private_t));
+
+ if (init->ring_map_idx >= dev->map_count ||
+ init->buffer_map_idx >= dev->map_count) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("ring_map or buffer_map are invalid\n");
+ return -EINVAL;
+ }
+
+ dev_priv->ring_map_idx = init->ring_map_idx;
+ dev_priv->buffer_map_idx = init->buffer_map_idx;
+ sarea_map = dev->maplist[0];
+ dev_priv->sarea_priv = (drm_i810_sarea_t *)
+ ((u8 *)sarea_map->handle +
+ init->sarea_priv_offset);
+
+ atomic_set(&dev_priv->flush_done, 0);
+ init_waitqueue_head(&dev_priv->flush_queue);
+
+ dev_priv->ring.Start = init->ring_start;
+ dev_priv->ring.End = init->ring_end;
+ dev_priv->ring.Size = init->ring_size;
+
+ dev_priv->ring.virtual_start = drm_ioremap(dev->agp->base +
+ init->ring_start,
+ init->ring_size, dev);
+
+ dev_priv->ring.tail_mask = dev_priv->ring.Size - 1;
+
+ if (dev_priv->ring.virtual_start == NULL) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("can not ioremap virtual address for"
+ " ring buffer\n");
+ return -ENOMEM;
+ }
+
+ dev_priv->w = init->w;
+ dev_priv->h = init->h;
+ dev_priv->pitch = init->pitch;
+ dev_priv->back_offset = init->back_offset;
+ dev_priv->depth_offset = init->depth_offset;
+
+ dev_priv->front_di1 = init->front_offset | init->pitch_bits;
+ dev_priv->back_di1 = init->back_offset | init->pitch_bits;
+ dev_priv->zi1 = init->depth_offset | init->pitch_bits;
+
+
+ /* Program Hardware Status Page */
+ dev_priv->hw_status_page = i810_alloc_page(dev);
+ memset((void *) dev_priv->hw_status_page, 0, PAGE_SIZE);
+ if(dev_priv->hw_status_page == 0UL) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("Can not allocate hardware status page\n");
+ return -ENOMEM;
+ }
+ DRM_DEBUG("hw status page @ %lx\n", dev_priv->hw_status_page);
+
+ I810_WRITE(0x02080, virt_to_bus((void *)dev_priv->hw_status_page));
+ DRM_DEBUG("Enabled hardware status page\n");
+
+ /* Now we need to init our freelist */
+ if(i810_freelist_init(dev) != 0) {
+ i810_dma_cleanup(dev);
+ DRM_ERROR("Not enough space in the status page for"
+ " the freelist\n");
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+int i810_dma_init(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_private_t *dev_priv;
+ drm_i810_init_t init;
+ int retcode = 0;
+
+ if (copy_from_user(&init, (drm_i810_init_t *)arg, sizeof(init)))
+ return -EFAULT;
+
+ switch(init.func) {
+ case I810_INIT_DMA:
+ dev_priv = drm_alloc(sizeof(drm_i810_private_t),
+ DRM_MEM_DRIVER);
+ if(dev_priv == NULL) return -ENOMEM;
+ retcode = i810_dma_initialize(dev, dev_priv, &init);
+ break;
+ case I810_CLEANUP_DMA:
+ retcode = i810_dma_cleanup(dev);
+ break;
+ default:
+ retcode = -EINVAL;
+ break;
+ }
+
+ return retcode;
+}
+
+
+
+/* Most efficient way to verify state for the i810 is as it is
+ * emitted. Non-conformant state is silently dropped.
+ *
+ * Use 'volatile' & local var tmp to force the emitted values to be
+ * identical to the verified ones.
+ */
+static void i810EmitContextVerified( drm_device_t *dev,
+ volatile unsigned int *code )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ int i, j = 0;
+ unsigned int tmp;
+ RING_LOCALS;
+
+ BEGIN_LP_RING( I810_CTX_SETUP_SIZE );
+
+ OUT_RING( GFX_OP_COLOR_FACTOR );
+ OUT_RING( code[I810_CTXREG_CF1] );
+
+ OUT_RING( GFX_OP_STIPPLE );
+ OUT_RING( code[I810_CTXREG_ST1] );
+
+ for ( i = 4 ; i < I810_CTX_SETUP_SIZE ; i++ ) {
+ tmp = code[i];
+
+ if ((tmp & (7<<29)) == (3<<29) &&
+ (tmp & (0x1f<<24)) < (0x1d<<24))
+ {
+ OUT_RING( tmp );
+ j++;
+ }
+ }
+
+ if (j & 1)
+ OUT_RING( 0 );
+
+ ADVANCE_LP_RING();
+}
+
+static void i810EmitTexVerified( drm_device_t *dev,
+ volatile unsigned int *code )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ int i, j = 0;
+ unsigned int tmp;
+ RING_LOCALS;
+
+ BEGIN_LP_RING( I810_TEX_SETUP_SIZE );
+
+ OUT_RING( GFX_OP_MAP_INFO );
+ OUT_RING( code[I810_TEXREG_MI1] );
+ OUT_RING( code[I810_TEXREG_MI2] );
+ OUT_RING( code[I810_TEXREG_MI3] );
+
+ for ( i = 4 ; i < I810_TEX_SETUP_SIZE ; i++ ) {
+ tmp = code[i];
+
+ if ((tmp & (7<<29)) == (3<<29) &&
+ (tmp & (0x1f<<24)) < (0x1d<<24))
+ {
+ OUT_RING( tmp );
+ j++;
+ }
+ }
+
+ if (j & 1)
+ OUT_RING( 0 );
+
+ ADVANCE_LP_RING();
+}
+
+
+/* Need to do some additional checking when setting the dest buffer.
+ */
+static void i810EmitDestVerified( drm_device_t *dev,
+ volatile unsigned int *code )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ unsigned int tmp;
+ RING_LOCALS;
+
+ BEGIN_LP_RING( I810_DEST_SETUP_SIZE + 2 );
+
+ tmp = code[I810_DESTREG_DI1];
+ if (tmp == dev_priv->front_di1 || tmp == dev_priv->back_di1) {
+ OUT_RING( CMD_OP_DESTBUFFER_INFO );
+ OUT_RING( tmp );
+ } else
+ DRM_DEBUG("bad di1 %x (allow %x or %x)\n",
+ tmp, dev_priv->front_di1, dev_priv->back_di1);
+
+ /* invariant:
+ */
+ OUT_RING( CMD_OP_Z_BUFFER_INFO );
+ OUT_RING( dev_priv->zi1 );
+
+ OUT_RING( GFX_OP_DESTBUFFER_VARS );
+ OUT_RING( code[I810_DESTREG_DV1] );
+
+ OUT_RING( GFX_OP_DRAWRECT_INFO );
+ OUT_RING( code[I810_DESTREG_DR1] );
+ OUT_RING( code[I810_DESTREG_DR2] );
+ OUT_RING( code[I810_DESTREG_DR3] );
+ OUT_RING( code[I810_DESTREG_DR4] );
+ OUT_RING( 0 );
+
+ ADVANCE_LP_RING();
+}
+
+
+
+static void i810EmitState( drm_device_t *dev )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ unsigned int dirty = sarea_priv->dirty;
+
+ if (dirty & I810_UPLOAD_BUFFERS) {
+ i810EmitDestVerified( dev, sarea_priv->BufferState );
+ sarea_priv->dirty &= ~I810_UPLOAD_BUFFERS;
+ }
+
+ if (dirty & I810_UPLOAD_CTX) {
+ i810EmitContextVerified( dev, sarea_priv->ContextState );
+ sarea_priv->dirty &= ~I810_UPLOAD_CTX;
+ }
+
+ if (dirty & I810_UPLOAD_TEX0) {
+ i810EmitTexVerified( dev, sarea_priv->TexState[0] );
+ sarea_priv->dirty &= ~I810_UPLOAD_TEX0;
+ }
+
+ if (dirty & I810_UPLOAD_TEX1) {
+ i810EmitTexVerified( dev, sarea_priv->TexState[1] );
+ sarea_priv->dirty &= ~I810_UPLOAD_TEX1;
+ }
+}
+
+
+
+/* need to verify
+ */
+static void i810_dma_dispatch_clear( drm_device_t *dev, int flags,
+ unsigned int clear_color,
+ unsigned int clear_zval )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ int nbox = sarea_priv->nbox;
+ drm_clip_rect_t *pbox = sarea_priv->boxes;
+ int pitch = dev_priv->pitch;
+ int cpp = 2;
+ int i;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+ for (i = 0 ; i < nbox ; i++, pbox++) {
+ unsigned int x = pbox->x1;
+ unsigned int y = pbox->y1;
+ unsigned int width = (pbox->x2 - x) * cpp;
+ unsigned int height = pbox->y2 - y;
+ unsigned int start = y * pitch + x * cpp;
+
+ if (pbox->x1 > pbox->x2 ||
+ pbox->y1 > pbox->y2 ||
+ pbox->x2 > dev_priv->w ||
+ pbox->y2 > dev_priv->h)
+ continue;
+
+ if ( flags & I810_FRONT ) {
+ DRM_DEBUG("clear front\n");
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT |
+ BR00_OP_COLOR_BLT | 0x3 );
+ OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch );
+ OUT_RING( (height << 16) | width );
+ OUT_RING( start );
+ OUT_RING( clear_color );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+ }
+
+ if ( flags & I810_BACK ) {
+ DRM_DEBUG("clear back\n");
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT |
+ BR00_OP_COLOR_BLT | 0x3 );
+ OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch );
+ OUT_RING( (height << 16) | width );
+ OUT_RING( dev_priv->back_offset + start );
+ OUT_RING( clear_color );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+ }
+
+ if ( flags & I810_DEPTH ) {
+ DRM_DEBUG("clear depth\n");
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT |
+ BR00_OP_COLOR_BLT | 0x3 );
+ OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch );
+ OUT_RING( (height << 16) | width );
+ OUT_RING( dev_priv->depth_offset + start );
+ OUT_RING( clear_zval );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+ }
+ }
+}
+
+static void i810_dma_dispatch_swap( drm_device_t *dev )
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ int nbox = sarea_priv->nbox;
+ drm_clip_rect_t *pbox = sarea_priv->boxes;
+ int pitch = dev_priv->pitch;
+ int cpp = 2;
+ int ofs = dev_priv->back_offset;
+ int i;
+ RING_LOCALS;
+
+ DRM_DEBUG("swapbuffers\n");
+
+ i810_kernel_lost_context(dev);
+
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+ for (i = 0 ; i < nbox; i++, pbox++)
+ {
+ unsigned int w = pbox->x2 - pbox->x1;
+ unsigned int h = pbox->y2 - pbox->y1;
+ unsigned int dst = pbox->x1*cpp + pbox->y1*pitch;
+ unsigned int start = ofs + dst;
+
+ if (pbox->x1 > pbox->x2 ||
+ pbox->y1 > pbox->y2 ||
+ pbox->x2 > dev_priv->w ||
+ pbox->y2 > dev_priv->h)
+ continue;
+
+ DRM_DEBUG("dispatch swap %d,%d-%d,%d!\n",
+ pbox[i].x1, pbox[i].y1,
+ pbox[i].x2, pbox[i].y2);
+
+ BEGIN_LP_RING( 6 );
+ OUT_RING( BR00_BITBLT_CLIENT | BR00_OP_SRC_COPY_BLT | 0x4 );
+ OUT_RING( pitch | (0xCC << 16));
+ OUT_RING( (h << 16) | (w * cpp));
+ OUT_RING( dst );
+ OUT_RING( pitch );
+ OUT_RING( start );
+ ADVANCE_LP_RING();
+ }
+}
+
+
+static void i810_dma_dispatch_vertex(drm_device_t *dev,
+ drm_buf_t *buf,
+ int discard,
+ int used)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+ drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ drm_clip_rect_t *box = sarea_priv->boxes;
+ int nbox = sarea_priv->nbox;
+ unsigned long address = (unsigned long)buf->bus_address;
+ unsigned long start = address - dev->agp->base;
+ int i = 0, u;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+ if (discard) {
+ u = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT,
+ I810_BUF_HARDWARE);
+ if(u != I810_BUF_CLIENT) {
+ DRM_DEBUG("vertex dispatch: buffer not owned by client\n");
+ }
+ }
+
+ if (used > 4*1024)
+ used = 0;
+
+ if (sarea_priv->dirty)
+ i810EmitState( dev );
+
+ DRM_DEBUG("dispatch vertex addr 0x%lx, used 0x%x nbox %d\n",
+ address, used, nbox);
+
+ dev_priv->counter++;
+ DRM_DEBUG( "dispatch counter : %ld\n", dev_priv->counter);
+ DRM_DEBUG( "i810_dma_dispatch\n");
+ DRM_DEBUG( "start : %lx\n", start);
+ DRM_DEBUG( "used : %d\n", used);
+ DRM_DEBUG( "start + used - 4 : %ld\n", start + used - 4);
+
+ if (buf_priv->currently_mapped == I810_BUF_MAPPED) {
+ *(u32 *)buf_priv->virtual = (GFX_OP_PRIMITIVE |
+ sarea_priv->vertex_prim |
+ ((used/4)-2));
+
+ if (used & 4) {
+ *(u32 *)((unsigned long)buf_priv->virtual + used) = 0;
+ used += 4;
+ }
+
+ i810_unmap_buffer(buf);
+ }
+
+ if (used) {
+ do {
+ if (i < nbox) {
+ BEGIN_LP_RING(4);
+ OUT_RING( GFX_OP_SCISSOR | SC_UPDATE_SCISSOR |
+ SC_ENABLE );
+ OUT_RING( GFX_OP_SCISSOR_INFO );
+ OUT_RING( box[i].x1 | (box[i].y1<<16) );
+ OUT_RING( (box[i].x2-1) | ((box[i].y2-1)<<16) );
+ ADVANCE_LP_RING();
+ }
+
+ BEGIN_LP_RING(4);
+ OUT_RING( CMD_OP_BATCH_BUFFER );
+ OUT_RING( start | BB1_PROTECTED );
+ OUT_RING( start + used - 4 );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+
+ } while (++i < nbox);
+ }
+
+ BEGIN_LP_RING(10);
+ OUT_RING( CMD_STORE_DWORD_IDX );
+ OUT_RING( 20 );
+ OUT_RING( dev_priv->counter );
+ OUT_RING( 0 );
+
+ if (discard) {
+ OUT_RING( CMD_STORE_DWORD_IDX );
+ OUT_RING( buf_priv->my_use_idx );
+ OUT_RING( I810_BUF_FREE );
+ OUT_RING( 0 );
+ }
+
+ OUT_RING( CMD_REPORT_HEAD );
+ OUT_RING( 0 );
+ ADVANCE_LP_RING();
+}
+
+
+/* Interrupts are only for flushing */
+static void i810_dma_service(int irq, void *device, struct pt_regs *regs)
+{
+ drm_device_t *dev = (drm_device_t *)device;
+ u16 temp;
+
+ atomic_inc(&dev->total_irq);
+ temp = I810_READ16(I810REG_INT_IDENTITY_R);
+ temp = temp & ~(0x6000);
+ if(temp != 0) I810_WRITE16(I810REG_INT_IDENTITY_R,
+ temp); /* Clear all interrupts */
+ else
+ return;
+
+ queue_task(&dev->tq, &tq_immediate);
+ mark_bh(IMMEDIATE_BH);
+}
+
+static void i810_dma_task_queue(void *device)
+{
+ drm_device_t *dev = (drm_device_t *) device;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+
+ atomic_set(&dev_priv->flush_done, 1);
+ wake_up_interruptible(&dev_priv->flush_queue);
+}
+
+int i810_irq_install(drm_device_t *dev, int irq)
+{
+ int retcode;
+ u16 temp;
+
+ if (!irq) return -EINVAL;
+
+ down(&dev->struct_sem);
+ if (dev->irq) {
+ up(&dev->struct_sem);
+ return -EBUSY;
+ }
+ dev->irq = irq;
+ up(&dev->struct_sem);
+
+ DRM_DEBUG( "Interrupt Install : %d\n", irq);
+
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+
+ dev->dma->next_buffer = NULL;
+ dev->dma->next_queue = NULL;
+ dev->dma->this_buffer = NULL;
+
+ INIT_LIST_HEAD(&dev->tq.list);
+ dev->tq.sync = 0;
+ dev->tq.routine = i810_dma_task_queue;
+ dev->tq.data = dev;
+
+ /* Before installing handler */
+ temp = I810_READ16(I810REG_HWSTAM);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_HWSTAM, temp);
+
+ temp = I810_READ16(I810REG_INT_MASK_R);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_INT_MASK_R, temp); /* Unmask interrupts */
+ temp = I810_READ16(I810REG_INT_ENABLE_R);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_INT_ENABLE_R, temp); /* Disable all interrupts */
+
+ /* Install handler */
+ if ((retcode = request_irq(dev->irq,
+ i810_dma_service,
+ SA_SHIRQ,
+ dev->devname,
+ dev))) {
+ down(&dev->struct_sem);
+ dev->irq = 0;
+ up(&dev->struct_sem);
+ return retcode;
+ }
+ temp = I810_READ16(I810REG_INT_ENABLE_R);
+ temp = temp & 0x6000;
+ temp = temp | 0x0003;
+ I810_WRITE16(I810REG_INT_ENABLE_R,
+ temp); /* Enable bp & user interrupts */
+ return 0;
+}
+
+int i810_irq_uninstall(drm_device_t *dev)
+{
+ int irq;
+ u16 temp;
+
+
+/* return 0; */
+
+ down(&dev->struct_sem);
+ irq = dev->irq;
+ dev->irq = 0;
+ up(&dev->struct_sem);
+
+ if (!irq) return -EINVAL;
+
+ DRM_DEBUG( "Interrupt UnInstall: %d\n", irq);
+
+ temp = I810_READ16(I810REG_INT_IDENTITY_R);
+ temp = temp & ~(0x6000);
+ if(temp != 0) I810_WRITE16(I810REG_INT_IDENTITY_R,
+ temp); /* Clear all interrupts */
+
+ temp = I810_READ16(I810REG_INT_ENABLE_R);
+ temp = temp & 0x6000;
+ I810_WRITE16(I810REG_INT_ENABLE_R,
+ temp); /* Disable all interrupts */
+
+ free_irq(irq, dev);
+
+ return 0;
+}
+
+int i810_control(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+ int retcode;
+
+ DRM_DEBUG( "i810_control\n");
+
+ if (copy_from_user(&ctl, (drm_control_t *)arg, sizeof(ctl)))
+ return -EFAULT;
+
+ switch (ctl.func) {
+ case DRM_INST_HANDLER:
+ if ((retcode = i810_irq_install(dev, ctl.irq)))
+ return retcode;
+ break;
+ case DRM_UNINST_HANDLER:
+ if ((retcode = i810_irq_uninstall(dev)))
+ return retcode;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static inline void i810_dma_emit_flush(drm_device_t *dev)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ BEGIN_LP_RING(2);
+ OUT_RING( CMD_REPORT_HEAD );
+ OUT_RING( GFX_OP_USER_INTERRUPT );
+ ADVANCE_LP_RING();
+
+/* i810_wait_ring( dev, dev_priv->ring.Size - 8 ); */
+/* atomic_set(&dev_priv->flush_done, 1); */
+/* wake_up_interruptible(&dev_priv->flush_queue); */
+}
+
+static inline void i810_dma_quiescent_emit(drm_device_t *dev)
+{
+ drm_i810_private_t *dev_priv = dev->dev_private;
+ RING_LOCALS;
+
+ i810_kernel_lost_context(dev);
+
+ BEGIN_LP_RING(4);
+ OUT_RING( INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE );
+ OUT_RING( CMD_REPORT_HEAD );
+ OUT_RING( 0 );
+ OUT_RING( GFX_OP_USER_INTERRUPT );
+ ADVANCE_LP_RING();
+
+/* i810_wait_ring( dev, dev_priv->ring.Size - 8 ); */
+/* atomic_set(&dev_priv->flush_done, 1); */
+/* wake_up_interruptible(&dev_priv->flush_queue); */
+}
+
+static void i810_dma_quiescent(drm_device_t *dev)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ unsigned long end;
+
+ if (dev_priv == NULL) {
+ return;
+ }
+ atomic_set(&dev_priv->flush_done, 0);
+ add_wait_queue(&dev_priv->flush_queue, &entry);
+ end = jiffies + (HZ*3);
+
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ i810_dma_quiescent_emit(dev);
+ if (atomic_read(&dev_priv->flush_done) == 1) break;
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("lockup\n");
+ break;
+ }
+ schedule_timeout(HZ*3);
+ if (signal_pending(current)) {
+ break;
+ }
+ }
+
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev_priv->flush_queue, &entry);
+
+ return;
+}
+
+static int i810_flush_queue(drm_device_t *dev)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ drm_device_dma_t *dma = dev->dma;
+ unsigned long end;
+ int i, ret = 0;
+
+ if (dev_priv == NULL) {
+ return 0;
+ }
+ atomic_set(&dev_priv->flush_done, 0);
+ add_wait_queue(&dev_priv->flush_queue, &entry);
+ end = jiffies + (HZ*3);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ i810_dma_emit_flush(dev);
+ if (atomic_read(&dev_priv->flush_done) == 1) break;
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("lockup\n");
+ break;
+ }
+ schedule_timeout(HZ*3);
+ if (signal_pending(current)) {
+ ret = -EINTR; /* Can't restart */
+ break;
+ }
+ }
+
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev_priv->flush_queue, &entry);
+
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+ int used = cmpxchg(buf_priv->in_use, I810_BUF_HARDWARE,
+ I810_BUF_FREE);
+
+ if (used == I810_BUF_HARDWARE)
+ DRM_DEBUG("reclaimed from HARDWARE\n");
+ if (used == I810_BUF_CLIENT)
+ DRM_DEBUG("still on client\n");
+ }
+
+ return ret;
+}
+
+/* Must be called with the lock held */
+void i810_reclaim_buffers(drm_device_t *dev, pid_t pid)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+
+ if (!dma) return;
+ if (!dev->dev_private) return;
+ if (!dma->buflist) return;
+
+ i810_flush_queue(dev);
+
+ for (i = 0; i < dma->buf_count; i++) {
+ drm_buf_t *buf = dma->buflist[ i ];
+ drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+ if (buf->pid == pid && buf_priv) {
+ int used = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT,
+ I810_BUF_FREE);
+
+ if (used == I810_BUF_CLIENT)
+ DRM_DEBUG("reclaimed from client\n");
+ if (buf_priv->currently_mapped == I810_BUF_MAPPED)
+ buf_priv->currently_mapped = I810_BUF_UNMAPPED;
+ }
+ }
+}
+
+int i810_lock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_lock_t lock;
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid, dev->lock.hw_lock->lock,
+ lock.flags);
+
+ if (lock.context < 0) {
+ return -EINVAL;
+ }
+ /* Only one queue:
+ */
+
+ if (!ret) {
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ lock.context)) {
+ dev->lock.pid = current->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ DRM_DEBUG("Calling lock schedule\n");
+ schedule();
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+ }
+
+ if (!ret) {
+ sigemptyset(&dev->sigmask);
+ sigaddset(&dev->sigmask, SIGSTOP);
+ sigaddset(&dev->sigmask, SIGTSTP);
+ sigaddset(&dev->sigmask, SIGTTIN);
+ sigaddset(&dev->sigmask, SIGTTOU);
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+ if (lock.flags & _DRM_LOCK_QUIESCENT) {
+ DRM_DEBUG("_DRM_LOCK_QUIESCENT\n");
+ i810_dma_quiescent(dev);
+ }
+ }
+ DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+ return ret;
+}
+
+int i810_flush_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("i810_flush_ioctl\n");
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_flush_ioctl called without lock held\n");
+ return -EINVAL;
+ }
+
+ i810_flush_queue(dev);
+ return 0;
+}
+
+
+int i810_dma_vertex(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+ drm_i810_vertex_t vertex;
+
+ if (copy_from_user(&vertex, (drm_i810_vertex_t *)arg, sizeof(vertex)))
+ return -EFAULT;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_dma_vertex called without lock held\n");
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("i810 dma vertex, idx %d used %d discard %d\n",
+ vertex.idx, vertex.used, vertex.discard);
+
+ if (vertex.idx < 0 || vertex.idx >= dma->buf_count) return -EINVAL;
+
+ i810_dma_dispatch_vertex( dev,
+ dma->buflist[ vertex.idx ],
+ vertex.discard, vertex.used );
+
+ atomic_add(vertex.used, &dma->total_bytes);
+ atomic_inc(&dma->total_dmas);
+ sarea_priv->last_enqueue = dev_priv->counter-1;
+ sarea_priv->last_dispatch = (int) hw_status[5];
+
+ return 0;
+}
+
+
+
+int i810_clear_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_clear_t clear;
+
+ if (copy_from_user(&clear, (drm_i810_clear_t *)arg, sizeof(clear)))
+ return -EFAULT;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_clear_bufs called without lock held\n");
+ return -EINVAL;
+ }
+
+ i810_dma_dispatch_clear( dev, clear.flags,
+ clear.clear_color,
+ clear.clear_depth );
+ return 0;
+}
+
+int i810_swap_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("i810_swap_bufs\n");
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_swap_buf called without lock held\n");
+ return -EINVAL;
+ }
+
+ i810_dma_dispatch_swap( dev );
+ return 0;
+}
+
+int i810_getage(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+
+ sarea_priv->last_dispatch = (int) hw_status[5];
+ return 0;
+}
+
+int i810_getbuf(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode = 0;
+ drm_i810_dma_t d;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+
+ DRM_DEBUG("getbuf\n");
+ if (copy_from_user(&d, (drm_i810_dma_t *)arg, sizeof(d)))
+ return -EFAULT;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_dma called without lock held\n");
+ return -EINVAL;
+ }
+
+ d.granted = 0;
+
+ retcode = i810_dma_get_buffer(dev, &d, filp);
+
+ DRM_DEBUG("i810_dma: %d returning %d, granted = %d\n",
+ current->pid, retcode, d.granted);
+
+ if (copy_to_user((drm_dma_t *)arg, &d, sizeof(d)))
+ return -EFAULT;
+ sarea_priv->last_dispatch = (int) hw_status[5];
+
+ return retcode;
+}
+
+int i810_copybuf(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_i810_copy_t d;
+ drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+ u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+ drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+ dev_priv->sarea_priv;
+ drm_buf_t *buf;
+ drm_i810_buf_priv_t *buf_priv;
+ drm_device_dma_t *dma = dev->dma;
+
+ if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("i810_dma called without lock held\n");
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&d, (drm_i810_copy_t *)arg, sizeof(d)))
+ return -EFAULT;
+
+ if (d.idx < 0 || d.idx >= dma->buf_count) return -EINVAL;
+ buf = dma->buflist[ d.idx ];
+ buf_priv = buf->dev_private;
+ if (buf_priv->currently_mapped != I810_BUF_MAPPED) return -EPERM;
+
+ /* Prevent userspace from copying past the end of the buffer
+ into kernel memory. */
+ if (d.used < 0 || d.used > buf->total)
+ return -EINVAL;
+
+ if (copy_from_user(buf_priv->virtual, d.address, d.used))
+ return -EFAULT;
+
+ sarea_priv->last_dispatch = (int) hw_status[5];
+
+ return 0;
+}
+
+int i810_docopy(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ if (VM_DONTCOPY == 0) return 1;
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drm.h linux-2.4.13-lia/drivers/char/drm-4.0/i810_drm.h
--- linux-2.4.13/drivers/char/drm-4.0/i810_drm.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drm.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,194 @@
+#ifndef _I810_DRM_H_
+#define _I810_DRM_H_
+
+/* WARNING: These defines must be the same as what the Xserver uses.
+ * If you change them, you must change the defines in the Xserver.
+ */
+
+#ifndef _I810_DEFINES_
+#define _I810_DEFINES_
+
+#define I810_DMA_BUF_ORDER 12
+#define I810_DMA_BUF_SZ (1<<I810_DMA_BUF_ORDER)
+#define I810_DMA_BUF_NR 256
+#define I810_NR_SAREA_CLIPRECTS 8
+
+/* Each region is a minimum of 64k, and there are at most 64 of them.
+ */
+#define I810_NR_TEX_REGIONS 64
+#define I810_LOG_MIN_TEX_REGION_SIZE 16
+#endif
+
+#define I810_UPLOAD_TEX0IMAGE 0x1 /* handled clientside */
+#define I810_UPLOAD_TEX1IMAGE 0x2 /* handled clientside */
+#define I810_UPLOAD_CTX 0x4
+#define I810_UPLOAD_BUFFERS 0x8
+#define I810_UPLOAD_TEX0 0x10
+#define I810_UPLOAD_TEX1 0x20
+#define I810_UPLOAD_CLIPRECTS 0x40
+
+
+/* Indices into buf.Setup where various bits of state are mirrored per
+ * context and per buffer. These can be fired at the card as a unit,
+ * or in a piecewise fashion as required.
+ */
+
+/* Destbuffer state
+ * - backbuffer linear offset and pitch -- invariant in the current dri
+ * - zbuffer linear offset and pitch -- also invariant
+ * - drawing origin in back and depth buffers.
+ *
+ * Keep the depth/back buffer state here to accommodate private buffers
+ * in the future.
+ */
+#define I810_DESTREG_DI0 0 /* CMD_OP_DESTBUFFER_INFO (2 dwords) */
+#define I810_DESTREG_DI1 1
+#define I810_DESTREG_DV0 2 /* GFX_OP_DESTBUFFER_VARS (2 dwords) */
+#define I810_DESTREG_DV1 3
+#define I810_DESTREG_DR0 4 /* GFX_OP_DRAWRECT_INFO (4 dwords) */
+#define I810_DESTREG_DR1 5
+#define I810_DESTREG_DR2 6
+#define I810_DESTREG_DR3 7
+#define I810_DESTREG_DR4 8
+#define I810_DEST_SETUP_SIZE 10
+
+/* Context state
+ */
+#define I810_CTXREG_CF0 0 /* GFX_OP_COLOR_FACTOR */
+#define I810_CTXREG_CF1 1
+#define I810_CTXREG_ST0 2 /* GFX_OP_STIPPLE */
+#define I810_CTXREG_ST1 3
+#define I810_CTXREG_VF 4 /* GFX_OP_VERTEX_FMT */
+#define I810_CTXREG_MT 5 /* GFX_OP_MAP_TEXELS */
+#define I810_CTXREG_MC0 6 /* GFX_OP_MAP_COLOR_STAGES - stage 0 */
+#define I810_CTXREG_MC1 7 /* GFX_OP_MAP_COLOR_STAGES - stage 1 */
+#define I810_CTXREG_MC2 8 /* GFX_OP_MAP_COLOR_STAGES - stage 2 */
+#define I810_CTXREG_MA0 9 /* GFX_OP_MAP_ALPHA_STAGES - stage 0 */
+#define I810_CTXREG_MA1 10 /* GFX_OP_MAP_ALPHA_STAGES - stage 1 */
+#define I810_CTXREG_MA2 11 /* GFX_OP_MAP_ALPHA_STAGES - stage 2 */
+#define I810_CTXREG_SDM 12 /* GFX_OP_SRC_DEST_MONO */
+#define I810_CTXREG_FOG 13 /* GFX_OP_FOG_COLOR */
+#define I810_CTXREG_B1 14 /* GFX_OP_BOOL_1 */
+#define I810_CTXREG_B2 15 /* GFX_OP_BOOL_2 */
+#define I810_CTXREG_LCS 16 /* GFX_OP_LINEWIDTH_CULL_SHADE_MODE */
+#define I810_CTXREG_PV 17 /* GFX_OP_PV_RULE -- Invariant! */
+#define I810_CTXREG_ZA 18 /* GFX_OP_ZBIAS_ALPHAFUNC */
+#define I810_CTXREG_AA 19 /* GFX_OP_ANTIALIAS */
+#define I810_CTX_SETUP_SIZE 20
+
+/* Texture state (per tex unit)
+ */
+#define I810_TEXREG_MI0 0 /* GFX_OP_MAP_INFO (4 dwords) */
+#define I810_TEXREG_MI1 1
+#define I810_TEXREG_MI2 2
+#define I810_TEXREG_MI3 3
+#define I810_TEXREG_MF 4 /* GFX_OP_MAP_FILTER */
+#define I810_TEXREG_MLC 5 /* GFX_OP_MAP_LOD_CTL */
+#define I810_TEXREG_MLL 6 /* GFX_OP_MAP_LOD_LIMITS */
+#define I810_TEXREG_MCS 7 /* GFX_OP_MAP_COORD_SETS ??? */
+#define I810_TEX_SETUP_SIZE 8
+
+#define I810_FRONT 0x1
+#define I810_BACK 0x2
+#define I810_DEPTH 0x4
+
+
+typedef struct _drm_i810_init {
+ enum {
+ I810_INIT_DMA = 0x01,
+ I810_CLEANUP_DMA = 0x02
+ } func;
+ int ring_map_idx;
+ int buffer_map_idx;
+ int sarea_priv_offset;
+ unsigned int ring_start;
+ unsigned int ring_end;
+ unsigned int ring_size;
+ unsigned int front_offset;
+ unsigned int back_offset;
+ unsigned int depth_offset;
+ unsigned int w;
+ unsigned int h;
+ unsigned int pitch;
+ unsigned int pitch_bits;
+} drm_i810_init_t;
+
+/* Warning: If you change the SAREA structure you must change the Xserver
+ * structure as well */
+
+typedef struct _drm_i810_tex_region {
+ unsigned char next, prev; /* indices to form a circular LRU */
+ unsigned char in_use; /* owned by a client, or free? */
+ int age; /* tracked by clients to update local LRU's */
+} drm_i810_tex_region_t;
+
+typedef struct _drm_i810_sarea {
+ unsigned int ContextState[I810_CTX_SETUP_SIZE];
+ unsigned int BufferState[I810_DEST_SETUP_SIZE];
+ unsigned int TexState[2][I810_TEX_SETUP_SIZE];
+ unsigned int dirty;
+
+ unsigned int nbox;
+ drm_clip_rect_t boxes[I810_NR_SAREA_CLIPRECTS];
+
+ /* Maintain an LRU of contiguous regions of texture space. If
+ * you think you own a region of texture memory, and it has an
+ * age different to the one you set, then you are mistaken and
+ * it has been stolen by another client. If global texAge
+ * hasn't changed, there is no need to walk the list.
+ *
+ * These regions can be used as a proxy for the fine-grained
+ * texture information of other clients - by maintaining them
+ * in the same lru which is used to age their own textures,
+ * clients have an approximate lru for the whole of global
+ * texture space, and can make informed decisions as to which
+ * areas to kick out. There is no need to choose whether to
+ * kick out your own texture or someone else's - simply eject
+ * them all in LRU order.
+ */
+
+ drm_i810_tex_region_t texList[I810_NR_TEX_REGIONS+1];
+ /* Last elt is sentinel */
+ int texAge; /* last time texture was uploaded */
+ int last_enqueue; /* last time a buffer was enqueued */
+ int last_dispatch; /* age of the most recently dispatched buffer */
+ int last_quiescent; /* */
+ int ctxOwner; /* last context to upload state */
+
+ int vertex_prim;
+
+} drm_i810_sarea_t;
+
+typedef struct _drm_i810_clear {
+ int clear_color;
+ int clear_depth;
+ int flags;
+} drm_i810_clear_t;
+
+
+
+/* These may be placeholders if we have more cliprects than
+ * I810_NR_SAREA_CLIPRECTS. In that case, the client sets discard to
+ * false, indicating that the buffer will be dispatched again with a
+ * new set of cliprects.
+ */
+typedef struct _drm_i810_vertex {
+ int idx; /* buffer index */
+ int used; /* nr bytes in use */
+ int discard; /* client is finished with the buffer? */
+} drm_i810_vertex_t;
+
+typedef struct _drm_i810_copy_t {
+ int idx; /* buffer index */
+ int used; /* nr bytes in use */
+ void *address; /* Address to copy from */
+} drm_i810_copy_t;
+
+typedef struct drm_i810_dma {
+ void *virtual;
+ int request_idx;
+ int request_size;
+ int granted;
+} drm_i810_dma_t;
+
+#endif /* _I810_DRM_H_ */
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/i810_drv.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,648 @@
+/* i810_drv.c -- I810 driver -*- linux-c -*-
+ * Created: Mon Dec 13 01:56:22 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#include <linux/config.h>
+#include "drmP.h"
+#include "i810_drv.h"
+
+#define I810_NAME "i810"
+#define I810_DESC "Intel I810"
+#define I810_DATE "20000928"
+#define I810_MAJOR 1
+#define I810_MINOR 1
+#define I810_PATCHLEVEL 0
+
+static drm_device_t i810_device;
+drm_ctx_t i810_res_ctx;
+
+static struct file_operations i810_fops = {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* This started being used during 2.4.0-test */
+ owner: THIS_MODULE,
+#endif
+ open: i810_open,
+ flush: drm_flush,
+ release: i810_release,
+ ioctl: i810_ioctl,
+ mmap: drm_mmap,
+ read: drm_read,
+ fasync: drm_fasync,
+ poll: drm_poll,
+};
+
+static struct miscdevice i810_misc = {
+ minor: MISC_DYNAMIC_MINOR,
+ name: I810_NAME,
+ fops: &i810_fops,
+};
+
+static drm_ioctl_desc_t i810_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { i810_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_CONTROL)] = { i810_control, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)] = { i810_addbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)] = { i810_markbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)] = { i810_infobufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)] = { i810_freebufs, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { i810_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { i810_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { i810_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { i810_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { i810_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { i810_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { i810_resctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { i810_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { i810_unlock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE)] = { drm_agp_acquire, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_RELEASE)] = { drm_agp_release, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ENABLE)] = { drm_agp_enable, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_INFO)] = { drm_agp_info, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ALLOC)] = { drm_agp_alloc, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_FREE)] = { drm_agp_free, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_BIND)] = { drm_agp_bind, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_UNBIND)] = { drm_agp_unbind, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_INIT)] = { i810_dma_init, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_VERTEX)] = { i810_dma_vertex, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_CLEAR)] = { i810_clear_bufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_FLUSH)] = { i810_flush_ioctl,1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_GETAGE)] = { i810_getage, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_GETBUF)] = { i810_getbuf, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_SWAP)] = { i810_swap_bufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_COPY)] = { i810_copybuf, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_I810_DOCOPY)] = { i810_docopy, 1, 0 },
+};
+
+#define I810_IOCTL_COUNT DRM_ARRAY_SIZE(i810_ioctls)
+
+#ifdef MODULE
+static char *i810 = NULL;
+#endif
+
+MODULE_AUTHOR("VA Linux Systems, Inc.");
+MODULE_DESCRIPTION("Intel I810");
+MODULE_PARM(i810, "s");
+
+#ifndef MODULE
+/* i810_options is called by the kernel to parse command-line options
+ * passed via the boot-loader (e.g., LILO). It calls the insmod option
+ * routine, drm_parse_drm.
+ */
+
+static int __init i810_options(char *str)
+{
+ drm_parse_options(str);
+ return 1;
+}
+
+__setup("i810=", i810_options);
+#endif
+
+static int i810_setup(drm_device_t *dev)
+{
+ int i;
+
+ atomic_set(&dev->ioctl_count, 0);
+ atomic_set(&dev->vma_count, 0);
+ dev->buf_use = 0;
+ atomic_set(&dev->buf_alloc, 0);
+
+ drm_dma_setup(dev);
+
+ atomic_set(&dev->total_open, 0);
+ atomic_set(&dev->total_close, 0);
+ atomic_set(&dev->total_ioctl, 0);
+ atomic_set(&dev->total_irq, 0);
+ atomic_set(&dev->total_ctx, 0);
+ atomic_set(&dev->total_locks, 0);
+ atomic_set(&dev->total_unlocks, 0);
+ atomic_set(&dev->total_contends, 0);
+ atomic_set(&dev->total_sleeps, 0);
+
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ dev->vmalist = NULL;
+ dev->lock.hw_lock = NULL;
+ init_waitqueue_head(&dev->lock.lock_queue);
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_timer(&dev->timer);
+ init_waitqueue_head(&dev->context_wait);
+#if DRM_DMA_HISTO
+ memset(&dev->histo, 0, sizeof(dev->histo));
+#endif
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head(&dev->buf_readers);
+ init_waitqueue_head(&dev->buf_writers);
+
+ DRM_DEBUG("\n");
+
+ /* The kernel's context could be created here, but is now created
+ in drm_dma_enqueue. This is more resource-efficient for
+ hardware that does not do DMA, but may mean that
+ drm_select_queue fails between the time the interrupt is
+ initialized and the time the queues are initialized. */
+
+ return 0;
+}
+
+
+static int i810_takedown(drm_device_t *dev)
+{
+ int i;
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_vma_entry_t *vma, *vma_next;
+
+ DRM_DEBUG("\n");
+
+ if (dev->irq) i810_irq_uninstall(dev);
+
+ down(&dev->struct_sem);
+ del_timer(&dev->timer);
+
+ if (dev->devname) {
+ drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+ dev->devname = NULL;
+ }
+
+ if (dev->unique) {
+ drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+ /* Clear pid list */
+ for (i = 0; i < DRM_HASH_SIZE; i++) {
+ for (pt = dev->magiclist[i].head; pt; pt = next) {
+ next = pt->next;
+ drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+ /* Clear AGP information */
+ if (dev->agp) {
+ drm_agp_mem_t *entry;
+ drm_agp_mem_t *nexte;
+
+ /* Remove AGP resources, but leave dev->agp
+ intact until i810_cleanup is called. */
+ for (entry = dev->agp->memory; entry; entry = nexte) {
+ nexte = entry->next;
+ if (entry->bound) drm_unbind_agp(entry->memory);
+ drm_free_agp(entry->memory, entry->pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ }
+ dev->agp->memory = NULL;
+
+ if (dev->agp->acquired) _drm_agp_release();
+
+ dev->agp->acquired = 0;
+ dev->agp->enabled = 0;
+ }
+ /* Clear vma list (only built for debugging) */
+ if (dev->vmalist) {
+ for (vma = dev->vmalist; vma; vma = vma_next) {
+ vma_next = vma->next;
+ drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+ }
+ dev->vmalist = NULL;
+ }
+
+ /* Clear map area and mtrr information */
+ if (dev->maplist) {
+ for (i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#ifdef CONFIG_MTRR
+ if (map->mtrr >= 0) {
+ int retcode;
+ retcode = mtrr_del(map->mtrr,
+ map->offset,
+ map->size);
+ DRM_DEBUG("mtrr_del = %d\n", retcode);
+ }
+#endif
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+ case _DRM_SHM:
+ drm_free_pages((unsigned long)map->handle,
+ drm_order(map->size)
+ - PAGE_SHIFT,
+ DRM_MEM_SAREA);
+ break;
+ case _DRM_AGP:
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ drm_free(dev->maplist,
+ dev->map_count * sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ dev->map_count = 0;
+ }
+
+ if (dev->queuelist) {
+ for (i = 0; i < dev->queue_count; i++) {
+ if (dev->queuelist[i]) { /* check before dereferencing */
+ drm_waitlist_destroy(&dev->queuelist[i]->waitlist);
+ drm_free(dev->queuelist[i],
+ sizeof(*dev->queuelist[0]),
+ DRM_MEM_QUEUES);
+ dev->queuelist[i] = NULL;
+ }
+ }
+ drm_free(dev->queuelist,
+ dev->queue_slots * sizeof(*dev->queuelist),
+ DRM_MEM_QUEUES);
+ dev->queuelist = NULL;
+ }
+
+ drm_dma_takedown(dev);
+
+ dev->queue_count = 0;
+ if (dev->lock.hw_lock) {
+ dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.pid = 0;
+ wake_up_interruptible(&dev->lock.lock_queue);
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+/* i810_init is called via init_module at module load time, or via
+ * linux/init/main.c (this is not currently supported). */
+
+static int __init i810_init(void)
+{
+ int retcode;
+ drm_device_t *dev = &i810_device;
+
+ DRM_DEBUG("\n");
+
+ memset((void *)dev, 0, sizeof(*dev));
+ dev->count_lock = SPIN_LOCK_UNLOCKED;
+ sema_init(&dev->struct_sem, 1);
+
+#ifdef MODULE
+ drm_parse_options(i810);
+#endif
+ DRM_DEBUG("doing misc_register\n");
+ if ((retcode = misc_register(&i810_misc))) {
+ DRM_ERROR("Cannot register \"%s\"\n", I810_NAME);
+ return retcode;
+ }
+ dev->device = MKDEV(MISC_MAJOR, i810_misc.minor);
+ dev->name = I810_NAME;
+
+ DRM_DEBUG("doing mem init\n");
+ drm_mem_init();
+ DRM_DEBUG("doing proc init\n");
+ drm_proc_init(dev);
+ DRM_DEBUG("doing agp init\n");
+ dev->agp = drm_agp_init();
+ if (dev->agp == NULL) {
+ DRM_INFO("The i810 drm module requires the agpgart module"
+ " to function correctly.\nPlease load the agpgart"
+ " module before you load the i810 module.\n");
+ drm_proc_cleanup();
+ misc_deregister(&i810_misc);
+ i810_takedown(dev);
+ return -ENOMEM;
+ }
+ DRM_DEBUG("doing ctxbitmap init\n");
+ if((retcode = drm_ctxbitmap_init(dev))) {
+ DRM_ERROR("Cannot allocate memory for context bitmap.\n");
+ drm_proc_cleanup();
+ misc_deregister(&i810_misc);
+ i810_takedown(dev);
+ return retcode;
+ }
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+ I810_NAME,
+ I810_MAJOR,
+ I810_MINOR,
+ I810_PATCHLEVEL,
+ I810_DATE,
+ i810_misc.minor);
+
+ return 0;
+}
+
+/* i810_cleanup is called via cleanup_module at module unload time. */
+
+static void __exit i810_cleanup(void)
+{
+ drm_device_t *dev = &i810_device;
+
+ DRM_DEBUG("\n");
+
+ drm_proc_cleanup();
+ if (misc_deregister(&i810_misc)) {
+ DRM_ERROR("Cannot unload module\n");
+ } else {
+ DRM_INFO("Module unloaded\n");
+ }
+ drm_ctxbitmap_cleanup(dev);
+ i810_takedown(dev);
+ if (dev->agp) {
+ drm_agp_uninit();
+ drm_free(dev->agp, sizeof(*dev->agp), DRM_MEM_AGPLISTS);
+ dev->agp = NULL;
+ }
+}
+
+module_init(i810_init);
+module_exit(i810_cleanup);
+
+
+int i810_version(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_version_t version;
+ int len;
+
+ if (copy_from_user(&version,
+ (drm_version_t *)arg,
+ sizeof(version)))
+ return -EFAULT;
+
+#define DRM_COPY(name,value) \
+ len = strlen(value); \
+ if (len > name##_len) len = name##_len; \
+ name##_len = strlen(value); \
+ if (len && name) { \
+ if (copy_to_user(name, value, len)) \
+ return -EFAULT; \
+ }
+
+ version.version_major = I810_MAJOR;
+ version.version_minor = I810_MINOR;
+ version.version_patchlevel = I810_PATCHLEVEL;
+
+ DRM_COPY(version.name, I810_NAME);
+ DRM_COPY(version.date, I810_DATE);
+ DRM_COPY(version.desc, I810_DESC);
+
+ if (copy_to_user((drm_version_t *)arg,
+ &version,
+ sizeof(version)))
+ return -EFAULT;
+ return 0;
+}
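The DRM_COPY macro used by i810_version() above implements a truncate-but-report convention: at most the caller's buffer length is copied, yet the string's full length is always written back so userspace can detect truncation and retry with a larger buffer. A minimal userspace sketch of that convention (memcpy stands in for copy_to_user(); `copy_versioned` is an illustrative name, not a kernel symbol):

```c
#include <string.h>

/* Sketch of the DRM_COPY convention: copy at most *len_inout bytes, but
 * always report the string's full length back through *len_inout.
 * Returns the number of bytes actually copied. */
static int copy_versioned(char *dst, int *len_inout, const char *value)
{
	int len = strlen(value);

	if (len > *len_inout)
		len = *len_inout;        /* truncate to the caller's buffer */
	*len_inout = strlen(value);      /* report full length regardless */
	if (len && dst)
		memcpy(dst, value, len); /* stands in for copy_to_user() */
	return len;
}
```

A caller that gets back a length larger than the buffer it supplied knows the result was truncated.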
+
+int i810_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev = &i810_device;
+ int retcode = 0;
+
+ DRM_DEBUG("open_count = %d\n", dev->open_count);
+ if (!(retcode = drm_open_helper(inode, filp, dev))) {
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_open);
+ spin_lock(&dev->count_lock);
+ if (!dev->open_count++) {
+ spin_unlock(&dev->count_lock);
+ return i810_setup(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ }
+ return retcode;
+}
+
+int i810_release(struct inode *inode, struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int retcode = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+ DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+ current->pid, dev->device, dev->open_count);
+
+ if (dev->lock.hw_lock && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+ && dev->lock.pid == current->pid) {
+ i810_reclaim_buffers(dev, priv->pid);
+ DRM_ERROR("Process %d dead, freeing lock for context %d\n",
+ current->pid,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ drm_lock_free(dev,
+ &dev->lock.hw_lock->lock,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+
+ /* FIXME: may require heavy-handed reset of
+ hardware at this point, possibly
+ processed via a callback to the X
+ server. */
+ } else if (dev->lock.hw_lock) {
+ /* The lock is required to reclaim buffers */
+ DECLARE_WAITQUEUE(entry, current);
+ add_wait_queue(&dev->lock.lock_queue, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!dev->lock.hw_lock) {
+ /* Device has been unregistered */
+ retcode = -EINTR;
+ break;
+ }
+ if (drm_lock_take(&dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ dev->lock.pid = priv->pid;
+ dev->lock.lock_time = jiffies;
+ atomic_inc(&dev->total_locks);
+ break; /* Got lock */
+ }
+ /* Contention */
+ atomic_inc(&dev->total_sleeps);
+ schedule();
+ if (signal_pending(current)) {
+ retcode = -ERESTARTSYS;
+ break;
+ }
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&dev->lock.lock_queue, &entry);
+ if(!retcode) {
+ i810_reclaim_buffers(dev, priv->pid);
+ drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT);
+ }
+ }
+ drm_fasync(-1, filp, 0);
+
+ down(&dev->struct_sem);
+ if (priv->prev) priv->prev->next = priv->next;
+ else dev->file_first = priv->next;
+ if (priv->next) priv->next->prev = priv->prev;
+ else dev->file_last = priv->prev;
+ up(&dev->struct_sem);
+
+ drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+#if LINUX_VERSION_CODE < 0x020333
+ MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+ atomic_inc(&dev->total_close);
+ spin_lock(&dev->count_lock);
+ if (!--dev->open_count) {
+ if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+ DRM_ERROR("Device busy: %d %d\n",
+ atomic_read(&dev->ioctl_count),
+ dev->blocked);
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return i810_takedown(dev);
+ }
+ spin_unlock(&dev->count_lock);
+ unlock_kernel();
+ return retcode;
+}
+
+/* drm_ioctl is called whenever a process performs an ioctl on /dev/drm. */
+
+int i810_ioctl(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ int nr = DRM_IOCTL_NR(cmd);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode = 0;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+
+ atomic_inc(&dev->ioctl_count);
+ atomic_inc(&dev->total_ioctl);
+ ++priv->ioctl_count;
+
+ DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+ current->pid, cmd, nr, dev->device, priv->authenticated);
+
+ if (nr >= I810_IOCTL_COUNT) {
+ retcode = -EINVAL;
+ } else {
+ ioctl = &i810_ioctls[nr];
+ func = ioctl->func;
+
+ if (!func) {
+ DRM_DEBUG("no function\n");
+ retcode = -EINVAL;
+ } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+ || (ioctl->auth_needed && !priv->authenticated)) {
+ retcode = -EACCES;
+ } else {
+ retcode = (func)(inode, filp, cmd, arg);
+ }
+ }
+
+ atomic_dec(&dev->ioctl_count);
+ return retcode;
+}
+
+int i810_unlock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+
+ if (lock.context == DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("Process %d using kernel context %d\n",
+ current->pid, lock.context);
+ return -EINVAL;
+ }
+
+ DRM_DEBUG("%d frees lock (%d holds)\n",
+ lock.context,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+ atomic_inc(&dev->total_unlocks);
+ if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+ atomic_inc(&dev->total_contends);
+ drm_lock_transfer(dev, &dev->lock.hw_lock->lock, DRM_KERNEL_CONTEXT);
+ if (!dev->context_flag) {
+ if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT)) {
+ DRM_ERROR("\n");
+ }
+ }
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.lhld[drm_histogram_slot(get_cycles()
+ - dev->lck_start)]);
+#endif
+
+ unblock_all_signals();
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.h
--- linux-2.4.13/drivers/char/drm-4.0/i810_drv.h Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.h Thu Oct 4 00:21:40 2001
@@ -0,0 +1,225 @@
+/* i810_drv.h -- Private header for the Intel i810 driver -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#ifndef _I810_DRV_H_
+#define _I810_DRV_H_
+
+typedef struct drm_i810_buf_priv {
+ u32 *in_use;
+ int my_use_idx;
+ int currently_mapped;
+ void *virtual;
+ void *kernel_virtual;
+ int map_count;
+ struct vm_area_struct *vma;
+} drm_i810_buf_priv_t;
+
+typedef struct _drm_i810_ring_buffer{
+ int tail_mask;
+ unsigned long Start;
+ unsigned long End;
+ unsigned long Size;
+ u8 *virtual_start;
+ int head;
+ int tail;
+ int space;
+} drm_i810_ring_buffer_t;
+
+typedef struct drm_i810_private {
+ int ring_map_idx;
+ int buffer_map_idx;
+
+ drm_i810_ring_buffer_t ring;
+ drm_i810_sarea_t *sarea_priv;
+
+ unsigned long hw_status_page;
+ unsigned long counter;
+
+ atomic_t flush_done;
+ wait_queue_head_t flush_queue; /* Processes waiting until flush */
+ drm_buf_t *mmap_buffer;
+
+
+ u32 front_di1, back_di1, zi1;
+
+ int back_offset;
+ int depth_offset;
+ int w, h;
+ int pitch;
+} drm_i810_private_t;
+
+ /* i810_drv.c */
+extern int i810_version(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_open(struct inode *inode, struct file *filp);
+extern int i810_release(struct inode *inode, struct file *filp);
+extern int i810_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_unlock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* i810_dma.c */
+extern int i810_dma_schedule(drm_device_t *dev, int locked);
+extern int i810_getbuf(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_irq_install(drm_device_t *dev, int irq);
+extern int i810_irq_uninstall(drm_device_t *dev);
+extern int i810_control(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_lock(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_dma_init(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_flush_ioctl(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern void i810_reclaim_buffers(drm_device_t *dev, pid_t pid);
+extern int i810_getage(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg);
+extern int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma);
+extern int i810_copybuf(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_docopy(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* i810_bufs.c */
+extern int i810_addbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_infobufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_markbufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_freebufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_addmap(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+ /* i810_context.c */
+extern int i810_resctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_addctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_modctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_getctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_switchctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_newctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+extern int i810_rmctx(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+extern int i810_context_switch(drm_device_t *dev, int old, int new);
+extern int i810_context_switch_complete(drm_device_t *dev, int new);
+
+#define I810_VERBOSE 0
+
+
+int i810_dma_vertex(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+int i810_swap_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+int i810_clear_bufs(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg);
+
+#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23))
+#define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23))
+#define CMD_REPORT_HEAD (7<<23)
+#define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1)
+#define CMD_OP_BATCH_BUFFER ((0x0<<29)|(0x30<<23)|0x1)
+
+#define INST_PARSER_CLIENT 0x00000000
+#define INST_OP_FLUSH 0x02000000
+#define INST_FLUSH_MAP_CACHE 0x00000001
+
+
+#define BB1_START_ADDR_MASK (~0x7)
+#define BB1_PROTECTED (1<<0)
+#define BB1_UNPROTECTED (0<<0)
+#define BB2_END_ADDR_MASK (~0x7)
+
+#define I810REG_HWSTAM 0x02098
+#define I810REG_INT_IDENTITY_R 0x020a4
+#define I810REG_INT_MASK_R 0x020a8
+#define I810REG_INT_ENABLE_R 0x020a0
+
+#define LP_RING 0x2030
+#define HP_RING 0x2040
+#define RING_TAIL 0x00
+#define TAIL_ADDR 0x000FFFF8
+#define RING_HEAD 0x04
+#define HEAD_WRAP_COUNT 0xFFE00000
+#define HEAD_WRAP_ONE 0x00200000
+#define HEAD_ADDR 0x001FFFFC
+#define RING_START 0x08
+#define START_ADDR 0x00FFFFF8
+#define RING_LEN 0x0C
+#define RING_NR_PAGES 0x000FF000
+#define RING_REPORT_MASK 0x00000006
+#define RING_REPORT_64K 0x00000002
+#define RING_REPORT_128K 0x00000004
+#define RING_NO_REPORT 0x00000000
+#define RING_VALID_MASK 0x00000001
+#define RING_VALID 0x00000001
+#define RING_INVALID 0x00000000
+
+#define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19))
+#define SC_UPDATE_SCISSOR (0x1<<1)
+#define SC_ENABLE_MASK (0x1<<0)
+#define SC_ENABLE (0x1<<0)
+
+#define GFX_OP_SCISSOR_INFO ((0x3<<29)|(0x1d<<24)|(0x81<<16)|(0x1))
+#define SCI_YMIN_MASK (0xffff<<16)
+#define SCI_XMIN_MASK (0xffff<<0)
+#define SCI_YMAX_MASK (0xffff<<16)
+#define SCI_XMAX_MASK (0xffff<<0)
+
+#define GFX_OP_COLOR_FACTOR ((0x3<<29)|(0x1d<<24)|(0x1<<16)|0x0)
+#define GFX_OP_STIPPLE ((0x3<<29)|(0x1d<<24)|(0x83<<16))
+#define GFX_OP_MAP_INFO ((0x3<<29)|(0x1d<<24)|0x2)
+#define GFX_OP_DESTBUFFER_VARS ((0x3<<29)|(0x1d<<24)|(0x85<<16)|0x0)
+#define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
+#define GFX_OP_PRIMITIVE ((0x3<<29)|(0x1f<<24))
+
+#define CMD_OP_Z_BUFFER_INFO ((0x0<<29)|(0x16<<23))
+#define CMD_OP_DESTBUFFER_INFO ((0x0<<29)|(0x15<<23))
+
+#define BR00_BITBLT_CLIENT 0x40000000
+#define BR00_OP_COLOR_BLT 0x10000000
+#define BR00_OP_SRC_COPY_BLT 0x10C00000
+#define BR13_SOLID_PATTERN 0x80000000
+
+
+
+#endif
+
diff -urN linux-2.4.13/drivers/char/drm-4.0/init.c linux-2.4.13-lia/drivers/char/drm-4.0/init.c
--- linux-2.4.13/drivers/char/drm-4.0/init.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/init.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,113 @@
+/* init.c -- Setup/Cleanup for DRM -*- linux-c -*-
+ * Created: Mon Jan 4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_flags = 0;
+
+/* drm_parse_option parses a single option. See description for
+ drm_parse_options for details. */
+
+static void drm_parse_option(char *s)
+{
+ char *c, *r;
+
+ DRM_DEBUG("\"%s\"\n", s);
+ if (!s || !*s) return;
+ for (c = s; *c && *c != ':'; c++); /* find : or \0 */
+ if (*c) r = c + 1; else r = NULL; /* remember remainder */
+ *c = '\0'; /* terminate */
+ if (!strcmp(s, "noctx")) {
+ drm_flags |= DRM_FLAG_NOCTX;
+ DRM_INFO("Server-mediated context switching OFF\n");
+ return;
+ }
+ if (!strcmp(s, "debug")) {
+ drm_flags |= DRM_FLAG_DEBUG;
+ DRM_INFO("Debug messages ON\n");
+ return;
+ }
+ DRM_ERROR("\"%s\" is not a valid option\n", s);
+ return;
+}
+
+/* drm_parse_options parse the insmod "drm=" options, or the command-line
+ * options passed to the kernel via LILO. The grammar of the format is as
+ * follows:
+ *
+ * drm ::= 'drm=' option_list
+ * option_list ::= option [ ';' option_list ]
+ * option ::= 'device:' major
+ * | 'debug'
+ * | 'noctx'
+ * major ::= INTEGER
+ *
+ * Note that 's' contains option_list without the 'drm=' part.
+ *
+ * device=major,minor specifies the device number used for /dev/drm
+ * if major = 0 then the misc device is used
+ * if major = 0 and minor = 0 then dynamic misc allocation is used
+ * debug=on specifies that debugging messages will be printk'd
+ * debug=trace specifies that each function call will be logged via printk
+ * debug=off turns off all debugging options
+ *
+ */
+
+void drm_parse_options(char *s)
+{
+ char *h, *t, *n;
+
+ DRM_DEBUG("\"%s\"\n", s ?: "");
+ if (!s || !*s) return;
+
+ for (h = t = n = s; h && *h; h = t = n) {
+ for (; *t && *t != ';'; t++); /* find ; or \0 */
+ if (*t) n = t + 1; else n = NULL; /* remember next */
+ *t = '\0'; /* terminate */
+ drm_parse_option(h); /* parse */
+ }
+}
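The drm= option grammar documented above (semicolon-separated options, each optionally `name:value`) can be exercised with a self-contained userspace stand-in for the tokenizer. The names below (`parse_options`, `parse_option`, the `SKETCH_FLAG_*` values) are illustrative, not the kernel's symbols:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative flag values, not the kernel's DRM_FLAG_* constants. */
#define SKETCH_FLAG_DEBUG 0x1
#define SKETCH_FLAG_NOCTX 0x2

static unsigned int parsed_flags;

/* Handle one option ("debug", "noctx", or "name:value"). */
static void parse_option(char *s)
{
	char *c;

	if (!s || !*s)
		return;
	for (c = s; *c && *c != ':'; c++)  /* find ':' or '\0' */
		;
	*c = '\0';                         /* terminate the option name */
	if (!strcmp(s, "noctx"))
		parsed_flags |= SKETCH_FLAG_NOCTX;
	else if (!strcmp(s, "debug"))
		parsed_flags |= SKETCH_FLAG_DEBUG;
	else
		fprintf(stderr, "\"%s\" is not a valid option\n", s);
}

/* Split the option_list on ';' and feed each piece to parse_option. */
static void parse_options(char *s)
{
	char *h = s, *t;

	if (!s || !*s)
		return;
	while (h) {
		for (t = h; *t && *t != ';'; t++)  /* find ';' or '\0' */
			;
		if (*t) {                          /* more options follow */
			*t = '\0';
			parse_option(h);
			h = t + 1;
		} else {
			parse_option(h);           /* last option */
			h = NULL;
		}
	}
}
```

For example, `parse_options("debug;noctx")` leaves both flags set, and an unknown option is reported but does not abort parsing.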
+
+/* drm_cpu_valid returns non-zero if the DRI will run on this CPU, and 0
+ * otherwise. */
+
+int drm_cpu_valid(void)
+{
+#if defined(__i386__)
+ if (boot_cpu_data.x86 == 3) return 0; /* No cmpxchg on a 386 */
+#endif
+#if defined(__sparc__) && !defined(__sparc_v9__)
+ if (1)
+ return 0; /* No cmpxchg before v9 sparc. */
+#endif
+ return 1;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ioctl.c linux-2.4.13-lia/drivers/char/drm-4.0/ioctl.c
--- linux-2.4.13/drivers/char/drm-4.0/ioctl.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ioctl.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,99 @@
+/* ioctl.c -- IOCTL processing for DRM -*- linux-c -*-
+ * Created: Fri Jan 8 09:01:26 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_irq_busid(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_irq_busid_t p;
+ struct pci_dev *dev;
+
+ if (copy_from_user(&p, (drm_irq_busid_t *)arg, sizeof(p)))
+ return -EFAULT;
+ dev = pci_find_slot(p.busnum, PCI_DEVFN(p.devnum, p.funcnum));
+ if (dev) p.irq = dev->irq;
+ else p.irq = 0;
+ DRM_DEBUG("%d:%d:%d => IRQ %d\n",
+ p.busnum, p.devnum, p.funcnum, p.irq);
+ if (copy_to_user((drm_irq_busid_t *)arg, &p, sizeof(p)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_getunique(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_unique_t u;
+
+ if (copy_from_user(&u, (drm_unique_t *)arg, sizeof(u)))
+ return -EFAULT;
+ if (u.unique_len >= dev->unique_len) {
+ if (copy_to_user(u.unique, dev->unique, dev->unique_len))
+ return -EFAULT;
+ }
+ u.unique_len = dev->unique_len;
+ if (copy_to_user((drm_unique_t *)arg, &u, sizeof(u)))
+ return -EFAULT;
+ return 0;
+}
+
+int drm_setunique(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_unique_t u;
+
+ if (dev->unique_len || dev->unique)
+ return -EBUSY;
+
+ if (copy_from_user(&u, (drm_unique_t *)arg, sizeof(u)))
+ return -EFAULT;
+
+ if (!u.unique_len || u.unique_len > 1024)
+ return -EINVAL;
+
+ dev->unique_len = u.unique_len;
+ dev->unique = drm_alloc(u.unique_len + 1, DRM_MEM_DRIVER);
+ if (!dev->unique)
+ return -ENOMEM;
+ if (copy_from_user(dev->unique, u.unique, dev->unique_len))
+ return -EFAULT;
+ dev->unique[dev->unique_len] = '\0';
+
+ dev->devname = drm_alloc(strlen(dev->name) + strlen(dev->unique) + 2,
+ DRM_MEM_DRIVER);
+ if (!dev->devname)
+ return -ENOMEM;
+ sprintf(dev->devname, "%s@%s", dev->name, dev->unique);
+
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/lists.c linux-2.4.13-lia/drivers/char/drm-4.0/lists.c
--- linux-2.4.13/drivers/char/drm-4.0/lists.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/lists.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,218 @@
+/* lists.c -- Buffer list handling routines -*- linux-c -*-
+ * Created: Mon Apr 19 20:54:22 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_waitlist_create(drm_waitlist_t *bl, int count)
+{
+ if (bl->count) return -EINVAL;
+
+ bl->count = count;
+ bl->bufs = drm_alloc((bl->count + 2) * sizeof(*bl->bufs),
+ DRM_MEM_BUFLISTS);
+ bl->rp = bl->bufs;
+ bl->wp = bl->bufs;
+ bl->end = &bl->bufs[bl->count+1];
+ bl->write_lock = SPIN_LOCK_UNLOCKED;
+ bl->read_lock = SPIN_LOCK_UNLOCKED;
+ return 0;
+}
+
+int drm_waitlist_destroy(drm_waitlist_t *bl)
+{
+ if (bl->rp != bl->wp) return -EINVAL;
+ if (bl->bufs) drm_free(bl->bufs,
+ (bl->count + 2) * sizeof(*bl->bufs),
+ DRM_MEM_BUFLISTS);
+ bl->count = 0;
+ bl->bufs = NULL;
+ bl->rp = NULL;
+ bl->wp = NULL;
+ bl->end = NULL;
+ return 0;
+}
+
+int drm_waitlist_put(drm_waitlist_t *bl, drm_buf_t *buf)
+{
+ int left;
+ unsigned long flags;
+
+ left = DRM_LEFTCOUNT(bl);
+ if (!left) {
+ DRM_ERROR("Overflow while adding buffer %d from pid %d\n",
+ buf->idx, buf->pid);
+ return -EINVAL;
+ }
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = get_cycles();
+#endif
+ buf->list = DRM_LIST_WAIT;
+
+ spin_lock_irqsave(&bl->write_lock, flags);
+ *bl->wp = buf;
+ if (++bl->wp >= bl->end) bl->wp = bl->bufs;
+ spin_unlock_irqrestore(&bl->write_lock, flags);
+
+ return 0;
+}
+
+drm_buf_t *drm_waitlist_get(drm_waitlist_t *bl)
+{
+ drm_buf_t *buf;
+ unsigned long flags;
+
+ spin_lock_irqsave(&bl->read_lock, flags);
+ buf = *bl->rp;
+ if (bl->rp == bl->wp) {
+ spin_unlock_irqrestore(&bl->read_lock, flags);
+ return NULL;
+ }
+ if (++bl->rp >= bl->end) bl->rp = bl->bufs;
+ spin_unlock_irqrestore(&bl->read_lock, flags);
+
+ return buf;
+}
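drm_waitlist_put/get above implement a classic one-slack-slot ring buffer: the list is empty when rp == wp, so one slot can never be filled, which is why the kernel version allocates count + 2 entries. A minimal userspace sketch of the same scheme (locking omitted; all names are illustrative):

```c
#include <stddef.h>

#define RING_SLOTS 4   /* usable capacity is RING_SLOTS - 1 */

struct ring {
	void *bufs[RING_SLOTS];
	int rp, wp;    /* read / write indices; empty when rp == wp */
};

static int ring_put(struct ring *r, void *buf)
{
	int next = (r->wp + 1) % RING_SLOTS;

	if (next == r->rp)
		return -1;                /* full: writing would catch rp */
	r->bufs[r->wp] = buf;
	r->wp = next;
	return 0;
}

static void *ring_get(struct ring *r)
{
	void *buf;

	if (r->rp == r->wp)
		return NULL;              /* empty */
	buf = r->bufs[r->rp];
	r->rp = (r->rp + 1) % RING_SLOTS;
	return buf;
}
```

Entries come back out in insertion order, and a get on an empty ring returns NULL rather than blocking, just as drm_waitlist_get does.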
+
+int drm_freelist_create(drm_freelist_t *bl, int count)
+{
+ atomic_set(&bl->count, 0);
+ bl->next = NULL;
+ init_waitqueue_head(&bl->waiting);
+ bl->low_mark = 0;
+ bl->high_mark = 0;
+ atomic_set(&bl->wfh, 0);
+ bl->lock = SPIN_LOCK_UNLOCKED;
+ ++bl->initialized;
+ return 0;
+}
+
+int drm_freelist_destroy(drm_freelist_t *bl)
+{
+ atomic_set(&bl->count, 0);
+ bl->next = NULL;
+ return 0;
+}
+
+int drm_freelist_put(drm_device_t *dev, drm_freelist_t *bl, drm_buf_t *buf)
+{
+ drm_device_dma_t *dma = dev->dma;
+
+ if (!dma) {
+ DRM_ERROR("No DMA support\n");
+ return 1;
+ }
+
+ if (buf->waiting || buf->pending || buf->list == DRM_LIST_FREE) {
+ DRM_ERROR("Freed buffer %d: w%d, p%d, l%d\n",
+ buf->idx, buf->waiting, buf->pending, buf->list);
+ }
+ if (!bl) return 1;
+#if DRM_DMA_HISTOGRAM
+ buf->time_freed = get_cycles();
+ drm_histogram_compute(dev, buf);
+#endif
+ buf->list = DRM_LIST_FREE;
+
+ spin_lock(&bl->lock);
+ buf->next = bl->next;
+ bl->next = buf;
+ spin_unlock(&bl->lock);
+
+ atomic_inc(&bl->count);
+ if (atomic_read(&bl->count) > dma->buf_count) {
+ DRM_ERROR("%d of %d buffers free after addition of %d\n",
+ atomic_read(&bl->count), dma->buf_count, buf->idx);
+ return 1;
+ }
+ /* Check for high water mark */
+ if (atomic_read(&bl->wfh) && atomic_read(&bl->count)>=bl->high_mark) {
+ atomic_set(&bl->wfh, 0);
+ wake_up_interruptible(&bl->waiting);
+ }
+ return 0;
+}
+
+static drm_buf_t *drm_freelist_try(drm_freelist_t *bl)
+{
+ drm_buf_t *buf;
+
+ if (!bl) return NULL;
+
+ /* Get buffer */
+ spin_lock(&bl->lock);
+ if (!bl->next) {
+ spin_unlock(&bl->lock);
+ return NULL;
+ }
+ buf = bl->next;
+ bl->next = bl->next->next;
+ spin_unlock(&bl->lock);
+
+ atomic_dec(&bl->count);
+ buf->next = NULL;
+ buf->list = DRM_LIST_NONE;
+ if (buf->waiting || buf->pending) {
+ DRM_ERROR("Free buffer %d: w%d, p%d, l%d\n",
+ buf->idx, buf->waiting, buf->pending, buf->list);
+ }
+
+ return buf;
+}
+
+drm_buf_t *drm_freelist_get(drm_freelist_t *bl, int block)
+{
+ drm_buf_t *buf = NULL;
+ DECLARE_WAITQUEUE(entry, current);
+
+ if (!bl || !bl->initialized) return NULL;
+
+ /* Check for low water mark */
+ if (atomic_read(&bl->count) <= bl->low_mark) /* Became low */
+ atomic_set(&bl->wfh, 1);
+ if (atomic_read(&bl->wfh)) {
+ if (block) {
+ add_wait_queue(&bl->waiting, &entry);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!atomic_read(&bl->wfh)
+ && (buf = drm_freelist_try(bl))) break;
+ schedule();
+ if (signal_pending(current)) break;
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&bl->waiting, &entry);
+ }
+ return buf;
+ }
+
+ return drm_freelist_try(bl);
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/lock.c linux-2.4.13-lia/drivers/char/drm-4.0/lock.c
--- linux-2.4.13/drivers/char/drm-4.0/lock.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/lock.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,252 @@
+/* lock.c -- IOCTLs for locking -*- linux-c -*-
+ * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_block(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ DRM_DEBUG("\n");
+ return 0;
+}
+
+int drm_unblock(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ DRM_DEBUG("\n");
+ return 0;
+}
+
+int drm_lock_take(__volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
+ do {
+ old = *lock;
+ if (old & _DRM_LOCK_HELD) new = old | _DRM_LOCK_CONT;
+ else new = context | _DRM_LOCK_HELD;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+	if (_DRM_LOCKING_CONTEXT(old) == context) {
+ if (old & _DRM_LOCK_HELD) {
+ if (context != DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("%d holds heavyweight lock\n",
+ context);
+ }
+ return 0;
+ }
+ }
+	if (new == (context | _DRM_LOCK_HELD)) {
+ /* Have lock */
+ return 1;
+ }
+ return 0;
+}
+
+/* This takes a lock forcibly and hands it to context. Should ONLY be used
+ inside *_unlock to give lock to kernel before calling *_dma_schedule. */
+int drm_lock_transfer(drm_device_t *dev,
+ __volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
+ dev->lock.pid = 0;
+ do {
+ old = *lock;
+ new = context | _DRM_LOCK_HELD;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ return 1;
+}
+
+int drm_lock_free(drm_device_t *dev,
+ __volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+ pid_t pid = dev->lock.pid;
+
+ dev->lock.pid = 0;
+ do {
+ old = *lock;
+ new = 0;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ if (_DRM_LOCK_IS_HELD(old) && _DRM_LOCKING_CONTEXT(old) != context) {
+ DRM_ERROR("%d freed heavyweight lock held by %d (pid %d)\n",
+ context,
+ _DRM_LOCKING_CONTEXT(old),
+ pid);
+ return 1;
+ }
+ wake_up_interruptible(&dev->lock.lock_queue);
+ return 0;
+}
+
+static int drm_flush_queue(drm_device_t *dev, int context)
+{
+ DECLARE_WAITQUEUE(entry, current);
+ int ret = 0;
+ drm_queue_t *q = dev->queuelist[context];
+
+ DRM_DEBUG("\n");
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) > 1) {
+ atomic_inc(&q->block_write);
+ add_wait_queue(&q->flush_queue, &entry);
+ atomic_inc(&q->block_count);
+ for (;;) {
+ current->state = TASK_INTERRUPTIBLE;
+ if (!DRM_BUFCOUNT(&q->waitlist)) break;
+ schedule();
+ if (signal_pending(current)) {
+ ret = -EINTR; /* Can't restart */
+ break;
+ }
+ }
+ atomic_dec(&q->block_count);
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&q->flush_queue, &entry);
+ }
+ atomic_dec(&q->use_count);
+ atomic_inc(&q->total_flushed);
+
+	/* NOTE: block_write is still incremented!
+	   Use drm_flush_unblock_queue to decrement. */
+ return ret;
+}
+
+static int drm_flush_unblock_queue(drm_device_t *dev, int context)
+{
+ drm_queue_t *q = dev->queuelist[context];
+
+ DRM_DEBUG("\n");
+
+ atomic_inc(&q->use_count);
+ if (atomic_read(&q->use_count) > 1) {
+ if (atomic_read(&q->block_write)) {
+ atomic_dec(&q->block_write);
+ wake_up_interruptible(&q->write_queue);
+ }
+ }
+ atomic_dec(&q->use_count);
+ return 0;
+}
+
+int drm_flush_block_and_flush(drm_device_t *dev, int context,
+ drm_lock_flags_t flags)
+{
+ int ret = 0;
+ int i;
+
+ DRM_DEBUG("\n");
+
+ if (flags & _DRM_LOCK_FLUSH) {
+ ret = drm_flush_queue(dev, DRM_KERNEL_CONTEXT);
+ if (!ret) ret = drm_flush_queue(dev, context);
+ }
+ if (flags & _DRM_LOCK_FLUSH_ALL) {
+ for (i = 0; !ret && i < dev->queue_count; i++) {
+ ret = drm_flush_queue(dev, i);
+ }
+ }
+ return ret;
+}
+
+int drm_flush_unblock(drm_device_t *dev, int context, drm_lock_flags_t flags)
+{
+ int ret = 0;
+ int i;
+
+ DRM_DEBUG("\n");
+
+ if (flags & _DRM_LOCK_FLUSH) {
+ ret = drm_flush_unblock_queue(dev, DRM_KERNEL_CONTEXT);
+ if (!ret) ret = drm_flush_unblock_queue(dev, context);
+ }
+ if (flags & _DRM_LOCK_FLUSH_ALL) {
+ for (i = 0; !ret && i < dev->queue_count; i++) {
+ ret = drm_flush_unblock_queue(dev, i);
+ }
+ }
+
+ return ret;
+}
+
+int drm_finish(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int ret = 0;
+ drm_lock_t lock;
+
+ DRM_DEBUG("\n");
+
+ if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+ return -EFAULT;
+ ret = drm_flush_block_and_flush(dev, lock.context, lock.flags);
+ drm_flush_unblock(dev, lock.context, lock.flags);
+ return ret;
+}
+
+/* If we get here, it means that the process has called DRM_IOCTL_LOCK
+ without calling DRM_IOCTL_UNLOCK.
+
+ If the lock is not held, then let the signal proceed as usual.
+
+ If the lock is held, then set the contended flag and keep the signal
+ blocked.
+
+
+ Return 1 if the signal should be delivered normally.
+ Return 0 if the signal should be blocked. */
+
+int drm_notifier(void *priv)
+{
+ drm_sigdata_t *s = (drm_sigdata_t *)priv;
+ unsigned int old, new, prev;
+
+
+ /* Allow signal delivery if lock isn't held */
+ if (!_DRM_LOCK_IS_HELD(s->lock->lock)
+ || _DRM_LOCKING_CONTEXT(s->lock->lock) != s->context) return 1;
+
+ /* Otherwise, set flag to force call to
+ drmUnlock */
+ do {
+ old = s->lock->lock;
+ new = old | _DRM_LOCK_CONT;
+ prev = cmpxchg(&s->lock->lock, old, new);
+ } while (prev != old);
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/memory.c linux-2.4.13-lia/drivers/char/drm-4.0/memory.c
--- linux-2.4.13/drivers/char/drm-4.0/memory.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/memory.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,486 @@
+/* memory.c -- Memory management wrappers for DRM -*- linux-c -*-
+ * Created: Thu Feb 4 14:00:34 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include <linux/config.h>
+#include "drmP.h"
+#include <linux/wrapper.h>
+
+typedef struct drm_mem_stats {
+ const char *name;
+ int succeed_count;
+ int free_count;
+ int fail_count;
+ unsigned long bytes_allocated;
+ unsigned long bytes_freed;
+} drm_mem_stats_t;
+
+static spinlock_t drm_mem_lock = SPIN_LOCK_UNLOCKED;
+static unsigned long drm_ram_available = 0; /* In pages */
+static unsigned long drm_ram_used = 0;
+static drm_mem_stats_t drm_mem_stats[] = {
+ [DRM_MEM_DMA] = { "dmabufs" },
+ [DRM_MEM_SAREA] = { "sareas" },
+ [DRM_MEM_DRIVER] = { "driver" },
+ [DRM_MEM_MAGIC] = { "magic" },
+ [DRM_MEM_IOCTLS] = { "ioctltab" },
+ [DRM_MEM_MAPS] = { "maplist" },
+ [DRM_MEM_VMAS] = { "vmalist" },
+ [DRM_MEM_BUFS] = { "buflist" },
+ [DRM_MEM_SEGS] = { "seglist" },
+ [DRM_MEM_PAGES] = { "pagelist" },
+ [DRM_MEM_FILES] = { "files" },
+ [DRM_MEM_QUEUES] = { "queues" },
+ [DRM_MEM_CMDS] = { "commands" },
+ [DRM_MEM_MAPPINGS] = { "mappings" },
+ [DRM_MEM_BUFLISTS] = { "buflists" },
+ [DRM_MEM_AGPLISTS] = { "agplist" },
+ [DRM_MEM_TOTALAGP] = { "totalagp" },
+ [DRM_MEM_BOUNDAGP] = { "boundagp" },
+ [DRM_MEM_CTXBITMAP] = { "ctxbitmap"},
+ { NULL, 0, } /* Last entry must be null */
+};
+
+void drm_mem_init(void)
+{
+ drm_mem_stats_t *mem;
+ struct sysinfo si;
+
+ for (mem = drm_mem_stats; mem->name; ++mem) {
+ mem->succeed_count = 0;
+ mem->free_count = 0;
+ mem->fail_count = 0;
+ mem->bytes_allocated = 0;
+ mem->bytes_freed = 0;
+ }
+
+ si_meminfo(&si);
+#if LINUX_VERSION_CODE < 0x020317
+ /* Changed to page count in 2.3.23 */
+ drm_ram_available = si.totalram >> PAGE_SHIFT;
+#else
+ drm_ram_available = si.totalram;
+#endif
+ drm_ram_used = 0;
+}
+
+/* drm_mem_info is called whenever a process reads /dev/drm/mem. */
+
+static int _drm_mem_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ drm_mem_stats_t *pt;
+
+ if (offset > 0) return 0; /* no partial requests */
+ len = 0;
+ *eof = 1;
+ DRM_PROC_PRINT(" total counts "
+ " | outstanding \n");
+ DRM_PROC_PRINT("type alloc freed fail bytes freed"
+ " | allocs bytes\n\n");
+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n",
+ "system", 0, 0, 0,
+ drm_ram_available << (PAGE_SHIFT - 10));
+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n",
+ "locked", 0, 0, 0, drm_ram_used >> 10);
+ DRM_PROC_PRINT("\n");
+ for (pt = drm_mem_stats; pt->name; pt++) {
+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu %10lu | %6d %10ld\n",
+ pt->name,
+ pt->succeed_count,
+ pt->free_count,
+ pt->fail_count,
+ pt->bytes_allocated,
+ pt->bytes_freed,
+ pt->succeed_count - pt->free_count,
+ (long)pt->bytes_allocated
+ - (long)pt->bytes_freed);
+ }
+
+ return len;
+}
+
+int drm_mem_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ int ret;
+
+ spin_lock(&drm_mem_lock);
+ ret = _drm_mem_info(buf, start, offset, len, eof, data);
+ spin_unlock(&drm_mem_lock);
+ return ret;
+}
+
+void *drm_alloc(size_t size, int area)
+{
+ void *pt;
+
+ if (!size) {
+ DRM_MEM_ERROR(area, "Allocating 0 bytes\n");
+ return NULL;
+ }
+
+ if (!(pt = kmalloc(size, GFP_KERNEL))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_allocated += size;
+ spin_unlock(&drm_mem_lock);
+ return pt;
+}
+
+void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area)
+{
+ void *pt;
+
+ if (!(pt = drm_alloc(size, area))) return NULL;
+ if (oldpt && oldsize) {
+ memcpy(pt, oldpt, oldsize);
+ drm_free(oldpt, oldsize, area);
+ }
+ return pt;
+}
+
+char *drm_strdup(const char *s, int area)
+{
+ char *pt;
+ int length = s ? strlen(s) : 0;
+
+ if (!(pt = drm_alloc(length+1, area))) return NULL;
+ strcpy(pt, s);
+ return pt;
+}
+
+void drm_strfree(const char *s, int area)
+{
+ unsigned int size;
+
+ if (!s) return;
+
+ size = 1 + (s ? strlen(s) : 0);
+ drm_free((void *)s, size, area);
+}
+
+void drm_free(void *pt, size_t size, int area)
+{
+ int alloc_count;
+ int free_count;
+
+ if (!pt) DRM_MEM_ERROR(area, "Attempt to free NULL pointer\n");
+ else kfree(pt);
+ spin_lock(&drm_mem_lock);
+ drm_mem_stats[area].bytes_freed += size;
+ free_count = ++drm_mem_stats[area].free_count;
+ alloc_count = drm_mem_stats[area].succeed_count;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(area, "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+}
+
+unsigned long drm_alloc_pages(int order, int area)
+{
+ unsigned long address;
+ unsigned long bytes = PAGE_SIZE << order;
+ unsigned long addr;
+ unsigned int sz;
+
+ spin_lock(&drm_mem_lock);
+ if ((drm_ram_used >> PAGE_SHIFT)
+ > (DRM_RAM_PERCENT * drm_ram_available) / 100) {
+ spin_unlock(&drm_mem_lock);
+ return 0;
+ }
+ spin_unlock(&drm_mem_lock);
+
+ address = __get_free_pages(GFP_KERNEL, order);
+ if (!address) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return 0;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_allocated += bytes;
+ drm_ram_used += bytes;
+ spin_unlock(&drm_mem_lock);
+
+
+ /* Zero outside the lock */
+ memset((void *)address, 0, bytes);
+
+ /* Reserve */
+ for (addr = address, sz = bytes;
+ sz > 0;
+ addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* Argument type changed in 2.4.0-test6/pre8 */
+ mem_map_reserve(virt_to_page(addr));
+#else
+ mem_map_reserve(MAP_NR(addr));
+#endif
+ }
+
+ return address;
+}
+
+void drm_free_pages(unsigned long address, int order, int area)
+{
+ unsigned long bytes = PAGE_SIZE << order;
+ int alloc_count;
+ int free_count;
+ unsigned long addr;
+ unsigned int sz;
+
+ if (!address) {
+ DRM_MEM_ERROR(area, "Attempt to free address 0\n");
+ } else {
+ /* Unreserve */
+ for (addr = address, sz = bytes;
+ sz > 0;
+ addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+#if LINUX_VERSION_CODE >= 0x020400
+ /* Argument type changed in 2.4.0-test6/pre8 */
+ mem_map_unreserve(virt_to_page(addr));
+#else
+ mem_map_unreserve(MAP_NR(addr));
+#endif
+ }
+ free_pages(address, order);
+ }
+
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[area].free_count;
+ alloc_count = drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_freed += bytes;
+ drm_ram_used -= bytes;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(area,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+}
+
+void *drm_ioremap(unsigned long offset, unsigned long size, drm_device_t *dev)
+{
+ void *pt;
+
+ if (!size) {
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
+ "Mapping 0 bytes at 0x%08lx\n", offset);
+ return NULL;
+ }
+
+	if (dev->agp->cant_use_aperture == 0) {
+ goto standard_ioremap;
+ } else {
+ drm_map_t *map = NULL;
+ int i;
+
+ for(i = 0; i < dev->map_count; i++) {
+ map = dev->maplist[i];
+ if (!map) continue;
+ if (map->offset <= offset &&
+ (map->offset + map->size) >= (offset + size))
+ break;
+ }
+
+		if (map && map->type == _DRM_AGP) {
+ struct drm_agp_mem *agpmem;
+
+ for(agpmem = dev->agp->memory; agpmem;
+ agpmem = agpmem->next) {
+ if(agpmem->bound <= offset &&
+ (agpmem->bound + (agpmem->pages
+ << PAGE_SHIFT)) >= (offset + size))
+ break;
+ }
+
+			if (agpmem == NULL)
+ goto standard_ioremap;
+
+ pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+ goto ioremap_success;
+ } else {
+ goto standard_ioremap;
+ }
+ }
+
+standard_ioremap:
+ if (!(pt = ioremap(offset, size))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_MAPPINGS].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+ }
+
+ioremap_success:
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count;
+ drm_mem_stats[DRM_MEM_MAPPINGS].bytes_allocated += size;
+ spin_unlock(&drm_mem_lock);
+ return pt;
+}
+
+void drm_ioremapfree(void *pt, unsigned long size, drm_device_t *dev)
+{
+ int alloc_count;
+ int free_count;
+
+ if (!pt)
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
+ "Attempt to free NULL pointer\n");
+	else if (dev->agp->cant_use_aperture == 0)
+ iounmap(pt);
+
+ spin_lock(&drm_mem_lock);
+ drm_mem_stats[DRM_MEM_MAPPINGS].bytes_freed += size;
+ free_count = ++drm_mem_stats[DRM_MEM_MAPPINGS].free_count;
+ alloc_count = drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+}
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+agp_memory *drm_alloc_agp(int pages, u32 type)
+{
+ agp_memory *handle;
+
+ if (!pages) {
+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Allocating 0 pages\n");
+ return NULL;
+ }
+
+ if ((handle = drm_agp_allocate_memory(pages, type))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_TOTALAGP].bytes_allocated
+ += pages << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ return handle;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_TOTALAGP].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+}
+
+int drm_free_agp(agp_memory *handle, int pages)
+{
+ int alloc_count;
+ int free_count;
+ int retval = -EINVAL;
+
+ if (!handle) {
+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP,
+ "Attempt to free NULL AGP handle\n");
+		return retval;
+ }
+
+ if (drm_agp_free_memory(handle)) {
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[DRM_MEM_TOTALAGP].free_count;
+ alloc_count = drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_TOTALAGP].bytes_freed
+ += pages << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+ return 0;
+ }
+ return retval;
+}
+
+int drm_bind_agp(agp_memory *handle, unsigned int start)
+{
+ int retcode = -EINVAL;
+
+ if (!handle) {
+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP,
+ "Attempt to bind NULL AGP handle\n");
+ return retcode;
+ }
+
+ if (!(retcode = drm_agp_bind_memory(handle, start))) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_allocated
+ += handle->page_count << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ return retcode;
+ }
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[DRM_MEM_BOUNDAGP].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return retcode;
+}
+
+int drm_unbind_agp(agp_memory *handle)
+{
+ int alloc_count;
+ int free_count;
+ int retcode = -EINVAL;
+
+ if (!handle) {
+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP,
+ "Attempt to unbind NULL AGP handle\n");
+ return retcode;
+ }
+
+ if ((retcode = drm_agp_unbind_memory(handle))) return retcode;
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[DRM_MEM_BOUNDAGP].free_count;
+ alloc_count = drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count;
+ drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_freed
+ += handle->page_count << PAGE_SHIFT;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+ return retcode;
+}
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_bufs.c linux-2.4.13-lia/drivers/char/drm-4.0/mga_bufs.c
--- linux-2.4.13/drivers/char/drm-4.0/mga_bufs.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_bufs.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,629 @@
+/* mga_bufs.c -- IOCTLs to manage buffers -*- linux-c -*-
+ * Created: Thu Jan 6 01:47:26 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+#include "linux/un.h"
+
+
+int mga_addbufs_agp(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ drm_buf_entry_t *entry;
+ drm_buf_t *buf;
+ unsigned long offset;
+ unsigned long agp_offset;
+ int count;
+ int order;
+ int size;
+ int alignment;
+ int page_order;
+ int total;
+ int byte_count;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+ agp_offset = request.agp_start;
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+ byte_count = 0;
+
+ DRM_DEBUG("count: %d\n", count);
+ DRM_DEBUG("order: %d\n", order);
+ DRM_DEBUG("size: %d\n", size);
+ DRM_DEBUG("agp_offset: %ld\n", agp_offset);
+ DRM_DEBUG("alignment: %d\n", alignment);
+ DRM_DEBUG("page_order: %d\n", page_order);
+ DRM_DEBUG("total: %d\n", total);
+ DRM_DEBUG("byte_count: %d\n", byte_count);
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+	/* This isn't necessarily a good limit, but we have to stop a dumb
+	   32 bit overflow problem below */
+
+	if (count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ offset = 0;
+
+
+ while(entry->buf_count < count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+
+ buf->offset = offset; /* Hrm */
+ buf->bus_address = dev->agp->base + agp_offset + offset;
+ buf->address = (void *)(agp_offset + offset + dev->agp->base);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+
+ buf->dev_private = drm_alloc(sizeof(drm_mga_buf_priv_t),
+ DRM_MEM_BUFS);
+ buf->dev_priv_size = sizeof(drm_mga_buf_priv_t);
+
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ offset = offset + alignment;
+ entry->buf_count++;
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+
+ DRM_DEBUG("dma->buf_count : %d\n", dma->buf_count);
+
+ dma->byte_count += byte_count;
+
+ DRM_DEBUG("entry->buf_count : %d\n", entry->buf_count);
+
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+
+ DRM_DEBUG("count: %d\n", count);
+ DRM_DEBUG("order: %d\n", order);
+ DRM_DEBUG("size: %d\n", size);
+ DRM_DEBUG("agp_offset: %ld\n", agp_offset);
+ DRM_DEBUG("alignment: %d\n", alignment);
+ DRM_DEBUG("page_order: %d\n", page_order);
+ DRM_DEBUG("total: %d\n", total);
+ DRM_DEBUG("byte_count: %d\n", byte_count);
+
+ dma->flags = _DRM_DMA_USE_AGP;
+
+ DRM_DEBUG("dma->flags : %x\n", dma->flags);
+
+ return 0;
+}
+
+int mga_addbufs_pci(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int count;
+ int order;
+ int size;
+ int total;
+ int page_order;
+ drm_buf_entry_t *entry;
+ unsigned long page;
+ drm_buf_t *buf;
+ int alignment;
+ unsigned long offset;
+ int i;
+ int byte_count;
+ int page_count;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order(request.size);
+ size = 1 << order;
+
+ DRM_DEBUG("count = %d, size = %d (%d), order = %d, queue_count = %d\n",
+ request.count, request.size, size, order, dev->queue_count);
+
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ if (dev->queue_count) return -EBUSY; /* Not while in use */
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+
+ spin_lock(&dev->count_lock);
+ if (dev->buf_use) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ atomic_inc(&dev->buf_alloc);
+ spin_unlock(&dev->count_lock);
+
+ down(&dev->struct_sem);
+ entry = &dma->bufs[order];
+ if (entry->buf_count) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+	if (count < 0 || count > 4096)
+ {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ if (!entry->buflist) {
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+ entry->seglist = drm_alloc(count * sizeof(*entry->seglist),
+ DRM_MEM_SEGS);
+ if (!entry->seglist) {
+ drm_free(entry->buflist,
+ count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+ up(&dev->struct_sem);
+ atomic_dec(&dev->buf_alloc);
+ return -ENOMEM;
+ }
+ memset(entry->seglist, 0, count * sizeof(*entry->seglist));
+
+ dma->pagelist = drm_realloc(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ DRM_DEBUG("pagelist: %d entries\n",
+ dma->page_count + (count << page_order));
+
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ byte_count = 0;
+ page_count = 0;
+ while (entry->buf_count < count) {
+ if (!(page = drm_alloc_pages(page_order, DRM_MEM_DMA))) break;
+ entry->seglist[entry->seg_count++] = page;
+ for (i = 0; i < (1 << page_order); i++) {
+ DRM_DEBUG("page %d @ 0x%08lx\n",
+ dma->page_count + page_count,
+ page + PAGE_SIZE * i);
+ dma->pagelist[dma->page_count + page_count++]
+ = page + PAGE_SIZE * i;
+ }
+ for (offset = 0;
+ offset + size <= total && entry->buf_count < count;
+ offset += alignment, ++entry->buf_count) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = (dma->byte_count + byte_count + offset);
+ buf->address = (void *)(page + offset);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head(&buf->dma_wait);
+ buf->pid = 0;
+#if DRM_DMA_HISTOGRAM
+ buf->time_queued = 0;
+ buf->time_dispatched = 0;
+ buf->time_completed = 0;
+ buf->time_freed = 0;
+#endif
+ DRM_DEBUG("buffer %d @ %p\n",
+ entry->buf_count, buf->address);
+ }
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ dma->buflist = drm_realloc(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+ dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+ dma->buf_count += entry->buf_count;
+ dma->seg_count += entry->seg_count;
+ dma->page_count += entry->seg_count << page_order;
+ dma->byte_count += PAGE_SIZE * (entry->seg_count << page_order);
+
+ drm_freelist_create(&entry->freelist, entry->buf_count);
+ for (i = 0; i < entry->buf_count; i++) {
+ drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+ }
+
+ up(&dev->struct_sem);
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if (copy_to_user((drm_buf_desc_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ atomic_dec(&dev->buf_alloc);
+ return 0;
+}
+
+int mga_addbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_buf_desc_t request;
+
+ if (copy_from_user(&request,
+ (drm_buf_desc_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if(request.flags & _DRM_AGP_BUFFER)
+ return mga_addbufs_agp(inode, filp, cmd, arg);
+ else
+ return mga_addbufs_pci(inode, filp, cmd, arg);
+}
+
+int mga_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ int i;
+ int count;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_info_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) ++count;
+ }
+
+ if (request.count >= count) {
+ for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+ if (dma->bufs[i].buf_count) {
+ if (copy_to_user(&request.list[count].count,
+ &dma->bufs[i].buf_count,
+ sizeof(dma->bufs[0]
+ .buf_count)) ||
+ copy_to_user(&request.list[count].size,
+ &dma->bufs[i].buf_size,
+ sizeof(dma->bufs[0].buf_size)) ||
+ copy_to_user(&request.list[count].low_mark,
+ &dma->bufs[i]
+ .freelist.low_mark,
+ sizeof(dma->bufs[0]
+ .freelist.low_mark)) ||
+ copy_to_user(&request.list[count]
+ .high_mark,
+ &dma->bufs[i]
+ .freelist.high_mark,
+ sizeof(dma->bufs[0]
+ .freelist.high_mark)))
+ return -EFAULT;
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if (copy_to_user((drm_buf_info_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int mga_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request, (drm_buf_desc_t *)arg, sizeof(request)))
+ return -EFAULT;
+
+ order = drm_order(request.size);
+ if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+ return -EINVAL;
+ if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+int mga_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!dma) return -EINVAL;
+
+ if (copy_from_user(&request,
+ (drm_buf_free_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ for (i = 0; i < request.count; i++) {
+ if (copy_from_user(&idx,
+ &request.list[i],
+ sizeof(idx)))
+ return -EFAULT;
+ if (idx < 0 || idx >= dma->buf_count) {
+ DRM_ERROR("Index %d (of %d max)\n",
+ idx, dma->buf_count - 1);
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if (buf->pid != current->pid) {
+ DRM_ERROR("Process %d freeing buffer owned by %d\n",
+ current->pid, buf->pid);
+ return -EINVAL;
+ }
+ drm_free_buffer(dev, buf);
+ }
+
+ return 0;
+}
+
+int mga_mapbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ int retcode = 0;
+ const int zero = 0;
+ unsigned long virtual;
+ unsigned long address;
+ drm_buf_map_t request;
+ int i;
+
+ if (!dma) return -EINVAL;
+
+ spin_lock(&dev->count_lock);
+ if (atomic_read(&dev->buf_alloc)) {
+ spin_unlock(&dev->count_lock);
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock(&dev->count_lock);
+
+ if (copy_from_user(&request,
+ (drm_buf_map_t *)arg,
+ sizeof(request)))
+ return -EFAULT;
+
+ if (request.count >= dma->buf_count) {
+ if(dma->flags & _DRM_DMA_USE_AGP) {
+ drm_mga_private_t *dev_priv = dev->dev_private;
+ drm_map_t *map = NULL;
+
+ map = dev->maplist[dev_priv->buffer_map_idx];
+ if (!map) {
+ retcode = -EINVAL;
+ goto done;
+ }
+
+ DRM_DEBUG("map->offset : %lx\n", map->offset);
+ DRM_DEBUG("map->size : %lx\n", map->size);
+ DRM_DEBUG("map->type : %d\n", map->type);
+ DRM_DEBUG("map->flags : %x\n", map->flags);
+ DRM_DEBUG("map->handle : %p\n", map->handle);
+ DRM_DEBUG("map->mtrr : %d\n", map->mtrr);
+ down_write(&current->mm->mmap_sem);
+ virtual = do_mmap(filp, 0, map->size,
+ PROT_READ|PROT_WRITE,
+ MAP_SHARED,
+ (unsigned long)map->offset);
+ up_write(&current->mm->mmap_sem);
+ } else {
+ down_write(&current->mm->mmap_sem);
+ virtual = do_mmap(filp, 0, dma->byte_count,
+ PROT_READ|PROT_WRITE, MAP_SHARED, 0);
+ up_write(&current->mm->mmap_sem);
+ }
+ if (virtual > -1024UL) {
+ /* Real error */
+ DRM_DEBUG("mmap error\n");
+ retcode = (signed long)virtual;
+ goto done;
+ }
+ request.virtual = (void *)virtual;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ if (copy_to_user(&request.list[i].idx,
+ &dma->buflist[i]->idx,
+ sizeof(request.list[0].idx))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].total,
+ &dma->buflist[i]->total,
+ sizeof(request.list[0].total))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if (copy_to_user(&request.list[i].used,
+ &zero,
+ sizeof(zero))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ address = virtual + dma->buflist[i]->offset;
+ if (copy_to_user(&request.list[i].address,
+ &address,
+ sizeof(address))) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ }
+ }
+ done:
+ request.count = dma->buf_count;
+ DRM_DEBUG("%d buffers, retcode = %d\n", request.count, retcode);
+
+ if (copy_to_user((drm_buf_map_t *)arg,
+ &request,
+ sizeof(request)))
+ return -EFAULT;
+
+ DRM_DEBUG("retcode : %d\n", retcode);
+
+ return retcode;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_context.c linux-2.4.13-lia/drivers/char/drm-4.0/mga_context.c
--- linux-2.4.13/drivers/char/drm-4.0/mga_context.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_context.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,209 @@
+/* mga_context.c -- IOCTLs for mga contexts -*- linux-c -*-
+ * Created: Mon Dec 13 09:51:35 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+
+static int mga_alloc_queue(drm_device_t *dev)
+{
+ return drm_ctxbitmap_next(dev);
+}
+
+int mga_context_switch(drm_device_t *dev, int old, int new)
+{
+ char buf[64];
+
+ atomic_inc(&dev->total_ctx);
+
+ if (test_and_set_bit(0, &dev->context_flag)) {
+ DRM_ERROR("Reentering -- FIXME\n");
+ return -EBUSY;
+ }
+
+#if DRM_DMA_HISTOGRAM
+ dev->ctx_start = get_cycles();
+#endif
+
+ DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+ if (new == dev->last_context) {
+ clear_bit(0, &dev->context_flag);
+ return 0;
+ }
+
+ if (drm_flags & DRM_FLAG_NOCTX) {
+ mga_context_switch_complete(dev, new);
+ } else {
+ sprintf(buf, "C %d %d\n", old, new);
+ drm_write_string(dev, buf);
+ }
+
+ return 0;
+}
+
+int mga_context_switch_complete(drm_device_t *dev, int new)
+{
+ dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
+ dev->last_switch = jiffies;
+
+ if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+ DRM_ERROR("Lock isn't held after context switch\n");
+ }
+
+ /* If a context switch is ever initiated
+ when the kernel holds the lock, release
+ that lock here. */
+#if DRM_DMA_HISTOGRAM
+ atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+ - dev->ctx_start)]);
+
+#endif
+ clear_bit(0, &dev->context_flag);
+ wake_up(&dev->context_wait);
+
+ return 0;
+}
+
+int mga_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_res_t res;
+ drm_ctx_t ctx;
+ int i;
+
+ if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+ return -EFAULT;
+ if (res.count >= DRM_RESERVED_CONTEXTS) {
+ memset(&ctx, 0, sizeof(ctx));
+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+ ctx.handle = i;
+ if (copy_to_user(&res.contexts[i],
+ &i,
+ sizeof(i)))
+ return -EFAULT;
+ }
+ }
+ res.count = DRM_RESERVED_CONTEXTS;
+ if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+ return -EFAULT;
+ return 0;
+}
+
+int mga_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ if ((ctx.handle = mga_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+ /* Skip kernel's context and get a new one. */
+ ctx.handle = mga_alloc_queue(dev);
+ }
+ if (ctx.handle == -1) {
+ return -ENOMEM;
+ }
+ DRM_DEBUG("%d\n", ctx.handle);
+ if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int mga_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ /* This does nothing for the mga */
+ return 0;
+}
+
+int mga_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx)))
+ return -EFAULT;
+ /* This is 0, because we don't handle any context flags */
+ ctx.flags = 0;
+ if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx)))
+ return -EFAULT;
+ return 0;
+}
+
+int mga_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ return mga_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int mga_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ mga_context_switch_complete(dev, ctx.handle);
+
+ return 0;
+}
+
+int mga_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ctx_t ctx;
+
+ if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+ return -EFAULT;
+ DRM_DEBUG("%d\n", ctx.handle);
+ if(ctx.handle == DRM_KERNEL_CONTEXT+1) priv->remove_auth_on_close = 1;
+
+ if(ctx.handle != DRM_KERNEL_CONTEXT) {
+ drm_ctxbitmap_free(dev, ctx.handle);
+ }
+
+ return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/mga_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/mga_dma.c Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_dma.c Thu Oct 4 00:21:40 2001
@@ -0,0 +1,1059 @@
+/* mga_dma.c -- DMA support for mga g200/g400 -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Jeff Hartmann <jhartmann@valinux.com>
+ * Keith Whitwell <keithw@valinux.com>
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+#define MGA_REG(reg) 2
+#define MGA_BASE(reg) ((unsigned long) \
+ ((drm_device_t *)dev)->maplist[MGA_REG(reg)]->handle)
+#define MGA_ADDR(reg) (MGA_BASE(reg) + reg)
+#define MGA_DEREF(reg) *(__volatile__ int *)MGA_ADDR(reg)
+#define MGA_READ(reg) MGA_DEREF(reg)
+#define MGA_WRITE(reg,val) do { MGA_DEREF(reg) = val; } while (0)
+
+#define PDEA_pagpxfer_enable 0x2
+
+static int mga_flush_queue(drm_device_t *dev);
+
+static unsigned long mga_alloc_page(drm_device_t *dev)
+{
+ unsigned long address;
+
+ address = __get_free_page(GFP_KERNEL);
+ if(address == 0UL) {
+ return 0;
+ }
+ atomic_inc(&virt_to_page(address)->count);
+ set_bit(PG_reserved, &virt_to_page(address)->flags);
+
+ return address;
+}
+
+static void mga_free_page(drm_device_t *dev, unsigned long page)
+{
+ if(!page) return;
+ atomic_dec(&virt_to_page(page)->count);
+ clear_bit(PG_reserved, &virt_to_page(page)->flags);
+ free_page(page);
+ return;
+}
+
+static void mga_delay(void)
+{
+ return;
+}
+
+/* These are two age tags that will never be sent to
+ * the hardware */
+#define MGA_BUF_USED 0xffffffff
+#define MGA_BUF_FREE 0
+
+static int mga_freelist_init(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_t *buf;
+ drm_mga_buf_priv_t *buf_priv;
+ drm_mga_private_t *dev_priv = (drm_mga_private_t *)dev->dev_private;
+ drm_mga_freelist_t *item;
+ int i;
+
+ dev_priv->head = drm_alloc(sizeof(drm_mga_freelist_t), DRM_MEM_DRIVER);
+ if(dev_priv->head == NULL) return -ENOMEM;
+ memset(dev_priv->head, 0, sizeof(drm_mga_freelist_t));
+ dev_priv->head->age = MGA_BUF_USED;
+
+ for (i = 0; i < dma->buf_count; i++) {
+ buf = dma->buflist[ i ];
+ buf_priv = buf->dev_private;
+ item = drm_alloc(sizeof(drm_mga_freelist_t),
+ DRM_MEM_DRIVER);
+ if(item == NULL) return -ENOMEM;
+ memset(item, 0, sizeof(drm_mga_freelist_t));
+ item->age = MGA_BUF_FREE;
+ item->prev = dev_priv->head;
+ item->next = dev_priv->head->next;
+ if(dev_priv->head->next != NULL)
+ dev_priv->head->next->prev = item;
+ if(item->next == NULL) dev_priv->tail = item;
+ item->buf = buf;
+ buf_priv->my_freelist = item;
+ buf_priv->discard = 0;
+ buf_priv->dispatched = 0;
+ dev_priv->head->next = item;
+ }
+
+ return 0;
+}
+
+static void mga_freelist_cleanup(drm_device_t *dev)
+{
+ drm_mga_private_t *dev_priv = (drm_mga_private_t *)dev->dev_private;
+ drm_mga_freelist_t *item;
+ drm_mga_freelist_t *prev;
+
+ item = dev_priv->head;
+ while(item) {
+ prev = item;
+ item = item->next;
+ drm_free(prev, sizeof(drm_mga_freelist_t), DRM_MEM_DRIVER);
+ }
+
+ dev_priv->head = dev_priv->tail = NULL;
+}
+
+/* Frees dispatch lock */
+static inline void mga_dma_quiescent(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ drm_mga_private_t *dev_priv = (drm_mga_private_t *)dev->dev_private;
+ drm_mga_sarea_t *sarea_priv = dev_priv->sarea_priv;
+ unsigned long end;
+ int i;
+
+ DRM_DEBUG("dispatch_status = 0x%02lx\n", dev_priv->dispatch_status);
+ end = jiffies + (HZ*3);
+ while(1) {
+ if(!test_and_set_bit(MGA_IN_DISPATCH,
+ &dev_priv->dispatch_status)) {
+ break;
+ }
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("irqs: %d wanted %d\n",
+ atomic_read(&dev->total_irq),
+ atomic_read(&dma->total_lost));
+ DRM_ERROR("lockup: dispatch_status = 0x%02lx,"
+ " jiffies = %lu, end = %lu\n",
+ dev_priv->dispatch_status, jiffies, end);
+ return;
+ }
+ for (i = 0 ; i < 2000 ; i++) mga_delay();
+ }
+ end = jiffies + (HZ*3);
+ DRM_DEBUG("quiescent status : %x\n", MGA_READ(MGAREG_STATUS));
+ while((MGA_READ(MGAREG_STATUS) & 0x00030001) != 0x00020000) {
+ if((signed)(end - jiffies) <= 0) {
+ DRM_ERROR("irqs: %d wanted